Might found a religion. Might do SuperHappyDevHouse.
Hmmm. I see what you mean.
I suppose that what I had in mind was that either they were:
good solid server OSes (e.g. SCO Xenix – I put a lot of installations of that into various client sites) but barely usable as a client OS, either on the grounds of cost, or required specification, or lack of end-user apps, or terrible UI, or some other reason;
or good client OSes (for their time) but barely usable on servers (e.g. I maintained a bunch of 3Com 3+Share servers: a DOS-based fileserver OS, and not a very good one; classic MacOS is another good example here);
or TBH not very good at either (e.g. OS/2 1.x – solid OS, few native apps, expensive as a client and needed a high-end PC too, but outcompeted on servers);
or proprietary OSes that needed proprietary machines, and therefore not usable on COTS kit.
NT was a very good server OS, with a decent GUI; and it was a good client OS, because it was Windows and could run Windows apps. It ran on almost any well-specced PC. It benefited from bespoke kit in the early days due to a limited HCL, but in theory it could run on most PCs… you just might not get sound, or some of your ports might not work, or it’d be stuck in 640×480 or something.
I put OS/2 on many of my own PCs. It was a pig to install and not very compatible at all and not as stable as its fanboys claimed.
By NT 3.51 it was as stable as PC OSes got, it was easy to install, and compatibility was actually pretty good.
And that was circa 1995, when Linux and the BSDs were barely things at all, and OS/2 was flailing.
Pretty much, yes. And the storages proposal is about going a step further than that, for implementing things like Vec or HashMap with hybrid inline storage, or statically allocated external storage, or whatever else. Storages with no associated allocator could even be used in no_std environments. Currently such specializations of std types are implemented in crates and re-implement all the usual methods themselves, duplicating a lot of code.
I am not sure how far the storages proposal goes, but it would be cool if you could use it to implement things like smartstring or Java’s singletonList, which reuse or optimize out the heap pointer, length, and capacity fields.
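To make that concrete, here’s a minimal sketch of the kind of hybrid inline/heap storage those crates hand-roll today (all names here are made up for illustration; this is not the storages proposal’s actual API, just the pattern it would let std subsume):

```rust
// A hand-rolled hybrid buffer, roughly what smallvec/smartstring-style
// crates do by re-implementing the std API: short payloads live inline
// in the enum, longer ones spill out to the heap.
enum SmallBuf {
    Inline { len: u8, bytes: [u8; 15] },
    Heap(Vec<u8>),
}

impl SmallBuf {
    fn new(data: &[u8]) -> Self {
        if data.len() <= 15 {
            let mut bytes = [0u8; 15];
            bytes[..data.len()].copy_from_slice(data);
            SmallBuf::Inline { len: data.len() as u8, bytes }
        } else {
            SmallBuf::Heap(data.to_vec())
        }
    }

    fn as_slice(&self) -> &[u8] {
        match self {
            SmallBuf::Inline { len, bytes } => &bytes[..*len as usize],
            SmallBuf::Heap(v) => v,
        }
    }
}

fn main() {
    let short = SmallBuf::new(b"hi");       // no heap allocation
    let long = SmallBuf::new(&[0u8; 100]);  // spills to the heap
    assert_eq!(short.as_slice(), b"hi");
    assert_eq!(long.as_slice().len(), 100);
    println!("ok");
}
```

The point of the proposal is that every method beyond `new`/`as_slice` currently has to be duplicated by hand in each such crate, whereas a pluggable storage parameter would let the std implementations be reused.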
So the “generic allocator proposal” is about leveraging default generic type parameters to make allocators pluggable? Just rewriting std to accept an allocator argument in every allocating type?
Rust does have default generic type parameters already. So does C++, demonstrated in the vector example above.
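For example, with a toy container (this is not std’s real unstable `Allocator` machinery, just an illustration of default type parameters), callers who don’t care about allocation never have to mention the parameter:

```rust
// A stand-in for the default allocator; real code would use a trait.
struct Global;

// `A` defaults to `Global`, mirroring C++'s
// `class Allocator = std::allocator<T>` default.
struct MyVec<T, A = Global> {
    items: Vec<T>,
    alloc: A,
}

impl<T> MyVec<T> {
    // Callers just write `MyVec<T>` and get `MyVec<T, Global>`.
    fn new() -> Self {
        MyVec { items: Vec::new(), alloc: Global }
    }
}

fn main() {
    let mut v: MyVec<u32> = MyVec::new();
    v.items.push(7);
    println!("{}", v.items[0]); // prints 7
}
```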
Is the “generic allocator proposal” specific to allocators? If so, I don’t think it’s what I’m describing. I don’t think allocators should be special, I think a default impl should be specifiable for any generic parameter.
Unlike @4ad I do subvocalize when reading code. I even structure my code so it’s more “literate.” I don’t know exactly how to describe that; I suppose in the same way that people have descriptive function names, I try to also have descriptive code structure, so it can be read in one pass as often as possible.
I wonder if F# will evolve towards parity
I’m amused it’s in Classical Chinese, and even more amused the name of the language is literally the Mandarin word for Classical Chinese.
Ok, that’s what I was thinking. Glad it wasn’t just me who thought that 😅
Finally conjured some motivation to resume work on Password Store so I’ll be implementing a passphrase cache for PGP keys for parity with what OpenKeychain used to offer.
I recall, many many years ago before Google Reader was killed, someone who worked on a (possibly commercial) RSS reader posting about how RSS proved that XML was just not workable as a human-authored format. RSS was “XML” from day 1, but because of how humans wrote XML it was broken, and fundamentally couldn’t be parsed as XML. This is basically the same problem that XHTML had: XML is simply too structured and too strict to be reasonably written by people.
Apple makes their own SoCs, so SoC vendors don’t typically try to design their SoCs according to what Apple wants. So the fact that Apple uses it isn’t very relevant when it comes to the SoCs you’d want to run Linux on.
Thanks for your response!
(1) Yes, I think we mean the same thing there - the C developer in me doesn’t expect an arena allocator to be type-safe, but just to hand out void *. ;-)
(Sorry, I didn’t check for new comments posted after I started writing. Thanks for the reference!)
(3) I see, thanks for the clarification and extra information! I agree that being logarithmic in the input is probably good enough in practice.
Literally anyone with an Apple device? HEVC is also popular elsewhere - I’m pretty sure the scenes have adopted it.
Yes! I brought Android up because every Android comes with one copy of Linux but probably several copies of sqlite. 😊
The only necessary language support for allocators in Rust is, I believe, the #[global_allocator] attribute. Rust doesn’t have a new operator like C++, it has the Box type. There are other attributes and compiler builtins for core types like Box, but I think those just exist for ergonomics and optimization. Rust has never made a huge effort to decouple std from the compiler, and I see no practical reason it should. I like having compiler builtins explicitly supporting certain use cases, with std providing abstractions for many of those use cases, rather than stupid bullshit exploiting edge case properties of the language like SFINAE.
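For illustration, this is that language support in its entirety: installing a different global allocator takes nothing but the attribute (here just plugging in the OS allocator, `std::alloc::System`, in place of the default):

```rust
use std::alloc::System;

// The attribute is the whole mechanism: every heap allocation in the
// program (Box, Vec, String, ...) now goes through `System`.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    let b = Box::new(42);
    println!("{}", b); // prints 42
}
```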
Also you’re describing the generic allocator proposal, the one I mentioned that already exists in C++. This is the type signature for std::vector in C++:
template<
    class T,
    class Allocator = std::allocator<T>
> class vector;
Thanksgiving with the family on Sunday. Other than that I plan to squeeze in some time playing Elder Scrolls Oblivion for the first time :)
I think this is fairly close, but not quite; see this comment.
Yeah, I’d say I used the word “static” completely wrong – this is definitely not static, as you can render bigger scenes on bigger machines and smaller scenes on smaller machines. Allocating everything in statics, if you can, is of course the best solution, but that does limit you to working with an essentially constant amount of data, without the ability to scale up or down without a rebuild. Which is often fine!
The depth is guaranteed to be logarithmic in size, as at each step we divide triangles into equal halves. So, by the time you run out of stack, you run out of the heap as well (but yeah, “stack” is another thing which prevents truly reliable programs. In the user-space, there’s usually little guarantees about how much stack you have, and how much stack is required for various outside-of-your-control library functions).
Traversal can be worst-case linear, and that’s one of the reasons it’s written in a non-recursive fashion. It also uses the “constant amount of data” approach for the workqueue of items:
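Roughly this shape, sketched with a plain binary tree standing in for the real BVH nodes and a fixed-size array standing in for the workqueue (illustrative only, not the actual code):

```rust
struct Node {
    value: u32,
    children: Vec<Node>, // 0 or 2 children after a balanced split
}

fn sum(root: &Node) -> u32 {
    // Fixed-size workqueue instead of the call stack: since a balanced
    // split gives logarithmic depth, 64 slots cover any tree that fits
    // in memory, so no reallocation can happen mid-traversal.
    let mut stack: [Option<&Node>; 64] = [None; 64];
    let mut top = 0;
    stack[top] = Some(root);
    top += 1;

    let mut total = 0;
    while top > 0 {
        top -= 1;
        let node = stack[top].take().unwrap();
        total += node.value;
        for child in &node.children {
            stack[top] = Some(child);
            top += 1;
        }
    }
    total
}

fn main() {
    let leaf = |v| Node { value: v, children: vec![] };
    let root = Node { value: 1, children: vec![leaf(2), leaf(3)] };
    println!("{}", sum(&root)); // prints 6
}
```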
Ideally, you’d allocate that from a scratch space and pass that in, but that requires extending the in_parallel API to allow some per-thread setup. This is not too hard, but I ran out of steam by that point :-)