The OOM killer is, IMNSHO, broken as designed. Track how much memory is available, return NULL, let the application deal with it then, when it can still be dealt with, instead of killing a random (I know, not really random) process later. I disable the OOM killer whenever feasible.
In practice, though, C++ throws and Rust panics; I think only well-written C code would have a chance of behaving ‘correctly’ in this case. And that’s the kind of low-level process that’s unlikely to be selected by the OOM killer anyway.
So effectively, letting the application deal with it equals letting the application crash. The application that runs into this situation can be whatever application happens to need an allocation at some point. That seems more random than what the OOM killer targets?
That’s not the OS’s decision to make, though. With the OOM killer enabled, C/C++ doesn’t have the option to handle it differently. If Rust or Go ever wants to change how they handle allocation failure in the future, they can’t if the OOM killer is enabled. It’s too strong of a policy decision for such low-level features as allocation and process lifetime.
(Of course, I haven’t written a kernel used by billions, so it’s easy for me to judge.)
Sounds to me like a good opportunity for an opt-in flag asserting that a particular binary handles allocation failures gracefully, so return NULLs to them when appropriate; else deal with it via the OOM killer.
If there were capacity planning done and limits set on processes or process groups, the ones violating their own capacity would be the ones degraded.
OpenVMS used process limits for that reason, plus accounting purposes like the link says. Then, they had both virtualized kernels and clustering to mitigate that level of failure.
Seems fair I guess. They probably made thousands of easy ad dollars off Nintendo’s property, so it’s normal they have a problem with this.
However, is Nintendo actually making a profit off the original Zelda, for example? I mean, is there a way for me as a player to play the original Zelda without having to search for a second-hand NES and fish for the original cartridge in flea markets? I get that it’s their intellectual property, but still, it’s not like they still sell those games.
The current philosophy of the law is that Nintendo has an eternal right to tax Zelda. It was never meant to go into the public domain, will never go into the public domain, and if legislators have funny ideas about this stuff then they’ll use their billions of previous culture tax revenue to bribe (er… “lobby”) them to have the right ideas again.
Anyone who gripes about this state of affairs is obviously a commie trying to steal from them.
In my understanding, in France and probably other countries, works (not sure exactly which, but writings and music are included, for example; probably programs/video games too?) enter the public domain 70 years after the creator’s death.
How can this apply to a living company?
The original author(s) license rights to the work (indirectly via an employment contract or directly via a specific agreement). The ‘death’ clause becomes really gnarly when the actual work of art is an aggregate of many copyright holders.
This becomes more complicated as the licensing gets split up into infinitely small pieces, like “time-limited distribution within country XYZ on the medium of floppy discs”. Such time-limit clauses are a probable cause when content, or whole games, suddenly disappears; typically it’s sublicensed content like music.
This, in turn, gets even more complicated by the notion of ‘derivative’ work (fanart or those “HD remakes”), as even abstract nuances have to be considered. The stories about Sherlock Holmes are in the public domain, but certain aesthetics, like the deerstalker/pipe/… figure, are still(?) copyrighted. Defining ‘derivative’ work is complex in and of itself. For instance, Blizzard have successfully defended copyright over the linked and loaded process of the World of Warcraft client as such, in the case against certain cheat bots, and used similar shenanigans to take down open-source / reverse-engineered StarCraft servers.
Then a few years pass and nobody knows who owns what or when or where; copyright trolls dive in and threaten extortion fees based on rights they don’t have. Copyright in its current form has nothing to do with the ‘artist’ and is complete, depressing, utter bullshit. It has turned into this bizarre form of mass hypnosis where everyone gets completely and thoroughly screwed.
These aspects, when combined, are part of the reason why the “sanctioned ROM stores” that Virtual Console and the like hold have very limited catalogs: the rightsholders are nowhere to be found, so the games can’t be safely licensed.
Yep, Nintendo do still sell these games, and it is possible for you to buy them. I bought one of these last week.
I just got a NES Classic and SNES Classic. They are pretty dope! I think that they are starting to care a lot more now that these are a thing :)
This does, however, have the unfortunate side effect of players not being able to play their favorites unless they are one of the ~60 games on these two classic editions. So, that’s sad. :(
Interesting how even in their very modern approach (compared to the hacking directly on Atari 800s their process is contrasted with) there seems to be no mention of source control, unless I missed it. This seems to point toward there being none:
Crucial to a team approach to software development is the ability to break programs up into separate modules and share access to them among the entire team. The UNIX operating system made this easy although a little more concurrency control in the various editors would have avoided one or two minidisasters. Often three people would be working on related parts of a game, passing files back and forth for advice or criticism while a fourth was compiling and testing the results of the others’ changes.
These days, I get antsy just thinking about any little personal-only project above a couple hundred lines not being in source control.
I’m skeptical, but I think they can pull it off.
In the end, they only need to reach half of Intel’s performance, as benchmarks suggest that macOS’ performance is roughly half of Linux’ when running on the same hardware.
With their own hardware, they might be able to get closer to the raw performance offered by the CPU.
they only need to reach half of Intel’s performance, as benchmarks suggest that macOS’ performance is roughly half of Linux’ when running on the same hardware
I’m confused. Doesn’t that mean they need to reach double Intel’s performance?
It was probably worded quite poorly; my reasoning was roughly this:
So if they build chips that are half as fast as “raw” Intel, but are able to better optimize their software for their own chips, they can get way closer to the raw performance of their hardware than they manage to do on Intel.
And the PPC → x86 transition was within the past fifteen years and well after they had recovered from their slump of the ‘90s, and didn’t seem to hurt them. They’re one of the few companies in existence with recent experience transitioning microarchitectures, and they’re well-positioned to do it with minimal hiccups.
That said, I’m somewhat skeptical, too; it’s a huge undertaking even if everything goes as smoothly as it did with the x86 transition, which is very far from a guarantee. This transition will be away from the dominant architecture in its niche, which will introduce additional friction which was not present for their last transition.
That’s not much of a transition. They did i386 -> amd64 too then.
(fun fact, I also did that, on the scale of one single Mac - swapped a Core Duo to a Core 2 Duo in a ’06 mini :D)
My understanding is that they’re removing some of the 32-bit instructions on ARM. Any clue if that’s correct?
AArch64 processors implement AArch32 too for backwards compatibility, just like it works on amd64.
As of iOS 11, 32-bit apps won’t load. So if Apple devices that come with iOS 11 still have CPUs that implement AArch32, I’d guess it’s only because it was easier to leave it in than pull it out.
Oh, sure – of course they can remove it, maybe even on the chip level (since they make fully custom ones now), or maybe not (macOS also doesn’t load 32-bit apps, right?). The point is that this transition used backwards compatible CPUs, so it’s not really comparable to 68k to PPC to x86.
I of course agree that this most recent transition isn’t comparable with the others. To answer your question: the version of macOS they just released a few days ago (10.13.4) is the first to come with a boot flag that lets you disable loading of 32-bit applications to, as they put it, “prepare for a future release of macOS in which 32-bit software will no longer run without compromise.”
Have a look at the benchmarks Phoronix has done. Some of them are older, but I think they show the general trend.
This of course doesn’t take GPU performance into account. I could imagine that they take an additional hit there, as companies (that don’t use AAA game engines) would rather do …
Application → Vulkan API → MoltenVK → Metal
… than write a Metal-specific backend.
I guess you’re talking about these? https://www.phoronix.com/scan.php?page=article&item=macos-1013-linux
Aside from OpenGL and a handful of other outliers for each platform, they seem quite comparable, with each being a bit faster at some things and a bit slower at others. Reading your comments I’d assumed they were showing Linux as being much faster in most areas, usually ending up about twice as fast.
The things they’re slow at don’t seem to be particularly CPU architecture specific. But the poor performance of their software doesn’t seem to hurt their market share.
Looks to me like discrete tries to create larger clumps of the sameish color, while random is truly random placement.
How many survey participants thought “I hit ‘Tab’ when I indent. So I should answer that I use tabs!”, not realizing their IDE converts them to spaces?
There’s an interesting theory. The question was also selecting for competent vs incompetent (to the extent required to actually answer the question accurately), with incompetent (or at least ignorant) devs lumping into one bucket.
I won’t dispute the whole message, but speaking of disingenuous, it’s been zero days since a curl bug was up there. Apple is just lazy about patching. The fix has been available to other curl users for quite some time.
Apple is just lazy about patching
Apple’s really bad about this: 10.11 stopped getting patches because 10.12 was in beta. There’s no roadmap and no defined lifecycles.
it’d be safer to just run Windows instead.
Actual security patches, or just iTunes updates? My friend switched to Windows on his MacBook because Apple left a critical security bug unpatched for a while. (I think it was actually with 10.10, and 10.11 was in beta.)
They’re patching 10.11 now though - I think it’s because 10.12 dropped support for some machines.
For some time now Apple’s policy has been to release patches for the current macOS release and security updates for the preceding two releases. See, for example, the security updates list.
The most recent updates for macOS/OS X (released 27 March 2017):
macOS Sierra 10.12.4, Security Update 2017-001 El Capitan, and Security Update 2017-001 Yosemite
which support:
macOS Sierra 10.12.3, OS X El Capitan v10.11.6, and OS X Yosemite v10.10.5
This story, like almost every one about NeXT these last few years, strikes me as deeply revisionist with regard to Objective-C. I have no idea who or what power is astroturfing so hard, but someone is trying to rewrite history to make Objective-C more important or impactful than it was at the time. When NeXT hit the collective consciousness, they were remarkable because of their hardware choices; nobody gave half a thought to their software implementation, despite the somewhat novel visual aesthetic. Yes, we were inspired, and yes, we lusted after their machines, but I am sorry, everyone was still writing C at the time.
I don’t remember it quite that way — “objects” were the mega-buzzword of the day, even though few people knew what objects and object-orientation meant. I remember it being heavily talked about that so much of NeXTSTEP was written in an object-oriented language, and NeXT/Jobs often used Interface Builder (which heavily relied/relies on the Objective-C runtime) to show off how much more quickly you could allegedly write software on the system.
When someone makes a typo there are 2 possible outcomes. Outcome 1: they are informed of their error and they can correct it or make an alias in their shell. Outcome 2: npm grows a new help option for every reported typo and eventually begins calculating the Levenshtein distance between the nearest valid option (an actual suggestion in the issue). Eventually typos make it into source code and scripts. Good luck grepping your codebase for usages of npm install.
Introducing this “affordance” to the user results in a cascade of increased complexity. Furthermore it’s not “just” complexity in the toolset, but an increased cognitive load on the programmer. What’s next? I can only imagine how frightening using node would be if npm install package installed some other random package off the internet.
Please keep “did you mean” functionality limited to search UIs.
I guess if the user has such a problem with accidentally typing “npm isntall”, “npm unisntall” or “npm verison”, they should just create aliases in their own shell config instead of bloating the software.
Good luck grepping your codebase for usages of npm install.
This is easily fixed by making grep default to matching on small Levenshtein distances too! (Yes, this is sarcasm.)
Well, it probably wouldn’t be so hard to extend regexes to compute a Levenshtein DFA during the compilation stage for fuzzy matching.
[Is it bad that I’m wondering how hard it would be to implement, as a fun regex hack?]
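For the curious: the plain dynamic-programming edit distance (not the DFA variant mentioned above) is short enough to sketch. This is illustrative only, not npm’s actual code:

```c
#include <string.h>

/* Classic dynamic-programming Levenshtein distance.
 * O(m*n) time, O(n) extra space; assumes inputs shorter than 256 chars. */
static int levenshtein(const char *a, const char *b) {
    int m = (int)strlen(a), n = (int)strlen(b);
    int prev[256], cur[256];
    for (int j = 0; j <= n; j++) prev[j] = j;  /* distance from empty prefix */
    for (int i = 1; i <= m; i++) {
        cur[0] = i;
        for (int j = 1; j <= n; j++) {
            int sub = prev[j - 1] + (a[i - 1] != b[j - 1]); /* substitute */
            int del = prev[j] + 1;                          /* delete     */
            int ins = cur[j - 1] + 1;                       /* insert     */
            cur[j] = sub < del ? (sub < ins ? sub : ins)
                               : (del < ins ? del : ins);
        }
        memcpy(prev, cur, sizeof(int) * (n + 1));
    }
    return prev[n];
}
```

A “did you mean” feature would then suggest the known subcommand with the smallest distance below some threshold; “isntall” vs “install” comes out at 2.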
Strange usage of “skeezy” (https://en.wiktionary.org/wiki/skeezy). “Easy peasy” is the form I’ve heard more often. (Yeah, I know, I’m missing the point.)
I’m curious how this works with roaming. My hotel wifi is a little spotty here, and I notice that my phone pops up “roaming is not enabled” quite frequently, even when it’s not totally disconnected from wifi. Apparently if roaming had been enabled it would even fall back and use that without telling me? Disastrous!
From Apple’s doc (https://support.apple.com/en-us/HT205296): “Wi-Fi Assist will not automatically switch to cellular if you’re data roaming.”
To become searchable, entries must be added to a search index. An NSUserActivity can be marked as either public or private. Public activities will be pushed up to Apple and facilitate app discovery. When the amount of public NSUserActivity records submitted from your app reaches a certain threshold, these search results will show up during search even if the user doesn’t have the app installed!
Probably due to deficiency on my part, but I’m having trouble imagining good examples of this and how it’ll work in practice.
I think this is the opposite of something to gripe about. If I were an engineering manager at Adobe, I’d look at this as something for which memory usage is not critical to minimize: it’s a desktop app (so, demand paged), and not one that anyone keeps active for more than a few minutes. I have limited development resources, so I’d rather put them toward work on the product itself, not its installer, as long as the installer works well (including not feeling bad) for my customers. There’s an extremely-broadly-used rendering component we can use that works well and my designers already know how to use it? I’m sold.
(Obviously it should be set to reject drops!)
I think that this is going to be a continual change that will hardly be noticed. In fact, continual change is probably already happening. Let me explain: I’d bet that at least half of these 55-year-old programmers picked COBOL up in the past 15 years. (A 55-year-old programmer, now, was my age during the early 90s. So it’s not like she wasn’t exposed to C and Java and Python and Lisp and Haskell.) It’s not like they’re some dying breed. People can, and of course do, change languages, and people who wouldn’t deign to work in “the enterprise” while young tend to change their opinions as they get older.
One thing that happens with age is that people get more tolerant of boring jobs, not because they’re more mature, but because there’s a secret advantage to being the guy who works on the boring stuff (e.g. the database admin or the COBOL maintainer): your boss finds it boring, too, and leaves you alone. The amount of executive meddling you have to deal with if you work on “the fun stuff” is generally much higher, and that usually means that the politically powerful people take the meat and leave you with the grungy part of the project anyway. I’ve met plenty of very good 50+ year-old programmers who now work as DBAs at “boring” companies because they’re sick of having useless change shoved down their throats by 28-year-old VPs who don’t know what the fuck they are doing. So, I’d bet that most of those 55-year-old programmers picked up COBOL recently, and 20 years from now the 55-year-old programmers working on COBOL systems will be people who, as of today, have never used it, and everything will be just fine because, guess what, a large number of those 55-year-old programmers are really fucking good at their jobs.
In terms of economic time bombs, I worry more about Java (21st-century COBOL) than about COBOL itself. I really don’t think that the 55-year-old COBOL programmer making $250,000 per year is overpaid at all. That’s less than I expect to be making at that age, if I stay in this game. On the other hand, we have a lot of ridiculously overpaid and mediocre Java programmers running around: I’m talking about people who pull down $500,000 and up and wouldn’t even be 25th percentile if they jumped into a real programming community like Haskell. They get to that level by consistently setting up bidding wars for themselves (and, I mean, good for them; perhaps I’m a bit envious). Haskell programmers don’t get to do that, because Haskell jobs are rare, and you can’t change jobs every 18 months and keep a coherent career. COBOL programmers probably don’t get to do that very often, because I’d imagine that COBOL jobs are rarer than Java jobs. But because there’s such a flood of interchangeable Java jobs, a Java engineer can set up a bidding war every 2 years and, after a few iterations of this, take the whole store. The worst bit of this isn’t that they make a lot of money (again, good for them) but that they parlay this into power; Hooli can’t justify $500,000 for a mediocre Java programmer unless it gives him a VP-level title, and suddenly a guy whose career choices show a mercenary angle and a lack of taste when it comes to programming tools is the ranking engineer.
No, I’m not forecasting a “Java apocalypse”. There’s so much money in software that the ecosystem tolerates a great deal of pain and waste. However, when you have people getting paid huge amounts of money to write crappy line-of-business Java code, the incentive becomes protecting an income rather than career growth and mastery of the discipline. (I mean, who wouldn’t go to many lengths to protect a $500k income?) Then you have people developing systems with no intention of handing them off, instead trying to make themselves as irreplaceable as possible. Many legacy messes are unintentional (deadlines, technical debt) but when you overpay so massively for what is technically the wrong thing (VibratorVisitorFactory decline patterns, “Agile” and scrotum and loser stories) you create the intentional kind of legacy disaster. Those tend to be the hairiest of all because, while these people are mediocre programmers with a lack of taste, it would be a mistake to suppose that they’re not intelligent (they’re sharp as hell, or they wouldn’t have gotten those $400k-2M+ jobs at Bay Area megacorps; they just don’t value good code). They know how to protect an income. So don’t expect documentation.
The worst legacy time bombs that exist right now aren’t in COBOL, because most of those systems are old and Just Work and won’t be replaced so long as that remains the case. (Seriously, why throw out working code just because you don’t like the language that it’s in?) Rather, they’re in Java and most have been written in the past 10 years. This isn’t going to grind society to a halt, though. It’s just going to be an annoying problem that cash-flush businesses throw huge amounts of money at. And maybe there will be demand for static analysis capability and all of us Haskell programmers will be in demand.
Serious question: are there really any significant number of people pulling down $500K/year for coding aside from a tiny handful of very, very niche cases (say, a few HFT finance jobs or Bay Area startups paid mostly in stock that ends up paying off big at IPO)? I don’t think I’ve run into anybody making anything like that in my career… but, how would I know?
It’s obviously not the norm (i.e. you don’t get that kind of salary just by being a mediocre Java programmer) but it’s many more than “a tiny handful of very, very nice cases”. Google and Amazon and Uber aren’t startups and have plenty of people making that kind of money.
It requires playing the game, and you have to be not only mercenary with regard to companies (who isn’t, these days?) but also with regard to technologies, and dedicated to acquiring money and power at any cost. You can’t say, “I only want to do machine learning” or “I want to use Haskell” or “I prefer not to manage more than 5 people”. You have to optimize for job titles, status, and money at the cost of everything else.
Typically, this means that you end up doing a lot of Java. The good news for many such people is that, if you play the game well, you can get other people to do it for you. It does mean that you stop coding and become a non-tech, while continuing to market yourself as a “10x engineer”.
My feelings about it:
I hated tap-to-click on trackpads until I experienced mechanical tap-to-click on Apple trackpads. I also like what I call fast keyboards: quiet, short stroke length, above all a clear and well-defined click, “dry/stiff” (not mushy or wobbly) – i.e. pretty much the scissor switches in contemporary Apple keyboards, except they’re a little mushier than I want (esp. the longer keys). So when the leaks broke about the ultra thin keyboard and trackpad, I was very pessimistic: if they thought they could do away with the clicking, I’d have to pass on their new wares. But on this score, Apple have (probably) just blown me away. I will have to wait to test one of these in person, but I expect it to be better than my current Air 11". Want!
No ports except the one USB-C? There is one reason I seriously dislike this: MagSafe was a real solution to a real problem. Do they have a trick up their sleeves here or are we back in the times of laptops flying off of tables? So the lack of a power plug seems like it could be a serious issue. We’ll have to see. Now as for all the other ports that got combined into just one… I don’t give a single damn, not a one.
CPU. CPU CPU CPU. I noted this as soon as it was mentioned – because my current Air 11" was the fully decked out option of the time, with a 2GHz i7. And I‘d previously made the mistake of disregarding CPU, back when I bought my Samsung NC10 – and came to regret it. (Of course, that one had an Atom… yep.) I’ll have to absorb a serious drop in speed (though I’ll have to see what this turbocharge thing is all about). The question is whether the slowdown will be tolerable, or will it be too much. Won’t be able to tell until I’ve had an opportunity to play with it.
Resolution. Retina on an ultraportable laptop yessssssssssss finally oh man I have been waiting for this forever. Well, for around 4 years, but they sure felt like forever, considering how much else has happened in that time. Buuuuuut… dammit. It’s 2304 × 1440. I.e. a doubled 1152 × 720 display in Retina mode. Compared to 1366 × 768 on my 11" Air. That’s a whopping 21% reduction in area – twenty-one percent. That’s gonna hurt… even the 11" Air’s display is already crowded enough. But that one is tolerable for me as a primary display. This one? I reckon that no one will want to use this as their sole machine without an external display at their desk. With the 11" Air, the display was just about large enough to not need an external display (which in practice meant using an external display never stuck for me, due to the friction of attaching/detaching).
The last one is by far the biggest conflict for me. I want want want Retina… I’ve been holding out for that for years. But this display may have too few pixels to be a realistic option. :-( I’m completely torn. I did not realise this during the presentation, and I was prepared to absorb the speed hit for all the great built-in peripherals. (I don’t, as I said, give a damn about the single port.) I was drooling. I was all set on buying one as soon as I had the budget. But when I realised the limited resolution, my heart sank. Now I’ll have to wait and see, and I may or may not have to resign myself to years more without Retina.
Do they have a trick up their sleeves here or are we back in the times of laptops flying off of tables?
Yes: all-day battery life, so you probably won’t need to leave your laptop plugged in as often as with laptops in the pre-magsafe era.
It supports scaled resolutions of non-retina equivalents: 1024x640, 1280x800, and 1440x900. That last one is the native resolution of the MacBook Air 13". Presumably it works the same way as the retina MacBook Pro does — render to a larger buffer then scale down — which looks great. Of course it’s yet to be seen what performance is like in that mode on this already relatively low-performance machine.
Do they have a trick up their sleeves here or are we back in the times of laptops flying off of tables?
The light weight + wind resistance should be enough to prevent damage from falling.
On the other hand because it is so light it might make the weight of the cable itself pull it off the table ;-)
wind resistance
So it’s usually not that windy in my office, or home, but… I’m trying to imagine what this means. It floats gently down to the floor?
“Wind resistance” is just another term for drag or air resistance – friction caused by an object moving through air. It doesn’t refer to the phenomenon of bulk movement of air that’s normally referred to by the term “wind”.
A couple of quotes that are perhaps apropos to the article. (Endorsement unintended.)
Andy Warhol talking about Coca-Cola: “You can be watching TV and see Coca-Cola, and you know that the President drinks Coke, Liz Taylor drinks Coke, and just think, you can drink Coke, too. A Coke is a Coke and no amount of money can get you a better Coke than the one the bum on the corner is drinking. All the Cokes are the same and all the Cokes are good. Liz Taylor knows it, the President knows it, the bum knows it, and you know it.”
Steve Jobs talking about a trip to India when he was young: “It was one of the first times that I started to realize that maybe Thomas Edison did a lot more to improve the world than Karl Marx and Neem Karoli Baba put together.”
Best Coke. No, wait, I can’t decide…
Is the rule that every programmer’s conversation eventually trends to alcohol written or unwritten?
A more natural sorting order would be by popularity. Apple might want to prioritize United States, followed by United Kingdom and United Arab Emirates. Whereas a British newspaper may want to put United Kingdom first.
This makes me want HTML6 to have a tag (or attribute or whatever) for country selection with system support for defaulting according to user locale.
Doesn’t seem to work on Mac OS X (Darwin Kernel Version 13.4.0). Compiles with gcc but terminates with segfault 11
Are you going through the steps to generate asm that’s appropriate for OS X or just using the asm section from the article? The numbers for OS X’s syscalls probably differ from the Linux ones used in the article and the segment names in OS X’s Mach-O likely differ from those in Linux/ELF, perhaps among other differences.
The syscalls will definitely be a little different. In addition to write being 4 on OSX (instead of 1), there is the additional complexity of class-specifiers in the upper bits. The code in this article, though, should give you everything you need. https://filippo.io/making-system-calls-from-assembly-in-mac-os-x/
Sadly, the author doesn’t go into much detail. I think that Python gets it wrong.
Here’s the thing: I’m a visually oriented person. I imagine a stack as - well - a stack. If you ask me to draw a stack, I will draw it with the last frame on top.
If you asked me to write down an exception history, I’d write: “A while evaluating B while…”.
If you turn the whole thing upside down, it confuses the hell out of me.
Scrolling is cheap.
Also, bonus points for getting mad via garybernhardt, whom I really dislike, because he gets people to rant without engaging with the issue enough.
Last frame on top also keeps the last frames (usually by far the most important) in roughly the same early character locations in things like log records, allowing you to do things like easily group by the first hundred characters of a collection of stack traces to see what errors/exceptions you’re encountering with the most frequency.
I don’t believe there’s a “correct” way to print tracebacks. Neither method conveys more information than the other and they’re both cheap to implement. Sounds more like a religious thing to me.
I just argued why the “traditional” method conveys information better for me (not more, but better). It probably does better for others, too; that’s not religious, but a matter of preference. So… calling it a preference would be great :D.
Gnu Hurd it is
Well, so one of my Berlin Rust Hack & Learn regulars is porting rustc to Gnu Hurd. I can switch soon, year of the desktop is 2109.
The fact that I can’t tell if this is a joke or a typo makes it a better joke.
Both. I made the typo and decided it’s too good to be fixed.
If I remember correctly, Haiku also has a microkernel.
I thought that BeOS was microkernel based on what so many said. waddlespash of Haiku countered me saying it wasn’t. That discussion is here.
Haiku has a hybrid kernel, like Mac OS X or Windows NT.
QNX, Minix 3, or Genode get you more mileage. At least two have desktop environments, too. I’m not sure about Minix 3 but did find this picture.
Don’t MacOS and iOS both use variants of the Mach microkernel?
They’re what’s called hybrid kernels. They have too much running in kernel space to really qualify as microkernels. Using Mach was probably a mistake. It’s the microkernel whose inefficient design created the misconceptions we’ve been countering for a long time. Plus, if you have that much in the kernel, you might as well just use a well-organized, monolithic design.
That’s what I thought for a long time. CompSci work on both hardware and software has created many new methods that might have implications for hybrid designs. Micro vs. something in between vs. monolithic is worth rethinking hard these days.
That narrative makes it sound like they took Mach and added BSD back in until it was ready, when in fact Mach started as an object-oriented kernel with an in-kernel BSD personality, and that was the kernel NeXT took, along with CMU developer and Mach lead Avie Tevanian.
That was Mach 2.5. Mach 3.0 was the first microkernel version of Mach, and that’s the one GNU Mach is based on. Some code changes were backported to the XNU and OSFMK kernels from Mach 3.0, but they were always designed and implemented as full BSD kernels with object-oriented IPC, virtual memory management and multithreading.
Yeah, I didn’t study the development of Mach. Thanks for filling in those details. That they tried to trim a bigger OS into a microkernel makes its failure even more likely.
I don’t follow the reasoning; what failed? They didn’t fail to make a microkernel BSD, as Mach 3 is that. They didn’t fail to get adoption, and indeed it’s easier when you’re compatible with an existing system.
They failed in many ways:
Little adoption. XNU is not Mach but incorporates it. Whereas the Windows, Linux, and BSD kernels are used directly by large installed bases.
So slow as a microkernel that people wanting microkernels went with other designs.
Less reliable than some alternatives under fault conditions.
Less maintainable, such as easy swaps of modules, than L4 and KeyKOS-based systems.
Due to its complexity, every attempt to secure it failed. Reading about Trusted Mach, DTMach, DTOS, etc is when I first saw it. All they did was talk trash about the problems they had analyzing and verifying it vs other systems of the time like STOP, GEMSOS and LOCK.
So, it was objectively worse than competing designs then and later in many attributes. It was too complex, too slow, and not as reliable as competitors like QNX. It couldn’t be secured to high assurance either ever or for a long time. So, it was a failure compared to them. It was a success if the goal was to generate research papers/funding, give people ideas, and make code someone might randomly mix with other code to create a commercial product.
All depends on viewpoint of or requirements for OS you’re selecting. It failed mine. Microkernels + isolated applications + user-mode Linux are currently best fit for my combined requirements. OKL4, INTEGRITY-178B, LynxSecure, and GenodeOS are examples implementing that model.
Yes, but with most of a BSD kernel stuck on and running in the same address space. https://en.wikipedia.org/wiki/XNU