Threads for prez

    1. 3

      Does anyone here use Brave? And what do you think about it?

      I’m a happy Brave user. It’s giving me the slickness and “speed” of Chrome without all the tracking.

      1. 1

        Same. I switched a few years ago after seeing how well Brave scored on the browser privacy tests and have had no reason to switch. I still use Firefox occasionally, but my daily driver is Brave and it’s worked well.

      2. 1

        It’s my backup. There are sites where the author only developed for Blink/V8 *grumble* & others where I need to drop my security features a notch & it’s often less of a futz to just open Chromium (sadly, this is often e-commerce sites that are running who knows what).

    2. 1

      Mainly posting this since I am curious if anyone here has any experience with this tool? It looks pretty nice to me.

      1. 3

        I found it to be very pleasant to use for larger projects compared to the alternatives (cmake makes me physically sick). There’s even support for C++20 modules, although I haven’t tried that feature yet. Nowadays I use cargo or POSIX makefiles (for smaller C projects), so the time I spent with xmake was brief.

        If you’re on Void Linux, there’s an open PR to add xmake as a package.

        1. 1

          I just tried it. Seems pretty great, was quite easy to get my project up and running with it. Will stick with cmake for this project, however, mostly because of better IDE support and such.

          1. 2

            xmake also provides VS IDE/VSCode/CLion/IDEA/QtCreator/Sublime plugins.

            1. 1

              Oh that’s pretty cool. Maybe I’ll switch to it eventually.

              Edit: Well I did now and having the cmake file generation makes xmake usable with pretty much anything. I wish I had started using this before!

    3. 1

      Then, connect from the source device as normal: $ ssh root@100.100.100.100

      Aren’t some of the advantages of using tailscale ssh lost if they only offer a server, not a client? I would appreciate some more technical detail.

      1. 1

        Do you have a specific loss in mind? My impression is that generally the design is that the client part is transparent for existing apps, and their customization happens mostly on the server side, where there is less variety in software.

        1. 1
          • double encryption
          • still having to deal with ssh keys
          1. 2

            You don’t need to deal with SSH keys unless I am massively misunderstanding the submission.

          2. 1

            You always have double encryption with ssh over wireguard (or any VPN). There are no client SSH keys because it knows what host you’re coming from and uses that information to match the ACL to grant access. As for the other technical details, their docs are pretty good and the code is open source.

    4. 9

      As far as I understand, this thing stores secrets as encrypted files on disk, a daemon sits on top of those files, and clients call the daemon to read and write. The daemon has a notion of being locked or unlocked, and must be unlocked before secrets can be accessed. But it seems like that state isn’t per client or per session but rather global, meaning client A can unlock the daemon and disconnect, and then client B can read secrets without needing to unlock?

      If that’s the right reading, I don’t see how it makes sense. But maybe it’s not the right reading.

      1. 2

        https://git.sr.ht/~sircmpwn/himitsu/tree/master/item/cmd/himitsud/cmd.ha#L76 seems like the right reading… though honestly it seems like you could “quickfix” this by having a client pass in a random UUID as their ID (though you’d want to share that around for certain use cases…)

        This kinda reminds me of going in real deep into sudo and thinking about how that ends up working (well at least the standard plugin setup where it stores processes that successfully authenticated).

      2. 1

        client A can unlock the daemon and disconnect, and then client B can read secrets without needing to unlock?

        Not exactly. From what I understand, the daemon spawns a predefined “granting” program that the user must accept in order for the secret to be provided.

        So client A asks for a secret, and the user must first unlock the store with their passphrase. Then the passphrase (or rather, the key derived from it) is stored in memory by the daemon. When client B asks for a secret, the daemon spawns a dialog asking the user to grant client B access to that secret. If the user accepts, the daemon replies with the secret.

        This is IMO the same thing as ssh-agent or gpg-agent, which hold the key in memory and simply provide it to whoever can read the socket file.
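
        As a rough illustration of that agent model (this is not Himitsu’s actual wire protocol; the socket path and request format below are invented, and error handling is minimal), a client in C might look something like this:

        /* Hypothetical client of a secret-holding daemon: connect to its Unix
         * socket, send a request, read the reply. The daemon, not the client,
         * decides whether to prompt the user before answering. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <sys/un.h>

        int main(void) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            struct sockaddr_un sa = { .sun_family = AF_UNIX };
            strncpy(sa.sun_path, "/run/user/1000/secretd.sock", sizeof(sa.sun_path) - 1);
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
                perror("connect");  /* no daemon running, or no permission on the socket */
                return 1;
            }

            /* Made-up request format. */
            const char *req = "get proto=imap host=example.com\n";
            write(fd, req, strlen(req));

            char reply[4096];
            ssize_t n = read(fd, reply, sizeof(reply) - 1);
            if (n > 0) {
                reply[n] = '\0';
                printf("daemon replied: %s", reply);
            }
            close(fd);
            return 0;
        }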

        1. 1

          Not exactly. From what I understand, the daemon spawns a predefined “granting” program that the user must accept in order for the secret to be provided.

          Interesting. I’m not familiar with this architecture. Can you point me to the bit in Himitsu which does this? What does it mean for a user to “accept” a granting program?

          So client A asks for a secret, and the user must first unlock the store with their passphrase.

          Is a client the same as a user?

          When client B asks for a secret, the daemon spawns a dialog asking the user to grant client B access to that secret. If the user accepts, the daemon replies with the secret.

          What does it mean to say “the daemon spawns a dialog”? Does the daemon assume that all clients are on the same host?

          This is IMO the same thing as ssh-agent or gpg-agent, which hold the key in memory and simply provide it to whoever can read the socket file.

          Ah, okay, key bit is here, I guess:

          const sockpath = path::string(&buf);
          const sock = match (unix::listen(sockpath, net::sockflags::NOCLOEXEC)) {
          

          It binds only to a Unix domain socket, which by definition is only accessible from localhost. Then I guess we’re at the second branch of my confusion block ;) in a sibling comment, namely

          If connections can only come from localhost, then I can’t quite see why you’d use a client-server architecture in the first place — AFAICT it would be simpler and safer to operate on the filesystem directly.

          Or: what makes the ssh-agent architectural model for secret management the best one?

          1. 2

            Check out the himitsu-prompter(5) manpage for info on the granting program. I didn’t use it at all; I just read all the documentation because I find it quite interesting. To answer your questions: the daemon is configured to spawn a program, the prompter, whenever a client (external program) requests access to a secret. This program must ask the user for permission to provide the secret (basically a grant/deny dialog window). Once access is granted, the requesting client gets a reply from the server.

            A client is any application that can write to the Unix socket created by the daemon (permission is thereby granted by the user spawning the himitsu daemon).

            Regarding the Unix socket, my guess is that the project is aiming toward eventually letting the program be used over the network via TCP, given that it’s heavily inspired by the factotum server on Plan 9 (which is basically the same thing, but over the net via the 9P protocol).

            1. 1

              the daemon is configured to spawn a program, the prompter, whenever a client (external program) requests access to a secret. This program must ask the user for permission to provide the secret (basically a grant/deny dialog window)

              If the daemon accepts requests for secrets on a Unix domain socket, then there is no way for it to know that a dialog window is observable by the client. The client can be on a different host, and its requests shuttled thru a proxy that listens on TCP and sends to the local Unix socket.

              If “locked/unlocked” is a state that’s per-session, or per-connection, then no problem! But I don’t see anything which indicates that’s the case. It seems to be a state that’s per-daemon, which I understand as a singleton per host. Is that true?
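
              For illustration, such a proxy is tiny: roughly the sketch below (the port, socket path and error handling are invented and minimal), or a one-liner along the lines of socat TCP-LISTEN:7777,fork UNIX-CONNECT:/run/user/1000/secretd.sock.

              /* Toy TCP -> Unix-socket forwarder: anyone who can reach TCP port
               * 7777 now effectively talks to the "local only" daemon. */
              #include <poll.h>
              #include <stdio.h>
              #include <stdlib.h>
              #include <string.h>
              #include <unistd.h>
              #include <arpa/inet.h>
              #include <netinet/in.h>
              #include <sys/socket.h>
              #include <sys/un.h>

              static int dial_unix(const char *path) {
                  int fd = socket(AF_UNIX, SOCK_STREAM, 0);
                  struct sockaddr_un sa = { .sun_family = AF_UNIX };
                  strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
                  if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
                      perror("connect");
                      exit(1);
                  }
                  return fd;
              }

              int main(void) {
                  int lfd = socket(AF_INET, SOCK_STREAM, 0);
                  struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(7777) };
                  addr.sin_addr.s_addr = htonl(INADDR_ANY);
                  bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
                  listen(lfd, 1);

                  int cfd = accept(lfd, NULL, NULL);                  /* remote client, over TCP */
                  int ufd = dial_unix("/run/user/1000/secretd.sock"); /* the "local only" daemon */

                  struct pollfd pfds[2] = { { .fd = cfd, .events = POLLIN },
                                            { .fd = ufd, .events = POLLIN } };
                  char buf[4096];
                  for (;;) {
                      if (poll(pfds, 2, -1) < 0)
                          break;
                      for (int i = 0; i < 2; i++) {
                          if (!(pfds[i].revents & POLLIN))
                              continue;
                          ssize_t n = read(pfds[i].fd, buf, sizeof(buf));
                          if (n <= 0)
                              return 0;                   /* one side closed; stop   */
                          write(pfds[1 - i].fd, buf, n);  /* shuttle bytes across    */
                      }
                  }
                  return 0;
              }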

              1. 2

                The client (say, a mail client that needs access to your IMAP credentials) performs the request via the Unix socket. Upon receiving that request, the daemon itself spawns a runtime-configured program (say, /usr/local/bin/himitsu-prompter) and talks to it over stdin/stdout (not a Unix socket this time). Which means that the dialog must run on the same host as the daemon. Then this program can either spawn a dialog window, send a push request to a phone, or even ask for a fingerprint check to confirm acceptance by the user. If the program (which is a child of the daemon) returns 0, then the request is accepted, and the daemon delivers the secret to the requesting program (the mail client here). Otherwise, the request is denied.

                You’re right about the store state though, it is locked per-daemon. There can be multiple daemons running on the same host though (like, one per user). There is no knowledge of “session” here, which could be a problem when spawning the prompter program, for example (e.g., to retrieve the $DISPLAY variable of the current user). But I’m confident that improvements will be made over time, like a session-bound prompter for example.
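
                To make the spawn-and-check-the-exit-status part concrete, here is a rough sketch in C (the real daemon is written in Hare and speaks a richer stdin/stdout protocol to the prompter, per himitsu-prompter(5); the prompter path and message format here are invented):

                /* Sketch of the consent flow: spawn a prompter, describe the request
                 * on its stdin, and treat exit status 0 as "user granted access". */
                #include <stdio.h>
                #include <unistd.h>
                #include <sys/wait.h>

                /* Returns 1 if the user approved the request, 0 otherwise. */
                static int user_consents(const char *client, const char *key) {
                    int pipefd[2];
                    if (pipe(pipefd) < 0)
                        return 0;

                    pid_t pid = fork();
                    if (pid < 0)
                        return 0;

                    if (pid == 0) {                        /* child: becomes the prompter  */
                        dup2(pipefd[0], STDIN_FILENO);     /* request arrives on stdin     */
                        close(pipefd[0]);
                        close(pipefd[1]);
                        execl("/usr/local/bin/secret-prompter", "secret-prompter", (char *)NULL);
                        _exit(127);                        /* exec failed: deny by default */
                    }

                    /* parent (the daemon): describe the request, then await the verdict */
                    close(pipefd[0]);
                    dprintf(pipefd[1], "client=%s key=%s\n", client, key);
                    close(pipefd[1]);

                    int status;
                    if (waitpid(pid, &status, 0) < 0)
                        return 0;
                    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
                }

                int main(void) {
                    if (user_consents("mail-client", "proto=imap host=example.com"))
                        puts("access granted: reply with the secret");
                    else
                        puts("access denied");
                    return 0;
                }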

                1. 1

                  Thanks a bunch, this was a great explanation and I learned some things I didn’t know before.

                  If I understand correctly, the daemon receives requests for secrets from clients that connect over a Unix domain socket. Requests cause the daemon to spawn a child process, which it communicates with over stdin/stdout. The daemon will issue commands, basically requests, to the child process, based on the details of the client request, as well as its own internal “locked” or “unlocked” state. The child process is assumed to communicate with “the user” to get authentication details like passwords, and a successful return code is treated as consent by “the user” to authorize the client request.

                  If that’s basically right, then I get it now. It’s a secrets storage manager — for desktop users, managing secrets on the local filesystem. That’s fine! But when I read “secrets storage manager” I think HashiCorp Vault, or maybe Bitwarden; definitely not ssh-agent ;) which was the root cause of my confusion.

                  edit: I guess I’m still confused by the design, though. Interacting with this system requires clients to speak the IPC protocol, which requires code changes in the client. If the receiving daemon is only ever accessible on a Unix domain socket, and therefore that clients will only ever connect to servers on localhost, then, by definition, the clients should have access to the same filesystem as the daemon, right? In that case, it’s not clear to me what value the intermediating daemon is providing. Couldn’t the clients just as easily import some package that did the work of the daemon, spawning the prompter child process and reading the secret data directly from the filesystem, themselves? I guess I see some value in that the daemon architecture avoids the need for programming-language-specific client libraries… is there something more?

                  1. 2

                    The process you describe is correct. This is indeed very different from Vault. It’s more of a password manager that clients can interact with to fill in login credentials.

                    Regarding your idea about using a client library, this would cause a major issue: the client would then have the ability to unlock and read the full keystore, as you’d provide the master password to every single application and trust them to only read the secrets they need. This would also require providing the master password to each new application, as there wouldn’t be a « master process » running to keep the keystore unlocked (or rather, in the « softlock » state, as the documentation puts it). And as I stated earlier, I can see this program moving toward using TCP as well for client requests, given its similarities to factotum(4). The latter is an authentication daemon that can be queried to authenticate users over a network, like you’d query an LDAP server for example.
                    I think this would require a bunch of changes to the daemon though to run in a standalone mode, like TLS encryption over TCP, and possibly the ability to « switch » from one keystore to another to be able to provide secrets for multiple users. This is risky though, so I think that for now, usage in local mode only is more than enough for an initial release.

      3. 1

        My current setup with pass and GPG behaves in the same way. Why is this a problem, in your opinion?

        1. 4

          Say I have two sandboxed applications, one of which I want to grant keyring access to and the other of which I don’t. Perhaps the second doesn’t have any legitimate reason for keyring access and it asking for keyring access is going to make me suspect it’s compromised.

          It would be useful to be able to gate access for these apps differently, without the legitimate app unlocking the keyring “for” the illegitimate app.

          1. 1

            This is exactly how it works. Applications do not get granted keyring access directly; they query the daemon, which then asks the user for permission to provide access to a specific key, for a specific application. This is referred to as « softlock » mode in the documentation. So if an illegitimate application requests access to a secret, it won’t get anything without the user’s approval.

        2. 3

          What prevents an arbitrary third party from connecting to the relevant server, issuing a request (without any sort of authentication or authorization) and then receiving sensitive information?

          If connections can come from multiple hosts, then I can’t quite see how a single shared security context for all of those users isn’t a fatal security risk. If connections can only come from localhost, then I can’t quite see why you’d use a client-server architecture in the first place — AFAICT it would be simpler and safer to operate on the filesystem directly.

    5. 12

      The crucial part of this is how the password / master key and decrypted secrets are kept secure in memory. I hope the daemon at least stores secrets in pinned RAM and zeroes out memory when it’s freed. Are there mechanisms that keep other processes like debuggers from being able to inspect the daemon’s address space?

      (I’m not familiar with Unix key managers in general, just with Apple’s Keychain, which has pretty tight integration with the kernel and hardware trust module to keep it secure.)
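
      For what it’s worth, the usual Linux building blocks for this look roughly like the sketch below (whether himitsu uses them, I don’t know): mlock keeps the secret’s pages out of swap, PR_SET_DUMPABLE disables core dumps and, under common ptrace policies, attaching from other unprivileged processes, and explicit_bzero scrubs the buffer without being optimized away.

      /* Sketch only: pin, protect, use, scrub. Error handling is minimal and
       * explicit_bzero needs glibc >= 2.25 (it also exists on the BSDs). */
      #include <stdlib.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <sys/prctl.h>

      #define SECRET_LEN 64

      int main(void) {
          /* Refuse to produce core dumps; with default ptrace settings this also
           * makes it harder for other same-user processes to attach. */
          prctl(PR_SET_DUMPABLE, 0);

          /* Keep the secret's pages resident in RAM so they never hit swap. */
          char *secret = malloc(SECRET_LEN);
          if (secret == NULL || mlock(secret, SECRET_LEN) != 0)
              return 1;

          /* ... derive the key into `secret` and use it ... */

          /* Scrub before releasing; a plain memset here might be optimized out. */
          explicit_bzero(secret, SECRET_LEN);
          munlock(secret, SECRET_LEN);
          free(secret);
          return 0;
      }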

      1. 23

        There is a discussion on LWN about this between the author and mjg59: https://lwn.net/Articles/893327/

        1. 36

          Welp, that’s Drew in a nutshell. He’s a very productive and innovative programmer doing fascinating & crucial work, and also a dick. I keep hoping he’ll tone down his confrontational tone, because I’m a fan of his work, but his stuff won’t last or be widely adopted if he can’t build a strong community around it. Sadly his behavior never changes, and it’s always driving people away who might otherwise be receptive to his projects and messages.

          Here it comes with the additional downside that he can’t process legitimate criticism, which will interfere with his project being as good as it could be.

          For anyone who doesn’t feel like reading the thread, mjg59 points out that the security feature for storing keys securely (keeping them out of memory space) only works on Linux, yet Hare works on other operating systems (like BSD). Drew considers this a feature, not a bug, calling it opportunistic improvements in security. Various people suggest it would be better to refuse to do the thing if it isn’t secure, that opportunistic improvements allow adversaries to target systems that lack the security feature, and it’s very hard for end users to know if a programmer used the library correctly (in this case, only on Linux). The conversation doesn’t really proceed further, in part because Drew calls people asking him to engage with mjg59 and/or his criticisms “hero worship”.

          1. 20

            Thanks very much, I have the feeling you’ve just saved me half an hour of stressful reading. :)

          2. 6

            I think you are being far ruder by calling someone ‘a dick’ on a forum where they can’t defend themselves. Probably better to say nothing.

            1. 9

              In case there’s some variation of English slang causing confusion here, I meant “dick” as shorthand for “not careful with the feelings of others”. I think this is just objectively true, an accurate description of his actions, or the pattern of his actions over time.

              But you raise a valid point: I could have used more polite language, which could improve clarity, and been more gentle with my tone, at the cost of some emotional content. I think often in communication there is a conflict between genuinely communicating your emotions as you’re feeling them, versus realizing that your current emotions may be unhelpful and taking time until you can communicate something else genuinely instead.

            2. 2

              Why can’t he defend himself here?

              https://lobste.rs/u/ddevault oh

          3. 5

            I’m normally pretty biased against Drew for the same reasons, but it seems to me like mjg59 is the aggressor here.

            Drew explained his rationale, and then more-or-less said “let’s agree to disagree on this.” However rather than letting it go, everyone just kept pushing, stating the same points over and over in an incredibly harsh and disrespectful tone.

            They all obviously had some valid concerns (which I agree with), but in this context it’s borderline trolling, and Drew handled it fairly well given the circumstances.

            1. 8

              Maybe it’s ethically derelict to build and release a language which regresses the state of the art in memory safety.

              1. 3

                Everything’s a tradeoff. Rust’s borrow checker is one tool of many for helping programmers write correct code, not a moral imperative for all new systems languages. Plenty of thoughtful programmers are skeptical about the effectiveness of Rust’s approach to memory safety, and the cost of that approach with regard to other things that are important, like comprehensibility. For example, see this HN thread from Ron Pressler. Maybe he’s right; maybe he’s wrong. My point is that the question is by no means settled enough that a language designer rejecting Rust’s approach to memory safety should be considered ethically derelict.

          4. 3

            but his stuff won’t last or be widely adopted if he can’t build a strong community around it. Sadly his behavior never changes, and it’s always driving people away who might otherwise be receptive to his projects and messages.

            And yet: https://drewdevault.com/2022/03/14/It-takes-a-village.html

            I hope to one day be as successful at building a community.

            1. 12

              This is a fair point. Drew’s projects have communities built around them, bigger and more cohesive than anything I’ve built, for sure. Maybe that’s good enough, Drew has hit his goals, and he can afford to antagonize whomever he comes across on the internet; perhaps his work has enough reach, and wouldn’t benefit from attracting more or different kinds of people.

              I’m reminded of an old article about Usain Bolt, the fastest man in the world (still? Certainly when it was written), and how in one famous record-setting race, he turned around, saw that no one else was close to him, and coasted the rest of the way to the finish line. The piece suggested that this was representative of his approach to running in general. It then asked (as many have before and since) how fast could Usain Bolt run if he actually tried? https://www.esquire.com/sports/a7058/usain-bolt-bio-0410/

              Well, how much community could Drew build if he, like, stepped away from the keyboard for a few minutes every time he was about to flame, insult, or even threaten people? Does it matter?

              In actuality, we know that Drew has not hit all of his goals, and he is not entirely happy with the status quo. https://drewdevault.com/2022/05/30/bleh.html Drew has been so abrasive to so many people that now even members of his communities who are simply using his stuff get grief for being willing to work with him. He’s unhappy about that, he says he’s working to improve, and he asks for another chance. He recognizes a problem and claims to want to address it. But every time I look at new work from him, there’s new examples of him being a dick.

              In other words, he’s already successful in building community, yes, but I think his abrasiveness is the biggest obstacle to further improvements, and Drew might even agree with that statement.

              1. 7

                If people are harassing his users, perhaps blaming the harassers rather than the victims is in order.

                1. 2

                  Harassing them isn’t good. But I get the point that you try to distance yourself from people who defend (or introduce) the project of a person you really don’t want to get involved with.

          5. 1

            Oh, it’s the Hare guy. I feel like he’s actively harmful to the image of whatever project employs his aid, and just having him there is detrimental regardless of his technical ability.

            1. 1

              Do you know him?

              1. 5

                I know his opinions that get loudly posted on every message board such as here (before he was banned for his self-promotion) and reddit, in addition to his bad behavior such as what is showcased here.

                In short, I would never willingly use his products or work with him after these exposures 🤷🏽‍♂️

                1. 4

                  I know his opinions that get loudly posted on every message board such as here (before he was banned for his self-promotion) and reddit, in addition to his bad behavior such as what is showcased here.

                  And yet, Torvalds got a pass for many, many years. The man has a history of terrible public statements, e.g., referring to OpenBSD developers as masturbating monkeys. He has been downright abusive to many, including contributors to his projects. Examples are numerous. I’m more than happy to cite. I challenge anyone to show me just one example of where Drew DeVault has displayed these levels of wanton cruelty.

                  One could be forgiven for thinking that there might be some double standards in the free software community. Torvalds is a darling of the corporate types, and as another horrible, cruel person once said: when you’re a star, they let you do it. You can do anything…

        2. 7

          What a rude comment thread. The module that people are piling in on Drew about is clearly documented to do what it does (with the caveat that it doesn’t say which kernels it provides security on).

          The readme is 4 paragraphs long and the code is like 50 low-density lines, it’s not like this info is hidden.

          1. 16

            One shouldn’t be required to read the program’s source, and the docs of the APIs it calls, to know whether it’s fit for purpose. Especially when this is a security-related feature.

            1. 5

              Even more: If such security can’t be guaranteed I may as well store my secrets in plaintext, that way I don’t get a false sense of security. (At the end of the day I’ll be vulnerable every time I decrypt my secrets. And if they are ever flushed to disk I’m vulnerable forever.)

            2. 2

              The comment thread isn’t about the application himitsu, though. The starting post in that comment thread specifically asks about the API and links to it:

              https://git.sr.ht/~sircmpwn/hare/tree/master/item/crypto/keystore - as an app developer targeting the standard library, how do I know whether or not my keys are going to be stored securely?

              The API is specifically documented as being “low-level” and “not recommended for non-experts”. In the context of talking about this API, I think that:

              1. It is reasonable to expect the developer-users of this API to read the documentation and if it is not clear, the source code.
              2. mjg59’s comments are kind of overblown, and other people in the thread are worse. The API doesn’t do what mjg59 would prefer, and maybe mjg59’s core suggestion of refusing to store a value if it cannot be stored securely is better, but I think it is definitely arguable and either way the tone was very heavy handed from the beginning.

              Personally, I think that there should be an API for testing if values can be stored securely, but it doesn’t necessarily have to live in this module for this thing to be useful. Maybe it makes more sense to group those capability-reporting functions somewhere else.

              1. 1

                OK, that makes more sense; I didn’t know the comments were about the API.

          2. 21

            It’s really funny how different your impression of that thread is from mine. I think one comment summed it up well:

            You don’t have to keep engaging with mjg59 if you don’t want to, but belittling people who agree with them as mere hero worshipers is beyond the pale. Remember that in asking us to use your language, you’re asking us to also trust you in your stewardship of that language and how you’ll respond to our concerns and needs as the maintainer. Seeing you attack people so aggressively out of the gate is not a confidence boosting start.

            1. 4

              Yeah, obviously I disagree and I don’t think it’s worth talking about much further. I thought Drew was fairly courteous, and the comment he called out as hero worship certainly seems hyperbolic to me:

              Someone who does not feel intensely motivated to learn from mjg59’s freely offered expertise has no legitimate claim on anyone’s attention.

              Like, that’s a ridiculous claim. We can call attention to someone’s expertise without saying such silly things.

      2. 1

        Are there mechanisms that keep other processes like debuggers from being able to inspect the daemon’s address space?

        I was under the impression (from when I last did some game hacking) that you need root to read another process’s memory (Linux). So it should be fine? Now that you mention debuggers, I don’t remember having to escalate to root for gdb to work - I wonder what the reason is.

        1. 3

          This is true only for other users’ processes. You can gcore your own process just fine, and this is part of the problem - if you’re on an effectively single-user system, like many are, there’s no protection. All your programs are running as the same user anyway.

        2. 1

          Is gdb setuid to root, or does it use a helper tool that is?

          I know on macOS you need to enter an admin password to authorize Xcode / lldb the first time you start a debugger after rebooting. And there are processes that cannot be attached to even if you run a debugger as root.

          1. 1

            The second constraint is imposed by processes protected by macOS System Integrity Protection. The first I believe has to do with entitlements to attach to another process, but that’s just off the top of my head and I could be wrong.

            Regardless, Linux has neither of these protections. Debuggers run as normal user programs and do not require special authorization.

    6. 4

      Yes, you read that right; I didn’t get the subtyping direction backwards. A polymorphic type is a subtype of a more specialized type.

      I cannot seem to wrap my head around this. Why is this the case? And in which type systems (or is it something that’s independent of the type system)?

      Am I correct in remembering that in the context of Hindley-Milner, we usually say id : Text -> Text “is an instantiation of” id : 'a -> 'a, and thus the direction is the other way round?

      1. 11

        Going back to the definition of a subtype: the type A is a subtype of another type B if every expression of type A is also an expression of type B.

        So in this example an expression of type forall (a : Type) . a -> a is a subtype of Text -> Text, because every expression of type forall (a : Type) . a -> a is also an expression of type Text -> Text. Whether or not either type is instantiated from the other type does not come into play.
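
        Put as a rule (this is just the standard presentation; the post’s notation may differ), instantiating the quantifier yields a supertype:

        \[
          \frac{\Gamma \vdash e : \forall a.\ \tau}
               {\Gamma \vdash e : \tau[T/a]}
          \qquad\text{so in particular}\qquad
          (\forall a.\ a \to a) \;<:\; \text{Text} \to \text{Text}
        \]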

        1. 3

          Thank you, now it clicked.

          So I’m guessing saying something like “we can instantiate the type of id to Text -> Text” really just means that whatever we instantiate to is a valid supertype? Or does it have a different meaning (it being “instantiate”, or whatever word people use when talking about turning polymorphic types into concrete ones), since you said it does not come into play?

          1. 2

            Yes, if you instantiate a universally quantified type then the result is a valid supertype.

            I should not have said “does not come into play”. Rather, I should have said “the definition of the subtype relation does not explicitly reference instantiation”, but you can derive that an instantiated type is a supertype as a consequence of the definition of subtypes.

      2. 3

        Classifying expressions is the mechanism of a type system, but its ultimate purpose is to constrain values. In particular, subtyping ensures that values can be used in legal ways, even when the relevant types do not match exactly.

        So code which expects a value of type Text -> Text can be given the polymorphic identity function and nothing will go wrong at runtime. Thus the subtyping relation is crafted to make the polymorphic identity type be a subtype of its concrete instantiation. The converse is not true; a context which holds a value of type a -> a is free to pass it non-Text values, which would blow up when passed to a Text-manipulating function.

        This might also give some intuition for function type contravariance. If we have a context (e.g. a function parameter) of type H -> G and we pass it a value of type A -> B, the flow of values at runtime will be H ==> A (the caller is constrained by the context to pass a value of type H, which flows to the bound variable of the function we pass) and B ==> G (for the return value).
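
        In rule form (again the standard presentation; notations vary), the argument position flips the relation while the result position preserves it, matching the H ==> A and B ==> G flows above:

        \[
          \frac{H <: A \qquad B <: G}
               {A \to B \;<:\; H \to G}
        \]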

      1. 1

        ahhhh exactly, thanks! :)

    7. 1

      I sometimes use vis when dealing with binary/corrupted files in my terminal text editor; moving around is hard when some characters aren’t visible.

      A modern version of this utility that preserves unicode symbols would be handy.

      1. 1

        This is also useful to view spaces vs tabs, or the type of newlines. Undisguised self-advertising: I made 2 related programs:

        • vhd, the Visual HexDump, which is sort of like hexdump but respects newlines, so not for fully binary files
        • univisible, which can compose/decompose Unicode characters (NFKC/NFKD) and verbosely display every code point
    8. 4

      I value external inputs such as technical discussions and user experience returns immensely

      I, for one, really hope this iteration of s6 will not depend on the author’s personal collection of useful functions / djb-style NIH libc reimplementation (“skalibs”).

      Otherwise I have nothing but appreciation for the great amount of work put into s6. s6 is art. Together with apkv3 I will finally have enough reasons to switch all my machines from Void to Alpine.

      1. 2

        hope this iteration of s6 will not depend on the author’s personal collection of useful functions / djb-style NIH libc reimplementation (“skalibs”)

        Why not? Many parts of libc are fraught.

        1. 1

          Because I don’t want every application author’s take on which parts are fraught to end up on my system in the form of a shared library (distributions still haven’t figured out that static linking is the way to go).

      2. 2

        s6 is art

        Maybe I’m not good at appreciating art, but every time I look at s6 I’m like “why is this so much more complex than runit??”

    9. 4

      TempleOS only runs one app at a time, the logic being that a human can only concentrate on one thing at a time. There is no need for multitasking, but sometimes an app would benefit from additional processing power, so TempleOS can do multi-core processing with a master-slave model: the main CPU controls the other CPUs and hands out tasks to them.

      And while we think about crazy OS ideas: Why not run multiple independent kernels - one on every CPU? So while a rogue process could corrupt and take down one kernel, the rest of the system would continue working.

      1. 6

        TempleOS can do multi-core processing with a master-slave model: the main CPU controls the other CPUs and hands out tasks to them.

        Classic Mac OS did this.

        And while we think about crazy OS ideas: Why not run multiple independent kernels - one on every CPU? So while a rogue process could corrupt and take down one kernel, the rest of the system would continue working.

        This kinda exists already; galaxies in VMS achieve virtualization this way, I believe. Running multiple OS instances could be useful for the Erlang OS mentioned though, especially since the concepts could make it transparent.

        1. 2

          Classic Mac OS did this.

          I forgot that the MDD dual G4 models could still boot Mac OS 9. I’m reasonably certain those shipped after OS X came out of beta, though, and could only boot OS 9 because it was (only slightly) too early to stop that.

          Were there other multi-CPU Macs that booted classic Mac OS?

          1. 7

            They made a bunch of SMP add-ons and even systems in the 90s. It was pretty much entirely to speed up Gaussian blurs in Photoshop.

      2. 2

        You are describing unikernels.

        From my perspective it’s less about the human and more about the fact that most companies don’t run one computer or even one database - they run thousands. We are long past the “one operating system / computer” phase. Even the smallest companies are load balancing their webservers amongst multiple vms. We need new operating systems to facilitate this.

        1. 3

          I think on a personal level, nobody has only one device anymore (desktop/laptop/phone/tablet/smartwatch/e-reader/smart-TV/home-automation-stuff/home-server-possibly) and we need a good unified system for handling this, instead of pretending they’re all islands that just happen to communicate sometimes, with integration an afterthought.

      3. 2

        Why not run multiple independent kernels - one on every CPU?

        This has been explored in research. Check out http://www.barrelfish.org/

      4. 1

        And while we think about crazy OS ideas: Why not run multiple independent kernels - one on every CPU? So while a rogue process could corrupt and take down one kernel, the rest of the system would continue working.

        Check out rump kernels in NetBSD; probably not the same idea, but something similar can be achieved with them.

      5. 1

        And while we think about crazy OS ideas: Why not run multiple independent kernels - one on every CPU? So while a rogue process could corrupt and take down one kernel, the rest of the system would continue working.

        HydrOS did this (and it was a BEAM OS).