Awesome, this (for me) really helps to grok the capabilities of pipewire!
Not pipewire ;) Pipeworld
Yep :P
I’m loving the capabilities arcan brings to the world. One thing the Marvel movies got right is the user interfaces: I want Tony Stark OS to be the way we move applications around from screen to screen to device to device, and move data within those.
Arcan gets us closest, I think.
We only really lack the ‘move data’ part, and NLnet is sponsoring that in a new grant.
I guess that means I win?
Actual image banner ads are a thing of the past? Huh.
Dunno, I have uBO set to block JS except for a specific set of sites.
I can’t remember seeing one in years.
Heroes of the internet! I hope this happens!
I appreciate that this html component lib has 0 dependencies. These days that’s a top priority for me when considering whether to use something.
I’ve come around to this given all the supply chain attacks and just general complexity of dependency trees. The one big exception, for me, is that I’m fine with a testing or assertion dev-time dependency.
Very excited to switch my setup to Lix
I made that switch last week, and it’s been very good (besides one bug that prevented my CI from working, since fixed). The CLI is much better and the repl experience has been great, too! Can recommend, it’s really smooth so far.
How are the repl and cli different?
Mainly some conveniences: a better pretty-printer in the repl, better error reporting, and nix flake update now takes an argument to specify the input to update… they have a pretty neat list of changes here: https://git.lix.systems/lix-project/lix/src/branch/main/doc/manual/rl-next
I’m also a Nix user. Why are you excited to switch to Lix?
It’s more like I’m excited to leave Nix. I was disappointed by how RFC98 fizzled and have been waiting for a principled fork ever since. The recent disagreement over Nix’s ties to weapons manufacturing only confirms the feeling that it’s time for me to go.
I found it very difficult to understand from the website what the exact differences to Nix are and why I would care. There is a lot of “getting started” but little that explains the difference.
I feel this way every time I use brew update instead of brew upgrade. I also seem to rarely get it right the first try.
So you knew exactly what I wanted, even gave me the command to do it instead, just didn’t do it!?!
I found another one yesterday that has always bugged me: cp somedir someotherdir:
cp: -r not specified; omitting directory 'somedir'
AHHH!!!
I’m a little surprised at the number of comments defending design decisions that have clearly frustrated at least one user who was vocal about it – which likely means there are plenty more that were frustrated but didn’t write anything about it.
I agree with the author: if software knows what I want to do but refuses to do it, that absolutely sucks.
If you type exit into a py3 repl, you don’t see <built-in function exit>, or a NameError like any other undefined variable, no; it special-case prints an admonishment. There’s no way to spin that where it’s user-friendly or a case of the repl behaving consistently.
I don’t accept things like Microsoft Word reflowing an entire document when an image is slightly moved; I don’t accept websites that disable right-click or pasting into text/password fields; and we shouldn’t accept developer tooling that’s hostile towards its users, either.
It was incredibly user-friendly the first time I tried to use the Python repl and didn’t understand how Python worked. In fact, I wish this warning was in vim, too.
I was surprised too - I think a lot of people have a weird form of stockholm syndrome about this stuff. Stockholm syndrome might carry too many other implications.. idk.. but I can’t think of a better term to describe the level of pain people put up with from what is basically tools for getting a job done..
Python has a turtle graphics module in its standard library. This should hint that Python’s baseline isn’t 10x developers but kids. To a degree, kids are more important users to Python than most other groups.
It’s not stockholm syndrome, it’s empathy for new users.
You just made my day! Some of the very first things I learned about programming were in the Logo turtle graphics environment on an Apple IIe.
What a nice blast from the past:
https://pasteboard.co/PYp1RB2ppkpO.png
My school’s tech class heavily used Logo (specifically MicroWorlds EX). In 5th grade I wrote a platformer game engine in it - everyone else wrote top-down maze type games, which is what I should have done. The syntax to declare a function is to <function name>, but I didn’t realize that it was supposed to look like a verb until later, so I named my main game loop function engine so the declaration was to engine, which was pretty silly. Also due to some technical limitation you had to manually type engine into the command window and hit Enter to start the game loop. Logo can’t take more than one keyboard input at once, so I had to use Q and E for side jumping but I ran out of time on the assignment and couldn’t redesign the levels, so W would make you jump a reasonable amount and Q/E would send you halfway across the entire screen.
We had to make a trailer in iMovie too. Other kids were in the tech room at lunch finishing their trailers while I was still working on my engine. I shot the trailer the night before it was due and edited it at lunch the day of, and when the project was over I specifically told my tech teacher never to let someone write a platformer in Logo again.
…but man was it fun, and man did I learn a lot.
One for the Hall of Shame: if you try and exit the scheme48 repl with ctrl-D, like most-any other prompt, you get:
Absolutely infuriating.
Have you ever gone all the way through?!
Once or twice out of spite, yep!
Scheme48 is so lovely in most other respects, one has to wonder why they decided on this particular behavior.
This seems like a great talk for beginners. My feedback is not aimed at beginners, but at Nix evangelists.
Because there were no other options at the time, Nix uses its own programming language also named Nix to define packages.
This doesn’t quite sound right to me. While Eelco’s thesis doesn’t directly refute this claim (say, on p69), it does explain that the Nix expression language is the simplest possible language which has all of the necessary features, and it also explains why each individual feature (laziness, recursive definitions, attrsets, functions, etc.) is necessary for defining packages. The thesis also explains why languages like Make are unsuited to the task.
A NixOS module is a function that takes the current state of the world and returns things to change in it.
This is perhaps too imperative to be true. NixOS configuration is generally monoidal, and often commutative (or configuring underlying systems which commute, e.g. networking.firewall.allowedTCPPorts, which is sorted by the iptables subsystem). It’s important to not give folks the false impression that adding a NixOS module is like adding a SysV init script.
The with keyword certainly could’ve been removed from the language without removing expressiveness. I’d even say the language would be better for it.
yeah a lot of good language design is saying no to features or syntactic decisions because of concerns that seem very abstract, because past experience has shown they have far-reaching negative impact
with is in that category and should be removed, imo. I find it super convenient but that does not outweigh the loss of referential transparency. there have been a few proposed replacements, some of which are kind of okay although nothing perfect, but at the end of the day it’s a damaging feature and even just removing it entirely ought to be on the table
it’s hard to do that when there’s existing code, but it’s possible, and it does pay dividends down the line
Right, I mean, I’m a known nix partisan as it were, but in my view I’d say that there is no better expression language for the thing nix is doing today, and there also wasn’t at the time it was new. :)
TBH if I was remaking Nix today, I’d probably use JavaScript as the expression language to make people’s lives easier
It’s basically JSON for all practical purposes. Also, you want to keep computations to a minimum; it is more of a data description language.
Functions are a super important part. Recursion is used heavily to do things like overrides. It’s not really JSON.
Sure, but these are written once somewhere deep inside lib, and then used declaratively. You won’t really be writing functions as you would in a regular PL, and I would argue that most package descriptions can actually be read by humans if you parse them more or less as you would a JSON, which is the important mental model.
Try it and see what happens. There are several projects like Nickel and Zilch already exploring the space, too.
I understand the impulse but I still feel it would be the wrong call. it’s not like it’s something really exotic, it uses infix and curly braces, heh.
Thanks for pointing this out!
Except it’s not like OpenBSD. OpenBSD is more than a set of features. It’s an approach to code quality, security, and design philosophy.
Granted, if I had to use Linux over a BSD variant, any distro that doesn’t use Systemd would be where I start looking. Alpine looks slick.
Pretty sure @gonzalo (of the OpenBSD project) is saying that Alpine has overlap on those very points you are making. :)
Whoops, yeah I must have missed that point 😅
It’s so wonderful to have that much free time to colossally waste on nothing
Hey, I’m still in school and on break, so I have nothing but time.
Besides, writing is fun!
People that try and degrade or belittle the things you do should always be ignored! If you had fun it wasn’t a waste of time! Bonus points if you learned something from it (and it seems like you did! :D)!
I enjoyed it.
Enjoy the free time while you have it, because it goes away… /new-parent-here
It seems you missed how much was achieved here. Darling would be a great environment to test Darwin builds without access to Mac hardware. It’s not viable right now, but wine was far from perfect in the past too.
But step 1 is to find the missing parts and issues and document them. This is valuable to people who will pick it up in the future.
Tell me you’re also a parent without telling me you’re also a parent /lol /also sigh
I wish every lang package manager / tool / whatever had a sumdb backing it.. I can’t tell you how many times I have seen a lib get re-published with the same exact version because of a “quick bug fix” or “oops.. I forgot to add…”..
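For anyone who hasn’t seen it: this is what Go’s go.sum gives you, backed by the public checksum database (sum.golang.org). A sketch of the format follows (the hashes are placeholders, not real values); go mod verify, or simply the next build, fails when a module’s contents no longer match the recorded hash.

```
github.com/some/lib v1.2.3 h1:PLACEHOLDERxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=
github.com/some/lib v1.2.3/go.mod h1:PLACEHOLDERyyyyyyyyyyyyyyyyyyyyyyyyyyyy=
```

So a library force-republished at the same version with different contents is rejected by the checksum match instead of being silently picked up.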
5+ years earlier .. https://arcan-fe.com/2018/04/25/towards-secure-system-graphics-arcan-and-openbsd/
@qbit added the wayland service transport to the port last fall, if I recall correctly; with twitter memory-holing things it’s hard to know.
I hope to have some spare time over the next six months to look a bit more seriously at Arcan. Do you have any pointers for people wanting to poke at the code?
https://arcan-fe.com/2018/10/31/walkthrough-writing-a-kmscon-console-like-window-manager-using-arcan
I’d go up to / including part 4 to understand enough of the server end, then work from the client side - https://github.com/letoram/arcan/blob/master/tests/frameservers/counter/counter.c
After the next release* I promised some people to finish the exercise series with the more advanced stuff, as quite a few have finished the beginner ones: https://github.com/letoram/arcan/wiki/Exercises
*a week or two away depending on my progress with getting Xarcan to allow old window managers to be reused
!! Exercises! sweet! I didn’t know about these :D - Thanks!
It was me, semarie@, who enabled it: https://github.com/openbsd/ports/commit/0150291b1a3b6581d1519c18d130d81b43e52d16 :D
I’m surprised that there are so many patches required for go on OpenBSD. Why can’t these be upstreamed? Is it because these would break the go compat promise?
Most of them are for other-architecture-support and are pending various changes upstream.
How would replacing bash with nushell play with bootstrapping of nix and nixpkgs? When comparing guix and nix, guix did quite a good job on that topic and there is really a minimal set of packages to build everything from scratch. I’m wondering if bringing Rust in, just to build nushell, just to build stdenv based on it, would make bootstrapping nearly impossible.
100% agree, this article completely elides this central question. Bash is semi-trivial to bootstrap!
Nixpkgs does not bootstrap rustc, we’re currently using a binary distribution: https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/compilers/rust/rustc.nix#L28
Adopting this as a default stdenv would require to push this massive bindist to the Nixpkgs bootstrap seed. That seed is already massive compared to what Guix has, I don’t think we want to degrade this further.
Rust is definitely source-bootstrappable; as a matter of fact, Guix manages to do it, and there’s no reason we can’t do the same. The bootstrap chain is pretty long though. On top of what we already bootstrap (gcc, openssl, etc.), we’d need to bootstrap llvm, mrustc, then rust 1.54 -> 55 -> 56 -> 57 -> 58 -> 60 -> 61 -> 62 -> 63 -> 64 -> 65.
So yeah, pushing this to stdenv would considerably degrade the bootstrap story in any case.
From my perspective, Bash is a local optimum, I personally wouldn’t change it, it’s certainly a good balance between a language that is easy to bootstrap and a good-enough expressiveness to express builds.
If we really want to move to something more modern, Oil could be a more serious contender, they seem to take bootstrapping seriously. There’s a drafted RFC wrt. Oil adoption.
[Edit]: The question is elided, but I don’t think the author expects this to replace stdenv; at least it’s not mentioned in the article. Don’t take this comment as an overwhelming negative “this project is worthless”. Cool hack!
This made me realize that Rust is fundamentally non-bootstrappable. It’s definitely going to produce a release every six weeks for quite a number of years, and Rust’s release N needs release N-1 to build, so the bootstrap chain, by design, grows quickly and linearly with time. So it seems that, in the limit, it is really a choice between:
Is there a reference interpreter, perhaps? I imagine that that can’t be a complete solution since LLVM is a dependency, but it would allow pure-Rust toolchains to periodically adjust their bootstraps, so to speak.
There is mrustc, which is written in C++ and allows you to bootstrap Rust. There is also a GCC Rust implementation in the works that will allow bootstrapping.
In my defense, I do use the term “experimental” several times and I don’t make any suggestion of replacing stdenv. I could be wrong, but I think that flakes are going to decentralize the Nix ecosystem quite a bit. While the Nixpkgs/stdenv relationship is seemingly ironclad, I don’t see why orgs or subsets of the Nix community couldn’t adopt alternative builders. Any given Nix closure can in principle have N builders involved; they’re all just producing filesystem state after all.
As for bootstrapping, yes, the cost of something like Nushell/Nuenv is certainly higher than Bash, but it’s worth considering that (a) you don’t need things like curl, jq, and coreutils and (b) one could imagine using Rust feature flags to produce lighter-weight distributions of Nushell that cut out things that aren’t germane to a Nix build environment (like DataFrames support).
Yes the new Oil C++ tarball is 375 kilobytes / 90K lines of compressed, readable C++ source :)
https://www.oilshell.org/release/0.14.2/
The resulting binary is about 1.3 MB now. I seem to recall that the nushell binary is something like 10 MB or 50 MB, which is typical for Rust binaries. rustc is probably much larger.
There was some debate about whether Oil’s code gen needs to be bootstrapped. That is possible, but lots of people didn’t seem to realize that the bash build in Nix is not.
It includes yacc’s generated output, which is less readable than most of Oil’s generated code.
Nushell is certainly larger! Although I would be curious how much smaller Nushell could be made. Given that it’s Rust, you could in principle use feature flags to bake in only those features that a Nix builder would be likely to use and leave out things like DataFrames support (which is quite likely the “heaviest” part of Nushell).
For sure it would make bootstrapping much harder on things like OpenBSD. Good luck if you are on an arch that doesn’t have rust or LLVM. That said, I don’t think this would replace stdenv.. for sure not any time soon!
Also the article does mention exactly what you are pointing out:
it’s nowhere near the range of systems that can run Bash.
My question is orthogonal to this, and maybe I should have specified what I mean by bootstrapping. It’s “how many things I have to build first, before I can have a working nixpkgs and build the things users ask for”. So if we assume that nushell runs wherever bash runs, how much more effort is it to build nushell (and rust and llvm) than bash? I would guess an order of magnitude more, thus really complicating the initial setup of nixpkgs (or at least getting them to install without any caches).
What’s your plan for sustainability? This is a gigantic project; how do I know it will be maintained 5-10 years from now?
I can’t know for sure, and I’ll take any tips/hints. (: I am using it for my own email, so at least there’s that incentive to keep it going. It would certainly help if there were more of us! I also want to keep the maintainer burden low. No separate website. Releasing is mostly just adding a tag. The tests should help keep the code base in working order. And I wondered early on how to keep all the standards/RFCs in my head, and decided to heavily cross-reference the code with the RFCs, which helped a lot. Also, email is not evolving at a high pace… Once functionality is working it may not require all that much ongoing development.
This looks really impressive. The only thing on the not-yet-implemented list that I would miss is Sieve support. Do you have some documentation on your privilege-separation model?
this is all one process. go is supposed to do a good part of the protection. i imagine resource (ab)use could be an issue: memory and file descriptors. i’m aware of openbsd privsep principles. would you have ideas on where separations would be good to have?
and about sieve: i’ve never used it. how does one use it with current mail stacks? from memory, i think it is a way to match messages and take action on them, like moving them to a mailbox, or possibly set flags? how does one configure the rules? just editing a text file on a server, in a web interface, or in a mail client?
go is supposed to do a good part of the protection
That protects you against most memory safety bugs (though Go is not memory safe in the presence of concurrency - data races on slices in objects shared between goroutines can break memory safety), but that doesn’t protect you against logic bugs. A lot of these can be prevented by threading a capability model through your system and respecting the principle of intentionality everywhere, but that doesn’t mean that the principle of least privilege is something to ignore. Mail servers are among the most aggressively attacked systems on the Internet so it’s a good idea to aggressively pursue both.
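To make that parenthetical concrete, here is a toy Go program (nothing to do with mox) where a race on a slice-typed variable can tear the slice header; go run -race flags it immediately:

```go
// Toy illustration only: a data race on a slice-typed variable can tear the
// slice header (pointer from one write, length from another) and turn into an
// out-of-bounds access. Run with the race detector: go run -race race.go
package main

import "time"

var shared []byte // written and read without synchronisation on purpose

func main() {
	small := make([]byte, 1)
	big := make([]byte, 1<<20)

	go func() {
		for {
			shared = small // racy write
			shared = big   // racy write
		}
	}()

	deadline := time.Now().Add(2 * time.Second)
	for time.Now().Before(deadline) {
		s := shared // racy read: header may be half-updated
		if len(s) > 0 {
			_ = s[len(s)-1] // can index small's backing array with big's length
		}
	}
}
```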
would you have ideas on where separations would be good to have?
At a minimum, I’d consider separating the pre- and post-authentication steps. If an attacker compromises the pre-auth process but doesn’t have valid credentials then they should find that they’ve compromised a completely unprivileged process.
The authentication should also then be a separate process. This may need some restricted filesystem access (or limited database connectivity), depending on how you store credentials (or possibly they’re loaded at startup, before the process drops privileges, and the auth process is restarted whenever they change - with a target deployment of <10 users, that should be fairly simple), but it shouldn’t be allowed to create any network or IPC connections, or access most of the local filesystem.
The post-auth process should be confined to being able to inherit the network connection created when it is started and having access only to the mail store for the specific user. This ensures that no bug in the code that communicates with a user can perform filesystem accesses for other users. If the backing store is a database, the same applies, just use the database’s ACLs instead.
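For concreteness, here is a minimal sketch of that hand-off in Go (nothing mox-specific; the worker path and uid/gid below are made up): the privileged parent accepts the connection, then runs an unprivileged worker that inherits only that one connection as fd 3.

```go
// Minimal sketch of pre-auth privilege separation: the parent keeps the
// listener and hands each accepted connection to a worker process that runs
// as a throwaway uid with only that connection (inherited as fd 3).
// In the worker, net.FileConn(os.NewFile(3, "conn")) recovers the connection.
package main

import (
	"log"
	"net"
	"os"
	"os/exec"
	"syscall"
)

func handleUnprivileged(conn *net.TCPConn) error {
	defer conn.Close()
	f, err := conn.File() // duplicate the fd so the child can inherit it
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("/usr/local/libexec/smtp-preauth") // hypothetical worker binary
	cmd.ExtraFiles = []*os.File{f}                          // becomes fd 3 in the child
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// hypothetical unprivileged uid/gid; the parent needs permission to switch
		Credential: &syscall.Credential{Uid: 2001, Gid: 2001},
	}
	return cmd.Run()
}

func main() {
	ln, err := net.Listen("tcp", ":2525")
	if err != nil {
		log.Fatal(err)
	}
	for {
		c, err := ln.Accept()
		if err != nil {
			continue
		}
		go func() {
			if err := handleUnprivileged(c.(*net.TCPConn)); err != nil {
				log.Print(err)
			}
		}()
	}
}
```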
Some of the other services would also benefit from being compartmentalised. For example, you have spam filtering. A significant proportion of spam emails are trying to ship malware to the user, and compromising the mail server is a great way of doing this (and may avoid the need to compromise the client). Even the simple case of exhausting resources so that the next spam email gets through the filters or all email processing stops needs to be in scope for this kind of threat model, so you probably want to process each inbound email in a separate process that returns a single value (spam probability) to the parent and runs with tight resource limits, so the worst that an attacker can do is push an email past the filter.
The component that does Let’s Encrypt / ACME things almost certainly needs to be isolated - anyone who compromises that can sign arbitrary private keys. I don’t know how much of ACME you’re implementing, mail servers often use the DNS-based variant since a mail server may be pointed to by MX records that it does not have an A record for. A thing that can create and update DNS records is a very high-value target.
Similarly, DKIM signing keys are high value (if they can be compromised then an attacker can send email that is indistinguishable from email that you sent). Signing should be done in a separate process that just does the signing, and so even an attacker who compromises a client connection can do an online attack but can’t exfiltrate the keys.
As I recall, Ben Laurie added support for Capsicum to the Go standard library some years back, so these kinds of things are fairly easy to add in Go programs.
This is just off the top of my head without thinking things through in too much detail. You can probably do a lot better understanding the shape of your code. I’d encourage you to think about three things:
What is your threat model? What is an attacker trying to do at any given point and what should you assume that they can do? This helps frame the next steps. For example, email is often used for password-reset links, so one attack to consider is someone intercepting a password reset link. What components do they need to compromise to get there? If they compromise one user, have they compromised all of them?
How do you enforce the principle of least privilege? For every component, what is the absolute minimum that it needs to have access to? Can you make that set smaller without losing functionality?
How do you enforce the principle of intentionality? If a component needs to have access to two bits of important state (for example, two users’ credentials or mail boxes, two different mail boxes, a mail box and a different folder on the system), then how do you ensure that it is exercising the one that it intends to?
The last high-profile Exchange bug was a violation of the Principle of Intentionality: Exchange had access to a system location, but intended to write a configuration file into the configuration-file directory and did not use anything vaguely like a capability system to prevent this. Capsicum and similar systems make this easy: you would have a directory descriptor for the configuration directory and use it with openat and so be unable to create a config file anywhere else.
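Here is a rough sketch of that idea in Go, using golang.org/x/sys/unix (the directory path and file name are made up). Note that openat by itself only anchors relative lookups at the directory descriptor; the confinement described above comes from running in capability mode (Capsicum’s cap_enter), where lookups that escape the descriptor are refused.

```go
// Sketch: keep a descriptor for the config directory and create files
// relative to it with openat, so the intended directory is carried in the
// code. Under Capsicum capability mode the kernel will additionally refuse
// absolute-path or ".." lookups that escape the descriptor.
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func writeConfig(dirfd int, name string, data []byte) error {
	fd, err := unix.Openat(dirfd, name, unix.O_CREAT|unix.O_WRONLY|unix.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer unix.Close(fd)
	_, err = unix.Write(fd, data)
	return err
}

func main() {
	// hypothetical config directory
	dirfd, err := unix.Open("/etc/myapp", unix.O_DIRECTORY|unix.O_RDONLY, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(dirfd)

	if err := writeConfig(dirfd, "app.conf", []byte("listen = 2525\n")); err != nil {
		log.Fatal(err)
	}
}
```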
and about sieve: i’ve never used it. how does one use it with current mail stacks? from memory, i think it is a way to match messages and take action on them, like moving them to a mailbox, or possibly set flags? how does one configure the rules? just editing a text file on a server, in a web interface, or in a mail client?
ManageSieve is a protocol for sending Sieve scripts from the client to the server. Clients expose it in different ways. I’ve seen a couple of things that look like the rule editor in Outlook or Mail.app but I tend to use a Thunderbird plugin that exposes the scripts directly. It is a bit nicer than just editing the script on the server because ManageSieve doesn’t let you install a script with syntax errors and lets the server report the error to the client, so the client doesn’t need to know every extension that the server supports (and there are a lot).
Dovecot also has support for IMAPSieve, which runs sieve scripts in response to events. This is most commonly used for detecting things being added to or removed from a spam folder to trigger learning. I think you have built-in support for that? It can also be used for things like providing a virtual mailbox that auto-files email according to your latest rules if you drop mails there, or running external scripts so you can copy an email with a calendar attachment into a mail box and have the attachment passed to your calendar, and other automation workflows.
thanks, that’s a lot of good info!
valid point about the logic bugs. i’m wondering how difficult it is to take over a go process. whatever the answer, having separated privileges as a layer of defence will certainly make it safer. pre-auth and post-auth, and per-logged-in-user-processes, and key-managing-processes all sound right. i’m going to put it on the todo list.
I don’t know how much of ACME you’re implementing, mail servers often use the DNS-based variant since a mail server may be pointed to by MX records that it does not have an A record for. A thing that can create and update DNS records is a very high-value target.
mox uses tls-alpn-01, which is why it needs port 443 (along with mta-sts and autoconfig).
but managing dns records is an interesting topic. i would like to be able to do that, mostly to make it easier to set up/manage mox (i believe many potential mox admins would be pasting dns records in some web interface zone import field. if they are lucky. and creating records one by one in a web interface otherwise. also, with dns management, mox could automatically rollover dkim keys in phases, update mtasts policy ids, etc). but i don’t know of a commonly implemented dns server api i would use. i don’t want to make it harder to set up mox. if anyone knows there is a way, please let me know!
ManageSieve is a protocol for sending Sieve scripts from the client to the server
makes sense, although yet another protocol to implement… i personally am probably fine going to a web page and editing the script there. mox already has a web page you can manage (some of) your account settings in. it currently only has basic rules for moving messages to a mailbox when they are delivered, see Rulesets in https://pkg.go.dev/github.com/mjl-/mox/config. these can be edited in the accounts page.
Dovecot also has support for IMAPSieve, which runs sieve scripts in response to events. This is most commonly used for detecting things being added to or removed from a spam folder to trigger learning. I think you have built-in support for that?
yeah, i recently added a simple approach for setting (non)junk flags based on the mailbox a message is delivered/moved/copied into. see https://github.com/mjl-/mox/blob/ad51ffc3652ff19a1265fe2831c83ebf669ecdc3/config/config.go#L210. i looked at mail clients, but did not see behaviour to set those flags conveniently, e.g. “archive” in thunderbird does not mark a message as nonjunk, etc.
It can also be used for things like providing a virtual mailbox that auto-files email according to your latest rules if you drop mails there, or running external scripts so you can copy an email with a calendar attachment into a mail box and have the attachment passed to your calendar, and other automation workflows.
interesting. this is certainly not possible in mox. there isn’t even the notion of a user (uid) to run such scripts as. sounds like adding useful sieve support may need that. this is going a bit lower on the todo list. (:
valid point about the logic bugs. i’m wondering how difficult it is to take over a go process. whatever the answer, having separated privileges as a layer of defence will certainly make it safer.
I recommend not thinking about “Go processes” as a process is a process is a process.
Another good example of real world recent vulnerability in mail servers is CVE-2020-7247 and its resulting security errata: a remote code exploit in OpenSMTPD from 2020.
Priv-sep specifically didn’t mitigate against that vulnerability and it had nothing to do with memory safety, but being able to have a mental model of which parts of your program are operating with specific capabilities will make it easier to audit and easier to respond to inevitable vulnerabilities. It’s not a matter of “if”, but “when”.
It should really be underscored that without doing the extra legwork with capabilities or bubblewrap or putting different apps under different users or really anything (there are a ton of methods here) that merely deciding to have extra functionality in a separate process owned by the same user offers absolutely no extra security protection and in the case of “process each inbound email in a separate process” has considerable downsides. You do mention restricting fs/ipc/net in the auth paragraph though. I guess what I’m saying is that end-users need to be aware there is extra work to be done there if they take that route. Sure different stack/heap so you aren’t hitting footgun issues, but once that process is owned the rest don’t matter. In this example mox suggests creating a mox user with seven additional setup commands. That’s great and I bet a lot of people ignore that and just sudo su their way to freedom cause it’s not enforced. If mjl decides to break this into separate processes there is going to be a lot more setup involved then. This is pointed out because there is a lot of language online about “just put in another process”, and then the end-user is not told or is unaware that they need to do all the extra work that is required to reap the benefits.
noted. ideally i would prefer to do all the separation in mox itself, not requiring extra tools like bubblewrap.
about the additional commands, i probably should validate permissions at startup. mox currently only checks that it doesn’t run as root.
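A rough sketch of the kind of startup check meant here (the data directory path and expected mode are assumptions, not mox’s actual layout):

```go
// Sketch: refuse to run as root, and refuse to run if the data directory
// is accessible to other users.
package main

import (
	"fmt"
	"log"
	"os"
)

func checkSetup(dataDir string) error {
	if os.Getuid() == 0 {
		return fmt.Errorf("refusing to run as root")
	}
	fi, err := os.Stat(dataDir)
	if err != nil {
		return err
	}
	if !fi.IsDir() {
		return fmt.Errorf("%s is not a directory", dataDir)
	}
	if fi.Mode().Perm()&0o077 != 0 {
		return fmt.Errorf("%s is group/world accessible (mode %o), expected 0700", dataDir, fi.Mode().Perm())
	}
	return nil
}

func main() {
	if err := checkSetup("/home/mox/data"); err != nil { // hypothetical data dir
		log.Fatal(err)
	}
}
```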
Please ignore the above message. None of this needs to impact the end-user experience. Dropping privilege for a child process without creating different users is supported on all major operating systems.
Almost none of what you say is true with any vaguely modern *NIX operating system. They all provide mechanisms for a process to drop privileges and run with less than the ambient authority of the user it started as. FreeBSD has Capsicum, XNU has the sandbox framework, OpenBSD has pledge, Linux has seccomp-bpf / Drawbridge / whatever they are doing this week.
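For instance, the OpenBSD case looks roughly like this from Go via golang.org/x/sys/unix (the promise list is just an example; a real server would tailor it per phase):

```go
//go:build openbsd

// Sketch: after pledge(2), this process can only use the promised facilities,
// no matter which user it runs as. Tighten the promises further once
// initialisation is done (e.g. a worker that only parses mail can drop "inet").
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// example promises: stdio, reading files, and network sockets
	if err := unix.Pledge("stdio rpath inet", ""); err != nil {
		log.Fatal(err)
	}
	// ... the rest of the program now runs with only those capabilities ...
}
```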
I didn’t claim that. If you look at my very first sentence I’m pretty clear that there are plenty of methods to deal with this, however merely spawning a new process does not give you inherent added security which gets implied in many articles and discussions and which was my whole point.
You put your skin in the game and contribute.
“OpenBSD Gets”.. in the sense that this tool runs on OpenBSD..
It isn’t built into OpenBSD and there isn’t even a port of it yet.
What’s it licensed as? :D
I think MIT but a LICENSE file would be nice.
I don’t know much about licenses so if you have any suggested reading, I’m open to it. I did put MIT in the package.json but not because of any deep understanding of the differences. I would like it to be as open as possible.
2 clause BSD or MIT seem to be the permissive licenses of choice, unless you’re a company in which case Apache 2. IANAL, I personally prefer copylefts but you do you.
I’m a fan of MPL-2.0 because it requires open sourcing changes (per file) but allows the file to be bundled/minified and to be hosted. It works very practically for front-end while not putting too much burden on the maintainer to merge in all commits right away – but like copyleft it requires changes to be open for others to read, learn from, and use.
I am a fan of MIT / BSD licenses.
Bootstrap uses MIT fwiw
It seems to me that this is a new challenger to openports.pl? Openports.pl is created by an OpenBSD team member, solene@ (Solène Rapenne). It’s hosted on OpenBSD.amsterdam.
This one was made by me, it’s also hosted on openbsd.ams :D
Not specifically a challenger - it would be trivial to add the FTS stuff to openports.pl - I made openbsd.app to show the advantages of searching DESCR (not just COMMENT).
“not just COMMENT” - this is within the context of OpenBSD’s default pkg_* tools.
ATM the search syntax is limited to SQLite’s FTS5 syntax and invalid syntax crashes it :D
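Not a claim about how openbsd.app is actually built, just a sketch (in Go with the common github.com/mattn/go-sqlite3 driver; the table and column names are made up) of turning an invalid FTS5 query into a normal error instead of a crash:

```go
// Sketch: run an FTS5 MATCH query and surface fts5 syntax errors as ordinary
// errors. Depending on the driver, the error may come back from Query itself
// or while iterating rows, so both paths are checked.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func search(db *sql.DB, query string) ([]string, error) {
	rows, err := db.Query(`SELECT pkgname FROM ports_fts WHERE ports_fts MATCH ?`, query)
	if err != nil {
		return nil, fmt.Errorf("bad search query: %w", err)
	}
	defer rows.Close()

	var out []string
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			return nil, err
		}
		out = append(out, name)
	}
	if err := rows.Err(); err != nil {
		return nil, fmt.Errorf("bad search query: %w", err)
	}
	return out, nil
}

func main() {
	db, err := sql.Open("sqlite3", "ports.db") // hypothetical database file
	if err != nil {
		log.Fatal(err)
	}
	if _, err := search(db, `"unbalanced`); err != nil {
		fmt.Println(err) // handled instead of crashing the app
	}
}
```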
I might move to postgresql’s fts stuff here pretty soon.