Programmers have a long and rich history with C, and that history has taught us many lessons. The chief lesson from that history must surely be that human beings, demonstrably, cannot write C code which is reliably safe over time. So I hope nobody says C is simple! It’s akin to assembly, appropriate as a compilation target, not as an implementation language except in extreme circumstances.
Which human beings? Did history also teach us that operating a scalpel on human flesh cannot be done reliably safe over time?
Perhaps the lesson is that the barrier of entry for an engineering job was way higher 40 years ago. If you admitted surgeons to a hospital after a “become a gut-slicer in four weeks” program, I don’t think I need to detail what the result would be.
There’s nothing wrong with C, just like there’s nothing wrong with a scalpel. We might have more appropriate tools for some of its typical applications, but C is still a proven, useful tool.
Those who think their security burdens will be solved by a gimmick such as changing programming language are in for a very unpleasant surprise.
Perhaps the lesson is that the barrier of entry for an engineering job was way higher 40 years ago
Given the number of memory safety bugs that have been found in 40-year-old code, I doubt it. The late ‘90s and early 2000s exposed a load of these bugs because this C code written by skilled engineers was exposed to a network full of malicious individuals for the first time. In the CHERI project, we’ve found memory safety bugs in code going back to the original UNIX releases. The idea that there was some mythical time in the past when programmers were real men who never introduced security bugs is just plain wrong. It’s also a weird attitude: a good workman doesn’t blame his tools because a good workman chooses good tools. Given a choice between a tool that can be easily operated to produce good results and one that, if used incredibly carefully, might achieve the same results, it’s not a sign of a good engineer to choose the latter.
Given the number of memory safety bugs that have been found in 40-year-old code, I doubt it.
Back then, C programmers didn’t know about memory safety bugs and the kinds of vulnerabilities we’ve been dealing with for the last two decades. Similarly, JavaScript and HTML are surely two languages that are somewhat easier to write than C and don’t suffer from the same class of vulnerabilities. Yet 20 years ago people wrote code in these two languages that suffered from XSS and other web-based vulns. Heck, XSS and SQLi are still a thing nowadays.
What I like about C is that it forces the programmer to understand the OS below. Writing C without knowing about memory management, file descriptors, and processes is doomed to fail. And this is what I miss today, and maybe what @pm hinted at in their comment. I conduct job interviews with people who consider themselves senior, and they only know the language and have little knowledge about the environment they’re working in.
Yes, and what we have now is a vast trove of projects written by very smart programmers, who do know the OS (and frequently work on it), and do know how CPUs work, and do know about memory safety problems, and yet still cannot avoid writing code that has bugs in it, and those bugs are subsequently exploitable.
Knowing how the hardware, OS (kernel and userspace), and programming language work is critical for safety; without that knowledge you will screw up immediately, rather than it being an eventual error.
People fail to understand that the prevalence of C/C++ and other memory-unsafe languages has a massive performance cost: ASLR, stack and heap canaries, etc. in software, and PAC, CFI, MTE, etc. in hardware, all carry huge performance costs on modern hardware, and all of them are necessary solely because the platform has to mitigate the terrible safety of the code being run. That’s now all sunk cost, of course: if you magically shifted all code today to something memory safe, you could turn ASLR off (if you were super confident in your OS) and compile canary-free, but the underlying hardware is permanently stuck with those costs.
Forcing the programmer to understand the OS below could (and can) happen in languages other than C. The main reason it doesn’t happen is that OS APIs, while being powerful, are also sharp objects that are easy to get wrong (I’ve fixed bugs in Janet at the OS/API level, so I have a little experience there), so many languages that are higher level end up with wrappers that help encode assumptions that need to not be violated.
But, a lot of those low level functions are simply the bottom layer for userland code, rather than being The Best Possible Solution as such.
Not to say that low level APIs are necessarily bad, but given the stability requirements, they accumulate cruft.
The programmer and project that I have sometimes used as a point of comparison is more recent. I’m now about the same age that Richard Hipp was when he was doing his early work on SQLite. I admire him for writing SQLite from scratch in very portable C; the “from scratch” part enabled him to make it public domain, thus eliminating all (or at least most) legal barriers to adoption. And as I mentioned, it’s very portable, certainly more portable than Rust at this point (my current main open-source project is in Rust), though I suppose C++ comes pretty close.
Do you have any data on memory safety bugs in SQLite? I especially wonder how prone it was to memory safety bugs before TH3 was developed.
Did history also teach us that operating a scalpel on human flesh cannot be done reliably safe over time?
I think it did. It’s just that the alternative (not doing it) is generally much much worse.
There’s nothing wrong with C, just like there’s nothing wrong with a scalpel.
There is no alternative to the scalpel (well, except there is in many circumstances and we do use them). But there can be alternatives to C. And I say that as someone who chose to write a new cryptographic library 5 years ago in C, because that was the only way I could achieve the portability I wanted.
C does have quite a few problems, many of which could be solved with a pre-processor similar to CFront. The grammar isn’t truly context free, and the syntax has a number of quirks we have since learned to steer clear of. switch falls through by default. Macros are textual instead of acting at the AST level. Everything is mutable by default. It is all too easy to read uninitialised memory. Cleanup could use some more automation, either with defer or destructors. Not sure about generics, but we need easy-to-use ones. There is enough undefined behaviour that we have to treat compilers like sentient adversaries now.
When used very carefully, with a stellar test suite and sanitisers all over the place, C is good enough for many things. It’s also the best I have in some circumstances. But it’s far from the end game even in its own turf. We can do better.
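To make a couple of those pitfalls concrete, here’s a toy sketch of my own (not from any real codebase) showing the default switch fall-through and an uninitialised read:

```c
#include <stdio.h>

int main(void)
{
    int errors;          /* never initialised: reading it below is undefined behaviour */
    int kind = 1;

    switch (kind) {
    case 1:
        printf("one\n");
        /* no break here, so control falls through into case 2 by default */
    case 2:
        printf("two\n");
        break;
    }

    if (errors > 0)      /* uninitialised read: compiles, behaviour is undefined */
        printf("had errors\n");

    return 0;
}
```

Both of these compile cleanly unless you opt into warnings, which is more or less the point the comment above is making.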
And I say that as someone who chose to write a new cryptographic library 5 years ago in C, because that was the only way I could achieve the portability I wanted.
I was wondering why the repo owner seemed so familiar!
Those who think their security burdens will be solved by a gimmick such as changing programming language are in for a very unpleasant surprise.
I don’t think that moving from a language that e.g. permits arbitrary pointer arithmetic, or memory copy operations without bounds checking, to a language that disallows these things by construction, can be reasonably characterized as a gimmick.
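For anyone who doesn’t write C, a minimal made-up illustration of what “memory copy operations without bounds checking” means: memcpy trusts the caller completely, so nothing ties the copy length to the destination’s size.

```c
#include <string.h>

/* Hypothetical helper: nothing stops src_len from exceeding sizeof(buf),
 * and memcpy itself performs no bounds check, so any larger value silently
 * writes past the end of the stack buffer. */
void copy_name(const char *src, size_t src_len)
{
    char buf[16];
    memcpy(buf, src, src_len);
    (void)buf;
}
```

A language that enforces bounds by construction rejects this at compile time or traps at run time, instead of relying on reviewer vigilance.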
There’s nothing wrong with C, just like there’s nothing wrong with a scalpel.
This isn’t a great analogy, but let’s roll with it. I think it’s uncontroversial to say that neither C nor scalpels can be used at a macro scale without significant (and avoidable) negative outcomes. I don’t know if that means there is something wrong with them, but I do know that it means nobody should be reaching for them as a general or default way to solve a given problem. Relatively few problems of the human body demand a scalpel; relatively few problems in computation demand C.
That’s a poor analogy.
What we would consider “modern” surgery had a low success rate and a high outright fatality rate.
If we are super generous, let’s say C is a scalpel. In that case we can look at the past and see a great many deaths were caused by people using a scalpel, long after it was established that there was a significant difference in morbidity between an unsterilized scalpel and a sterilized one.
What we have currently is a world where we have C (and similar), which will work significantly better than all the tools that preceded it, but is also very clearly less safe than any modern safe language.
There is an update from Theo which explains the whole design and its benefits in great detail: https://marc.info/?l=openbsd-tech&m=166874067828564&w=2
It’s confirmed that LibreSSL is not vulnerable: https://www.openwall.com/lists/oss-security/2022/10/29/2
So much for the effort to remove it from CPython: https://peps.python.org/pep-0644/
I am using it every day on OpenBSD and this client is just awesome! Supports all features I need and just needs a handful of system resources.
@henrik: Thanks for your work on this!
We’re disabling HTTP/3 for the time being, which is hopefully picked up automatically upon restart.
Restarting the browser should just help. If it doesn’t, disable HTTP3 manually.
Edit: The bug was in HTTP/3, but not in “all of HTTP/3”. We solved this on the server-end. A post-mortem will be held and I’ll make sure the outcome lands on lobste.rs
Do I understand from this that Mozilla can just update my browser settings remotely without my updating‽
Thanks for posting it here with your official hat on and being so honest! Mistakes can happen and exactly this behavior gives me confidence in the FF crew.
I promised I’d get back to this thread. The retrospective is at https://hacks.mozilla.org/2022/02/retrospective-and-technical-details-on-the-recent-firefox-outage/ (also discussed here at https://lobste.rs/s/m1oprf/retrospective_technical_details_on)
Yes. This is part of the “Remote settings” service, which we can use to ship or unship features (gradually, if needed) or recover from breakage (like here!). We mostly use it as a backend for Firefox Sync and certificate revocation lists. Technically, we could also ship a new Firefox executable and undo / change settings, but that would realistically take many more hours. Needless to say, every “large” software project has these capabilities.
BTW I’d encourage you not to disable remote settings, because it also contains certificate revocation updates and (obviously) helps with cases like this here. I understand that this is causing some concern for some people. If you’re one of those, please take a look at https://support.mozilla.org/en-US/kb/how-stop-firefox-making-automatic-connections
Edit: I’m told sync is using a different backend and some of this is inaccurate. It seems that lots of folks are linking to this thread, which is why I will leave this comment, but strike-through.
CRL updates are part of the payload that the “remote settings” service provides. So, I’m not sure what you are asking. I only know of the all-or-nothing switch.
I think driib is asking “If I control the DNS for a public wifi point, can I use an NXDOMAIN for use-application-dns.net and a spoof of aus5.mozilla.org to force an update to my own (possibly evil) version of Firefox; and if so, how do I defend against that?”. But I could be wrong.
Is it just me, or is unveil a terrible choice of name? It normally means “remove a veil”, “disclose” or “reveal”. Its function is almost exactly the opposite - it removes access to things! As the author says:
Let’s start with unveil. Initially a process has access to the whole file system with the usual restrictions. On the first call to unveil it’s immediately restricted to some subset of the tree.
Reading the first line of the man page I can see how it might make sense in some original context, but this is the opposite of the kind of naming you want for security functions…
Is it just me, or is unveil a terrible choice of name? It normally means “remove a veil”, “disclose” or “reveal”. Its function is almost exactly the opposite - it removes access to things!
It explicitly grants access to a list of things, starting from the empty set. If it’s not called, everything is unveiled by default.
I am not a native speaker, so I cannot comment if the verb itself is a good choice or not :)
As a programmer who uses unveil() in his own programs, the name makes total sense. You basically unveil selected paths to the program. If you then change your code to work with other files, you also have to unveil these files to your program.
OK, I understand - it’s only the first call that actually restricts (while immediately unveiling the given path); after that, each call just unveils more.
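Right. A minimal sketch of that behaviour as I understand unveil(2) - the paths here are made up for illustration:

```c
#include <err.h>
#include <unistd.h>

int main(void)
{
    /* Before the first call, the whole filesystem is visible as usual.
     * The first unveil() immediately restricts the process to what's listed. */
    if (unveil("/var/www/htdocs", "r") == -1)
        err(1, "unveil");

    /* Subsequent calls only ever add ("unveil") more paths... */
    if (unveil("/var/log/myapp.log", "wc") == -1)
        err(1, "unveil");

    /* ...until unveil(NULL, NULL) locks the set for the rest of the process. */
    if (unveil(NULL, NULL) == -1)
        err(1, "unveil");

    /* From here on, open(2) on anything outside those paths fails
     * (ENOENT or EACCES, depending on the path). */
    return 0;
}
```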
“Veiling” is not a standard idea in capability theory, but borrowed from legal practice. A veiled fact or object is ambient, but access to it is still explicit and tamed. Ideally, filesystems would be veiled by default, and programs would have to statically register which paths they intend to access without further permission. (Dynamic access would be delegated by the user as usual.)
I think that the main problem is that pledges and unveiling are performed as syscalls after a process has started, but there is no corresponding phase before the process starts where pledges are loaded from the process’s binary and the filesystem is veiled.
Doing it as part of normal execution implements separate phases of pledge/unveil boundaries in a flexible way. The article gives the example of opening a log file, and then pledging away your ability to open files, and it’s easy to imagine a similar process for, say, a file server unveiling only the public root directory in between loading its configuration and opening a listen socket.
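A hedged sketch of that log-file pattern as I understand it (the path and the promise strings are my guesses, not taken from the article):

```c
#include <err.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Phase 1: nothing pledged yet, so opening files is still allowed. */
    FILE *log = fopen("/var/log/myserver.log", "a");   /* hypothetical log path */
    if (log == NULL)
        err(1, "fopen");

    /* Phase 2: pledge away the ability to open new files; keep only stdio on
     * already-open descriptors plus network access for the main loop.
     * The log FILE we already hold keeps working. */
    if (pledge("stdio inet", NULL) == -1)
        err(1, "pledge");

    fprintf(log, "started\n");
    fflush(log);
    /* ... accept connections, serve requests, write to the log ... */
    return 0;
}
```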
I think that the main problem is that pledges and unveiling are performed as syscalls after a process has started, but there is no corresponding phase before the process starts where pledges are loaded from the process’s binary and the filesystem is veiled.
Well, the process comes from somewhere. Having a chain-loader process/executable that sanitises the inherited environment and sets up for the next one fits well with the established execution model. It’s explicitly prepared for this in pledge(, execpromises).
You could put it in e.g. an elf header, or fs-level metadata (like suid). Which also fits well with the existing execution model.
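A rough sketch of what the chain-loader / pledge(, execpromises) route mentioned above could look like (the program name and promise sets are purely illustrative):

```c
#include <err.h>
#include <unistd.h>

/*
 * Hypothetical chain-loader: this process keeps just enough rights to exec,
 * and the second pledge argument (execpromises) is applied across execve(),
 * so the target program starts life already restricted.
 */
int main(int argc, char *argv[])
{
    if (argc < 2)
        errx(1, "usage: pledge-exec program [args ...]");

    if (pledge("stdio exec", "stdio rpath") == -1)
        err(1, "pledge");

    execvp(argv[1], &argv[1]);
    err(1, "execvp");
}
```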
Suid is a good comparison, despite being such an abomination, because under that model the same mechanism can double as a sandbox.
The chainloader approach is good, but complexity becomes harder to wrangle with explicit pledges if you want to do djb-style many communicating processes. On the other hand, file permissions are distant from the code, and do not have an answer for ‘I need to wait until runtime to figure out what permissions I need’.
Not going too far into the static/dynamic swamp shenanigans (say setting a different PT_INTERP and dlsym:ing out a __constructor pledge/unveil) - there are two immediate reasons why I’d prefer not to see it as a file-meta property.
I prefer to see this type of project that builds upon what it considers the good parts of systemd, instead of the systemic refusal and dismissal that I’ve mostly seen.
Same. Too often I see “critiques” of systemd that essentially boil down to personal antipathy against its creator.
I think it makes sense to take in to account how a project is maintained. It’s not too dissimilar to how one might judge a company by the quality of their support department: will they really try to help you out if you have a problem, or will they just apathetically shrug it off and do nothing?
In the case of systemd, real problems have been caused by the way it’s maintained. It’s not very good IMO. Of course, some people go (way) too far in this with an almost visceral hate, but you can say that about anything: there are always some nutjobs that go way too far.
Disclaimer: I have not paid close attention to how systemd has been run and what kind of communication has happened around it.
But based on observing software projects both open and closed, I’m willing to give the authors of any project (including systemd) the benefit of the doubt. It’s very probable that any offensive behaviour they might have is merely a reaction to suffering way too many hours of abuse from the users. Some people have an uncanny ability to crawl under the skin of other people just by writing things.
There’s absolutely a feedback loop going on which doesn’t serve anyone’s interests. I don’t know “who started it” – I don’t think it’s a very interesting question at this point – but that doesn’t really change the outcome at the end of the day, nor does it really explain things like the casual dismissal of reasonable bug reports after incompatible changes and the like.
I think that statements like “casual dismissal” and “reasonable bug reports” require some kind of example.
tbf, Lennart Poettering, the person people are talking about here is a very controversial personality. He can come across as an absolutely terrible know-it-all. I don’t know if he is like this in private, but I have seen him hijacking a conference talk by someone else. He was in the audience and basically got himself a mic and challenged anything that was said. The person giving the talk did not back down, but it was really quite something to see. This was either at Fosdem or at a CCC event, I can’t remember. I think it was the latter. It was really intense and over the top to see. There are many articles and controversies around him, so I think it is fair that people take that into account, when they look at systemd.
People are also salty because he basically broke their sound on linux so many years ago, when he made pulseaudio. ;-) Yes, that guy.
Personally I think systemd is fine, what I don’t like about it is the eternal growth of it. I use unit files all the time, but I really don’t need a new dhcp client or ntp client or resolv.conf handler or whatever else they came up with.
tbf, Lennart Poettering, the person people are talking about here is a very controversial personality.
In my experience, most people who hate systemd also lionize and excuse “difficult” personalities like RMS, Linus pre-intervention, and Theo de Raadt.
I think it’s fine to call out abrasive personalities. I also appreciate consistency in criticism.
Seems illogical to say projects that use parts of systemd are categorically better than those that don’t, considering that there are plenty of bad ideas in systemd, and they wouldn’t be there unless some people thought they were good.
Seems illogical to say projects that use parts of systemd are categorically better than those that don’t
Where did I say that though?
I prefer to see this type of project that builds upon what it considers the good parts of systemd
Obviously any project that builds on a part of systemd will consider that part to be good. So I read this as a categorical preference for projects that use parts of systemd.
There have been other attempts at this: uselessd (which is now abandoned) and s6 (which still seems to be maintained).
I believe s6 is more styled after daemontools rather than systemd. I never looked at it too deeply, but that’s the impression I have from a quick overview, and also what the homepage says: “s6 is a process supervision suite, like its ancestor daemontools and its close cousin runit.”
A number of key concepts are shared, but it’s not like systemd invented those.
s6
I saw a bunch of folks using s6 in Docker, but afaik it’s some of the least user-friendly software I’ve used.
Since there was a bit of a pricing discussion at the end of the article, there is even a cheaper alternative if you want to try self-hosting.
I have my complete email infrastructure running on IONOS VPS (https://www.ionos.com/servers/vps#packages). Setup is similar to the one in the article (OpenBSD, OpenSMTPD/Postfix, …) and it’s all running on 1EUR/$2 p.m. VPS in different data centers.
I haven’t used Vultr; however, IONOS uses VMware ESXi as its hypervisor. Compared to the previous cloud/VPS providers I used (they mostly use KVM), the VMs are blazing fast. Disk I/O for even the cheap 1EUR VPS is wroooom…
The book The UNIX Programming Environment by Kernighan and Pike is also a great read on much the same topic. It’s expensive to buy new but I found a cheap second-hand copy.
The thing is, I find the argument really compelling… but it doesn’t seem to have caught on. Some people really like this style of computing - see the Plan 9 holdouts, for example - but UNIX moved away from it pretty quickly once it left Bell Labs (hence this paper, which uses BSD and SysV changes as examples). GNU is definitely not UNIX, and Windows never was.
It seems like the UNIX style appeals strongly to a few people, but has no mass appeal. Maybe it appeals strongly to developers whose main focus is building software systems, but less to end users - and hence less to developers whose main focus is building software for those end users? Contrast Kernighan and Pike’s paper with Jamie Zawinski’s Unity of Interface, which he described as, “An argument for why kitchen-sink user interfaces (such as web browsers, and emacs) are a good idea.”
Modularity is great if it comes with composability. UNIX originally justified its existence with a typesetting system and the UNIX kind of modularity where you consume a text file and produce a text file was great for that. The only stage that wasn’t plain text was the set of printer control commands that were sent to the printer device (over a stream interface, which looked like a text stream if you squinted a bit). It doesn’t actually work for anything non-trivial.
Here’s a toy example: Consider ls -Slh. In a UNIX purist world, -l is the only one of these that you actually need in ls, and actually you don’t need that because if -l is the default you can get the non-verbose output with ls -l | cut -f 9 -w, and you can make that a shell alias if you use it often enough. You don’t need -S because sort can sort things, so you can do ls -l | sort -k5. Except that now you’ve gone away from the UNIX spirit a bit because now both sort and cut have the functionality to split a line of text into fields. Okay, now that you’ve done that, how do you add the -h bit as a separate command? You can do it with some awk that splits the input and rewrites that one column, but now you’ve added an entire new Turing-complete language interpreter to the mix. You could, in fact, just do ls | awk and implement the whole sort part in awk as well. If you do this, you’ll discover that awk scripts compose a lot better than UNIX pipelines because they have functions that can take structured values.
Many years ago, I sent a small patch to OpenBSD’s du to add the -d option, which I use with around 90% of du invocations (du -h -d1 is my most common du invocation). The patch was rejected because you can implement the -d equivalent with -c and a moderately complex find command. And that’s fine in isolation, but it doesn’t then compose with the other du flags.
The UNIX-Haters Handbook explained the problem very well: We’ve implemented some great abstractions in computing for building reusable components: they’re functions that consume and produce rich data types. Composing systems implemented in terms of these primitives works great. Witness the huge number of C/C++ libraries, Python packages, and so on in the world. In contrast, plumbing together streams of text works well for trivial examples, can be forced to work for slightly more complex examples, and quickly becomes completely impossible for more complex ones.
Even some of the most successful kitchen-sink UIs have been successful because they support composition. Word and Photoshop, for example, each have rich ecosystems of extensions and plugins that add extra features and owe their success in a large part to these features. You can build image-processing pipelines in UNIX with ImageMagick and friends but adding an interactive step in the middle (e.g. point at the bit you want to extract) is painful, whereas writing a Photoshop workflow that is triggered from a selection is much easier. Successful editors, such as vim, emacs, and VS Code, are popular because of their extensions far more than their core functionality: even today, NeoVim and Vim are popular, yet nvi has only a few die-hard users.
Yeah, I agree the sort -k5 thing is annoying. Several alternative shells solve that to some extent, like PowerShell, nushell, and maybe Elvish (?). They have structured data, and you can sort by a column by naming it, not by coming up with a sort invocation that re-parses it every time (and sort is weirder than I thought).
However I want a Bourne shell to have it, and that’s Oil, although it’s not done yet. I think what PowerShell etc. do has a deficiency in that it creates problems of composition. It doesn’t follow what I’m calling the Perlis-Thompson principle (blog posts forthcoming).
So Oil will support an interchange format for tables, not have an in-memory representation of tables: https://github.com/oilshell/oil/wiki/TSV2-Proposal (not implemented)
I also think the shell / awk split is annoying, and Shell, Awk, and Make Should Be Combined.
Unfortunately, it’s not really something that you can solve in a shell because the problem is the set of communication primitives that it builds on. PowerShell is incredibly badly named, because it’s not really a shell (a tool whose primary purpose is invoking external programs), it’s a REPL. It has the converse problem: there’s no isolation between things in PowerShell.
Something like D-BUS would be a reasonably solid building block. If the shell managed a private D-BUS namespace for the current session then you could write composed commands that run a bunch of programs with RPC endpoints in both directions and some policy for controlling how they go together. You could then have commands that exposed rich objects to each other.
I’d be quite curious to know what a shell based around the idea of exporting D-BUS objects would look like, where if you wrote foo | bar then it would invoke both with D-BUS handles in well-known locations for each, rather than file descriptors for stdin / stdout. For example, a shell might provide a /shell/pipeline/{uuid} namespace and set an environment variable with that UUID and another with the position in the pipeline for each process, so that foo would expose something in /shell/pipeline/3c660423-ee16-11eb-b8cf-00155d687d03/1 and look for something exposed as /shell/pipeline/3c660423-ee16-11eb-b8cf-00155d687d03/2, and expose the /0 and /3 things for input from and output to the terminal (and these could be a richer set of interfaces than a PTY, but might also provide the file descriptor to the PTY or file directly if desired). Each shell session might run its own D-BUS bus or hook into an existing one, and the shell might also define builtins for defining objects from scripts or hooking in things from other places in a D-BUS system.
(very late reply, something reminded me of this comment recently)
I don’t see why you need DBus or anything similar to solve the sort -k5 problem? I think you just need something like TSV over pipes, and then a sort-by tool that takes a column name to sort by. An enhancement would be to attach types to column names, so you don’t have to choose whether to do sort (lexicographic) or sort -n (numeric) – that would be in the data rather than the code.
There are already people who use CSV/TSV, XML/HTML, and JSON over pipes. And entire shell-based toolkits and languages around them, like jq, csvkit, etc.
Projects listed at the end of this page: https://github.com/oilshell/oil/wiki/Structured-Data-in-Oil
So I think there is a lot of room to improve the shell without changing anything about the kernel. Sure you don’t have metadata for byte streams, but these tools should be strict about parse errors, and it’s very easy to tell a TSV vs. HTML vs. JSON document apart. I think better parse errors would go a long way, and that can be done in user space.
Interestingly enough, OpenBSD added the -d option to du some years ago: https://cvsweb.openbsd.org/src/usr.bin/du/du.c?rev=1.26&content-type=text/x-cvsweb-markup
And the reason is to be compatible with other BSDs and Linux and so we’re back to the original topic ;)
I think I submitted my patch to add -d in 2004 or 2005, so it took OpenBSD over a decade to actually add the feature. I was running a locally patched du for the entire time I was using OpenBSD. That interaction put me off contributing to OpenBSD - the reaction of ‘this feature is a waste of time, you can achieve the same thing with this complicated find expression’ and the hostility with which the message was delivered made me decide not to bother with writing any other code that would require that I interacted with the same people.
Sorry to hear that. In general I find it is extremely difficult to reject patches gracefully, even when you are trying to. It’s one of those things where text loses nuance that would do the job in real life, and so you have to be extra enthusiastic about it. I usually try to start on a positive statement ‘cause that’s what people will see first, something like “I love this, thanks for submitting it. But, there’s no way I can accept this because…”. If it’s a (smallish) bug fix rather than a feature addition I often try to accept it anyway, even if it kinda sucks, and then clean it up myself.
I’m all for technical rigor, but if a project wants to attract contributions, it’s nice to have a reputation for being easy to work with. And it’s better for everyone if people are encouraged to submit patches without them fearing a nasty rejection. You never know when someone sending a patch is a 13 year old self-taught hacker with more enthusiasm than sense, just starting to play with getting involved in the world outside their own head.
Interesting, found the original mailing list response where the “rejection” was given: https://marc.info/?l=openbsd-tech&m=115649277726459&w=2
How frustrating to receive that response, “it is not being added as it is not part of POSIX”, since -d got added some years later, with the commit message explicitly acknowledging its omission from POSIX standards at the time. :\
On the upside, looks like you had the right intentions all along so I send some kudos your way for trying :)
How frustrating to receive that response, “it is not being added as it is not part of POSIX”, since -d got added some years later, with the commit message explicitly acknowledging its omission from POSIX standards at the time. :\
To be fair, eight years is a long time and the project might have changed its mind about POSIX.
I started this comment by writing that I was not sure the reasoning was inconsistent. David’s patch was rejected in part due to portability concerns. And schwarze’s commit message does mention compatibility with other BSDs and GNU as justification.
But on the other hand, support for -d appeared in NetBSD around the same time as David’s patch and in FreeBSD even before that. Soooo… you’re probably right :-)
haha, yeah I mean, it’s fair to stick to POSIX, but I guess in David’s case it was a matter of the person reviewing the submission being more stringent about that than the person/people who adopted the change later.
Out of curiosity I checked the FreeBSD du.c commit history and found that the -d option was added in Oct 1996! Glad they kept the full commit history upon each transition to a new versioning technology (since I’ve definitely encountered projects where that history was lost). Ah well, anyway, that’s more than I expected to learn about the history of du this week! haha :)
To me, the closest thing to the Unix philosophy today is the Go standard library. It has lots of functions with simple APIs that take io.Readers or io.Writers and lets you compose them into whatever particular tool you need. I guess this makes sense, since it’s from Rob Pike.
The thing about the Unix philosophy is it’s not just about “small” tools that do one thing. It’s about finding the right abstraction that covers a wider range of cases by being simpler. Power comes from having fewer features.
For example, file systems before Unix were more complicated and featureful. We see today that iOS, for instance, makes files complicated in an old-school, mainframe-like way where some data is in the Photos library, some is locked in a particular app, some needs iTunes to be extracted, and some is actually in the Files app. This might be the right choice in terms of creating a lot of small seemingly simple interfaces instead of one actually simple interface that can do everything, but it makes it much harder to extend things in ways not envisioned by the software developers.
To me, the closest thing to the Unix philosophy today is the Go standard library. It has lots of functions with simple APIs that take io.Readers or io.Writers and lets you compose them into whatever particular tool you need. I guess this makes sense, since it’s from Rob Pike.
Yeah, he mentioned this in “Less is Exponentially More”:
Doug McIlroy, the eventual inventor of Unix pipes, wrote in 1964 (!):
We should have some ways of coupling programs like garden hose–screw in another segment when it becomes necessary to massage data in another way. This is the way of IO also.
That is the way of Go also. Go takes that idea and pushes it very far. It is a language of composition and coupling.
It’s fun to think of Go programs like that. Like a website could be a bunch of http handler functions slotted into a mux, and an HTML template parser is wired up to the handlers’ response writers. It’s nice to think like that in terms of packages too. My favorite packages are those that I can think of as a “part” that I can hook up to other parts.
The thing about the Unix philosophy is it’s not just about “small” tools that do one thing. It’s about finding the right abstraction that covers a wider range of cases by being simpler. Power comes from having fewer features.
Definitely. I think the “do one thing…” etc. sayings kind of miss the point. And io.Writers and io.Readers are definitely that kind of abstraction. They have one method and can therefore easily cover a wide range of types. I think Go’s interfaces work that way in general too, since they grow more specific and “narrow” with more methods, and the number of types that implement them shrinks.
General feedback from a long-time OpenBSD user. Very well done and one of the sources (besides the official FAQ) I recommend to new users!
The article says “Now you can use two additional key types: ecdsa-sk and ed25519-sk, where the “sk” suffix is short for “security key.”
I know that sr.ht does, GitHub probably does as well. There’s no real reason why it wouldn’t be supported by services; I believe the reason only ecdsa-sk is shown is that most keys on the market don’t support ed25519 yet.
Maybe there is more, here is what I saw on my system so far:
Another thing I forgot to mention and very much like about all software written by florian@: the daemon requires no configuration at all. You start it up and it does exactly what it should without turning any knobs, writing config files, etc.
dhclient(8) is just that: a client process. dhcpleased(8) is a daemon that will handle all interfaces on the host that are set for auto-configuration of IPv4 addresses and acquire addresses for them.
Other platforms do similar things, one example being dhcpcd: https://wiki.archlinux.org/index.php/Dhcpcd
I was giving this a shot earlier after it first entered -current snapshots and was unable to get the default IPv4 route set with the gateway from my dhcp server. This is after removing dhcp from my hostname.if and setting AUTOCONF4 on the interface. (I’m not certain if removing dhcp is correct, but /etc/netstart will still use dhclient if you specify that, and so I tried without.)
You need to remove ‘dhcp’ and add ‘inet autoconf’ to your /etc/hostname.if file. Then the system uses dhcpleased. If ‘dhcp’ is present, dhclient will be started
Perhaps lobsters is not the best place to report bugs.. but I did both of these and it was still not setting the route. Currently on:
kern.version=OpenBSD 6.9-beta (GENERIC.MP) #368: Sun Feb 28 21:10:13 MST 2021
deraadt@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
When I up the interface, I see logged:
Mar 1 09:13:02 drifter dhcpleased[73796]: failed to send route message: No such process
Mar 1 09:13:06 drifter dhcpleased[73796]: failed to send route message: Network is unreachable
Even stranger is that I cannot manually add the route anymore.
# route add default 10.0.0.1
add net default: gateway 10.0.0.1: Network is unreachable
e: just saw this, and the latest snap I’m on isn’t built with this commit yet: https://marc.info/?l=openbsd-cvs&m=161458351925096&w=2
Thinkpad T450s running OpenBSD -current with cwm: https://files.mastodon.social/media_attachments/files/006/832/655/original/4e68c1e962b5fbea.jpeg
I no longer use i3bar as status bar since I decided to go without a bar.
Self-hosted on OpenBSD with OpenSMTPD and dovecot. Self-hosting my emails for over a decade so I’ve been through all ups and downs. I like to run my own stuff, have a maximum level of privacy and always learn new stuff. On the downside, I nearly lost my complete inbox twice (restored from backups, so take backups!), learned very fast that having a primary and a backup MX is different from having two primaries.
I am also self-hosting using OpenBSD, OpenSMTPD and dovecot, and have been for a number of years. I’ve got a primary and a secondary server with SPF and DKIM. My netblock was blacklisted by outlook.com but it was easy enough to fix by filling in an online form.
I also recommend to get yourself onto whitelists like https://www.dnswl.org/.
I think it’s really cool that you are self-hosted but I have to ask; how are your delivery rates? Do you have DKIM and SPF records? I know it’s quite the challenge to develop a good sending reputation so I am always curious to see how others fare.
I have SPF records (mainly to make google happy) but no DKIM. However, DKIM is not a hassle to set up. There are plenty of good howtos out there.
I cannot complain about reputation; it seems all my email reaches the recipients (and yes, also the ones at Gmail). I once had some trouble with outlook.com and German Telekom when I had a system at Hetzner, because their IP addresses have a very bad reputation. Once I moved away, everything worked fine.
Did the same 4/5 years ago. Never looked back and would not go back to a third-party provider for a million bucks.
With an xterm and ksh on OpenBSD I just see ^T and nothing happens. When I manually send a SIGINFO it works as expected. What am I doing wrong?
Edit, solved: I needed to run “stty status ^T” on ksh invocation.
Seems isopenbsdsecu.re is already on it.
Yeah, they’re obsessed with doling out poorly worded opinions about everything OpenBSD does.
Why would you call it poorly worded? It seems like a fairly level-headed assessment of OpenBSD’s security features. There’s praise and disapproval given based on the merits of each, comparing to other platforms as well.
If your takeaway from reading that website is a fairly level-headed assessment of anything then I’m not sure what to tell you. It’s my personal opinion that it’s anything but that.
The person maintaining the website is one of those people who talk the talk but don’t walk the walk, i.e. a blabbermouth.
Qualys, on the other hand, is actively trying to exploit the latest OpenSSH vulnerability and found some valid shortcomings in OpenBSD’s malloc. otto@, who wrote otto-malloc, acknowledged them and is already working on an improved version.