How about opening Terminal (on macOS) and typing “bc”? It’s like the Google calculator but without the internet. You probably want to specify “-l” and also bump “scale=25”.
FYI: it’s the frontend for “dc” (desk calculator).
The nicest thing about the Google calculator to me is that it has a bunch of constants and unit conversions.
https://www.google.com/search?q=speed+of+light+in+feet+per+second
for example, quite convenient.
That’s what the CLI program “units” is for ;-) Granted, this one is usually not in the default install, although it apparently is on macOS?
For those potential “Motorola 68k-powered IoT lightbulb that must boot to Dwarf Fortress in 69ms or a kitten dies” style situations, FreeBSD just offers a tunable, kern.random.initial_seeding.bypass_before_seeding, that makes those super-early random calls return all zeroes (and emit a console warning) instead of blocking. Normal users who don’t try to do any of this just get the good behavior.
That seems nice and clean.
OpenBSD writes a new random seed to disk for that initialization. Solves the same problem I think.
Many Linux distros also preserve some seed across reboots. See https://systemd.io/RANDOM_SEEDS/ and the “bootctl random-seed” part for example
systemd is not “early boot”. On OpenBSD the bootloader reads /etc/random.seed and, if available, RDRAND/RDSEED values, which it passes to the kernel. So the kernel has reasonably good random entropy even before the random subsystem initializes and mixes in more entropy.
But you can only do this if you make the bootloader, the kernel, and userland work together.
Systemd has its own bootloader and sticks the seed in EFI area, so it can pass it to the kernel before anything’s running. This is exactly the bootloader, kernel, and userland working together as you described.
Yeah, that’s a thing too in the default FreeBSD installation, but it assumes persistent writable storage, which might not be the case in embedded systems.
It’s also important to not enable this until the second boot if you don’t have sufficient early entropy, or to seed it during device partitioning. I recall a vulnerability from a certain router manufacturer that didn’t have a real-time clock or a hardware RNG, so their only entropy source was the cycle time at which devices appeared on the bus and the MAC address. With an SoC, there was something on the order of 8 bits of entropy here, so each device generated private keys from a set of a couple of hundred. If you were able to get the MAC address (e.g. by being within WiFi range) then you could guess the private key in well under a second of CPU time.
Sounds a lot like got ( http://gameoftrees.org/ ), which I have to take a second look at. I hear it’s progressing nicely.
It uses the git repository format, but promotes a different development cycle: a main branch with linear history, and feature branches which need to be rebased on main (HEAD) before merge, to get a linear main branch again. So it’s very similar to “branchless git”.
My take: In “modern” OSs, the abstraction presented by malloc breaks down when you allocate huge amounts of memory. At those scales, you can’t keep pretending memory is free and just comes out of a tap like water. You have to take into account swap space, overcommit, your OS’s naughty-process killer, and such factors.
It’s nice that we have this abstraction — I speak as someone who spent decades coding on systems that didn’t have it — it’s just not perfect.
I’d much rather have malloc return NULL than overcommit memory, fear the OOM killer, and run something like getFreeMemory(&how_much_memory_can_my_app_waste); in a loop.
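A minimal sketch of the failure mode being described (hypothetical sizes; whether the huge malloc “succeeds” depends on the kernel’s overcommit policy, e.g. vm.overcommit_memory on Linux):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
	/* Ask for 1 TiB (assumes a 64-bit system). With overcommit enabled
	 * (e.g. vm.overcommit_memory=1 on Linux) this can "succeed" even
	 * without the RAM or swap to back it. */
	size_t huge = (size_t)1 << 40;
	char *p = malloc(huge);

	if (p == NULL) {
		puts("malloc returned NULL");   /* the honest failure mode */
		return 1;
	}

	puts("malloc succeeded, but the pages are not backed yet");
	/* The real failure only shows up later, when the pages are touched
	 * and the OOM killer picks a victim:
	 * memset(p, 0, huge); */

	free(p);
	return 0;
}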
But isn’t this only an issue in a process that allocates “huge” amounts of memory? Where today on a desktop OS “huge” means “tens/hundreds of gigabytes”? If you’re doing that, you can take responsibility for your own backing store by creating a big-enough (and gapless) file, mmap’ing it, then running your own heap allocator in that address space.
(Pre-apologies if I’m being naive; I don’t usually write code that needs more than a few tens of MB.)
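A rough sketch of that approach, with a hypothetical backing-file path and size, and with the custom allocator itself left out:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
	const char *path = "/var/tmp/myapp.heap";  /* hypothetical backing file */
	off_t size = (off_t)16 << 30;              /* reserve 16 GiB up front */

	int fd = open(path, O_RDWR | O_CREAT, 0600);
	if (fd == -1 || ftruncate(fd, size) == -1) {
		perror("backing file");
		return 1;
	}

	/* Map the whole file; a custom arena/heap allocator would now hand
	 * out chunks of [base, base + size) instead of calling malloc(). */
	void *base = mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE,
	    MAP_SHARED, fd, 0);
	if (base == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	/* ... allocator and application work go here ... */

	munmap(base, (size_t)size);
	close(fd);
	return 0;
}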
Basically creating your own swap file. It’s a fun concept, but here are some things you may have to consider in practice:
The backing file can’t live on tmpfs, and it has to be on a fast disk with enough space.
mmap was designed for I/O, not this; it would slow you down by flushing your memory to disk unnecessarily… but okay, you’ve found the non-standard MAP_NOSYNC flag to turn that off.
Oh, and in something like a desktop app, there’s a good chance users will hate you for hogging the disk space :)
I don’t really write those big applications either. But Java (Tomcat), Browsers and other proprietary business apps are memory hogs. And because they are used to malloc pretty much always returning success, they employ various techniques (ugly hacks) to find out how much RAM there really is, instead of backing off once they hit a malloc error.
Rolling your own allocator can sometimes be the answer, but most of the time it’s just dangerous to override your system’s malloc (debuggability, bug-proneness, security risks).
But Java (Tomcat), Browsers and other proprietary business apps are memory hogs.
The JVM preallocates heap memory, though direct byte buffers are allocated outside of this heap. Generally this means it’s rare for the JVM to continue allocating. You can also force the JVM to commit the memory so it doesn’t hit a copy-on-write fault. As such it shouldn’t have much of an issue if the system runs out of available memory.
That’s exactly what I do in a production DB (single 32 TB mmap) and it works very well. It does freak out customers when they run top, though.
Yeah, it’s a really weird choice to conclude what is ultimately no more than a tutorial on setting up syntax highlighting in nano with a comment about how you’ve proven nano is as capable an editor as vim or emacs. It is and has for years been beyond me how nano could ever be useful outside of making trivial config file changes in a system you don’t have root access on – these days it seems more ubiquitous than vim or ed. I was hoping this article would clear that up.
Then again, maybe there’s nothing to clear up; maybe there really are people who have no further requirements for an editor than being able to type text and save it to a file. I don’t know.
Some people can work perfectly fine with a minimal editor. For example Linus Torvalds with MicroEMACS.
When I learned C, I decided to only use vi (not vim) without colors and without any custom config.
It’s a little weird at first, but the brain adapts (quickly) and recognizes the patterns. Now I don’t care which editor is on a system, or how it’s formatted on the web or in an e-mail.
Instead of vi, I use vis. But in there I do the same: I disable syntax highlighting, and I only use the default settings of the editor.
I read somewhere, at some point, that working with syntax highlighting disabled makes the programmer more attentive to the code, and consequently they make fewer mistakes.
I never actually measured it, but I instinctively feel that I read the code base more carefully, and therefore I’ve learned the code base I work on better than before.
I also started to appreciate the [Open|Net]BSD code style, because it helps to work in this style and to use default UNIX tools to find the parts of the code I am interested in.
In other words, it leverages UNIX as an IDE.
I am thinking about switching from vim to vi + tmux for Go.
So far, the most challenging part has been copy/paste: it turns out I heavily relied on yanking from one vim tab to another.
It’s ubiquitous because it’s just what I’d expect from a Debian system that some non-vim professional might have to administer via CLI. And for anything that isn’t changing configs on remote systems / rescue mode, I’ve got an IDE, or Kate if it’s supposed to be simpler.
I know that’s not the point of the article, but my “Unix” doesn’t have seq or shuf. So I propose jot -r 1 1 6.
I’ve found a lot of “Unix philosophy” arguments online rely heavily on GNU tools, which is sort of ironic, given what the acronym “GNU” actually stands for.
The “Unix” in GNU isn’t the ideal of an operating system like Unix (everything’s a file, text-based message passing, built on C, etc.), it’s the “Unix” of proprietary, locked-in commercial Unix versions. You know, the situation that forced the creation of the lowest-common-denominator POSIX standard. The ones without a working free compiler. The ones which only shipped with ed.
BSD shipped with vi and full source code before the GNU project existed, and by the 1980s there were already several flavors of Unix. But AT&T retained ownership over the name Unix, which is never something that should have happened - it was always used as a genericized trademark, and led to travesties like “*nix”.
RMS is a Lisp (and recursive acronyms) guy who never seemed to care much about Unix beyond viewing it as a useful vehicle and a portable-enough OS to be viable into the future (whereas the Lisp Machine ecosystem died). Going with Unix also allowed GNU to slowly replace tools of existing Unix systems one by one, to prove that their system worked. GCC was in many cases technically superior to other compilers available at the time, so it replaced the original C compiler in BSD.
I found jot to be more intuitive than seq, and I miss it. Not enough to move everything over to *BSD, though.
I’m pretty sure it’s available (installed by default) on Linux systems (depending on distribution).
On my VPS (Ubuntu 20.04 LTS)
$ jot
Command 'jot' not found, but can be installed with:
sudo apt install athena-jot
On my RPi 4 (Raspbian GNU/Linux 10 (buster))
$ jot
-bash: jot: command not found
I first learned about it from the book Unix Power Tools, at which time I was running a couple of BSDs, so I kind of got used to it then…
However, it still helps with faster execution, as that is one less program in the pipeline.
I don’t see a problem with shuf containing the ability to output a certain number of lines, as that still feels like it pertains to the subject matter of the program, and it is quite useful. At least from what I’ve seen used with shuf, it is probably the most used option for it too.
Sure, in practice I wouldn’t pipe cat into grep or whatever. Whatever the purists say, flags are useful. But in a demonstration of how the pipeline works, I think it makes more sense to use one tool to shuffle and another tool to snip the output, than the shuffling tool to snip; that’s all I meant.
In practice, I probably wouldn’t be simulating a dice roll in the shell, but if I was, my aim would be to get what I want as fast as possible. To that end, I’d probably use tail instead of head, as that’s what I use most often if I want to see part of a file. I’d probably use sort -R instead of shuf, because I use sort more often. That hasn’t dropped any of the parts of the pipeline, but it also doesn’t represent the “one thing well” spiel, because randomizing is kind of the opposite of sorting.
I guess that’s what I was getting at :)
I don’t know why, but it always amuses me that it’s a nice-looking binary number.
11000000.10101000. (192.168.)
10101100.00010000 (172.16.)
192 is a nice number like that because it’s at the bottom of the Class C space, and the classes were delimited in nice ways (they had prefixes 0, 10, 110, and 1110, leaving 1111 as “experimental” / future use).
The other ones are, judging from this comment, coincidental: the lowest class B network was 128.0.0.0/12, but evidently everything from 128 up to 172-and-change (705 networks or so) had already been given out, as had the first 165 class Cs (back then, no one had much reason to stick themselves with just a /24).
This might be a weird use case, but for internal mail you don’t even need the @domain part.
So “user1” or “root” is a valid email address. Verify that!?
OK, honest question: except for the fixed test key/IV, is there something wrong with this ChaCha encoder/decoder from LibreSSL/libcrypto and its usage?
/* assumes LibreSSL's <openssl/chacha.h>; CRY_KEY, CRY_IV, CRY_COUNTER are the fixed test values */
ChaCha_ctx ctx;
unsigned char buf[4096], bufc[4096];
ssize_t i;

ChaCha_set_key(&ctx, CRY_KEY, 256);
ChaCha_set_iv(&ctx, CRY_IV, CRY_COUNTER);
while ((i = read(STDIN_FILENO, buf, sizeof(buf))) > 0) {
	ChaCha(&ctx, bufc, buf, i);
	write(STDOUT_FILENO, bufc, i);
}
Yes please! Once again, it’s a non-issue on OpenBSD.
Standards insist that this interface return deterministic results. Unsafe usage is very common, so OpenBSD changed the subsystem to return non-deterministic results by default. If the standardized behavior is required, srandom_deterministic() can be used.
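A tiny sketch of the difference (OpenBSD-specific; srandom_deterministic() doesn’t exist elsewhere):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
	/* On OpenBSD, srandom() ignores the seed and random() stays
	 * non-deterministic, so repeated runs print different values. */
	srandom(42);
	printf("default (non-deterministic): %ld\n", random());

	/* Opting back into the standardized, repeatable sequence: */
	srandom_deterministic(42);
	printf("deterministic:               %ld\n", random());
	return 0;
}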
Modern PCs/workstations are just scaled-down mainframes, or on the way there. I’m still waiting for optical cables connecting the CPU to drives, expansion cards, network cards, maybe even RAM.
https://youtu.be/fE2KDzZaxvE?t=2102 Good talk overall; the timestamp in the URL points to the part about DRAM.
In 2008-2012:
The FreeBSD source repository switched from CVS to Subversion on May 31st, 2008. The first real SVN commit is r179447.
The FreeBSD doc/www repository switched from CVS to Subversion on May 19th, 2012. The first real SVN commit is r38821.
The FreeBSD ports repository switched from CVS to Subversion on July 14th, 2012. The first real SVN commit is r300894.
Can still use OpenBSD if you’re nostalgic about forgetting -P in cvs co, or having your binary files mangled by forgetting to mark them as such 🙃
I used to run a really big CVS server, and while CVS is a huge PITA to use, I pretty much had the entire source memorized, and it was very simple to administer at a scale where a git repo would melt down in a puddle of goo.
OpenBSD offers a copy in git, so unless you are an OpenBSD committer, chances are you never have to actually play in cvs land… thankfully :)
agreed, but that’s not necessarily the workflow of a committer, just a contributor.
I don’t mind CVS for the basic use cases of keeping track of a few files here and there. Maintenance of CVS is an entirely different thing however, especially in a distributed scenario.
We still use Visual SourceSafe at work. The binary is 20+ years old and is distributed from a network share.
Still can’t shrink pools
Device removal exists for some use cases, specifically mirrored vdevs.
The CDDL flaming is so predictable at this point that it hurts to argue, so I’ll hold off for the most part. Yes, Oracle is bad because they haven’t relicensed ZFS under the GPL. However, the CDDL enabled the open source components of Solaris to be extricated from Oracle and allowed innovation to continue to happen in the open when Oracle closed off Solaris.
One example of OpenZFS’ innovation: we finally have an open source encryption alternative to LUKS that can do snapshot backups to untrusted devices. It’s totally changed my backup workflow. I patiently waited for ZFS encryption to start setting up encrypted-by-default Linux machines with snapshots and transparent backups, and my patience was rewarded. OpenZFS 0.8 changed how I set up machines.
Would you prefer it if people didn’t complain about the CDDL whenever ZFS is brought up? Because Oracle and the CDDL are literally the main thing which takes an otherwise super impressive project and turns it into a project which has absolutely no practical applicability.
Is it even a good thing at this point that innovation is “allowed to continue” on a DoA filesystem, rather than focusing effort on relevant filesystems?
The bias in this comment is just so painful I don’t even know where to start. I’ve been using ZFS on FreeBSD happily for nearly a decade now, the idea that the filesystem is DoA is just propagandist nonsense.
Not really, since I’m not the only one using FreeBSD for a storage server running ZFS, and haven’t been for a very long time. Just because you aren’t using it doesn’t mean it’s not widely used. FreeNAS is very popular among home NAS builders, mostly because of ZFS. Get outside your bubble.
I wonder if there’s a misconception that supporting OpenZFS is supporting Oracle, which is explicitly not the case, considering that OpenZFS has deliberately diverged from Oracle to implement things like non-proprietary encryption.
I think there were a few reasons cited for that in the GitHub discussions. One was that the specs for Oracle’s ZFS encryption weren’t available, another was that Oracle’s key management was too complex.
Personally, I know that supporting ZFS isn’t necessarily supporting Oracle. However, continuing development on ZFS means continuing development on a project which is intentionally license poisoned by Oracle to cripple Linux, which is bad enough in itself.
intentionally license poisoned by Oracle to cripple Linux
Do you mean the fact that it originally came out of the Solaris codebase? That was Sun’s call, not Oracle’s. That “Fork Yeah!” video I linked in the top level comment has a nice overview of that bit of history.
FWIW, it also explains that the majority of the ZFS team (and teams for many other Solaris subsystems) immediately quit after Oracle acquired Sun and closed off Solaris. It seems like most of the Solaris team wanted development to be in the open, which is orthogonal to how Oracle does business.
Personally, I’m not seeing the harm in supporting the project. The license is not libre, but this is probably a historical artifact of the competition between Solaris and Linux. Linux won in a lot of regards, but the team behind ZFS doesn’t seem like they’re carrying that historical baggage. If they were, ZFS would have died with Solaris and really would be irrelevant.
There’s no (legal) problem using it on Windows or macOS either. The problem is not the CDDL, it’s the GPL. The CDDL does not impose any restrictions on what you can link it with. The GPL does.
That’s not entirely fair, IMO. You can’t take GPL-licensed code and integrate it into a project under a license which is more restrictive than the GPL, which is entirely reasonable. The issue is that the CDDL is more restrictive than the GPL, so CDDL-licensed code can’t use GPL-licensed code, so ZFS can’t use Linux code.
turns it into a project which has absolutely no practical applicability.
I’m really confused, are you arguing that ZFS doesn’t work on widely used operating systems? FreeBSD and Linux are pretty widely used.
I was also shooting for a technical discussion about the filesystem instead of bikeshedding the license. There’s a lot of technically interesting things that zero-trust data storage enables, such as cloud storage providers that can’t see your data at rest. I think that’s much more interesting to discuss than this CDDL vs. GPL boilerplate. For example, I’ve got some ideas for web-based ZFS replication projects to make sharing files between different people with ZFS pools easier.
The CDDL flaming is so predictable at this point that it hurts to argue
I don’t care to argue about it, but I think the camp that’s unhappy about the CDDL is pretty huge.
Yes, Oracle is bad because they haven’t relicensed ZFS under the GPL.
I’m not even that picky. I’d settle for MIT, BSD, or even MPL.
The CDDL is similar to the MPL in that it is a weak, file-based copyleft. The sizeable difference is that the MPLv2 has an explicit exception allowing it to be relicensed as GPL.
Device removal exists for some use cases, specifically mirrored vdevs.
I recently tried removing a mirrored vdev from my pool and it worked flawlessly. Pretty nice feature - all data was migrated to the other vdevs in the pool. I’m currently going through my pool and replacing old drives with newer drives after testing them. I am tempted to go from 3 mirrored vdevs (2 TB each) to 2 mirrored vdevs (8TB each) without losing anything but the time required for testing, or going with 3 vdevs again.
Are you a current ZFS user, or are those particular reasons that you don’t use ZFS?
After years of waiting, with the release of ZoL 0.8.0, I finally moved all-but-one of my machines from LUKS+btrfs to encrypted ZFS. Four out of five, and so far so good. I am close to, but not yet at the point of, flat-out recommending it as a default to my friends who run desktop Linux. The only features I miss so far are:
I am very thankful that RAID-Z expansion is in the works, and I hope my faith in the OpenZFS team will be rewarded the way it was with encryption. But much like how so much of ZFS feels “right”, the way btrfs handles adding drives feels like the way it should have always been, with all file systems.
I like that NixOS makes it pretty clear that my ZFS module is properly built for the exact kernel version I’m running, FWIW. I’ve had lots of success deploying on the order of tens of currently reliable NixOS machines with ZFS.
Someone needs to go full RMS on this project and just reimplement the whole thing from scratch. No more CDDL, but all the benefits of ZFS. A man can dream…
That would be btrfs. The lengths people go to because of “wrong open source license” or “not-invented-here syndrome” are mind-bending. More power to them, but it’s non-trivial.
I already spent too much time reading this thread about licensing. That’s why I like MIT/ISC/BSD style licenses.
“I wrote it, do whatever you want with it, don’t sue me.”
True “do whatever you want with it” is Unlicense/0BSD/WTFPL. MIT/ISC/2-3BSD also add “I want the clout” clauses :)
Those specific licenses have flaws, too, the biggest one being failing to defend against patent trolls. The latest technology in permissive licenses is the Blue Oak Model License 1.0.0 (now SPDX-approved). It was written by a group of lawyers to plug the holes in those other licenses while being as simple as possible given current legal requirements.
I don’t understand any of these “patent trolls” clauses. If I’m “violating” a patent, the court is going to find against me, no matter what license my violating code has.
You’re right, the patent protection clause doesn’t protect you in that case. It’s meant to defend against a different type of threat.
Say a patent owner contributes code to your software that uses their patent. Later, after all your users have upgraded to a version including that code, they realize that they don’t want other people using their secret sauce, and they start suing your users for violating their patent. The Blue Oak Model License prevents this by forcing patent owners to license all relevant patents.
Thus, the license allows maintainers to spend less time evaluating whether code contributions are legally safe and more time evaluating whether code contributions are desirable and correct. The license also gives users peace of mind that their right to run the software won’t be revoked due to a patent threat.
That makes more sense, although I’m not sure it’d be legally enforceable? All they have to do is stop using your software, then they’re no longer subject to your license and can sue who they please, surely?
It’s not “your” license at that point; their contributions were released under that license too. They have licensed their patents.
Interestingly, macOS still includes Perl as the most prominent scripting language (in terms of the number of executables in the base install). On a freshly installed Catalina there are many more Perl scripts than shell and Python scripts altogether:
# count number of programs of each type
for j in perl python shell mach; do
echo $j $(for i in `echo $PATH|tr : \ `; do file $i/*|grep -i $j; done|wc -l)
done
perl 224
python 21
shell 104
mach 1106
I just noticed this when a colleague was bugging me that his Mac was a really slick, modern Unix, with no bizarre legacy like Perl scripts and whatnot.
Thanks for these stats, very interesting.
Is “mach” the compiled system executables? I’m not familiar with these parts of macOS.
Yes, Mach-O is the native executable format of the Mach microkernel, which is part of the macOS kernel.
Apparently. I just found a string that is unique; the complete “file” output is this:
$ file /bin/ls
/bin/ls: Mach-O 64-bit executable x86_64
EDIT: there are also a few other types:
zsh 1
php 2
ruby 7
dtrace 23
From OpenBSD 6.7 (PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/X11R6/bin):
perl 38
python 0
shell 40
ELF 696
Exactly, it’s still in base because there are base utilities which require perl. And changing that would be quite the hassle, i.e. rewriting the package management tools (pkg_*) and some other scripts.
Ahh, this is fun.
$ (uname -om;for j in Perl Python shell ELF; do echo $j $(for i in $(echo /bin:/sbin:/usr/bin:/usr/sbin|tr : \ ); do file $i/*|grep -i $j; done|wc -l);done)
FreeBSD amd64
Perl 0
Python 0
shell 81
ELF 859
OK, but at least do not replicate my ugly script. Shorter and more efficient code can handle all the types in a single pass:
ls -d /*bin/* /usr/*bin/*|xargs -L 1 file -b|cut -d' ' -f-2|sort|uniq -c|sort -n
Alas, this overcounts on systems where one or more bin directories are symlinks to other directories. By adding (yet another) sort/uniq stage, we can deduplicate everything:
( for dir in $(echo "$PATH" | tr : '\n'); do ls -d "$dir"/*; done ) | xargs realpath | sort | uniq | xargs -L 1 file -bn | cut -d , -f 1 | sort | uniq -c | sort -n
On my (Arch Linux) system, this yields the output:
1 Algol 68 source
1 a /usr/bin/env ash script
1 a /usr/bin/env csh script
1 a /usr/bin/env dash script
1 a /usr/bin/env fish script
1 a /usr/bin/env pdksh script
1 a /usr/bin/env php script
1 a /usr/bin/env tcsh script
1 a /usr/bin/guile1.8 \ script
1 a /usr/bin/guile1.8 -s script
1 DOS batch file
1 GNU awk script
1 gzip compressed data
1 Java source
1 Paul Falstad's zsh script
1 setuid ELF 64-bit LSB executable
1 sticky ELF 64-bit LSB pie executable
1 TeX document
1 XML 1.0 document
2 a /usr/bin/env ksh script
2 a /usr/bin/env texlua script
2 a /usr/bin/env /usr/bin/python script
2 awk script
2 POSIX shell script executable (binary data)
2 Tcl/Tk script
3 a /usr/bin/fontforge -lang=ff script
3 ELF 32-bit LSB executable
3 regular file
3 setgid ELF 64-bit LSB pie executable
3 setuid
4 Node.js script
4 setuid executable
5 Perl script
7 Tcl script
8 ReStructuredText file
11 ASCII text
11 ELF 32-bit LSB pie executable
12 a /usr/bin/env sh script
17 directory
17 setuid ELF 64-bit LSB pie executable
19 a /usr/bin/env texlua script
21 Ruby script
23 a /usr/bin/ocamlrun script executable (binary data)
36 ELF 64-bit LSB shared object
163 Bourne-Again shell script
286 ELF 64-bit LSB executable
330 Perl script text executable
491 Python script
664 POSIX shell script
4736 ELF 64-bit LSB pie executable
There is an additional part to this, though: for some reason, a bunch of scripts are provided as two or even three versions of the same thing, running via “perl”, “perl 5.18”, and “perl 5.28”.
Of the ~233 results on my system, 11 are from my own install of autotools (so not part of the base system); 71 target Perl 5.18 specifically, and 63 target Perl 5.28 specifically. So that leaves me with ~99 that just target ‘perl’, or 88 from the base system.
How many of the Perl scripts are part of the actual Perl installation (e.g. pod, dbi, lwp, cpan, etc.)? heh
It would certainly be interesting to know how many of those scripts were actual system components or utilities in their own right.
Always wondered who it was named after.
https://en.wikipedia.org/wiki/Hans_Reiser
Hans Thomas Reiser (born December 19, 1963) is an American computer programmer, entrepreneur, and convicted murderer.
Cancel my meetings I’ve got some reading to do
Yep. Perfect example of “well that escalated quickly.”
I remember when it was in the news. I was sure Hans was innocent, given that one of his victim’s ex-boyfriends had already been in jail for murdering someone. I was genuinely shocked when he was found guilty and took the police to where he buried the body.
Wired did a really good write up of it at the time.
I actually had dinner with him a few months before the murder & I remember him ranting about his wife a lot at the time so I wasn’t that surprised.
Also one of the early attempts at the “geek defense”, framing himself as having Asperger’s and throwing other autists under the bus with it. :/
I was working at a startup that was using ReiserFS at the time, and he was doing contract work for us. I never met the dude, but it was very unsettling to be that close to the story.
Yup! Kinda bizarre. I posted this excerpt several years ago:
Reiser4 has a somewhat uncertain future. It has not yet been accepted into the main line Linux kernel, the lead designer is in prison, and the company developing it is not currently in business.
There are a number of crime dramas about this as well.
You could always do nested softraid. It’s just not brought up at boot time; in this example only the RAID 1 is configured at boot, and the rest has to be configured/mounted manually. So you are not able to boot from a nested softraid, that’s all.
C11 has _Alignas and _Alignof. I guess OpenBSD is not using those. Interesting development nonetheless.
There are still some K&R function declarations in the codebase… as for the newer code, C99 is the target.
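For reference, a minimal C11 sketch of the two keywords (the struct and field names here are made up):

#include <stdio.h>

/* Force 64-byte (e.g. cache-line) alignment of every instance. */
struct pkt {
	_Alignas(64) unsigned char data[48];
};

int main(void) {
	struct pkt p;
	/* _Alignof yields the alignment requirement as a size_t. */
	printf("_Alignof(struct pkt) = %zu\n", _Alignof(struct pkt));
	printf("address of p         = %p\n", (void *)&p);
	return 0;
}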