I didn’t spend too much time searching for it, but is there a restriction on the kind of expressions that can be evaluated at compile-time? If I mark a while (true) as comptime and ship it as part of a library, what would happen? Is there something like gas (i.e. available steps) for the interpreter?
By default the compiler only allows a certain number (1000) of backwards branches. After that, compilation fails. See this example. This can be increased in code using the @setEvalBranchQuota builtin function, up to a maximum of 2³².
The file gets compiled on the fly, every time that I run the script. And that’s just the way I want it during development or later tinkering. And I don’t care during regular use because it’s not that slow. The Python crowd never loses sleep over that, so why should I?
Somebody should politely point out the __pycache__ folder to this professor. Lots of other little misunderstandings about Python in here, but this one I found annoying enough to comment on.
Overall though, if you really want to write your utility scripts in Java, sure, why not? Familiarity with an API is reason enough. One programmer’s cumbersome boilerplate is another’s comfy furniture; de gustibus non est disputandum.
If you use tcc, you don’t even need that cache construct. You barely notice any slowdown because the compilation is so damn fast (hat tip to this post)
I think allowing a shebang script to be the interpreter for another shebang script is a bash extension. This will fail if you need to exec the script from another program, or use another OS where the default shell isn’t bash.
EDIT: This appears to be linux specific behaviour, not bash.
You can always use cscript hello.cscript. That works fine on MacOS – I just had to either change the shell to bash instead of sh so echo -n worked, or else drop the commenting out of the script shebang and use tail +2 instead of cat.
Presumably you can also just write your interpreter in C and shell out to gcc. That will also obviously be less overhead.
It’s reasonable to cache compilation output, but I’m not sure why you’re feeling bothered about this post. Java doesn’t really need to cache compilation output because the compiler and the runtime (Hotspot JIT) are plenty fast for this particular use case (fooling around with scripts), and the ability to just point the java command at a source file is a nice usability improvement in a 30 year old language that has never really focused on making things easy.
You’re absolutely right, having a cache most likely doesn’t really add much in the way of performance gains for this kind of use. It’s even possible that the better runtime performance of Java outweighs any advantage that Python’s caching could provide, although we’d have to know more about the actual workload. And it’s one less line in your .gitignore, I guess.
I was just annoyed by the ignorance of the quoted statement. I could do more nitpicking in this vein but I’m trying not to slip too far into pedantry mode.
Python’s plenty fast for 99% of what I’d use it for in scripting. Just because Python is slow compared to C doesn’t mean that it’s not more than fast enough for automating the **** out of everything :)
But it’s also good that developers are never content, and always want to push performance higher. We all win from that work.
Catching up on the last few days of Advent of Code (I’m doing it in Raku and Zig, but have only completed days 1 through 3 so far), and tidying my apartment as I’ve let it get a bit out of hand over the last while.
I have a couple of pork shanks that I’m going to braise in home-made stock and red wine. I haven’t decided what I’ll serve with them yet, maybe I’ll bake some bread too.
What’re the benefits of running containers vs. jails? Then what’re the benefits of running containers on BSD? Most prod BSD users I know use it because of jails.
The short answer is ‘type error’. Comparing jails and containers is like comparing MMUs and processes. One is an abstraction that is easily implemented using the features provided by the other.
The longer answer is:
Jails are an isolation mechanism. They were the first shared-kernel virtualisation implementation (though Solaris Zones was probably the first complete one - it took jails a little while to catch up). They allow you to create a filesystem tree that is isolated and appears to its users to be a root filesystem, with its own root user, its own password database, its own IP address, and so on. You can combine this with VNET to provide a separate version of the network stack (which can reduce lock contention), and so on.
OCI containers are an orchestration and management model. They have a bunch of abstractions. Containers are instantiated from images, which are composed from layers. Abstractly, each layer is a filesystem delta (the ‘base layer’ is logically a delta applied to an empty layer). These are built into snapshots, where each layer is applied to the one below and snapshotted. Container images are expected to be buildable from a generic recipe and can be upgraded by replacing the layers. If two images share the same base layer, then the filesystem abstraction is expected to share the files (ideally, blocks) for common files. Containers are instantiated on top of some isolation mechanism (the ‘shim’) and contain a filesystem from an image. They may also have host directories mounted in them and may also have volumes, which are filesystems that are not part of the image (for example, you may have a mail server image that contains dovecot and a bunch of related things and the config, but then put all email data in a volume, so you can upgrade the image and restart the container while preserving its data). Containers also depend on a network plugin that manages IP addresses and packet routing.
There are a lot of isolation mechanisms for containers. Windows uses Hyper-V to run Windows and Linux containers in lightweight VMs. On Linux, runc and crun use namespaces, cgroups, and so on to build a jail-like abstraction. Alternatively, on Linux gVisor uses ptrace to intercept system calls and provide isolation, and things like Kata Containers use Firecracker to run very lightweight VMs.
On FreeBSD, runj and ocirun use jails to provide this isolation for containers. Jails are only a small part of the total story though. Most FreeBSD installs now use ZFS and ZFS is an ideal filesystem for the image abstraction. Each layer is extracted on top of a clone of the layer below and then snapshotted. This means that blocks are shared (both on disk and, more importantly, in the buffer cache: if two jails use the same libc.so.7 then there will be one copy resident in memory, for example) and access to blocks is O(1) in terms of the number of layers. On Linux, there are a lot of other snapshotters, but ones that are built on some form of overlay FS are O(n) in terms of the number of layers.
On top of that, racct is used to limit memory and CPU usage for containers. On the networking side, pf handles the routing (with or without VNET).
Most people who ‘use jails’ use some management framework on top of jails. OCI containers are one such management framework and have a lot of ancillary tooling. For example, you can build containers from a Dockerfile / Containerfile with automatic caching of layers, you can push images to container registries and then pull them and automatically update them, and create new containers that depend on some existing image.
For a start - Jails are secure and isolated. Docker/Podman containers are not. To get similar security isolation with Docker/Podman you need additional tools such as SELinux or AppArmor.
If you already have everything running on FreeBSD - you just stick to it and use whatever suits your needs - there is no reason to switch to Linux then.
With FreeBSD you have: full thick Jails, thin Jails, single command+deps Jails (like Docker), Bhyve inside Jails, Jails inside Jails (for various network topologies) … and now you have another ‘way’ of using them - which may be useful for some.
For a start - Jails are secure and isolated. Docker/Podman containers are not. To get similar security isolation with Docker/Podman you need additional tools such as SELinux or AppArmor.
This article is about podman/OCI container support on FreeBSD. Podman is using FreeBSD’s native jail support here, and combining it with the OCI container packaging format for convenience.
I probably used a ‘mental shortcut’ by saying Docker/Podman - while I should say: Docker and/or Podman managed Linux container based on namespace(s) and cgroup(s).
I’ve been following this work pretty excitedly, and frankly I find the workflow much better for containers than the traditional workflow for jails. I’d much rather just build a new immutable image for each application and run them that way. The traditional jail approach tends to require maintaining each jail as an individual machine, which I find more tedious than just rebuilding an image. It also makes it easier to test stuff, to roll back bad changes, stuff like that.
The nice thing is that there is better isolation between containers on FreeBSD than on Linux, since the container support is built on top of the jail infrastructure.
OpenBSD isn’t even really supposed to be a desktop OS. I’d say it’s more like router firmware. I’m always shocked when someone actually implies they do or have been using it as a desktop OS.
And yes, I know there’s going to be someone who insists they also use it. I’ve also seen people try to use Windows XP x64 Edition well into 2014. Trust me, I have seen no shortage of questionable life choices.
The author of this was previously on the OpenBSD development team. OpenBSD devs tend to dogfood their own OS, so of course she would have used it as a desktop.
This isn’t really true. A few porters do huge amounts of work to keep (among other things) KDE and Chromium and Firefox available for OpenBSD users, and not insignificant work goes into making the base system work (more or less) on a decent variety of laptops. It’s less compatible than Linux but for a project with orders of magnitude less in resources than Linux it does pretty good. But I guess we’ve finally reached the Year of the Linux Desktop if we’re now being shocked that someone would have a BSD desktop.
I would say that the vast majority of OpenBSD developers are using it as their primary OS on a desktop or laptop. I am shocked (well not really anymore, but saddened) that developers of other large mature operating systems don’t use it as their primary OS. If you’re not using it every day, how do you find the pain points and make sure it works well for others?
We have reasonably up-to-date packages of the entire Gnome, KDE, Xfce, Mate, and probably other smaller desktop environments. We have very up-to-date packages of Chrome and Firefox that we’ve hardened. The portable parts of Wayland have made (or are making) their way into the ports tree. None of this would be available if there weren’t a bunch of people using it on their desktop.
Why?
It comes with an X server and an incredible array of software, both GUI and terminal-based applications, that I can install. For my needs OpenBSD is a very capable desktop, and more responsive and flexible than the Windows desktop that work gives me.
There’s finally checked arithmetic! It’s clunky, but I can safely add signed numbers without worrying that my overflow check will cause UB itself, or get compiled out as impossible to happen in a language where ints officially never overflow.
OTOH I’m flabbergasted that _BitInt happened, and implementations actually support up to around 300-900 bits, instead of copping out at 31.
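To make the “clunky but safe” part concrete, here is a minimal sketch of C23 checked addition plus a wide _BitInt (illustrative only; it assumes a compiler that already ships <stdckdint.h> and supports _BitInt, e.g. a recent GCC or Clang):
#include <stdckdint.h>   /* C23: ckd_add, ckd_sub, ckd_mul */
#include <stdio.h>
#include <limits.h>

int main(void) {
    int sum;
    if (ckd_add(&sum, INT_MAX, 1)) {
        /* returns true on overflow; the wrapped value is stored and there is no UB */
        puts("overflow detected");
    } else {
        printf("sum = %d\n", sum);
    }

    /* widths well past 64 bits are accepted by current implementations */
    unsigned _BitInt(256) big = 1;
    big <<= 200;
    printf("bit 200 set: %d\n", (int)(big >> 200));
    return 0;
}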
It’s clunky, but I can safely add signed numbers without worrying that my overflow check will cause UB itself, or get compiled out as impossible to happen in a language where ints officially never overflow.
OTOH I’m flabbergasted that _BitInt happened, and implementations actually support up to around 300-900 bits, instead of copping out at 31.
Why 31? There are plenty of valid use cases for larger integer sizes in embedded/systems programming (the physical address in a page-table entry on a system with 4K pages, for example, could be anywhere from 32 to 52 bits). Modern systems support SIMD registers up to 512 bits in size in many cases. It would seem odd to cap it at something so arbitrarily low.
I’m saying it’s good! Just my expectations were so low, because C traditionally had a bunch of concessions for small/old/weird systems. I wouldn’t be surprised if some vendor vetoed this feature because their legacy chip had only 17-and-a-half bit registers or something.
It’s still only guaranteed up to ullong width, without actual bignum support.
While I find bootstrapping from nothing a mostly academic exercise (you aren’t bootstrapping from CPU switches, and if you have a Trusting Trust attack on that scale, you have far bigger problems), I am interested in “bootstrap from a normal environment with a C compiler and set of tools like bison”. It’s a little hard for a distro or a new port of Rust to build if the cross-compiler is unusable or inappropriate, but we do have perfectly fine native toolchains. (Speaking as someone in this boat myself, albeit one without the time to port Rust.)
Of course, for that case, you don’t need the torture of not having bison/yacc/etc, nor is using C++ a big bummer…
It’s a little hard for a distro or a new port of Rust to build if the cross-compiler is unusable or inappropriate, but we do have perfectly fine native toolchains.
I continue to be annoyed by the state of cross compilation. Our toolchains really ought to be able to perform the exact same work and produce the exact same binary regardless of where they are run.
I think for distros, they might have fully functional cross-compilers, but prefer to build on the host. It’s certainly less weird with a lot of build systems (i.e. autotools).
In my case, I don’t think there is a working cross-compiler for AIX…
For FreeBSD, the package-build infrastructure supports cross building with qemu user mode. This was particularly important for things like 32-bit Arm (mostly gone now) where the fastest chips were far slower than a fast x86 machine running an emulator, and also important for bringing up RISC-V before there was any real silicon. You absolutely don’t want to be building packages from a 50 MHz FPGA. The jails with the emulated environments can have native binaries inserted and so, much of the time, they’re using native clang and LLD in cross-compile mode, but running configure scripts and so on in emulation.
NetBSD had infrastructure for this at least ten years earlier. I’d have assumed Linux distros weren’t 20 years behind *BSD. No idea about AIX though.
Given that we’re this far along, bootstrapping is purely an aesthetic exercise (and a cool one, to be sure – I love aesthetic exercises). If it were an actual practical concern, presumably it would be much easier to use the current rustc toolchain to compile rustc to RISC-V and write a RISC-V emulator in C suitable for TinyC. (If you already trust any version of rustc back to 1.56, this also solves the trust problem.) I haven’t dug into the Bootstrappable Builds project so I don’t know if they use some kind of syscall bridge or implement an entire OS from scratch — if the latter, this might not work.
If it were an actual practical concern, presumably it would be much easier to use the current rustc toolchain to compile rustc to RISC-V and write a RISC-V emulator in C suitable for TinyC
As I understand it, Zig does something like this already, but with WebAssembly.
Unique, no. But they are arguably defining characteristics, notably of Smalltalk.
Compare with another language that runs in a VM, Java.
Java integrates with the host OS. It was designed to. You edit Java in a native app on the OS. You “compile” it with a native app. The result runs in the native environment looking (at least somewhat) like a native app. Java looks in the native filesystem for code modules containing code objects which are native OS files.
Contrast with Smalltalk:
You run a Smalltalk environment. You edit Smalltalk code using Smalltalk in the Smalltalk environment, and the result is saved into that environment, and executed in that environment. You can build and test and then run a large complex app without ever leaving that environment, without using any non-Smalltalk code at all.
It is a self-contained world and the fact that anything exists outside it can be completely ignored. You don’t need it and you don’t use it and unless you need to export data out of the Smalltalk environment – for instance, print it, or read from external data sources – you never need to interact with it at all.
I don’t know what the author had in mind but the parent post is, at the very least, imprecise.
There’s nothing in the JVM spec that requires an OS. In fact, the latest JVM spec literally states the opposite:
Oracle’s current implementations emulate the Java Virtual Machine on mobile, desktop and server devices, but the Java Virtual Machine does not assume any particular implementation technology, host hardware, or host operating system. It is not inherently interpreted, but can just as well be implemented by compiling its instruction set to that of a silicon CPU. It may also be implemented in microcode or directly in silicon.
JRockit, for example, ran straight on top of an x86 hypervisor, and picoJava was literally something you ran an OS on, not something you ran under an OS :-).
Every spec above that level (e.g. the language spec) is defined for the JVM so it obviously doesn’t depend on the underlying OS, either. E.g. the whole threads spec is defined in terms of JVM threads, not native threads. The manner in which a JVM implementation handles threads is an implementation detail as far as the spec is concerned. It can delegate them to the underlying OS. Or not.
That’s exactly how the Blue Book “specifies” Smalltalk-80, too. In fact, all of Part 2 feels like reading a spec of the Java class library from another era and with a weirder notation.
Things like Java’s System class have direct correspondents in the Smalltalk world, even as early as Smalltalk-80 (adjusting for portability and industry inertia, so e.g. no standard input and output streams, but FileStream interfaces, which on a modern Unix implementation you’d point at a process’ stdin). Even things that are deliberately devised for interfacing with the underlying OS (e.g. AWT’s pluggable backend) have obvious counterparts in Smalltalk implementations (e.g. GNU Smalltalk’s GTK bindings).
And there’s also nothing in Smalltalk-80 (or the draft ANSI standard) that precludes integration with a host system. The Blue Book operates with a clear distinction between the Smalltalk language and the Smalltalk environments, which is acknowledged even in the introduction:
However, in order to explain how this graphical user interface [in context: the Smalltalk programming environment] really works, the reader first has to understand the programming language. Thus, this book inverts the presentation of the system by starting with the language itself.
There are implementations of the Smalltalk language that work just fine without it. E.g. GNU Smalltalk does ship with gst-browser, but it also ships with a bunch of bindings for common libraries (sqlite, SDL, cairo, gettext, ncurses), a C interop layer (and you dynamically link the VM with the libraries you want to access), you run Smalltalk applications via a native interpreter (gst) and the recommended environment for editing Smalltalk code is Emacs.
But don’t Lisps require dynamic memory allocation be available? Before the page table is set up, you’d need to avoid dynamic memory allocation. How do you do that in e.g. Scheme R5RS?
Not necessarily, you could use a bump allocator, or just use a fixed-size heap during the bootstrap of your dynamic memory allocator / garbage collector.
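To make that concrete, here is a minimal sketch (illustrative, and in C rather than Scheme, since this is really about the implementation underneath the language) of a bump allocator over a fixed-size arena: the sort of thing a runtime can rely on before a real allocator or paging exists.
#include <stddef.h>
#include <stdint.h>

static uint8_t arena[64 * 1024];   /* the whole "heap" for the bootstrap phase */
static size_t  next_free = 0;

void *bump_alloc(size_t size) {
    size_t aligned = (next_free + 15) & ~(size_t)15;   /* keep 16-byte alignment */
    if (aligned > sizeof arena || size > sizeof arena - aligned)
        return NULL;                                   /* arena exhausted */
    next_free = aligned + size;
    return &arena[aligned];
}

/* individual frees are a no-op; the whole arena is released in one reset */
void bump_reset(void) { next_free = 0; }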
I don’t think the lack of an ability to do something in some particular standard means much. They’re lowest common denominators for implementers to target, nothing more. Every OS written in C uses compiler-specific extensions, doesn’t mean they aren’t using C.
Not necessarily, although I think it’s orthogonal to what the author of the linked article is saying.
Dynamic memory allocation doesn’t require either virtual memory or paging. Both make it enormously more efficient and easier, but you can allocate and de-allocate memory dynamically without either of them. In fact, the IBM 704, on which Lisp-1 ran, lacked both.
But most of the pre-paging bootstrap work tends to be easier than the generic case, too, as it’s pretty self-contained and ultimately requires remarkably little memory deallocation. So there are various ways you can trick your way out of doing too much heavy work, depending on architecture. E.g. you can use a simple page allocator that also updates a set of page table entries for when you’re going to enable paging.
I’ve got a high end Cortex M7 microcontroller board sitting on my desk right now that will happily malloc and return physical addresses from an attached 16MB SDRAM chip. Definitely don’t need anything else.
You don’t program your computer with LISP or Smalltalk. Your computer runs a separate LISP or Smalltalk computer that you can then program on its own terms and in its own world. (And attempts to bring machines closer to these languages mostly did not succeed).
Emphasis mine. The author addresses this point literally two sentences later.
It’s a strange contention. There were commercially successful Lisp machines from multiple companies for a decade (Symbolics, LMI, Texas Instruments, etc.). What killed them was the Unix workstations: the technology reached the point where you could take an off the shelf microprocessor like an m68k, run a BSD variant on it, and produce a machine with impressive specs for the day at a fraction of the cost of building hardware and OS from the ground up. The same force killed off Burroughs large machines and Connection Machines and a bunch of others. You could make the same contention that Unix workstations mostly did not succeed since Sun, DEC, and SGI died when you could take an off the shelf PC and produce a machine with impressive specs for the day at a fraction of the cost of a dedicated Unix workstation. They had a similar run of roughly a decade, just like the transistor machines with magnetic core memory killed off vacuum tube and delay line machines and machines with integrated circuit memory killed off transistor plus magnetic core machines.
They were commercially viable for a while but “successful” may be a bit of an overstatement. Between them, Symbolics, LMI, PERQ, Xerox and TI had sold less than 10,000 machines (on the order of 7-8000 IIRC?) by the end of the 1980s. The workstation market wasn’t exactly a mass market so it’s not a bad figure, but it’s not exactly a resounding success, either. E.g. SGI alone sold 3000+ units of its first-generation IRIS system, and they were a small player at the time.
It looked more commercially viable than it was because much of the early development cost had been pre-paid in a way. E.g. early on, Symbolics, LMI and TI largely used the same software, licensed from MIT, and they all started pretty much with MIT CADR machines. Symbolics initially planned to start working on their own hardware right away but it took them a couple of years to kickstart the 3600 project.
Even at the peak of Lisp popularity, three (i.e. HP, DEC and Sun) of the five major companies with important Lisp offerings didn’t sell Lisp machines. The other two were Symbolics and TI, but even within Symbolics, lots of business groups weren’t profitable. I think the only one that remained profitable throughout its existence was the Macsyma group.
What killed them was the Unix workstations: the technology reached the point where you could take an off the shelf microprocessor like an m68k, run a BSD variant on it, and produce a machine with impressive specs for the day at a fraction of the cost of building hardware and OS from the ground up.
It also didn’t help that their hardware, despite being very much at the edge of what was possible at the time, was still not enough. Early Symbolics Lisp machines actually had an M68K along with the 3600 CPU. The M68K (the front-end processor, FEP for short) handled startup and some of the peripherals. Merely integrating Lisp machine hardware was expensive.
Unix workstations were one of the things that killed Lisp machines, but not quite by using off-the-shelf hardware. Symbolics hardware was actually contemporary with M68K-era Unix workstations (LM-2 was launched the same year as the DN100, 3600-series was launched the same year as Sun-1 and IRIS).
By the time that Lisp peaked in popularity in the second half of the 1980s, most big workstation manufacturers had actually moved on to their own stuff (SGI started using MIPS RISC in 1987, Sun started using SPARC in 1988, DEC were doing their VAX thing, IBM’s main workstation offering was trying to make ROMP take off since 1986). When Symbolics really started going downwards, the Unix machines that beat their Lisp machine counterparts used their own proprietary CPUs, too. NeXT were, I think, the only ones doing high-tier workstations with M68K CPUs, and even those were slow enough that they were largely excluded from particularly high-performance market segments (e.g. high-end CAD applications).
You don’t program your computer with LISP or Smalltalk.
Well, do you?
The Lisp Machines would like a word. Actually, a whole list of words.
Smalltalk machines were a thing, too.
They existed and were exactly what I alluded to. They were never “a thing”
It seems to come from the use of thing in the sense of a popular phenomenon—cf. “Ecigs are the new thing”. However, its meaning also extends to differentiating set phrases, names, or terms of art from normal productive constructions. For example:
This article’s viewpoint is too narrow: it lacks important historical context.
Hmm…or you could have just read the next two sentences, where that historical context was provided:
Your computer runs a separate LISP or Smalltalk computer that you can then program on its own terms and in its own world. (And attempts to bring machines closer to these languages mostly did not succeed).
I find this a very strange and hard to parse reply.
From your tone and back-checking a few links, I deduce that this is your blog post, correct? If that is the case then why didn’t you say so? Why do you obscure this simple fact by using the passive voice and so on?
Well, do you?
Me, personally? No. I stopped programming when it stopped being fun, which for me was roughly when Windows started to take off.
They existed and were exactly what I alluded to. They were never “a thing”
I entirely disagree. Indeed this line contradicts itself: the second sentence is invalidated by the first sentence. They existed, therefore, they were a thing.
That means that your supposition in your blog post, that Lisp and Smalltalk only exist inside their own isolated boxes on top of other OSes, is false… and you knew it was false when you wrote it and not only that but you admitted it a line later.
This is simply unfathomable to me.
Hmm…or you could have just read the next two sentences, where that historical context was provided:
To be perfectly honest, I stopped when I reached that false assertion and wrote my reply. Then, I read the rest before posting, because I’m not 12 years old and I’ve been on the internet for 39 years now. I am not claiming I’m perfect – I’m not – but I commented on the bit that struck me.
I am a bit baffled by your apparently angry response, TBH.
From your tone and back-checking a few links, I deduce that this is your blog post, correct? If that is the case then why didn’t you say so? Why do you obscure this simple fact by using the passive voice and so on?
Marcel said (and you quoted!) “They existed and were exactly what I alluded to.” How’s that obscuring?
I am a bit baffled by your apparently angry response, TBH.
If I’m not mistaken, you write articles for a living. Surely you understand why an author might be annoyed at someone for “commenting on the bit that struck them” while seemingly ignoring that the point was addressed later?
For what it’s worth, I’m under the impression that you’re trying to pick a fight and I hope I’m wrong. :)
Because, TBH, that’s the moment I realised “hang on, is he saying he wrote this?”
If I’m not mistaken, you write articles for a living. Surely you understand why an author might be annoyed at someone for “commenting on the bit that struck them” while seemingly ignoring that the point was addressed later?
Sure, that’s a fair point. :-)
Wider context for reference: I am a lifelong sceptic and disbelieve in all forms of religion, the paranormal, alternate medicine, etc.
If I read an essay that starts “homeopathy shows us that X can do Y” then I am not going to pay much attention to the rest of the essay that talks about – well, anything really – because homeopathy does not show that. Homeopathy does not work, full stop, the end.
So when an article about programming languages says “X is never used for Y, and you can’t do Y with X” when I have stood on stage and told a thousand people about how and why doing Y with X is important, yeah, you are damned right I am going to stop right there and say “hey, that is not right.”
Marcel seems to feel that because something was not commercially successful it didn’t happen. That is not true.
For what it’s worth, I’m under the impression that you’re trying to pick a fight and I hope I’m wrong. :)
Not at all, but I’ve spent over 40 years calling out BS when I see it, and I actively enjoy making people angry by telling them they are wrong.
I am not picking fights. I am trying to point out logical errors and fallacious claims. Not the same thing.
The grown up way to respond when someone points out a hole in your argument is to change your argument. The schoolkids’ way is to get louder defending it.
I think Marcel’s argument rests on a core point that’s wrong. I am only interested in that core point, not what sits on top of it, which seems to be the bit he wants to talk about.
So when an article about programming languages says “X is never used for Y, and you can’t do Y with X” when I have stood on stage and told a thousand people about how and why doing Y with X is important, yeah, you are damned right I am going to stop right there and say “hey, that is not right.”
You could have stopped with your first comment and just said “hey you know lisp machines were actually a bigger thing”, and gone into a bit more detail yourself, but instead just dismissed the rest of the article as lacking historical context.
Context that is utterly irrelevant to the rest of the article. The author doesn’t say you can’t use Smalltalk or LISP to do systems programming. He says that people don’t. Perhaps try being charitable and read this as “people don’t tend to”. Which is not untrue.
Ambiguity isn’t uncommon in writing, and sometimes readers should just be able to infer things themselves from common sense.
I entirely disagree. Indeed this line contradicts itself: the second sentence is invalidated by the first sentence. They existed, therefore, they were a thing.
Once again: “X is a thing” is an idiom for X being something that was popular.
How many LISP and Smalltalk machines were built and sold, in your opinion?
What market share did they achieve in the workstation market?
Once again: “X is a thing” is an idiom for X being something that was popular.
Nope. It means “they existed”, “they were real”.
How many LISP and Smalltalk machines were built and sold, in your opinion?
Doesn’t matter. They were on commercial sale, were real products loved by legions of influential people, and inspired important works that you seem unaware of, such as the Unix-Haters Handbook, which I submit you need to read.
What market share did they achieve in the workstation market?
Irrelevant. This is not a popularity contest. You are attempting to talk about absolutes – “this only runs in this way” – which are not true. Which is why I responded “this article lacks important context.”
Because with every reply you send, you reinforce my impression that you do not understand the bigger picture here.
The author of the post has linked to a discussion about the origin and evolution of the term, where being popular, especially in a surprising context, is one of the accepted meanings of the word. It’s not just the assertion of the person who asked about it, but also one of the quoted definitions:
An action, fashion style, philosophy, musical genre, or other popularly recognized subsection of popular culture.
Early use of the term seems to have been an elided version of “is there such a thing as”, but several later instances of its use obviously lean towards the aspect of popularity. Presumably the secondary sense was developed by contamination from the pejorative “a thing”, as in “PCs will be a thing of the past in 20 years”. Regardless, it’s a thing now. See e.g. this entry.
FWIW it’s definitely how I use it, too. Seeing how this is slang, yeah, on behalf of everyone else, I would like to take this chance to not apologize for using it in a way that you disapprove of. I’m sure that, if a whole generation of stationery users woke up one day in a world where “to clip” meant both “to cut” and “to fasten” and managed to get over it, you’ll eventually come to terms with this horrifying misuse of language, too.
Is it ambiguous? Yes. Does minimal charity resolve virtually every instance of ambiguity? For heaven’s sake, also yes. Is it really not obvious that an article that literally says “attempts to bring machines closer to these languages mostly did not succeed” acknowledges that attempts to bring machines closer to these languages existed? What exactly do you think is missing here and isn’t obvious to a community that has a higher incidence of Lisp per capita than the daily appointment list of a speech therapy clinic?
fn freeSpace(self: *Cache(T)) void {
const last = self.list.pop() orelse return;
// hasFn -> hasMethod
if (comptime std.meta.hasMethod(T, "removedFromCache")) {
// This won't compile
T.removedFromCache(last.data);
}
}
The T.removedFromCache(last.data) works when T is User, because that translates to User.removedFromCache. But when T is a *User, it translates to *User.removedFromCache, which isn’t valid - again, pointers to structs don’t contain declarations.
So while std.meta.hasMethod is useful, it doesn’t completely solve our problem.
I understand that this is probably just meant as a pedagogical excuse to demonstrate @typeInfo, but because of Zig’s somewhat lazy compilation model, this can be written as:
fn freeSpace(self: *Cache(T)) void {
const last = self.list.pop() orelse return;
if (comptime std.meta.hasMethod(T, "removedFromCache")) {
// This will work for both User and *User.
last.data.removedFromCache();
}
}
The standardization processes from NIST historically came in very different forms. Sometimes, as was the case with the infamous Dual EC DRBG, they were essentially “we specify things the NSA told us, and ignore all the comments we got”.
But there’s a different type of NIST process that was first established with AES, later also with SHA-3, and now with the post quantum stuff. They start with an open competition where cryptographers can submit proposals, and then there’s a long, open discussion process about those. In multiple rounds, the broken or weaker candidates get thrown out. (An interesting aspect of this is also that many people competent in the field have a strong incentive to find weaknesses - because they themselves have algorithms in the competition, and finding weaknesses in their competitors makes it more likely they get chosen.)
The reason you can reasonably trust these new standards isn’t because you “trust NIST”. It’s because over multiple years, it appears no one has found any substantial weakness in them, and you can be certain that most cryptographers competent to do so have tried.
With SHA3 they still found a way to make a questionable choice in the first draft! They picked the parameters of the fast version mentioned briefly in the original submission while most reviewers treated the safer/slower version as the main one. (To be fair, they did discuss it with the original authors who did not find it an unreasonable choice, but applicability of some public comparisons between candidates was lost) But indeed the process was open enough that this choice was visible immediately, called out, and rolled back.
Yeah, the problem there was essentially that they had requirements that if met, made SHA-3 slower than it had to be. But changing it after the competition didn’t look great. And in the end, we have SHA-3 that few people use, because SHA-2 is still good (length extension attack is the only weakness, and that doesn’t matter 99% of the time), and if you want something fast, you use blake2 or blake3.
That was all unfortunate, but I don’t think it changes the fact that no one really has any security concerns around SHA-3. It’s just that there are other algorithms that have advantages, and that no one has any security concerns about either.
Funny enough, the PQ standardization may actually be the first widespread use of SHA-3 (although in the SHAKE variant, which I believe has the reduced parameters).
Funny enough, the PQ standardization may actually be the first widespread use of SHA-3 (although in the SHAKE variant, which I believe has the reduced parameters).
Yeah, SHAKE256 and SHAKE128 have the same parameters as SHA3-256 and a hypothetical SHA3-128, but with any size output, so by choosing an output size of 512 or 256 respectively rather than 256 or 128, you get the increased collision resistance without paying the cost of SHA3-512 or SHA3-256.
You’re right, NIST have been criticised in the past for choosing suspicious numbers for their crypto standards and not saying where they came from.
For DES, the magic numbers turned out to have been chosen to protect against differential cryptanalysis, which only they knew about at the time. It also seems to have had an intentionally weak key length, though.
For Dual_EC_DRBG the numbers were suspected to have been chosen to open up back doors.
The latter prompted the creation of Curve25519 which I think is more widely trusted. It recently made it into FIPS which is great.
I think generally people still scrutinise the NIST recommendations, but will aim to use the good ones since some (especially government) organisations can’t use things that aren’t FIPS certified.
Here is a post from a cryptography expert (djb) who is generally critical of NIST’s processes, but whose Curve25519 and SPHINCS+ work is now in FIPS:
http://blog.cr.yp.to/20220805-nsa.html
Curve25519 dates from 2005 and Dual EC DRBG dates from 2006.
The Dual EC DRBG scandal led to doubts about the provenance of the magic numbers in the NSA/NIST elliptic curves (p256 etc.) which led to much wider interest in nothing-up-my-sleeve parameter selection. Curve25519 had already addressed these problems years before, so it became more widely deployed in parallel with official standardization. The standards work took a regrettably long time because they went on a side-quest to develop Curve448, a bigger nothing-up-my-sleeves curve with similar performance-oriented design considerations to Curve25519.
In light of today’s events, I’m appreciating the fact that I haven’t worked in cloud support for several years (I moved to web backend, briefly worked on compilers while contracting, and now robotics/linux/systems development), and that my current organization doesn’t use CrowdStrike.
My solution is pre.code .token.shell-symbol { user-select: none } which prevents text selection. There is a trade-off here… ::before means folks coming in on TUI browsers, crawlers, or otherwise don’t get the long-standing sigil that denotes input for the shell session, but my solution means those copying text from a browser without CSS support won’t get the user-select: none. The writer of this post is also probably using the incorrect syntax highlighting of bash (assuming the language-bash CSS class) instead of console, sh-session, or shell-session, which means we can’t see what tokens we might want to use for either the user-select or the ::before solution.
Perennial question for the writer: why does this static post require JavaScript to get its content? I got a completely blank screen when I visited the post.
Why does mac.install.guide require JavaScript? The content is rendered from Markdown files using web components. It’s vanilla JS and standard HTML, no frameworks or build process.
Wouldn’t it be more performant & accessible for a server to build it once & host it statically to 10,000 users than to have 10,000 users download an entire parsing library & build the same output 10,000 times (assuming their user agent even supports JavaScript) when the content isn’t dynamically changing? This setup sounds wasteful.
The web component in question is called <yax-markdown>… I guess there’s a mini framework built around this by the folks at Yax.com?
I’m the folks at yax.com. There’s no framework, just the lit-html library for the web components. A few years ago I was advocating a buildless, framework-less approach using web components when I started writing the mac.install.guide tutorials. Turns out the beginner tutorials got a lot more interest than the web development experiments and I got sidetracked. For the sake of efficiency and compute cycles I’m planning to render the site statically but it’s not a user demand, just on my wish list.
Why not just render it statically rather than having your readers compile it from markdown to HTML every time it loads, wasting energy and compute? (admittedly a small amount, but it’s the principle)
I heartily agree with the principle and was chasing a plan to use the Lit server-side rendering in a GitHub action to render a static site from the web components. Got sidetracked with other projects but it’s still a good idea. Now that I’ve figured out how to fix the ‘command not found: $’ error :-)
it puts $ only in front of the first line in the block level element using the class="shell" which for me is usually <code class="shell">some code goes here</code> as shown: https://www.schoolio.co.uk/diary/2024-06-13/
Mine is : $(short_pwd) ; where short_pwd is a function that’s just pwd but with $HOME abbreviated to ~, and all directories but the last abbreviated to a single letter, so if I’m in /home/joe/projects/linux/drivers/usb/dwc2, my prompt looks like:
: ~/p/l/d/u/dwc2 ; cd ..
: ~/p/l/d/usb ;
And when I switch to a root shell, the color changes to red.
It’s interesting that ARMv7 is remaining. The only 32-bit machines I have that I might plausibly use are first-gen RPis, which are v6 (as is the Zero). The only v7 machine I have is a laptop with an 800 MHz Cortex A8, which was painfully slow even when new. I wonder what the v7 machines are that people think are worth the cost of maintaining 32-bit kernel support for another 8 years.
The thing I’d really like to see is a port of Mambo so that we can run v7 processes (fast) on modern AArch64-only cores.
I’m surprised too. The original 32-bit Pis might be the reason - most ARMv7 devices are an unstandardized mess; ~20 year old PC hardware is a lot easier to get your hands on, easier to install on, and is faster than the average ARMv7 device.
Do you have any more information about Mambo? The only similar sounding piece of software I can find is this Mambo, which is just a dynamic binary modification tool for multiple RISC architectures, and doesn’t appear to allow execution of v7 processes on AArch64 cores.
I think that’s the one, I might be confusing the subprojects. I think HyperMambo might be the thing that used it to emulate AArch32 on AArch64. They had a nice paper at VEE in, I think, 2017, showing that it outperformed the native AArch32 mode in some chips that shipped both, which helped motivate Arm to drop 32-bit support in newer hardware.
Reminds me of Captive DBT hypervisor - the most interesting part I find in it was using the HW MMU to accelerate page table translations. I always wanted to port it to Qemu or something to run amd64 Linux under AArch64 macOS but that’s kinda niche now that Rosetta Linux exists (and now has AoT caching) and FEXEmu supports TSO. Now I just need to finish hacking up macOS to enable the private entitlements to allow for TSO under Hypervisor.fw instead of relying on, ugh, Virtualization.fw.
They were obsolete, but they outsold everything else by a depressing margin. The BeagleBone Black was the reference platform for 32-bit Arm support on FreeBSD and was a much better device, but sold a fraction of the number. The last 32-bit A profile that Arm made available to license was in 2014, so most other things also quickly moved to 64-bit. The A53 was cheap both to license and to fab.
Yeah, it’s a shame that the BBB was significantly more expensive than the Pi. Among all of the other good things it had going for it, the thing that made me very happy about the BBB is that the processor and supporting bits were:
readily available on the market
very well documented
very well supported by mainline Linux (and I’m assuming FreeBSD?)
The Pi is a great prototyping platform but hasn’t really until recently had a clear path from prototype-to-product (the compute modules and friends do help with that but even then… supply chain uncertainty still makes it tough). The BBB hit the sweet spot of “it’s a great dev tool but it’s also a great reference design” in a way that the Pi did not. With the introduction of the Octavo SiP, that got even easier: https://octavosystems.com/octavo_products/osd335x/
I think this is getting even worse for the RPi ecosystem because the SoCs are increasingly custom. The original RPi was an off-the-shelf Broadcom chip, so you could write code for it and then just buy the chip (which Broadcom started making again because RPi wanted to buy a load of them) and put it on your own board. With the newer ones, the SoC is custom and so any code that you write for them requires you to integrate the board in your product.
There were a bunch of things using TI OMAP3 SoCs that used the Cortex A8 around when the RPi launched and these SoCs were easy to buy in bulk and put on a custom board, which made the other boards a lot better for prototyping high-volume things (the final one needs different wiring from the pins to the peripherals, but the software is the same).
Yeah, that’s true about the custom Pi SoCs now. I suppose it’s switched around now. The thing about the old Pis with the Broadcom chips was that one… doesn’t simply “buy some” unless you’re ordering enough that they’ll execute an NDA and blah blah blah vs. with TI where you can just order Qty 10 off of digikey for your first prototype run.
At least with the compute modules now you can order 100 of them and expect them to arrive in a relatively timely fashion. With the original Pi boards, even if you wanted to integrate them into a product most vendors would only let you order 1 or 2 at a time.
I’m curious about the OMAP chips now! For custom embedded Linux I’ve mostly used the Motorola/Freescale/NXP i.MX line of processors. They’re nice like the TI ones with open datasheets, readily available supply, etc. but just don’t really have the same small-and-compact-and-cheap prototyping kit that Pi and BBB provide.
RPi was an off-the-shelf Broadcom chip, so you could write code for it and then just buy the chip (which Broadcom started making again because RPi wanted to buy a load of them) and put it on your own board.
Pretty sure no one actually did that. I’ve only ever heard of people sticking a full RPi in a box.
Yeah, I can respect that Broadcom has chosen a specific market to work with (high-volume high-budget bigger companies) but unfortunately that is not the market I’m in :)
What role does KVM play here? Isn’t KVM only necessary as an accelerator for running x86_64 VMs on x86_64 hosts? I don’t believe KVM is actually used in this setup, please correct me if I’m wrong.
Thanks, it seems I have mixed up some of my terminology. As I understand your comments, I think you are saying that KVM is only used for HVM guests, i.e.: the guest uses the same instruction set as the host, in my case x86_64. When the guest uses a different instruction set to the host then emulation, and not HVM (i.e. KVM) is used. Do I understand that correctly?
Yes. Or rather: When the guest and the host use the same instruction set, KVM can be used - there is still no guarantee, and you might be doing ISA emulation (as opposed to HVM) which is slower. The thing actually doing the ISA emulation in this case is QEMU, not KVM.
Historical note: QEMU in fact started out as a pure ISA emulator way back in the day and grew the ability to use hypervisors like KVM later on (this combination is referred to as QEMU/KVM). But you can still use it for ISA emulation just the same, as many people do with qemu-user to run e.g. aarch64 or risc-v userland things on their x86_64 computers.
Windows does have binary compatibility via system DLLs rather than syscalls, while OS X and OpenBSD apparently don’t have ANY kind of binary compatibility.
i.e. this is an ABI vs API distinction – Windows has an ABI but OS X and OpenBSD don’t – or what I call exterior vs. interior interfaces
I remember ~10 years ago that Illumos ported the FreeBSD Linux emulation support to run Docker containers on Illumos!! That tidbit is mentioned here - https://www.oilshell.org/blog/2022/03/backlog-arch.html (an abstract post in which API vs ABI is central)
It basically forces programming languages to bind to libc, which is becoming unpopular due to C’s legacy and non-safety. This article covers more information. As far as I know it impacted Go but it’s probably addressed by now?
I think you have a grave misunderstanding of the scale of the problem. The C wrappers around syscalls are barely more than stubs for the most part, and C’s safety hardly plays into it.
What issues do you think C’s lack of memory safety could possibly cause in this situation that raw syscall access would not?
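To illustrate how thin the wrapper is, here is a small sketch (Linux assumed; note that syscall() is itself a libc convenience function) that writes to stdout once through the libc wrapper and once through the raw system call number:
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <string.h>

int main(void) {
    const char *a = "via the libc write() wrapper\n";
    write(STDOUT_FILENO, a, strlen(a));              /* sets errno on failure, little else */

    const char *b = "via the raw write syscall\n";
    syscall(SYS_write, STDOUT_FILENO, b, strlen(b)); /* same kernel entry point, no libc semantics */
    return 0;
}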
I think the problem is that most people and/or systems don’t make a distinction between “libc” the C standard library, “libc” various platform-specific extensions to it (GNU, BSD, etc), and “libc” the POSIX syscalls and such that most people actually use. C learning materials seldom make the difference clear, especially on Unix systems, and they all tend to be bundled together into libc.so and treated as if they were all the same thing.
Thus you get the situation where the C standard library is pretty terrible and has many known-broken things in it that will probably never be fixed (like gets() and puts()), the platform-specific extensions to it usually fix most of those things in various incompatible ways (like OpenBSD’s arc4random() replacing rand()), and the POSIX syscall functions like write() are entirely separate animals. When people want to dodge libc, what they usually really want to avoid is all the standard-library bits that are useless or worse than useless, and most of the extension bits that are incompatible and will be replaced by the language lib anyway.
There’s an open PR on FreeBSD to split these apart and provide a libsyscalls that libc links to (and is a filter library). This is, in part, motivated by CHERI wanting to replace the syscalls layer for libc running in a compartment to make it call proxies that allow the host process to interpose (if you try a raw syscall, it will simply fail when sandboxed).
“Most people” includes POSIX, because POSIX does not make a clear distinction between system calls and library functions. (nor does C, but C has relatively few pure syscalls in its standard library)
I think he is simply pointing out the equivalence of “java foo.java” with “python foo.py”.
That second invocation, however, does create a cached, compiled artifact.
Easy enough to do in C.
QaD PoC:
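Something along these lines, for instance (a hypothetical sketch of the idea rather than the original PoC, assuming cc on the PATH): recompile only when the source file is newer than the cached binary, then exec the binary.
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.c [args...]\n", argv[0]);
        return 2;
    }
    const char *src = argv[1];
    char bin[4096], cmd[8192];
    snprintf(bin, sizeof bin, "%s.bin", src);

    struct stat s, b;
    int stale = stat(bin, &b) != 0 ||
                (stat(src, &s) == 0 && s.st_mtime > b.st_mtime);
    if (stale) {
        snprintf(cmd, sizeof cmd, "cc -O2 -o '%s' '%s'", bin, src);
        if (system(cmd) != 0)        /* rebuild only when the cache is stale */
            return 1;
    }
    argv[1] = bin;
    execv(bin, argv + 1);            /* run the cached binary with the remaining args */
    perror("execv");
    return 1;
}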
Obviously bigger savings with bigger programs.
Suggesting the lisp tag, as the policies are evaluated by an embedded tinyscheme interpreter.
I have a couple of pork shanks that I’m going to braise in home-made stock and red wine. I haven’t decided what I’ll serve with them yet, maybe I’ll bake some bread too.
What’re the benefits of running containers vs. jails? Then what’re the benefits of running containers on BSD? Most prod BSD users I know use it because of jails.
The short answer is ‘type error’. Comparing jails and containers is like comparing MMUs and processes. One is an abstraction that is easily implemented using the features provided by the other.
The longer answer is:
Jails are an isolation mechanism. They were the first shared-kernel virtualisation implementation (though Solaris Zones was probably the first complete one - it took jails a little while to catch up). They allow you to create a filesystem tree that is isolated and appears to the users be a root filesystem, with its own root user, its own password database, its own IP address, and so on. You can combine this with VNET to provide a separate version of the network stack (which can reduce lock contention), and so on.
OCI containers are an orchestration and management model. They have a bunch of abstractions. Containers are instantiated from images, which are composed from layers. Abstractly, each layer is a filesystem delta (the ‘base layer’ is logically a delta applied to an empty layer). These are built into snapshots, where each layer is applied to the one below and snapshotted. Container images are expected to be buildable from a generic recipe and can be upgraded by replacing the layers. If two images share the same base layer, then the filesystem abstraction is expected to share the files (ideally, blocks) for common files. Containers are instantiated on top of some isolation mechanism (the ‘shim’) and contain a filesystem from an image. They may also have host directories mounted in them and may also have volumes, which are filesystems that are not part of the image (for example, you may have a mail server image that contains dovecot and a bunch of related things and the config, but then put all email data in a volume, so you can upgrade the image and restart the container while preserving its data). Containers also depend on a network plugin that manages IP addresses and packet routing.
There are a lot of isolation mechanisms for containers. Windows uses Hyper-V to run Windows and Linux containers in lightweight VMs. On Linux, runc and crun use namespaces, cgroups, and so on to build a jail-like abstraction. Alternatively, on Linux gVisor uses ptrace to intercept system calls and provide isolation, and things like Kata Containers use Firecracker to run very lightweight VMs.
On FreeBSD, runj and ocirun use jails to provide this isolation for containers. Jails are only a small part of the total story though. Most FreeBSD installs now use ZFS, and ZFS is an ideal filesystem for the image abstraction. Each layer is extracted on top of a clone of the layer below and then snapshotted. This means that blocks are shared (both on disk and, more importantly, in the buffer cache: if two jails use the same libc.so.7 then there will be one copy resident in memory, for example) and access to blocks is O(1) in terms of the number of layers. On Linux, there are a lot of other snapshotters, but ones that are built on some form of overlay FS are O(n) in terms of the number of layers.
On top of that, racct is used to limit memory and CPU usage for containers. On the networking side, pf handles the routing (with or without VNET).
Most people who ‘use jails’ use some management framework on top of jails. OCI containers are one such management framework and have a lot of ancillary tooling. For example, you can build containers from a Dockerfile/Containerfile with automatic caching of layers, you can push images to container registries and then pull them and automatically update them, and create new containers that depend on some existing image.
This also uses jails underneath; it’s just support for the OCI container format for packaging the contents of that jail (via podman).
For a start - Jails are secure and isolated. Docker/Podman containers are not. To achieve similar security isolation with Docker/Podman you need additional tools such as SELinux or AppArmor.
If you already have everything running on FreeBSD - you just stick to it and use whatever suits your needs - there is no reason to switch to Linux then.
With FreeBSD you have: full thick Jails, thin Jails, single command+deps Jails (like Docker), Bhyve inside Jails, Jails inside Jails (for various network topologies) … and now you have another ‘way’ of using them - which may be useful for some.
This article is about podman/OCI container support on FreeBSD. Podman is using FreeBSD’s native jail support here, and combining it with the OCI container packaging format for convenience.
I probably used a ‘mental shortcut’ by saying Docker/Podman, while I should have said: Docker- and/or Podman-managed Linux containers based on namespaces and cgroups.
Hope that helps.
Familiarity and ecosystem?
I think it makes sense to see this mostly as a compatibility thing. If you want to run an OCI container you now can.
I’ve been following this work pretty excitedly, and frankly I find the workflow much better for containers than the traditional workflow for jails. I’d much rather just build a new immutable image for each application and run them that way. The traditional jail approach tends to require maintaining each jail as an individual machine, which I find more tedious than just rebuilding an image. It also makes it easier to test stuff, to roll back bad changes, stuff like that.
The nice thing is that there is better isolation between containers on FreeBSD than on Linux, since the container support is built on top of the jail infrastructure.
Learning Raku by putting together a small web application with Cro and HTMX.
OpenBSD isn’t even really supposed to be a desktop OS. I’d say it’s more like router firmware. I’m always shocked when someone actually implies they do or have been using it as a desktop OS.
And yes, I know there’s going to be someone who insists they also use it. I’ve also seen people try to use Windows XP x64 Edition well into 2014. Trust me, I have seen no shortage of questionable life choices.
The author of this was previously on the OpenBSD development team. OpenBSD devs tend to dogfood their own OS, so of course she would have used it as a desktop.
This isn’t really true. A few porters do huge amounts of work to keep (among other things) KDE and Chromium and Firefox available for OpenBSD users, and not insignificant work goes into making the base system work (more or less) on a decent variety of laptops. It’s less compatible than Linux but for a project with orders of magnitude less in resources than Linux it does pretty good. But I guess we’ve finally reached the Year of the Linux Desktop if we’re now being shocked that someone would have a BSD desktop.
Using a BSD isn’t weird. OpenBSD specifically is a curious choice, though.
Use it if you like it, don’t if you don’t.
I love curious choices though!
I have used OpenBSD as my desktop OS for the last 10 years.
Good that you tell me that it’s not supposed to be used as a desktop OS. Otherwise, I wouldn’t have noticed!
You jest, but the blog post legitimately contains a massive list of things the author found very useful in Linux that aren’t in OpenBSD.
almost as if different users have different needs
I would say that the vast majority of OpenBSD developers are using it as their primary OS on a desktop or laptop. I am shocked (well not really anymore, but saddened) that developers of other large mature operating systems don’t use it as their primary OS. If you’re not using it every day, how do you find the pain points and make sure it works well for others?
We have reasonably up-to-date packages of the entire Gnome, KDE, Xfce, Mate, and probably other smaller desktop environments. We have very up-to-date packages of Chrome and Firefox that we’ve hardened. The portable parts of Wayland have made (or are making) their way into the ports tree. None of this would be available if there weren’t a bunch of people using it on their desktop.
XXX isn’t really supposed to be YYY.
For your usage, my usage, a supposed general usage or one of my cat’s usage?
Be thankful that enough people made the “questionable life choice” to run Linux as a desktop OS in the 90s.
Why? It comes with an X server, an incredible array of software, both GUI and terminal based applications that I can install. For my needs OpenBSD is a very capable desktop, and more responsive and flexible than the Windows desktop that work gives me.
There’s finally checked arithmetic! It’s clunky, but I can safely add signed numbers without worrying that my overflow check will cause UB itself, or get compiled out as impossible to happen in a language where ints officially never overflow.
OTOH I’m flabbergasted that _BitInt happened, and implementations actually support up to around 300-900 bits, instead of copping out at 31.
You could already do that, it was just annoying.
I couldn’t do that without double-checking with some reference, because there’s one correct way to do this, and a dozen faulty ones.
Same here (I think we agree), having to look it up every time is what makes it annoying :)
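For reference, the new single-call way looks roughly like this; a minimal sketch, assuming a toolchain that ships C23’s <stdckdint.h>:

```c
/* Minimal sketch of C23 checked addition. ckd_add() returns true if the
 * mathematical result did not fit in the destination, so the check itself
 * cannot trigger signed-overflow UB or be optimised away. */
#include <stdckdint.h>
#include <stdio.h>

int main(void) {
    int sum;
    if (ckd_add(&sum, 2000000000, 2000000000))
        puts("overflow detected");
    else
        printf("sum = %d\n", sum);
    return 0;
}
```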
Why 31? There’s plenty of valid use cases for larger integer sizes in embedded/systems programming (a physical address in a page-table entry on a system with 4K pages, for example, could be anywhere from 32 to 52 bits). Modern systems support SIMD registers up to 512 bits in size in many cases. It would seem odd to cap it at something so arbitrarily low.
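As an aside, a hypothetical sketch of that kind of use, assuming a compiler with C23 _BitInt support (recent Clang has it); the type names are made up for illustration:

```c
/* Widths are arbitrary (up to BITINT_MAXWIDTH). */
typedef unsigned _BitInt(52)  phys_addr52_t;   /* e.g. a 52-bit physical address field */
typedef unsigned _BitInt(256) acc256_t;        /* e.g. a wide accumulator */

acc256_t mul_wide(unsigned long long a, unsigned long long b) {
    return (acc256_t)a * (acc256_t)b;          /* full product, no overflow possible */
}
```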
I’m saying it’s good! Just my expectations were so low, because C traditionally had a bunch of concessions for small/old/weird systems. I wouldn’t be surprised if some vendor vetoed this feature because their legacy chip had only 17-and-a-half bit registers or something. It’s still only guaranteed up to ullong width, without actual bignum support.
Ah, apologies. I misread your tone entirely.
While I find bootstrapping from nothing a mostly academic exercise (you aren’t bootstrapping from CPU switches, and if you have a Trusting Trust attack on that scale, you have far bigger problems), I am interested in “bootstrap from a normal environment with a C compiler and set of tools like bison”. It’s a little hard for a distro or a new port of Rust to build if the cross-compiler is unusable or inappropriate, but we do have perfectly fine native toolchains. (Speaking as someone in this boat myself, albeit one without the time to port Rust.)
Of course, for that case, you don’t need the torture of not having bison/yacc/etc, nor is using C++ a big bummer…
I could use miniyacc if I wanted to, but I think Rust is too complicated for it to parse. I’m unsure if Bison would fare any better.
I continue to be annoyed by the state of cross compilation. Our toolchains really ought to be able to perform the exact same work and produce the exact same binary regardless of where they are run.
I think for distros, they might have fully functional cross-compilers, but prefer to build on the host. It’s certainly less weird with a lot of build systems (e.g. autotools).
In my case, I don’t think there is a working cross-compiler for AIX…
For FreeBSD, the package-build infrastructure supports cross building with qemu user mode. This was particularly important for things like 32-bit Arm (mostly gone now), where the fastest chips were far slower than a fast x86 machine running an emulator, and also important for bringing up RISC-V before there was any real silicon. You absolutely don’t want to be building packages from a 50 MHz FPGA. The jails with the emulated environments can have native binaries inserted and so, much of the time, they’re using native clang and LLD in cross-compile mode, but running configure scripts and so on in emulation.
NetBSD had infrastructure for this at least ten years earlier. I’d have assumed Linux distros weren’t 20 years behind *BSD. No idea about AIX though.
Given that we’re this far along, bootstrapping is purely an aesthetic exercise (and a cool one, to be sure – I love aesthetic exercises). If it were an actual practical concern, presumably it would be much easier to use the current rustc toolchain to compile rustc to RISC-V and write a RISC-V emulator in C suitable for TinyC. (If you already trust any version of rustc back to 1.56, this also solves the trust problem.) I haven’t dug into the Bootstrappable Builds project so I don’t know if they use some kind of syscall bridge or implement an entire OS from scratch — if the latter, this might not work.
As I understand it, Zig does something like this already, but with WebAssembly.
This is an incoherent notion. No one runs a “LISP” computer on their computer.
And of course, it’s perfectly possible to write a whole OS in a Lisp without special hardware…
It isn’t, really; a VM can be a kind of computer.
But in that case, Lisp and Smalltalk aren’t at all unique and talking about them like that is extremely weird.
Unique, no. But they are arguably defining characteristics, notably of Smalltalk.
Compare with another language that runs in a VM, Java.
Java integrates with the host OS. It was designed to. You edit Java in a native app on the OS. You “compile” it with a native app. The result runs in the native environment looking (at least somewhat) like a native app. Java looks in the native filesystem for code modules containing code objects which are native OS files.
Contrast with Smalltalk:
You run a Smalltalk environment. You edit Smalltalk code using Smalltalk in the Smalltalk environment, and the result is saved into that environment, and executed in that environment. You can build and test and then run a large complex app without ever leaving that environment, without using any non-Smalltalk code at all.
It is a self-contained world and the fact that anything exists outside it can be completely ignored. You don’t need it and you don’t use it and unless you need to export data out of the Smalltalk environment – for instance, print it, or read from external data sources – you never need interact with it at all.
Fair point. I suppose the author had this in mind and then just added Lisp as well due to somehow thinking it is the same.
I don’t know what the author had in mind but the parent post is, at the very least, imprecise.
There’s nothing in the JVM spec that requires an OS. In fact, the latest JVM spec literally states the opposite:
JRockit, for example, ran straight on top of an x86 hypervisor, and picoJava was literally something you ran an OS on, not something you ran under an OS :-).
Every spec above that level (e.g. the language spec) is defined for the JVM so it obviously doesn’t depend on the underlying OS, either. E.g. the whole threads spec is defined in terms of JVM threads, not native threads. The manner in which a JVM implementation handles threads is an implementation detail as far as the spec is concerned. It can delegate them to the underlying OS. Or not.
That’s exactly how the Blue Book “specifies” Smalltalk-80, too. In fact, all of Part 2 feels like reading a spec of the Java class library from another era and with a weirder notation.
Things like Java’s System class have direct correspondents in the Smalltalk world, even as early as Smalltalk-80 (adjusting for portability and industry inertia, so e.g. no standard input and output streams, but FileStream interfaces, which on a modern Unix implementation you’d point at a process’ stdin). Even things that are deliberately devised for interfacing with the underlying OS (e.g. AWT’s pluggable backend) have obvious counterparts in Smalltalk implementations (e.g. GNU Smalltalk’s GTK bindings).
And there’s also nothing in Smalltalk-80 (or the draft ANSI standard) that precludes integration with a host system. The Blue Book operates with a clear distinction between the Smalltalk language and the Smalltalk environments, which is acknowledged even in the introduction:
There are implementations of the Smalltalk language that work just fine without it. E.g. GNU Smalltalk does ship with gst-browser, but it also ships with a bunch of bindings for common libraries (sqlite, SDL, cairo, gettext, ncurses), a C interop layer (and you dynamically link the VM with the libraries you want to access), you run Smalltalk applications via a native interpreter (gst), and the recommended environment for editing Smalltalk code is Emacs.
Lisps are often image-based in the same way, and e.g. Common Lisp is host-OS-independent to the point of annoyance at times (path versions …).
But don’t Lisps require dynamic memory allocation be available? Before the page table is set up, you’d need to avoid dynamic memory allocation. How do you do that in e.g. Scheme R5RS?
Not necessarily, you could use a bump allocator, or just use a fixed sized heap during the bootstrap of your dynamic memory allocator / garbage collector.
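A minimal sketch of that idea (all names hypothetical): a bump allocator over a fixed-size heap is enough to get a runtime or GC off the ground before any real memory management exists:

```c
#include <stddef.h>
#include <stdint.h>

/* Fixed-size bootstrap heap: no paging, no free(), just a moving pointer. */
static uint8_t early_heap[64 * 1024];
static size_t  early_top;

void *early_alloc(size_t n) {
    n = (n + 15) & ~(size_t)15;              /* keep 16-byte alignment */
    if (n > sizeof early_heap - early_top)
        return NULL;                         /* bootstrap heap exhausted */
    void *p = &early_heap[early_top];
    early_top += n;
    return p;                                /* never freed individually */
}
```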
I don’t think the lack of an ability to do something in some particular standard means much. They’re lowest common denominators for implementers to target, nothing more. Every OS written in C uses compiler-specific extensions, doesn’t mean they aren’t using C.
Not necessarily, although I think it’s orthogonal to what the author of the linked article is saying.
Dynamic memory allocation doesn’t require either virtual memory or paging. Both make it enormously more efficient and easier, but you can allocate and de-allocate memory dynamically without either of them. In fact, the IBM 704, on which Lisp-1 ran, lacked both.
But most of the pre-paging bootstrap work tends to be easier than the generic case, too, as it’s pretty self-contained and ultimately requires remarkably little memory deallocation. So there are various ways you can trick your way out of doing too much heavy work, depending on architecture. E.g. you can use a simple page allocator that also updates a set of page table entries for when you’re going to enable paging.
I’ve got a high end Cortex M7 microcontroller board sitting on my desk right now that will happily malloc and return physical addresses from an attached 16MB SDRAM chip. Definitely don’t need anything else.
A classic answer is Pre-Scheme
From the article:
The Lisp Machines would like a word. Actually, a whole list of words.
Smalltalk machines were a thing, too.
This article’s viewpoint is too narrow: it lacks important historical context.
Emphasis mine. The author addresses this point literally two sentences later.
It’s a strange contention. There were commercially successful Lisp machines from multiple companies for a decade (Symbolics, LMI, Texas Instruments, etc.). What killed them was the Unix workstations: the technology reached the point where you could take an off the shelf microprocessor like an m68k, run a BSD variant on it, and produce a machine with impressive specs for the day at a fraction of the cost of building hardware and OS from the ground up. The same force killed off Burroughs large machines and Connection Machines and a bunch of others. You could make the same contention that Unix workstations mostly did not succeed since Sun, DEC, and SGI died when you could take an off the shelf PC and produce a machine with impressive specs for the day at a fraction of the cost of a dedicated Unix workstation. They had a similar run of roughly a decade, just like the transistor machines with magnetic core memory killed off vacuum tube and delay line machines and machines with integrated circuit memory killed off transistor plus magnetic core machines.
They were commercially viable for a while but “successful” may be a bit of an overstatement. Between them, Symbolics, LMI, PERQ, Xerox and TI had sold less than 10,000 machines (on the order of 7-8000 IIRC?) by the end of the 1980s. The workstation market wasn’t exactly a mass market so it’s not a bad figure, but it’s not exactly a resounding success, either. E.g. SGI alone sold 3000+ units of its first-generation IRIS system, and they were a small player at the time.
It looked more commercially viable than it was because much of the early development cost had been pre-paid in a way. E.g. early on, Symbolics, LMI and TI largely used the same software, licensed from MIT, and they all started pretty much with MIT CADR machines. Symbolics initially planned to start working on their own hardware right away but it took them a couple of years to kickstart the 3600 project.
Even at the peak of Lisp popularity, three (i.e. HP, DEC and Sun) of the five major companies with important Lisp offerings didn’t sell Lisp machines. The other two were Symbolics and TI, but even within Symbolics, lots of business groups weren’t profitable. I think the only one that remained profitable throughout its existence was the Macsyma group.
It also didn’t help that their hardware, despite being very much at the edge of what was possible at the time, was still not enough. Early Symbolics Lisp machines actually had an M68K along with the 3600 CPU. The M68K (the front-end processor, FEP for short) handled startup and some of the peripherals. Merely integrating Lisp machine hardware was expensive.
Unix workstations were one of the things that killed Lisp machines, but not quite by using off-the-shelf hardware. Symbolics hardware was actually contemporary with M68K-era Unix workstations (LM-2 was launched the same year as the DN100, 3600-series was launched the same year as Sun-1 and IRIS).
By the time that Lisp peaked in popularity in the second half of the 1980s, most big workstation manufacturers had actually moved on to their own stuff (SGI started using MIPS RISC in 1987, Sun started using SPARC in 1988, DEC were doing their VAX thing, IBM’s main workstation offering was trying to make ROMP take off since 1986). When Symbolics really started going downwards, the Unix machines that beat their Lisp machine counterparts used their own proprietary CPUs, too. NeXT were, I think, the only ones doing high-tier workstations with M68K CPUs, and even those were slow enough that they were largely excluded from particularly high-performance market segments (e.g. high-end CAD applications).
Well, do you?
They existed and were exactly what I alluded to. They were never “a thing”
The idiom “be a thing”
Hmm…or you could have just read the next two sentences, where that historical context was provided:
I find this a very strange and hard to parse reply.
From your tone and back-checking a few links, I deduce that this is your blog post, correct? If that is the case then why didn’t you say so? Why do you obscure this simple fact by using the passive voice and so on?
Me, personally? No. I stopped programming when it stopped being fun, which for me was roughly when Windows started to take off.
I entirely disagree. Indeed this line contradicts itself: the second sentence is invalidated by the first. They existed, therefore, they were a thing.
That means that your supposition in your blog post, that Lisp and Smalltalk only exist inside their own isolated boxes on top of other OSes, is false… and you knew it was false when you wrote it and not only that but you admitted it a line later.
This is simply unfathomable to me.
To be perfectly honest, I stopped when I reached that false assertion and wrote my reply. Then, I read the rest before posting, because I’m not 12 years old and I’ve been on the internet for 39 years now. I am not claiming I’m perfect – I’m not – but I commented on the bit that struck me.
I am a bit baffled by your apparently angry response, TBH.
Marcel said (and you quoted!) “They existed and were exactly what I alluded to.” How’s that obscuring?
If I’m not mistaken, you write articles for a living. Surely you understand why an author might be annoyed at someone for “commenting on the bit that struck them” while seemingly ignoring that the point was addressed later?
For what it’s worth, I’m under the impression that you’re trying to pick a fight and I hope I’m wrong. :)
Because, TBH, that’s the moment I realised “hang on, is he saying he wrote this?”
Sure, that’s a fair point. :-)
Wider context for reference: I am a lifelong sceptic and disbelieve in all forms of religion, the paranormal, alternate medicine, etc.
If I read an essay that starts “homeopathy shows us that X can do Y” then I am not going to pay much attention to the rest of the essay that talks about – well, anything really – because homeopathy does not show that. Homeopathy does not work, full stop, the end.
https://www.howdoeshomeopathywork.com/
So when an article about programming languages says “X is never used for Y, and you can’t do Y with X” when I have stood on stage and told a thousand people about how and why doing Y with X is important, yeah, you are damned right I am going to stop right there and say “hey, that is not right.”
Marcel seems to feel that because something was not commercially successful it didn’t happen. That is not true.
Not at all, but I’ve spent over 40 years calling out BS when I see it, and I actively enjoy making people angry by telling them they are wrong.
I am not picking fights. I am trying to point out logical errors and fallacious claims. Not the same thing.
The grown up way to respond when someone points out a hole in your argument is to change your argument. The schoolkids’ way is to get louder defending it.
I think Marcel’s argument rests on a core point that’s wrong. I am only interested in that core point, not what sits on top of it, which seems to be the bit he wants to talk about.
Do that somewhere else.
You could have stopped with your first comment and just said “hey you know lisp machines were actually a bigger thing”, and gone into a bit more detail yourself, but instead just dismissed the rest of the article as lacking historical context.
Context that is utterly irrelevant to the rest of the article. The author doesn’t say you can’t use Smalltalk or LISP to do systems programming. He says that people don’t. Perhaps try being charitable and read this as “people don’t tend to”. Which is not untrue.
Ambiguity isn’t uncommon in writing, and sometimes readers should just be able to infer things themselves from common sense.
Pick one, mate.
And your belief is incorrect. Language moved on and the idiom under contention is listed in Wiktionary.
How’s it feel to be on the other side of that?
Once again: “X is a thing” is an idiom for X being something that was popular.
How many LISP and Smalltalk machines were built and sold, in your opinion?
What market share did they achieve in the workstation market?
Nope. It means “they existed”, “they were real”.
Doesn’t matter. They were on commercial sale, were real products loved by legions of influential people, and inspired important works that you seem unaware of, such as the Unix Hater’s Handbook, which I submit you need to read.
https://web.mit.edu/~simsong/www/ugh.pdf
Irrelevant. This is not a popularity contest. You are attempting to talk about absolutes – “this only runs in this way” – which are not true. Which is why I responded “this article lacks important context.”
Because with every reply you send, you reinforce my impression that you do not understand the bigger picture here.
The author of the post has linked to a discussion about the origin and evolution of the term, where being popular, especially in a surprising context, is one of the accepted meanings of the word. It’s not just the assertion of the person who asked about it, but also one of the quoted definitions:
Early use of the term seems to have been an elided version of “is there such a thing as”, but several later instances of its use obviously lean towards the aspect of popularity. Presumably the secondary sense developed by contamination from the pejorative “a thing”, as in “PCs will be a thing of the past in 20 years”. Regardless, it’s a thing now. See e.g. this entry.
FWIW it’s definitely how I use it, too. Seeing how this is slang, yeah, on behalf of everyone else, I would like to take this chance to not apologize for using it in a way that you disapprove of. I’m sure that, if a whole generation of stationery users woke up one day in a world where “to clip” meant both “to cut” and “to fasten” and managed to get over it, you’ll eventually come to terms with this horrifying misuse of language, too.
Is it ambiguous? Yes. Does minimal charity resolve virtually every instance of ambiguity? For heaven’s sake also yes, is it really not obvious that an article that literally says “attempts to bring machines closer to these languages mostly did not succeed” acknowledges that attempts to bring machines closer to these languages existed? What exactly do you think is missing here and isn’t obvious to a community that has a higher incidence of Lisp per capita than the daily appointment list of a speech therapy clinic?
Definitely not in 2024, as you can check easily by googling for /what does “it’s a thing” means/
I understand that this is probably just meant as a pedagogical excuse to demonstrate @typeInfo, but because of Zig’s somewhat lazy compilation model, this can be written as:
EDIT: fixed formatting and added brace back in.
How much credibility does NIST have these days?
(Asking this as a non-cryptographer. IIRC there have been some reservations about them in the Snowden days).
I guess the best answer is “doesn’t matter”.
The standardization processes from NIST historically came in very different forms. Sometimes, as was the case with the infamous Dual EC DRBG, they were essentially “we specify things the NSA told us, and ignore all the comments we got”.
But there’s a different type of NIST process that was first established with AES, later also with SHA-3, and now with the post quantum stuff. They start with an open competition where cryptographers can submit proposals, and then there’s a long, open discussion process about those. In multiple rounds, the broken or weaker candidates get thrown out. (An interesting aspect of this is also that many people competent in the field have a strong incentive to find weaknesses - because they themselves have algorithms in the competition, and finding weaknesses in their competitors’ entries makes it more likely their own gets chosen.)
The reason you can reasonably trust these new standards isn’t because you “trust NIST”. It’s because over multiple years, it appears no one has found any substantial weakness in them, and you can be certain that most cryptographers competent to do so have tried.
With SHA3 they still found a way to make a questionable choice in the first draft! They picked the parameters of the fast version mentioned briefly in the original submission while most reviewers treated the safer/slower version as the main one. (To be fair, they did discuss it with the original authors who did not find it an unreasonable choice, but applicability of some public comparisons between candidates was lost) But indeed the process was open enough that this choice was visible immediately, called out, and rolled back.
Yeah, the problem there was essentially that they had requirements that, if met, made SHA-3 slower than it had to be. But changing it after the competition didn’t look great. And in the end, we have SHA-3 that few people use, because SHA-2 is still good (the length extension attack is the only weakness, and that doesn’t matter 99% of the time), and if you want something fast, you use blake2 or blake3.
That was all unfortunate, but I don’t think it changes the fact that no one really has any security concerns around SHA-3. It’s just that there are other algorithms that have advantages, and that no one has security concerns about either.
Funny enough, the PQ standardization may actually be the first widespread use of SHA-3 (although in the SHAKE variant, which I believe has the reduced parameters).
Yeah, SHAKE256 and SHAKE128 have the same parameters as SHA3-256 and a hypothetical SHA3-128, but with any size output, so by choosing an output size of 512 or 256 respectively rather than 256 or 128, you get the increased collision resistance without paying the cost of SHA3-512 or SHA3-256.
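For illustration, one way to see the XOF aspect in code; a sketch using OpenSSL’s EVP interface (1.1.1 or later), where the caller simply asks SHAKE256 for 64 bytes of output:

```c
#include <openssl/evp.h>
#include <stdio.h>

int main(void) {
    unsigned char out[64];                       /* 512 bits of SHAKE256 output */
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_shake256(), NULL);
    EVP_DigestUpdate(ctx, "hello", 5);
    EVP_DigestFinalXOF(ctx, out, sizeof out);    /* squeeze as many bytes as you like */
    EVP_MD_CTX_free(ctx);
    for (size_t i = 0; i < sizeof out; i++)
        printf("%02x", out[i]);
    putchar('\n');
    return 0;
}
```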
SHA-3 as standardized could’ve been faster and just as safe if it hadn’t been for the ridiculous requirements they had.
You’re right, NIST have been criticised in the past for choosing suspicious numbers for their crypto standards and not saying where they came from.
For DES, the magic numbers turned out to have been chosen to mitigate against differential cryptanalysis, which only they knew about at the time. It also seems to have had an intentionally weak key length, though.
For Dual_EC_DRBG the numbers were suspected to have been chosen to open up back doors.
The latter prompted the creation of Curve25519 which I think is more widely trusted. It recently made it into FIPS which is great.
I think generally people still scrutinise the NIST recommendations, but will aim to use the good ones since some (especially government) organisations can’t use things that aren’t FIPS certified.
Here is a post from a cryptography expert (djb) who is generally critical of NIST’s processes, but whose Curve25519 and SPHINCS+ work is now in FIPS: http://blog.cr.yp.to/20220805-nsa.html
An understatement if there ever was one.
Curve25519 dates from 2005 and Dual EC DRBG dates from 2006.
The Dual EC DRBG scandal led to doubts about the provenance of the magic numbers in the NSA/NIST elliptic curves (p256 etc.), which led to much wider interest in nothing-up-my-sleeve parameter selection. Curve25519 had already addressed these problems years before, so it became more widely deployed in parallel with official standardization. The standards work took a regrettably long time because they went on a side-quest to develop Curve448, a bigger nothing-up-my-sleeve curve with similar performance-oriented design considerations to Curve25519.
In light of today’s events, I’m appreciating the fact that I haven’t worked in cloud support for several years (I moved to web backend, briefly worked on compilers while contracting, and now robotics/linux/systems development), and that my current organization doesn’t use CrowdStrike.
using css ::before also solves this problem:
.shell::before { content: "$ "; }
as when you copy and paste, the $ is not picked up
My solution is pre.code .token.shell-symbol { user-select: none }, which prevents text selection. There is a trade-off here… ::before means folks coming in on TUI browsers, crawlers, or otherwise don’t get the long-standing sigil to denote input for the shell session, but my solution means those copying text from browsers without CSS support won’t get the user-select: none. The writer of this post is also probably using the incorrect syntax highlighting of bash (assuming the language-bash CSS class) instead of console, sh-session, or shell-session, which means we can’t see what tokens we might want to use for either the user-select or the ::before solution.
Perennial question for the writer: why does this static post require JavaScript to get its content? I got a completely blank screen when I visited the post.
Why does mac.install.guide require JavaScript? The content is rendered from Markdown files using web components. It’s vanilla JS and standard HTML, no frameworks or build process.
Wouldn’t it be more performant & accessible for a server to build it once & host it statically for 10,000 users than to have 10,000 users download an entire parsing library & build the same output 10,000 times (assuming their user agent even supports JavaScript) when the content isn’t dynamically changing? This setup sounds wasteful.
The web component in question is called <yax-markdown> … I guess there’s a mini framework built around this by the folks at Yax.com?
I’m the folks at yax.com. There’s no framework, just the lit-html library for the web components. A few years ago I was advocating a buildless, framework-less approach using web components when I started writing the mac.install.guide tutorials. Turns out the beginner tutorials got a lot more interest than the web development experiments and I got sidetracked. For the sake of efficiency and compute cycles I’m planning to render the site statically, but it’s not a user demand, just on my wish list.
You love to see it
Why not just render it statically rather than having your readers compile it from markdown to HTML every time it loads, wasting energy and compute? (admittedly a small amount, but it’s the principle)
I heartily agree with the principle and was chasing a plan to use the Lit server-side rendering in a GitHub action to render a static site from the web components. Got sidetracked with other projects but it’s still a good idea. Now that I’ve figured out how to fix the ‘command not found: $’ error :-)
that’s a neat approach
Clever solution!
Does that not put a $ at the front of every line, including the ones that are output from the commands? Or do you put each command line in a span?
it puts $ only in front of the first line in the block-level element using the class="shell", which for me is usually <code class="shell">some code goes here</code> as shown: https://www.schoolio.co.uk/diary/2024-06-13/
Ah, thanks. A lot of my shell blocks have more than one command in them.
Same. It’s such an “almost perfect” solution.
I use :; as my prompt so that I can C&P entire lines. But it has the disadvantage that readers who recognise $ as a prompt usually won’t recognise :;.
Mine is : $(short_pwd) ; where short_pwd is a function that’s just pwd but with $HOME abbreviated to ~, and all directories but the last abbreviated to a single letter, so if I’m in /home/joe/projects/linux/drivers/dwc2, my prompt looks like:
And when I switch to a root shell, the color changes to red.
cd $PWD; would be a cool prompt. That would let you copy entire commands and be placed in the directory where the command was run.
My prompt is just a newline, works great.
on zsh there is an option where you can omit the cd :-)
It’s interesting that ARMv7 is remaining. The only 32-bit machines I have that I might plausibly use are first-gen RPis, which are v6 (as is the Zero). The only v7 machine I have is a laptop with an 800 MHz Cortex A8, which was painfully slow even when new. I wonder what the v7 machines are that people think are worth the cost of maintaining 32-bit kernel support for another 8 years.
The thing I’d really like to see is a port of Mambo so that we can run v7 processes (fast) on modern AArch64-only cores.
I’m surprised too. The original 32-bit Pis might be the reason - most ARMv7 devices are an unstandardized mess; ~20 year old PC hardware is a lot easier to get your hands on, easier to install on, and is faster than the average ARMv7 device.
Do you have any more information about Mambo? The only similar sounding piece of software I can find is this Mambo, which is just a dynamic binary modification tool for multiple RISC architectures, and doesn’t appear to allow execution of v7 processes on AArch64 cores.
I think that’s the one, I might be confusing the subprojects. I think HyperMambo might be the thing that used it to emulate AArch32 on AArch64. They had a nice paper at VEE in, I think, 2017, showing that it outperformed the native AArch32 mode in some chips that shipped both, which helped motivate Arm to drop 32-bit support in newer hardware.
Reminds me of the Captive DBT hypervisor - the most interesting part I found in it was using the HW MMU to accelerate page table translations. I always wanted to port it to Qemu or something to run amd64 Linux under AArch64 macOS, but that’s kinda niche now that Rosetta Linux exists (and now has AoT caching) and FEXEmu supports TSO. Now I just need to finish hacking up macOS to enable the private entitlements to allow for TSO under Hypervisor.fw instead of relying on, ugh, Virtualization.fw.
https://dl.acm.org/doi/10.1145/2996798
https://github.com/avisi-group/captive
The ARMv6 of the Pi was already outdated when it came out. Most Pi competitors used ARMv7, and lots of them are still available to purchase.
They were obsolete, but they outsold everything else by a depressing margin. The BeagleBone Black was the reference platform for 32-bit Arm support on FreeBSD and was a much better device, but sold a fraction of the number. The last 32-bit A profile that Arm made available to license was in 2014, so most other things also quickly moved to 64-bit. The A53 was cheap both to license and to fab.
Yeah, it’s a shame that the BBB was significantly more expensive than the Pi. Among all of the other good things it had going for it, the thing that made me very happy about the BBB is that the processor and supporting bits were:
The Pi is a great prototyping platform but hasn’t really until recently had a clear path from prototype-to-product (the compute modules and friends do help with that but even then… supply chain uncertainty still makes it tough). The BBB hit the sweet spot of “it’s a great dev tool but it’s also a great reference design” in a way that the Pi did not. With the introduction of the Octavo SiP, that got even easier: https://octavosystems.com/octavo_products/osd335x/
I think this is getting even worse for the RPi ecosystem because the SoCs are increasingly custom. The original RPi was an off-the-shelf Broadcom chip, so you could write code for it and then just buy the chip (which Broadcom started making again because RPi wanted to buy a load of them) and put it on your own board. With the newer ones, the SoC is custom and so any code that you write for them requires you to integrate the board in your product.
There were a bunch of things using TI OMAP3 SoCs that used the Cortex A8 around when the RPi launched and these SoCs were easy to buy in bulk and put on a custom board, which made the other boards a lot better for prototyping high-volume things (the final one needs different wiring from the pins to the peripherals, but the software is the same).
Yeah, that’s true about the custom Pi SoCs now. I suppose it’s switched around now. The thing about the old Pis with the Broadcom chips was that one… doesn’t simply “buy some” unless you’re ordering enough that they’ll execute an NDA and blah blah blah vs. with TI where you can just order Qty 10 off of digikey for your first prototype run.
At least with the compute modules now you can order 100 of them and expect them to arrive in a relatively timely fashion. With the original Pi boards, even if you wanted to integrate them into a product most vendors would only let you order 1 or 2 at a time.
I’m curious about the OMAP chips now! For custom embedded Linux I’ve mostly used the Motorola/Freescale/NXP i.MX line of processors. They’re nice like the TI ones, with open datasheets, readily available supply, etc., but just don’t really have the same small-and-compact-and-cheap prototyping kit that Pi and BBB provide.
Pretty sure no one actually did that. I’ve only ever heard of people sticking a full RPi in a box.
Yeah, I can respect that Broadcom has chosen a specific market to work with (high-volume high-budget bigger companies) but unfortunately that is not the market I’m in :)
Right, I remember distros at the time were starting to wind down ARMv6 support until the Pi came out.
This kind of thing happens. Do not stress about it: nobody was ever injured by not being able to access lobsters.
It is also good to know there’s a backup domain.
Thanks a lot for all the work. Keep it cool for a few days to recover from the adrenaline rush.
If anything, it probably resulted in a net productivity increase.
What role does KVM play here? Isn’t KVM only necessary as an accelerator for running x86_64 VMs on x86_64 hosts? I don’t believe KVM is actually used in this setup, please correct me if I’m wrong.
You’re correct. KVM is not used here at all.
Thanks, it seems I have mixed up some of my terminology. As I understand your comments, I think you are saying that KVM is only used for HVM guests, i.e.: the guest uses the same instruction set as the host, in my case x86_64. When the guest uses a different instruction set to the host then emulation, and not HVM (i.e. KVM) is used. Do I understand that correctly?
Yes. Or rather: When the guest and the host use the same instruction set, KVM can be used - there is still no guarantee, and you might be doing ISA emulation (as opposed to HVM) which is slower. The thing actually doing the ISA emulation in this case is QEMU, not KVM.
Historical note: QEMU in fact started out as a pure ISA emulator way back in the day and grew the ability to use hypervisors like KVM later on (this combination is referred to as QEMU/KVM). But you can still use it for ISA emulation just the same, as many people do with qemu-user to run e.g. aarch64 or risc-v userland things on their x86_64 computers.
Thanks for the clarification. I think I need to adjust my article…
KVM could offer a VM boundary rather than just a process boundary.
What is the difference? VMs are largely processes with funny looking syscalls.
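As a rough illustration of that point (a sketch only; guest memory setup and error handling omitted): a KVM guest is just an ordinary process opening /dev/kvm and issuing ioctls on file descriptors.

```c
#include <fcntl.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>

int main(void) {
    int kvm    = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    int vmfd   = ioctl(kvm, KVM_CREATE_VM, 0);    /* the "VM" is just another fd */
    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0); /* so is each vCPU */
    /* ...map guest memory with KVM_SET_USER_MEMORY_REGION, then loop on
     * ioctl(vcpufd, KVM_RUN, 0), handling exits like any other syscall. */
    (void)vcpufd;
    return 0;
}
```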
Uh, hm, you’re probably right.
After this in the past few months, I do not want to hear about this joke of an operating system ever again, but I’m sure I will have to.
Isn’t Linux pretty much the only OS that has syscall level stability? Windows and macOS don’t, for example.
AFAIK FreeBSD has the syscall ABI documented … Someone else probably has a better link, but:
https://old.reddit.com/r/freebsd/comments/v4k4e5/how_good_is_freebsds_abi_stability_compared_to/
Windows does have binary compatibility via system DLLs rather than syscalls, while OS X and OpenBSD apparently don’t have ANY kind of binary compatibility.
i.e. this is an ABI vs API distinction – Windows has an ABI but OS X and OpenBSD don’t – or what I call exterior vs. interior interfaces
Also, FreeBSD implements Linux’s syscall ABI - https://docs.freebsd.org/en/books/handbook/linuxemu/
I remember ~10 years ago that Illumos ported the FreeBSD Linux emulation support to run Docker containers on Illumos!! That tidbit is mentioned here - https://www.oilshell.org/blog/2022/03/backlog-arch.html (an abstract post in which API vs ABI is central)
FreeBSD has syscall ABI compatibility via optional compatibility modules. But it is discouraged, you should be using libc.
which resulted in one of three great lightning talks by Bryan Cantrill. (the other ones are great, too)
You just call things through libc… which is subject to ASLR.
This is my understanding (I may be wrong), but what is wrong with these protections?
What’s the matter?
It basically forces programming languages to bind to libc, which is becoming unpopular due to C’s legacy and lack of safety. This article has more information. As far as I know it impacted Go, but it’s probably been addressed by now?
I think you have a grave misunderstanding of the scale of the problem. The C wrappers around syscalls are barely more than stubs for the most part, and C’s safety hardly plays into it.
What issues do you think C’s lack of memory safety could possibly cause in this situation that raw syscall access would not?
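To make the “barely more than stubs” point concrete, a hedged sketch of what such a wrapper boils down to on Linux (my_write is a made-up name; real libc wrappers also set errno and act as cancellation points):

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

/* Thin stub around the kernel entry point: there is very little surface
 * here for C's memory-unsafety to matter more than a raw syscall would. */
ssize_t my_write(int fd, const void *buf, size_t count) {
    return syscall(SYS_write, fd, buf, count);
}
```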
I think the problem is that most people and/or systems don’t make a distinction between “libc” the C standard library, “libc” various platform-specific extensions to it (GNU, BSD, etc.), and “libc” the POSIX syscalls and such that most people actually use. C learning materials seldom make the difference clear, especially on Unix systems, and they all tend to be bundled together into libc.so and treated as if they were all the same thing.
Thus you get the situation where the C standard library is pretty terrible and has many known-broken things in it that will probably never be fixed (like gets() and puts()), the platform-specific extensions to it usually fix most of those things in various incompatible ways (like OpenBSD’s arc4random() replacing rand()), and the POSIX syscall functions like write() are entirely separate animals. When people want to dodge libc, what they usually really want to avoid is all the standard-library bits that are useless or worse than useless, and most of the extension bits that are incompatible and will be replaced by the language lib anyway.
There’s an open PR on FreeBSD to split these apart and provide a libsyscalls that libc links to (and is a filter library). This is, in part, motivated by CHERI wanting to replace the syscalls layer for libc running in a compartment to make it call proxies that allow the host process to interpose (if you try a raw syscall, it will simply fail when sandboxed).
“Most people” includes POSIX, because POSIX does not make a clear distinction between system calls and library functions. (nor does C, but C has relatively few pure syscalls in its standard library)
gets() has, amazingly, been removed from C.
Why would C’s legacy contribute to the unpopularity of binding to libc?
I feel like meaningful vulnerabilities due to C’s use in libc are few and far between. I don’t see the big deal here at all.