Threads for weinholt

  1. 3

    Interesting slides, thanks @amirouche! I did my master’s thesis on an Erlang supercompiler. It can theoretically do the Futamura projections, so I’m a bit familiar with them, but I had no idea they were used in practice in GraalVM. Very cool.

    I want to mention a related cool thing, which is checking properties of programs. If you have the procedure f and you want to find under which conditions it returns an even number, then you can partially evaluate (lambda (n) (even? (f n))). When doing this the program essentially runs with an abstract n argument rather than a concrete number, and the code after partial evaluation will be a new program that tells you under which conditions an even number is returned. Basically f will disappear and only those parts of it that were relevant to answering even? should remain. You’d need something like a supercompiler to get the full effect.
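    To make that concrete, here’s a hand-rolled JavaScript sketch (illustrative only, nothing to do with the thesis code) of the classic warm-up for partial evaluation: specializing pow on a statically known exponent so the residual program has no loop left.

```javascript
// Classic partial-evaluation warm-up (illustrative sketch): specialize
// pow(base, exp) for a known exp. The loop on exp is executed at
// "specialization time"; only the work depending on base remains.
function pow(base, exp) {
  let r = 1;
  for (let i = 0; i < exp; i++) r *= base;
  return r;
}

function specializePow(exp) {
  // Unroll the loop now and emit the residual expression as source text.
  let expr = "1";
  for (let i = 0; i < exp; i++) expr = `(${expr} * base)`;
  return new Function("base", `return ${expr};`);
}

const cube = specializePow(3); // residual: base => (((1 * base) * base) * base)
console.log(cube(5) === pow(5, 3)); // true
```

    The even? example is the same idea run the other way around: the argument stays abstract while the specializer executes everything that doesn’t depend on it, leaving a residual program for the rest.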

    1. 2

      Thanks for the pointers.

      I am going through pebook [0] and plan to apply what I learn on Kernel.

      [0] http://www.itu.dk/people/sestoft/pebook/

    1. 2

      You can set up Tor to access your SSH port through a Tor hidden service. I’ve used this in the past to access my home server, and there was even Tor support in the SSH app I used on my phone at the time. Zero cost, but low bandwidth and latency might be an issue.

      1. 2

        It is amazing how complex io_uring has become in just a few years.

        1. 2

          For another take on naming systems, I like The Hideous Name

          1. 2

            That’s an interesting piece of history there by Pike and Weinberger. They mention some things about the “Ninth Edition” of UNIX, which I guess went into Plan 9?

            They were very much in favor of the UNIX convention of slash-separated identifiers. These types of names only seem simple; they are deceptively complicated. Just look at how many bugs there have been over the years: web servers that are tricked by /../../../../etc/passwd, archive tools that write to arbitrary directories. CWE-22 is ranked 8 in the CWE top 25 this year. Filenames are just slash-separated identifiers, so it’s easy to think that you just need to split on / and call it a day. If you want to append two Unix paths and you use a string append to do it, then it will appear to work, but you probably have a serious bug.

            What I think this comes down to is a shell-centric mindset. When you have a text-based shell then you need these little strings that get reinterpreted by the kernel into something that it can actually work with. Filenames are a DSL for referring to things in the filesystem, and IMHO it’s a pretty bad DSL we’ve ended up with. It is obviously insufficient or we wouldn’t have syscalls like Linux’s openat2(2), which modify the implicit semantics of the DSL.

            The paper argues against anything that is not Unix. It’s also arguing against the Internet and Internet mail.

          1. 4

            The global 64-bit address space is not linear; it is an object cache addressed with an (object + offset) tuple, and if that page of the object is not cached, a microcode trap will bring it in from disk.

            I’m having trouble understanding how an address could be not linear. You can view a linear address as a (page + offset) tuple. So what makes (page + offset) linear, but (object + offset) not linear?

            Does the object cache prevent you from misinterpreting a number as an address? Or misinterpreting a large offset as a different object?

            1. 8

              In most of the systems that I’ve seen along these lines, there’s a linear address space somewhere. The thing that phk is complaining about is, I believe, linear address spaces as a programmer abstraction. In particular, virtualised linear address spaces as a programmer abstraction. As a programmer, I want objects and pointers to (offsets within) objects. The fact that the hardware exposes a linear address space to me and then does a virtual-to-pseudophysical and then pseudophysical-to-physical translation doesn’t really buy me much, other than a (quite large, power-and-performance) overhead from page-table walks and TLBs.

              He’s glossing over a lot of the details though. Something needs to allocate objects. Virtualised linear address spaces make it easy to provide a separation of concerns between allocators. The hypervisor allocates pages to the VM, the VM allocates pages to the process, and a userspace malloc assigns objects to user code. If you have an object-based memory then the object allocator must be shared. This is a bit scary for security (it’s now a part of the TCB that can violate both process and VM isolation if it’s wrong) and it has to be generic. Different languages have markedly different allocation patterns and there’s a lot of performance gained by tuning an allocator.

              For security, if the allocator is used across security contexts then it must also enforce temporal safety. Building a set of abstractions that make this safe is hard. Systems like the B5500 made the allocator and GC part of the OS (and made the compiler part of the TCB: you could break process isolation if you were able to run arbitrary unprivileged instructions). This comes with some overhead. Even among managed languages, it’s hard to build a GC that’s efficient for everything.

              Any system that wants to build GC into its TCB needs to do one of two things:

              • Provide a tracing mechanism that scans all of memory.
              • Provide an indirection table that is used for all memory accesses.

              A lot of the historic object-memory systems picked the latter and were killed by RISCy systems that didn’t because their load-to-store penalties were much higher. For CHERI, we opt for the former and have done a load of work to optimise this. Even so, we gain a huge benefit from the virtualised address space because a memory capability is relative to some specific address space and so we don’t need to scan all of physical memory (actually, physical memory + swap) to determine that there are no capabilities to an address, we ‘just’ need to scan a process’s address space (and, within that, only the pages that have had pointers stored to them, which typically ends up being around 20%). That dramatically increases revocation throughput relative to a multi-tenant single-address-space object-memory system.

              Project Maxwell from Sun Labs is probably the most interesting example of a modern object memory system but it was tailored quite aggressively to Java and explicitly placed the JVM in the TCB. I don’t think you’d want to run code from two distinct security contexts in the same object memory space, even with it. In particular, Maxwell did young generation GC in the cache in hardware and provided hooks for old-generation GC in software. Every user needs to trust the software GC. A Java GC is a lot more complex than the small subset of a hypervisor that’s required to maintain page tables for tenant isolation.

              1. 5

                Here’s a clue:

                Given Ada’s reputation for strong typing, handling the type information in hardware rather than in the compiler may sound strange, but there is a catch: Ada’s variant data structures can make it a pretty daunting task to figure out how big an object is and where to find things in it. Handling data+type as a unit in hardware makes that fast.

                A page address is just a number, while an object has a type and a size. This means the hardware can check that the object is being used in a proper manner, and particularly that the access is not outside of the object.

                The Rational R1000 looks interesting and I’d like to know more, but it’s not the only machine with this kind of addressing. I think the Burroughs B5000 did something similar, with hardware support for arrays, and the Intel iAPX 432 is another one with object-oriented memory. I’m sure the various Lisp machines also did this sort of stuff.

                The C language is actually designed to work on these types of machines. If you convert a pointer to an integer then you lose some information, and can’t convert that integer back to a pointer to the same object as you started with. (Except in implementations that support that sort of conversion, of course). On the Lisp machine I think they represented C pointers as a pair of (object,offset) and converting that to an integer meant you dropped the object reference.
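                A toy model of such a pointer, with all names invented for illustration:

```javascript
// Toy model of a segmented C pointer as an (object, offset) pair, in the
// spirit of the Lisp machine representation described above.
class CPointer {
  constructor(object, offset = 0) {
    this.object = object;  // backing storage with a known size
    this.offset = offset;
  }
  deref() {
    // The "hardware" bounds check: access outside the object traps.
    if (this.offset < 0 || this.offset >= this.object.length) {
      throw new RangeError("access outside object");
    }
    return this.object[this.offset];
  }
  add(n) { return new CPointer(this.object, this.offset + n); }
  // Casting to an integer keeps only the offset; the object half of the
  // pair is dropped, so the integer can't be cast back to the same pointer.
  toInteger() { return this.offset; }
}
```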

                I have some small experience of my own with hardware-checked pointers and I think they’re great. My Scheme implementation’s runtime (Loko Scheme) runs on AMD64 with alignment checking turned on and uses the low bits in pointers for hardware-based type checking. I’ve found numerous bugs, even deep in the runtime (like in the GC), because the hardware automatically trapped a reference where it e.g. had tried to use a fixnum as a pointer to a procedure, or tried to use a vector as a pair. Without the hardware checking these things for free on each memory reference, it would have corrupted some memory or read from some weird location, and crashed much later in some incomprehensible manner. (It’s only a few select type checks that can be supported with alignment checking on AMD64; range checking needs extra instructions).
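                The low-bit trick can be sketched like this (the tag assignments are invented for illustration and are not Loko’s actual encoding):

```javascript
// Sketch of low-bit pointer tagging: with 8-byte-aligned objects the low
// three address bits are always zero, so they can hold a type tag.
// Tag values here are invented, not Loko's actual encoding.
const TAG_MASK   = 0b111;
const TAG_FIXNUM = 0b000; // tag 0: fixnum arithmetic works untagged
const TAG_PAIR   = 0b001;
const TAG_VECTOR = 0b010;

const tagOf = (word) => word & TAG_MASK;

// A typed dereference: the wrong tag means a misaligned address, which
// real hardware (AC flag set on AMD64) turns into a free trap.
function untag(word, expectedTag) {
  if (tagOf(word) !== expectedTag) {
    throw new TypeError("misaligned access: wrong type tag");
  }
  return word & ~TAG_MASK; // strip the tag to recover the aligned address
}

const pairWord = 0x1008 | TAG_PAIR; // a "pair" at address 0x1008
console.log(untag(pairWord, TAG_PAIR).toString(16)); // 1008
```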

                1. 1

                  AIUI pages are within the same address space, while an object is its own address space.

                  Two (page + offset) tuples, with unique page values, can refer to the same underlying memory by tweaking the offset.

                  Each unique object is its own address space. The hardware isolates the objects as opposed to all the software mitigations we use now.

                1. 15

                  I think the headline is somewhat misleading. The problem isn’t the lack of a stable Linux userland ABI. Linux has a stable system call interface. Glibc has a stable ABI with aggressive symbol versioning. The problem is that open source desktop environments (which are typically not specific to Linux, though most of the development tends to be there) don’t provide stable ABIs.

                  X11 has provided a stable core protocol for decades but the core protocol isn’t really sufficient for anything. It took a long time for damage, composite, and render to be supported. The DRI / Mesa driver stack is large and can cause problems on updates. KDE and GNOME core frameworks change their APIs quite significantly every few years.

                  Generally, the people who make money from Linux and *BSD do so in the server space. Stable ABIs are common there because paid engineers lose time and their employers lose money every time that they change.

                  1. 6

                    I think what Linux has as a stable ABI (system calls, maybe glibc) is adequate for embedded and server contexts - Docker is arguably close to being the server stable ABI anyways.

                    I think your second paragraph is where I’m at for desktop - the “base platform” on Windows includes a stable ABI for not just “here’s how to put a frame in a window” that libX11/a Wayland client gives you, but basic GUI constructs, 3D rendering, audio, shell integration like file dialogs, etc. In the fd.o world, this may not be consistent or stable. That’s, again, a story for another day though…

                    Generally, the people who make money from Linux and *BSD do so in the server space. Stable ABIs are common there because paid engineers lose time and their employers lose money every time that they change.

                    I think a lot about how proprietary software can be more practically extensible than something open source due to providing stable interfaces rather than “have you kept your patches up to date :)”.

                    1. 7

                      I can’t say much about the server space, but on the embedded end, up until a year or so ago, when I last used it, it was a very hot dumpster fire. There are consulting companies out there with teams of hundreds of people whose jobs consist of literally nothing other than matching various package versions, cherry-picking patches and diffs to assemble Frankenstein versions of the Linux kernel, drivers, and major userspace pieces (PulseAudio, various AV codecs, various compositors) in order to match drivers and/or kernel changes in vendor-supplied BSPs. That’s why so many embedded Linux gadgets run five-year-old, unpatched kernels and userspace components.

                      With some exceptions, at least on price-sensitive hardware (not only consumer, but also e.g. automotive hardware that isn’t safety-critical), as soon as you’re out of kernel land, you’re exactly as screwed as you are on a desktop. That includes a huge range of hardware, from IoT and industrial machinery to medical and infotainment systems.

                      1. 2

                        Some of us working in embedded are trying to change this. Debian works very well as a base, and then you just also need to pick a tool to build the images. Debian is very well tested, integrated, and maintainable, much more so than some ragtag BSP with Linux 3.14 (which is actually what the vendor supplied for a board I’m working with now).

                        1. 4

                          I know it does, and it’s actually what I used for my own tinkering for a while. That being said, in 10+ years of this stuff, I don’t think I’ve ever seen it used in commercial projects. At this point, barring occasional RPi-based devices, it’s Yocto virtually everywhere, with buildroot now and then.

                          Man, just writing Yocto makes me nauseous. brb I gotta go puke :(.

                          1. 2

                            I can’t upvote this enough. Yocto makes me want to curl up into a ball.

                            1. 1

                              I feel you, but yocto and buildroot are so much better than what we used to have. I used to customize fedora to do the same thing as yocto in a more half-assed way back when it was called “fedora core” and the version numbers were in the single digits. And that was a massive upgrade over the crap we used to suffer through to ship some vertical stuff atop a zaurus PDA in 2002 or 2003. (Thankfully, we did not try to ship it atop a general purpose zaurus; we bought the PDAs and sold them configured with our images and a custom CF card as part of the price of our services.)

                              But there are things worse than yocto and I’ve had to live with more than one of them. Just pointing that out makes me nauseous…

                          2. 1

                            This creates some perverse economic incentives. This kind of thing is far less of a problem for the folks building embedded systems using FreeBSD. As a result, these companies need to hire fewer engineers. As a result, there are fewer experienced FreeBSD appliance developers on the job market than experienced Linux appliance developers. As a result, you’re better off choosing Linux for your appliance because you can more easily hire experienced people to work on it. The consulting companies that I know of that support both are very happy when people choose Linux because it means more work for them.

                            1. 2

                              Yeah, this is one of the many case studies that have led me to the conclusion that “the next big thing” in any field is usually not the best thing available, but the mediocre solution that does not have any immediately obvious shortcomings. These are the technologies that proliferate: they are good and efficient enough that they can solve actual problems, especially in early development stages, but they do so tediously enough that they aggregate so many people that they build momentum. Or, to put it another way, the technologies that self-inflict the most problems, while still being sufficiently good to solve enough of these problems as to save face.

                              1. 2

                                I think there’s a related trend: technologies are often built to solve the shortcomings of the things in the layer below. Docker addresses shortcomings that are fairly specific to Linux distros. GitHub is successful in large part because git has such a terrible UI that there’s a big incentive to use extra tooling on top of it. Disruptive technologies are less likely to emerge on top of good platforms.

                            2. 1

                              Realistically I don’t think fd.o world is common in embedded anymore. Android (and other things) have been increasingly displacing it. I loved my N900, but….

                              1. 1

                                AFAIK there’s still a very non-trivial amount of low-cost(-ish) devices running QML apps, or a web app, and they’re running on top of Linux for a whole variety of reasons that I really don’t understand in detail, because I haven’t written Android code in, like, forever, and things that have significantly more I/O than a handful of LEDs weren’t really my thing. Probably fewer and fewer, though.

                                Ironically enough, these things solved the stable platform problem the same way: they all ran on top of Qt, or a time-frozen JS stack on top of WebKit or whatever. But both of these have a lot of non-trivial dependencies. The simple act of assembling all of them into a working image is a difficult enough task that it’s a full-time job for so many people that it’s not even funny.

                        1. 1

                          If it is running U-Boot and Linux then why do you even need to reverse engineer it? These are licensed under the GPL, just get the source code. What am I missing?

                          1. 1

                            U-Boot and Linux are only a fragment of what was there, and in this case hardly the interesting bits. As for “just get the source code”: embedded vendors that provide turn-key solutions for what is essentially piracy might not be the best in the world at GPL compliance, among other things. As Naomi Wu demonstrates here (YT video), you might sometimes need to go to fairly extreme lengths.

                            The agenda in this case was modifying other parts of the system, and there wasn’t exactly a support plan for doing so, hence the reversing aspect; you don’t know what is there until you look. The procedure, tools, and walkthrough here are fairly basic 101 stuff, but applicable to a fair amount of common off-the-shelf embedded hardware, should anyone want to get their feet wet.

                            1. 1

                              Ah. I did not look so deeply at the product, so I didn’t realize it’s not authorized by the owners of the games. It’s still completely unnecessary for them to not release their Linux and U-Boot sources, though; there’s nothing to gain from withholding them, but that is their own karma they are creating. Naomi Wu is awesome, FWIW.

                              I work with embedded Linux and sent the article to my colleagues. It’s great to know these methods, even if you supposedly have the sources and documentation from the vendor.

                          1. 4

                            I was about to rant about how painful the save dialog is, but then I read the last line:

                            As other people noted in the comments, having a New menu is handy because it lets you create the file directly where you want it, saving you the trouble of having to navigate through the Save As dialog just to get back to where you started.

                            I have to wonder why save dialogs have to be so painful. Saving something on Windows could open one of about 4 or 5 different save dialogs, most of which are just kept around for backwards compatibility, without even counting the custom ones (e.g. GIMP). I think there’s only one that lets you paste in the path to a directory so you don’t have to navigate all the way back to a folder you have open in another window.

                            Only one of the file pickers supports thumbnails (on Linux only the KDE one does, unless something changed recently) and I think not a single one lets you drag and drop a file from another window into the picker to select it: some ignore it and some move it into the directory that’s currently open on the picker.

                            1. 9

                              The standard WPF dialog isn’t too bad. The trouble is Microsoft has started hiding even that - in the current version of Word, if you start a new document and hit Ctrl+S it shows a custom UWP dialog offering only some frequently used locations. You have to click a small “More Options…” link (not even a button), and then “Browse”, and finally you are rewarded with a full-featured save dialog. This change drove me to using the New menu only recently.

                              1. 7

                                I have to wonder why save dialogs have to be so painful. […] navigate all the way back to a folder you have open in another window.

                                On RISC OS the save dialog is just a filename entry and an icon that you drag to the folder where you want to save the file. I only know this because I installed RISC OS on a Raspberry Pi 400 yesterday, and it’s pretty neat. This particular aspect is just drag-and-drop, but I can’t remember seeing another system where you drag-and-drop from programs to folders; it’s usually the other direction. It’s surprisingly nice to use, and the system doesn’t need to invent a second (or fifth) way for the user to browse the file system.

                              1. 13

                                One thing that really surprised me is that symbols are not interned

                                I’m surprised that this is surprising. Code is made up of symbols; code must know what file/line it came from in order to produce stack traces. If two instances of the same identifier were always identical, how could you determine its origin in order to produce a useful stack trace?

                                A favorite gotcha of mine is that integers are not automatically promoted to bignums like in most Lisps that support bignums.

                                This is really frustrating to me too and feels like a missed opportunity. I’d much rather have safe math be the default and fast math be opt-in.
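                                JavaScript happens to be a compact illustration of the same trade-off: the default fixed-width path fails silently, while the opt-in bignum path (BigInt) stays exact:

```javascript
// The trade-off in miniature: default Number math is fast but silently
// loses exactness past 2^53, while BigInt (opt-in, like non-promoting
// Lisps' explicit bignum ops) stays exact.
const n = Number.MAX_SAFE_INTEGER;              // 2^53 - 1
console.log(n + 1 === n + 2);                   // true: fast math went wrong, no error
console.log(BigInt(n) + 1n === BigInt(n) + 2n); // false: bignums stay exact
```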

                                There doesn’t seem to be a good rationale about why vectors are used instead of regular lists, though.

                                This is a really subtle benefit, but the advantage of using binding vectors is that when you see an open paren, you know that it means a function or macro is being called or defined! It leads to a lot more consistency and predictability than other lisps. Unfortunately there are a handful of exceptions in Clojure, but they’re fairly rare.

                                Now, this may sound weird from a Scheme programmer, but I never fully bought into the REPL style of developing. Sure, I experiment all the time in the REPL to try out a new API design or to quickly iterate on some function I’m writing. But my general development style tends more towards the “save and then run the test suite from an xterm”

                                One big difference is that Clojure’s vars and namespaces were designed from the ground up to support reloading in a way that allows the new code to be visible even retroactively to code that’s already been loaded and even stuff that’s currently still running on another thread. I’ve never seen this done so seamlessly outside BEAM; it’s really a great design. Once you get used to it you get kinda spoiled and working without a repl feels like a huge leap backwards.

                                This is made extra painful by “nil punning”. For example, when you look up something in a map that doesn’t exist, nil is returned.

                                This is definitely the biggest source of errors in Clojure by a long shot. I understand why it’s done this way but boy is it nice when I work in Racket to not have to think about nil.

                                1. 2

                                  Code is made up of symbols; …

                                  From R7RS 6.5. Symbols

                                  Symbols are objects whose usefulness rests on the fact that two symbols are identical (in the sense of eqv?) if and only if their names are spelled the same way. For instance, they can be used the way enumerated values are used in other languages.

                                  The rules for writing a symbol are exactly the same as the rules for writing an identifier; see sections 2.1 and 7.1.1. It is guaranteed that any symbol that has been returned as part of a literal expression, or read using the read procedure, and subsequently written out using the write procedure, will read back in as the identical symbol (in the sense of eqv?).

                                  Note: Some implementations have values known as “uninterned symbols,” which defeat write/read invariance, and also violate the rule that two symbols are the same if and only if their names are spelled the same. This report does not specify the behavior of implementation-dependent extensions.

                                  1. 2

                                    That doesn’t really answer my question; if symbols are interned, how do you preserve line numbering info for stack traces in between the reader and the compiler?

                                    1. 9

                                      Symbols returned from the reader are always interned, but compilers don’t just work with bare list structures as input these days. Let’s take the Chez Scheme compiler as an example. Chez Scheme uses a reader that wraps objects in annotations, which contain source information. (The normal read procedure just returns the stripped datum as usual, but the compiler uses the annotated datum.) The macro expander understands annotations and is able to extract source info and remove annotations as necessary. Macros work with syntax objects, not directly with datums, so they also preserve the source information. The compiler passes that follow expansion are written in the nanopass compiler framework and preserve source information every step of the way through the compilation.
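                                      The annotation idea in miniature, as a JavaScript sketch (names invented; Chez’s actual representation differs):

```javascript
// Sketch of annotated reading: the symbol itself is interned (one object
// per spelling), while each *occurrence* is wrapped in an annotation
// carrying its source location. Names are invented for illustration.
const table = new Map();
function intern(name) {
  if (!table.has(name)) table.set(name, { name });
  return table.get(name); // same object for the same spelling
}

// What an annotating reader hands to the compiler:
const annotate = (name, line, column) =>
  ({ datum: intern(name), source: { line, column } });

const a = annotate("lambda", 1, 1);
const b = annotate("lambda", 9, 4);
console.log(a.datum === b.datum);          // true: still one interned symbol
console.log(a.source.line, b.source.line); // 1 9: each use keeps its origin
```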

                                      1. 4

                                        Ah right; I forgot about syntax objects. I feel like I’ve read up about this process a few times but it’s so complicated I was never able to get it to stick in my brain for more than a couple hours at a time.

                                    2. 2

                                      Looking at this, it seems that in R7RS, symbols are what Clojurians would call Keywords, since they are distinct from identifiers in that text.

                                  1. 2

                                    A few years ago I decided to see how thoroughly I could overengineer fizzbuzz in JavaScript. The fact I decided to use the sum of two waves means that it doesn’t need any conditionals or modulo. Behold:

                                    // How many ms between fizzbuzz calls?
                                    const RATE = 100;
                                    
                                    // FizzBuzz Parameters
                                    const FIZZ_PERIOD = 3;
                                    const BUZZ_PERIOD = 5;
                                    
                                    // Map waves to final output values in range [ 0..3 ]
                                    const FIZZ_AMPLITUDE = 1;
                                    const BUZZ_AMPLITUDE = 2;
                                    
                                    // Lookup table for results
                                    const LOOKUP = [ undefined, "Fizz", "Buzz", "FizzBuzz" ];
                                    
                                    // Generate a square wave
                                    const squareWave = period => amplitude => n =>
                                      amplitude * Math.max( 0, Math.floor( Math.cos( n * 2 * Math.PI / period ) + 0.5 ) );
                                    
                                    // Compose the fizzbuzz function out of waves components
                                    const fizzbuzz = n => LOOKUP[ [
                                      squareWave( FIZZ_PERIOD )( FIZZ_AMPLITUDE ),
                                      squareWave( BUZZ_PERIOD )( BUZZ_AMPLITUDE ),
                                    ].reduce( ( prev, curr ) => curr( n ) + prev, 0 ) ] || n;
                                    
                                    // Async sleep to avoid busyloop
                                    const sleep = async ms => new Promise( resolve => setTimeout( resolve, ms ) );
                                    
                                    // Generate an infinite sequence starting from some base
                                    function* seq( start = 1 ) {
                                      for ( let i = start; true; ++i ) {
                                        yield i;
                                      }
                                    }
                                    
                                    // FizzBuzz Forever!
                                    async function main() {
                                      for ( const i of seq() ) {
                                        console.log( i, fizzbuzz( i ) );
                                        await sleep( RATE );
                                      }
                                    }
                                    
                                    // Start
                                    main();
                                    
                                    1. 3

                                      Isn’t Math.max a conditional?

                                      1. 4

                                        I suppose that was cheating slightly, but “no conditionals” wasn’t my original intent. The sum-of-waves method was just intended to be needlessly obtuse. It happens to have this property as a side-effect.

                                        There are other ways to generate a square wave that don’t have hidden conditionals. This one is nicer and fixes a bug I just noticed in the original (when used with different periods than 3 & 5):

                                        // Generate a square wave
                                        const squareWave2 = period => amplitude => n =>
                                          amplitude * ( Math.cos( n * Math.PI / period ) & 1 );
                                        

                                        Also, thinking about it, the || could also be considered a hidden conditional, but it can be eliminated by just not hard-coding the lookup table and making n be the first entry every time.

                                        1. 4

                                          With squareWave2 this program gives the wrong answer for i = 64084278. With squareWave it gives the wrong answer for i = 895962678003541 (and maybe lower values of i as well, I did not check).

                                          1. 3

                                            Turns out the 2nd one is good for all inputs up to 2^25. After that it would need another wave function that doesn’t rely on floating point. This one should be correct until integer overflow:

                                            const squareWave3 = period => amplitude => n =>
                                              amplitude * !( n % period );
                                            

                                            Sadly this re-introduces the rather cliché modulo operator, but I don’t think that can be avoided if the promise of “FizzBuzz Forever!” is to be upheld. At least, this one should be good for all values of forever < 2^54. To go any higher than that reliably, I’d need to go through the whole program and suffix all the numbers with n to use BigInt.
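
                                            To illustrate the BigInt route: the modulo version translates almost mechanically once every number is a BigInt. A sketch (squareWave3n is my name for it, not from the original program):

                                            ```javascript
                                            // BigInt variant of squareWave3: with BigInt arguments, n % period
                                            // yields a BigInt, !0n is true, and BigInt(true)/BigInt(false) are
                                            // 1n/0n, so the trick carries over with no 2^53 precision ceiling.
                                            const squareWave3n = period => amplitude => n =>
                                              amplitude * BigInt(!(n % period));
                                            ```

                                            For example, squareWave3n(3n)(5n)(6n) evaluates to 5n even for n far beyond Number.MAX_SAFE_INTEGER.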

                                    1. 2

                                      That would have been an interesting timeline to visit. In our timeline the Linux console is regressing, they recently removed the scroll back buffer (i.e. shift-pageup does nothing).

                                      1. 3

                                        Just wait until the kernel implementation is removed altogether and moved into logind.

                                        1. 3

                                          I’m somewhat in two minds about that. Back in the old days, the kernel just spoke a very primitive protocol to an RS-232 or similar and all of the clever terminal logic was implemented in hardware in the terminal. In the slightly less old days, IBM PCs (though not most other microcomputers) had a hardware text mode for the monitor and the OS started talking to that and so needed to emulate a bunch of terminal features but got others for free from BIOS services. In the modern world (and on early ’90s UNIX workstations), the simplest interface that the hardware provides is a frame buffer.

                                          This difference made the built-in terminals very slow on Linux on 32-bit SPARC workstations and meant that a lot of work was needed to make them fast on UEFI systems (which don’t have the old BIOS text mode).

                                          I’d generally much rather see the kernel provide a dumb framebuffer and run a userspace program to handle the terminal. A userspace program can do things like allocate a file for persistent logging, use malloc on pageable memory to get large amounts of back trace support, incorporate a TrueType Font renderer without security issues and so on. It has to shunt data to a device fast enough that users won’t notice, which is quite easy with an X server running on a dumb framebuffer and a terminal emulator running in a separate process and so should be fairly trivial without X11 in the way.

                                          The one case where this might be problematic is recovery. On Linux, with the initrd infrastructure, that’s far less of an issue than on something like *BSD (where you’d be left without a terminal if the root filesystem didn’t mount) because you can bundle the userspace program and are only stuck if you’re in a situation where the kernel can’t successfully launch a statically linked program from memory (and if your system is that badly broken, having a terminal emulator in the kernel probably won’t help).

                                          Windows NT never assumed that the hardware supported text mode and so boots a recovery mode with the simple GUI and ConPTY running in userspace.

                                          1. 2

                                            *BSD (where you’d be left without a terminal if the root filesystem didn’t mount)

                                            Interestingly, FreeBSD can easily do “initrd”/“initramfs”. The loader can load -t md_image your.file, the kernel can mount an FS from that ramdisk as the root fs, and the userspace there can ask the kernel to reroot (reboot -r) into your actual system. It’s just that nobody uses this in normal circumstances because the loader can just load required kernel modules directly from the disk :)

                                            1. 1

                                              I believe that all your requirements are exactly the features of kmscon. It needed KMS or another framebuffer device, and then, as a user, you replaced the getty with kmsconvt. It supported UTF-8 glyphs and had a proper configuration file.

                                              More details at the archlinux wiki

                                              1. 1

                                                I’ve used this to boot a FreeBSD kernel that was loaded via JTAG and which then mounted most of the FS via NFS, but it’s not part of the normal workflow. I believe that, now that the reroot stuff has landed, it’s easier to make this a default part of the FreeBSD boot process (you could boot from your ramdisk and then, once you’ve validated that you can mount /, unmount it and throw away the pages used for the ramdisk). This isn’t part of the normal boot flow though.

                                            2. 2

                                              You’re joking, but there was such a replacement from one of systemd’s devs in the form of kmscon. As far as I remember it was discontinued at one point, but I don’t recall the reason.

                                              I would love to see a modern replacement for the tty subsystem, even coming from the systemd team.

                                            3. 2

                                              they recently removed the scroll back buffer (i.e. shift-pageup does nothing).

                                              Really!? What was the motivation for that change?

                                              1. 3

                                                Announcement. tl;dr: was declared broken and therefore removed.

                                                1. 1

                                                  I would also like to find that out. Just haven’t had time to check.

                                                2. 1

                                                  It has always been kinda iffy… if you switch between VTs it stops working anyway… (edit: I mean it would be cleared)

                                                  But what they should do is just pass through shift+pageup to the application if they aren’t going to do anything with it. Then programs like my nested terminal emulator could handle the scrollback itself.

                                                  1. 1

                                                    Do they actually eat shift+pageup now? Well you can always rebind the command in your nested terminal to something else. My tmux reacts to bare pageup on its own (tmux is smart about it, sending pageup into a text editor inside still works fine)

                                                1. 11

                                                  And? I don’t see anyone complaining that most desktop applications use mostly the menu layouts and keyboard shortcuts. Why is uniformity and not surprising the user suddenly a bad thing when it’s the web?

                                                  1. 6

                                                    most desktop applications use mostly the [same?] menu layouts and keyboard shortcuts

                                                    I’m not sure that is true anymore. I don’t at all intend to argue against your point about uniformity and not surprising the user. But the desktop applications I’ve seen lately do not have a menu bar, they do not use standard window decorations and they seemingly do not have any common keybindings apart from Ctrl-c, Ctrl-x, Ctrl-v and Alt-F4. They render as borderless windows and implement their own minimize, maximize and close buttons. If there’s a menu it’s some custom stuff behind a button that looks either like a hamburger(‽), a gear or three vertical dots. The selected desktop theme has very little effect on their looks. They might as well be web pages, and many of them probably are.

                                                    1. 6

                                                      Exactly. I think what we are losing are the crazy homebrewed websites of the 90s/00s.

                                                      Now even the most modest and beginner website will be using one of the libraries used by everyone else.

                                                      This is both good, as it means that more people can finally access this technology (as a developer), and bad, since we lose a bit of quirkiness.

                                                      The thing to think about, though, is: was that crazy internet we had in the past due to creative people being creative, or simply because everyone didn’t know anything and didn’t have tools to do better? I remember being 12 and struggling to make a background static; my first website wasn’t “different” out of creativity but out of lack of knowledge and patience.

                                                      1. 4

                                                        I think that uniformity is a benefit, but only if the standard that has been settled on is good. For example, I think that hamburger menus, which are everywhere today, are clearly inferior to horizontal menus, which were everywhere fifteen years ago.

                                                        1. 2

                                                          Horizontal menus are great on desktop, but don’t work on mobile. That’s why we got hamburger menus.

                                                          1. 3

                                                            The problem is that website developers force that compromise on everyone, rather than just on mobile device users.

                                                            If a hamburger menu really is the best solution for a certain website when it’s viewed from a mobile device, I really would like the website to degrade to a hamburger menu when I visit it from a phone, and display a normal, non-horrible menu on desktop. It’s not even that hard anymore, now that pointer media query is supported by everything.
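
                                                            Something along these lines (the class names are made up for illustration) keeps the horizontal menu on desktop and only swaps in the hamburger when the primary pointer is coarse:

                                                            ```css
                                                            /* Desktop default: show the horizontal menu. */
                                                            .menu-horizontal { display: flex; }
                                                            .menu-hamburger  { display: none; }

                                                            /* Touch screens (coarse pointers): swap in the hamburger. */
                                                            @media (pointer: coarse) {
                                                              .menu-horizontal { display: none; }
                                                              .menu-hamburger  { display: block; }
                                                            }
                                                            ```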

                                                        2. 1

                                                          The entire last section of the article is dedicated to addressing this.

                                                          1. 4

                                                            No, it is not. The last section’s TLDR is: Conformity through libraries brings greater accessibility, Mozilla says it’s bad, and appeal to nostalgia. It does not at all address that the exact opposite attitude is generally held for most of the rest of computing.

                                                        1. 1

                                                          Is there a single cryptographic ledger in the world, or is it one per e.g. package manager or programming language?

                                                          1. 2

                                                            One per microcosm.

                                                            Optionally, these can cross-sign onto each other for greater resilience. See cross-signing in Chronicle.

                                                            1. 1

                                                              Interesting. That is likely the right design, as there would otherwise be problems with e.g. namespacing.

                                                              I would like to understand the cryptographic ledger better and how it is protected against abuse. Your link mentions using public keys in the context of server communication, so I’m guessing this mechanism stops random people from adding pure junk to the ledger. I understand from the main page that revocations are handled by adding to the ledger. Is it ever possible to remove something completely from the ledger once the data is on it, or is it there forever? If not, is there a human in the loop to review transactions before they are added to the ledger?

                                                              1. 2

                                                                AuthN/AuthZ for ledger publishing is handled by the existing update infrastructure, which Gossamer treats as a black box.

                                                                Everything is immutable: once published, it cannot be removed. Thus, append-only.

                                                          1. 2

                                                            I’ve borrowed a ROCK Pi X, an x86-64 computer in a Raspberry Pi format, and will try to get my kernel running on it.

                                                            1. 5

                                                              Is there a tl;dr on how to check for the rootkit?

                                                              1. 2

                                                                From the NSA disclosure:

                                                                If the following commands are run on the command-line and the “testfile” disappears, the system is infected with Drovorub. If the “testfile” does not disappear, the system may still be infected with Drovorub. The signature “ASDFZXCV” could have changed or the pseudo-device used for host-based communications between Drovorub-client and Drovorub-kernel module could be something other than /dev/zero.

                                                                touch testfile
                                                                echo "ASDFZXCV:hf:testfile" > /dev/zero
                                                                ls
                                                                
                                                              1. 4

                                                                Do most schools not teach kids how to type? I went to a tiny rural elementary school and even we had a typing class. It’s a skill that has a pretty decent ROI - eliminating the cognitive overhead associated with typing reduces the friction between your brain and the machine you’re interfacing with.

                                                                Relevant XKCD (although it should be extended up and to the left for this…)

                                                                1. 5

                                                                  Nope, that’s not very widespread at all.

                                                                  1. 3

                                                                    Here in the UK I’m not aware of any formal touch typing education in the Curriculum…at any level.

                                                                    1. 2

                                                                      I believe it was an elective when I was in grade school in Sweden in the 1980s… none of the kids in our household have encountered it.

                                                                      1. 3

                                                                        I took a touch typing class at a Swedish grade school in the middle of the 1990s. I remember using SPCS Tangentbordsträning and typing the alphabet in under 4 seconds (but when the teacher wasn’t looking we played a climbing game that advertised a fruit juice drink). Touch typing was not a widespread skill among my peers at the time, some were stunned that I typed without looking down.

                                                                        Even though we were using computers, I got the feeling that the class might have been a relic from the time of typewriters.

                                                                    1. 2

                                                                      Fun stuff! How hard would it be to mux multiple ethernets over one fibre link? (even thinner!)

                                                                      1. 1

                                                                        mux multiple ethernets

                                                                        You mean like VLANs? Or some sort of ring topology network?

                                                                        1. 1

                                                                          You can use VLAN to get multiple virtual Ethernet networks, but it doesn’t give you any extra bandwidth.

                                                                          You can get additional bandwidth over a fiber by using more frequencies, but it’s probably not something you’d use at home. You get transceivers for different frequencies and hook them up to a CWDM or DWDM mux/demux filter. This device lets you take the different frequencies from several fiber pairs and run them over a single fiber. So in one end of the device you put in multiple fibers and in the other end you have just one fiber. Basically a fancy prism. This technology can be quite useful if you’ve got connections to multiple operators in a city and want to save on fiber rent.

                                                                          In the article he’s using a single-strand fiber and a bi-directional transceiver, using a pair of frequencies in the same strand. So in a sense he’s already running at twice the regular bandwidth for a strand. (I’m not sure a regular mux/demux filter even works with single-strand fibers). But if you have an existing installation with 1Gbps SFPs and want more bandwidth then you can just upgrade the SFPs to e.g. 10Gbps or 40Gbps.

                                                                        1. 2

                                                                          I enjoy these types of articles! I believe that we need people who point out to us that the systems we’re using and designing could be better, that we have inherited a view of the world that limits our vision of what’s possible. It’s not necessary to accept the suggested alternatives as being better, but they can be used to open up your mind to new ideas. Designs of computer systems are open to a lot of choice, but few of the people I’ve worked with in the past have seen anything other than modern mainstream systems.

                                                                          About ASCII, there’s an old joke that every key on a serial terminal is already claimed by Emacs, except for BREAK.

                                                                          1. 2

                                                                            About ASCII, there’s an old joke that every key on a serial terminal is already claimed by Emacs, except for BREAK.

                                                                            In TECO, the original mother system for Emacs, every string of characters is a valid, if not useful, TECO program.

                                                                          1. 13

                                                                            If your new project-approved main branch is quux but like me you have muscle memory you don’t want to retrain, something like this may help:

                                                                            $ git symbolic-ref -m "save on retraining" refs/heads/master refs/heads/quux
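
                                                                            If you want to convince yourself it works, here’s a sketch in a throwaway repo (the demo directory and identity flags are just for the example):

                                                                            ```shell
                                                                            # Set up a scratch repo whose approved branch name is quux.
                                                                            git init -q demo && cd demo
                                                                            git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m init
                                                                            git branch -m quux
                                                                            # Make master a symbolic ref pointing at quux.
                                                                            git symbolic-ref -m "save on retraining" refs/heads/master refs/heads/quux
                                                                            git symbolic-ref refs/heads/master   # prints refs/heads/quux
                                                                            ```

                                                                            Reading the symref back shows that master now just points at quux, so muscle-memory commands keep working.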
                                                                            
                                                                            1. 1

                                                                              What does “quux” refer to?

                                                                              1. 2
                                                                                1. 2

                                                                                  foo, bar, baz, quux.

                                                                                  1. 2

                                                                                    Quux refers to none other than Guy Steele, co-creator of Scheme! :)

                                                                                    http://catb.org/jargon/html/Q/quux.html

                                                                                1. 0

                                                                                  I’d argue that types are a terrible solution to the actual problem.

                                                                                  I’d argue the actual problem is that we need to check variables in our languages. For instance, for someone’s age we almost certainly don’t want -32768 as a possible value, as that makes zero sense. An age of 32768 is probably equally stupid.

                                                                                  We want age to be an integer between 0 and 120 or so, certainly < 200. That’s what we actually want.

                                                                                  We can argue about the upper and lower bounds of the age variable, depending on the program being written (i.e. an age of 120 for a program about the Romans would be laughable, since they never lived remotely that long). But I hope we can all agree there should be upper and lower bounds for our variables.

                                                                                  Types basically don’t do that, and practically every language ever written skips over any sane way to put limits on our variables besides basically int8, int16 and string.

                                                                                  There are some attempts. Liquid Haskell has a way. Postgres does this pretty well with the check constraint (though it’s not a programming language, obviously). Nim recently got ‘DrNim’, which attempts the same thing. Most other languages fail.

                                                                                  1. 8

                                                                                    Types do that. Commonly the pattern looks like having a constructor function that validates the machine type, like smart constructors in Haskell. (Though, as cgenschwap already noted, dependent types can do this in a quite different way.)
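
                                                                                    The same idea ports to languages without Haskell’s type system too, with the caveat that the check happens at runtime rather than compile time. A minimal JavaScript sketch (makeAge and its 0–120 bounds are illustrative):

                                                                                    ```javascript
                                                                                    // Smart-constructor sketch: expose only makeAge, never the raw
                                                                                    // object, so every Age value in circulation has already passed
                                                                                    // the range check.
                                                                                    function makeAge(value) {
                                                                                      if (!Number.isInteger(value) || value < 0 || value > 120) {
                                                                                        throw new RangeError(`invalid age: ${value}`);
                                                                                      }
                                                                                      return Object.freeze({ age: value }); // immutable, so it stays valid
                                                                                    }
                                                                                    ```

                                                                                    Object.freeze keeps the value from being mutated back into an invalid state after construction.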

                                                                                    1. 2

                                                                                      I think you meant types might be able to do that. So I agree that, in theory, types can grow to specify constraints, but other than Haskell (with multiple competing solutions, apparently) and now Ada (thanks @weinholt), nobody’s type system has this ability that I’m aware of (I’m sure there are some weird esoteric languages out there that do…).

                                                                                      But no commonly available/used language has anything resembling a sane way to do constraints. The best you can do is write a check function and call it all over the place, hoping you don’t miss a place and your constraint fails to hold, with no warnings if you miss a spot. Languages aren’t immune to the problem; even most databases, except some SQL ones (like PG’s check), pretty much punt on constraints, so it’s a very wide problem.

                                                                                      Even web frameworks mostly fail at checking constraints, and it’s rare to see web code in the wild do much beyond the occasional JavaScript-based client-side validation of inputs. Otherwise the most you will see is ‘yes, input validation is important’, but almost nobody is doing it. It’s hard, it’s not taught in programming classes or books or documentation (for the most part), and languages by and large punt on the issue.

                                                                                      I’ve read a few haskell learning sites, and a book, though I’ve never programmed anything useful in Haskell, and I’d never heard of smart constructors before, so even when it’s available, nobody apparently uses it.

                                                                                      Anyways, the goal here, from my perspective, is that we desperately need to validate inputs and store data with known constraints, and with rare exceptions no code does this, and no commonly used language makes this remotely easy. So I think my point stands: constraint checking is still very much not done in the wild.

                                                                                      Thanks for teaching me about Smart Constructors! :) I <3 learning new things!

                                                                                      1. 8

                                                                                        Pascal also supports it:

                                                                                        type
                                                                                          Age = 0..200;
                                                                                        
                                                                                        1. 4

                                                                                          Smart constructors are definitely not in the “nobody uses it” category. They are one of the most pervasive patterns in strongly typed languages (especially ML descendants like Haskell or Scala).

                                                                                          1. 2

                                                                                            Smart constructors are just functions that build values of the required type, but perform some extra checks when the value is constructed…

                                                                                            It seems like you could employ the same smart constructor pattern in many languages. For the age example, you’d create a type for the valid age range and use this type elsewhere in your code:

                                                                                            class Age {
                                                                                                final int value;
                                                                                                Age(int value) {
                                                                                                    if (value < 0 || value > 120) {
                                                                                                        throw new IllegalArgumentException("age out of range: " + value);
                                                                                                    }
                                                                                                    this.value = value;
                                                                                                }
                                                                                            }
                                                                                            

                                                                                            This is along the lines of cgenschwap’s recommendation of preferring parsing over verifying.

                                                                                            1. 1

                                                                                              You’d have to make sure the setters and getters did the checking as well along with making sure there was never any access to the internal value other than by the setters and getters. Which is a lot more faff than having it supported natively.

                                                                                        2. 7

                                                                                          Aren’t you just talking about dependent types? I don’t think this is a situation where languages “fail” per se, but more that it is a technically difficult thing to do (while also having a language that is easy to use for regular programmers/programs).

                                                                                          Dependent types are just an extension of types – so they are certainly a good solution to the actual problem!

                                                                                          1. 1

                                                                                            Dependent types are one way to do this, though so far I’ve never seen them built into a language, and they’re usually a very complicated way.

                                                                                            But who uses dependent types? No language really has them built in, certainly no commonly available/used one. Haskell has Liquid Haskell, and per pushcx, smart constructors also do this, apparently. Are Smart Constructors built-in? It seems like they are, but my Haskell on this machine won’t compile it, though it’s also a really old Haskell, so that could be why.

                                                                                            I agree bolting constraints onto our existing type system(s) is very complicated and messy.

                                                                                            I’ve never seen any code out in the wild that does anything like this.

                                                                                            We all know we can’t trust whatever the user gives us, and the giant list of security vulns just reinforces the point, and yet basically no language in common use has any nice, sane way to handle these sorts of issues.

                                                                                            1. 3

                                                                                              If dependent types interest you, the answer to “who uses dependent types” is Idris

                                                                                              1. 2

                                                                                                It is unfortunate that few languages have dependent types – I completely agree with you here. However, putting constraints on user input isn’t really an issue in my opinion. You should always parse the info from the user rather than verify it.[1]

                                                                                                Raw user input is stored in a Raw{var} type (which can’t be trusted) and is parsed (with constraints) into a {var} which can be used without fear. Of course, this doesn’t guarantee the data is valid – your code might still screw it up – but this concept can just be used every time the data is mutated.

                                                                                                Dependent types are fantastic because they turn this into a compile-time guarantee. Otherwise I don’t really see the benefit of having a language feature like this at runtime, it is very easy to do yourself.

                                                                                                [1] https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/

                                                                                                1. 1

                                                                                                  I think you and I are in complete agreement here about the end goal. I’ve read that post before, and while what they write is possible in Haskell, it’s basically never used (in my very limited Haskell experience), and in other languages it’s a giant PITA.

                                                                                                  No popular JSON parser will let you add constraints to your parsing of JSON (that I’m aware of). The most they will do is let you say this must be an int64, or this must be a string. Obviously JSON is just one example here. I’m sure there are one or two very generic parsers that allow for constraints at parsing time, but I’m not aware of any in popular languages that make adding actual constraints easy, or even possible.

                                                                                                  1. 1

                                                                                                    Certainly with strong, static typing this is easier to enforce (functions only take the parsed version of input). However, every language can do this; it just becomes a matter of code hygiene. If JSON parsers don’t let you add constraints, then you just add another parsing step which can apply constraints.

                                                                                                    Sure this adds boilerplate and complexity, but I would argue that this is fairly easy to do.
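                                                                                                    As a rough sketch of that extra step in Python, layered over the stock json module (the field name and the 0–120 limit are invented for illustration):

```python
# Step 1: the stock parser checks syntax only.
# Step 2: our own pass applies the constraints the parser can't express.
import json


def parse_user(text: str) -> dict:
    data = json.loads(text)                 # syntax: raises on malformed JSON
    age = data.get("age")                   # constraints: our extra pass
    if not isinstance(age, int) or not 0 <= age <= 120:
        raise ValueError(f"invalid age: {age!r}")
    return data


user = parse_user('{"name": "Ada", "age": 36}')
print(user["age"])                          # 36
```

It’s boilerplate, as you say, but it keeps the constraint check at the parsing boundary rather than scattered through the rest of the code.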

                                                                                            2. 6

                                                                                              I’d argue that types are a terrible solution to the actual problem.

                                                                                              I’d argue the actual problem is, we need to check variables in our languages…

                                                                                              …we want age to be an integer between 0 and 120 or so. Certainly < 200. That’s what we actually want.

                                                                                              Let me make sure that I understand what you’re asserting here: you believe that there’s only value in type systems insofar as they can allow for constraining numeric types to a bounded range? And, all languages with static type systems which fail to do this are fundamentally failures?

                                                                                              1. 0

                                                                                                No. Types are awesome for compilers: they make things go fast and let their implementations be optimized. There is a reason types got invented in the first place. It’s hugely easier for a compiler to optimize number math if it doesn’t have to concern itself with whether the value might also be the string “happy”.

                                                                                                I argue, though, that they are pretty terrible for end programmers, because they lack constraints. There is a reason Python and friends, which mostly ignore types, are hugely popular.

                                                                                                I only picked age as a very easy, hard-to-dispute example, but I believe constraints should apply to all data types, not just numeric ones. Pay-period date constraints on a timesheet would be another easy use case that involves dates rather than numbers.

                                                                                                PostgreSQL’s CHECK syntax is what I’m after here: before anything can get stored in the table, it has to be verified for sanity. But PostgreSQL is the wrong place to do this; it needs to happen way out near the user, so it’s much easier to say: hey user, “first” is not a valid number in the range 0-100, since this is an input for age. So when we go to store the variable in our program, we need to be able to parse it, constrain it, and verify it’s sane before anything else can happen. We as an industry completely fail at this, for the most part.
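                                                                                                A toy sketch of that idea in Python – CHECK-style constraints enforced near the user rather than in the database. The field name, the 0-100 limit and the message wording are all invented for illustration:

```python
# A table of per-field predicates, in the spirit of PostgreSQL CHECK
# constraints, applied at the input boundary so the user gets a
# friendly message instead of a database error.
from typing import Callable, Dict, Optional

CONSTRAINTS: Dict[str, Callable[[str], bool]] = {
    "age": lambda v: v.isdigit() and 0 <= int(v) <= 100,
}


def check_field(field: str, value: str) -> Optional[str]:
    """Return a user-facing error message, or None if the value passes."""
    if CONSTRAINTS[field](value):
        return None
    return (f'"{value}" is not a valid number in the range 0-100, '
            f'since this is an input for {field}.')


print(check_field("age", "first"))   # the friendly error message
print(check_field("age", "42"))      # None
```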

                                                                                                1. 3

                                                                                                  Yes and no. There are a lot of benefits to types: they help when writing code by pointing out bugs before the code is running (contrary to Python, for example). Depending on the type model you even get theorems for free, i.e. you can prove theorems about functions just from their type signatures! That’s pretty neat.

                                                                                                  Another problem is the overly structured data you’re proposing: what if I’m 106 and want to register for your service? What if I don’t have a first name? What if I don’t have a street address and live “2 mi N then 3 mi W of Jennings, OK 74038”? What if, in a family tree program, someone wants to input a child he had with his daughter?

                                                                                                  I’m not saying that no constraints is a better approach, but there are a lot of edge cases to be thought of, even for something as simple as an age or a name, which maybe shouldn’t be restricted at all.

                                                                                                  1. 2

                                                                                                    There are new things being added to type systems to make them better, I agree, dependent types being one of them. Like the OP post says, many type systems are hard to reason about.

                                                                                                    I agree gross errors are caught, like trying to shove a string into an int8 variable. But academic papers disagree over how useful types are in practice; it’s mostly just yelling opinions across the internet these days, as far as I can tell. Perhaps over time the papers will have better science involved and eventually figure it out, but I think we can all agree the answer, scientifically speaking, is murky at best[0]. That said, proving theorems sounds pretty neat!

                                                                                                    Of course there are lots of edge cases with data validation/parsing; the known issues here are multitude. I don’t have a street address most of the time. That doesn’t mean we should punt, give up on the problem, and let everything be free-form text fields. :) It’s very, very hard to reason about free-form text fields.

                                                                                                    Every application will likely need unique constraints around particular data, depending on the context, but I think sane defaults can be found for many use cases. In my OP I covered this already. We can argue where the limits need to be, but for most applications I think we can agree negative ages, and ages over 150[1], are probably sane defaults.

                                                                                                    If you buy into the parse vs. validate debate[2], we can’t even currently parse with any sanity. OpenBSD mostly punts on sane parsers, assumes they are totally broken, and sandboxes them from the rest of the application (sshd and tcpdump, for example). I also mentioned JSON parsers as an example in a different comment.

                                                                                                    0: http://danluu.com/empirical-pl/

                                                                                                    1: https://en.wikipedia.org/wiki/Oldest_people

                                                                                                    2: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate

                                                                                              2. 5

                                                                                                Types, basically don’t do that, and practically every language ever written skips over any sane way to have limits on our variables besides basically int8, int16 and string.

                                                                                                I believe that Ada handles this in a good way: you can declare an age type that has a valid range of 0 .. 120. I don’t have direct knowledge of this, but AFAIK it uses a combination of static and dynamic checking: the cases which are impossible to verify with a static checker are verified at runtime.
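                                                                                                For comparison, here is a rough runtime-only analogue of an Ada range type in Python (Ada can discharge many of these checks statically; here everything is checked on construction). The class name is made up:

```python
# An int subtype that refuses to exist outside 0 .. 120, checked at
# construction time; instances behave like ordinary ints afterwards.
class AgeRange(int):
    LO, HI = 0, 120

    def __new__(cls, value: int) -> "AgeRange":
        if not cls.LO <= value <= cls.HI:
            raise ValueError(f"{value} not in range {cls.LO} .. {cls.HI}")
        return super().__new__(cls, value)


age = AgeRange(36)       # fine; usable anywhere an int is expected
# AgeRange(150)          # would raise ValueError
```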

                                                                                                1. 3

                                                                                                  Pascal and Common Lisp are two other languages with this feature, just for completeness. They’re called “subrange types” in Pascal.

                                                                                                  1. 2

                                                                                                    Yep, Ada can do this. You can also use a regular integer type with contracts and use SPARK (the Ada subset) to prove that the integer is in the range you expect it to be, thus guaranteeing that your program is free from runtime exceptions.

                                                                                                    Ada has some pretty cool features, it’s sad that nobody knows about them.

                                                                                                  2. 4

                                                                                                    You might find Racket’s contract system interesting. Also this talk if you have ~40 minutes (assuming 1.5x speed).

                                                                                                    1. 1

                                                                                                      Neat! I agree that in non-mainstream languages this is only recently being addressed, with varying degrees of implementation, and most documentation ignores the problem. I’ve never used Racket, but Haskell is the other commonly known language that has this ability to some degree, yet I’ve never seen any Haskell code in production that uses these features. Is the contract system actively used in Racket?

                                                                                                      I went looking through github, seems Pollen is popular for something written in racket, and it uses ‘provide’! [0]

                                                                                                      It seems, however, that it’s limited to module boundaries.

                                                                                                      0: https://github.com/mbutterick/pollen/search?q=provide&unscoped_q=provide&type=Code

                                                                                                      1. 3

                                                                                                        Yes, the majority of Racket code uses contracts extensively and they are also used in documentation. Some examples:

                                                                                                        provide is just Racket’s way of exposing bindings from one module so that they can be required by another. The contract library provides the contract-out provide transformer, which attaches contracts to bindings when they are provided. You’re right that when someone uses contract-out, the contract will only be enforced at module boundaries. However, that’s not the only way to attach a contract to a value. The same library also provides define/contract, among other things, so you can do stuff like this within a single module:

                                                                                                        #lang racket
                                                                                                        
                                                                                                        (define smallint/c (integer-in 0 255))
                                                                                                        
                                                                                                        (define/contract (f x)
                                                                                                          (-> smallint/c smallint/c)
                                                                                                          x)
                                                                                                        
                                                                                                        (f 42)
                                                                                                        (f 1024)
                                                                                                        

                                                                                                        Running the above yields:

                                                                                                        42
                                                                                                        f: contract violation
                                                                                                          expected: (integer-in 0 255)
                                                                                                          given: 1024
                                                                                                          in: the 1st argument of
                                                                                                              (-> (integer-in 0 255) (integer-in 0 255))
                                                                                                          contract from: (function f)
                                                                                                          ...
                                                                                                        

                                                                                                        Or even stuff like:

                                                                                                        #lang racket
                                                                                                        
                                                                                                        (define smallint/c (integer-in 0 255))
                                                                                                        
                                                                                                        (define/contract (f x)
                                                                                                          (->i ([in smallint/c])
                                                                                                               [out (in) (=/c (add1 in))])
                                                                                                          x)
                                                                                                        
                                                                                                        (f 42)
                                                                                                        

                                                                                                        That yields:

                                                                                                        f: broke its own contract
                                                                                                          promised: (=/c 43)
                                                                                                          produced: 42
                                                                                                          in: the out result of
                                                                                                              (->i
                                                                                                               ((in (integer-in 0 255)))
                                                                                                               (out (in) (=/c (add1 in))))
                                                                                                          contract from: (function f)
                                                                                                          blaming: (function f)
                                                                                                           (assuming the contract is correct)
                                                                                                        

                                                                                                        It’s been a while since I’ve done any Haskell, but I don’t think there’s anything quite like this in the language. I definitely recommend watching the talk I linked to because it explains some of these ideas in a visual way and much better than I could do here.

                                                                                                        I believe Clojure is also adopting some of these ideas now via spec, but I haven’t looked into it.

                                                                                                        1. 1

                                                                                                          <3 Thanks!! This looks very awesome! Go Racket!


                                                                                                      2. 2

                                                                                                      I believe you’re looking for dependent types, as in e.g. Idris: https://en.wikipedia.org/wiki/Dependent_type