Threads for jmmv

    1. 3

      The economics are not usually favorable but you /could/ write an awesome book on all these topics.

      1. 3

        Right? :) I’ve thought about it and, actually, a publisher recently approached me to “write a book on something”. I do have ideas, but… when I research how much they take vs. how much I’d take… well, it’s really, really hard to justify the effort considering how little free time I have.

        1. 2

          This has happened to me too. My limited experience is that if a publisher approaches you to “write a book on something” then they’re usually a fairly scummy publisher. :-/

      2. 10

        A big difference with ACPI, however, is that the kernel cannot query the Device Tree from the hardware because… well, the Device Tree is external to the hardware. The kernel expects the Device Tree to “exist in memory” either because the kernel image embeds the Device Tree for the target board or because the boot loader loads the Device Tree from disk and passes it to the kernel.

        This is actually the same as ACPI: the ACPI tables also live in memory, have nothing to do with the hardware itself, and are loaded by the firmware or a boot loader. In fact, some firmware provides both ACPI and FDT.

        The only practical difference is that amd64 systems are generally assumed to come with firmware that provides ACPI, while for embedded arm64/etc. systems the situation is less standardized: the firmware might provide ACPI, it might provide FDT, or it might provide nothing, in which case the FDT blobs come from the kernel or some user-supplied boot component.
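        However it arrives, what the kernel receives is a flattened device tree blob with a well-known header, which is how boot code tells an FDT apart from anything else. A minimal sketch of inspecting one in Python (the field layout follows the devicetree specification's header; the sample bytes below are synthetic, not a real board's DTB):

        ```python
        import struct

        FDT_MAGIC = 0xD00DFEED  # big-endian magic word at the start of every FDT blob

        def parse_fdt_header(blob: bytes):
            """Parse the first few big-endian u32 fields of a flattened device tree."""
            magic, totalsize, off_struct, off_strings, off_rsvmap, version = \
                struct.unpack_from(">6I", blob, 0)
            if magic != FDT_MAGIC:
                raise ValueError("not a flattened device tree blob")
            return {"totalsize": totalsize, "version": version}

        # Synthetic 40-byte header purely for illustration.
        header = struct.pack(">10I", FDT_MAGIC, 40, 40, 40, 40, 17, 16, 0, 0, 0)
        print(parse_fdt_header(header))  # {'totalsize': 40, 'version': 17}
        ```

        Real consumers (U-Boot, libfdt, the kernel) do far more validation, but the magic-word check is exactly how they recognize a DTB handed to them in memory.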

        1. 2

          But the ACPI tables are generally not loaded by the OS boot loader, right? They are “there” and you never have to fiddle with their loading process. Are they put in place by the EFI? How did this work with legacy BIOS systems? (I don’t know the answers.)

          1. 6

            If your system doesn’t provide UEFI/ACPI natively and you’re adding it via a separate boot stage (such as UEFI on Raspberry Pis, which is loaded from the SD card), or you are replacing its flash firmware entirely (coreboot), or you are dealing with a VM where the firmware is just a file on disk (for example, KVM with OVMF UEFI), then yes, you’d fiddle with the ACPI tables just like you would with FDTs. Although you’d probably only do so directly if you’re compiling the firmware yourself; otherwise they usually come bundled into the binary.

            The tables are loaded by EFI or BIOS firmware. On BIOS systems you actually have to search memory for a magic signature to find them. UEFI+ACPI just gives you a pointer to the root table the same way UEFI+FDT gives you a pointer to the FDT.
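            Concretely, on legacy BIOS that search means scanning for the 8-byte string "RSD PTR " on 16-byte boundaries (in the EBDA and in the 0xE0000–0xFFFFF region) and validating a byte checksum over the 20-byte ACPI 1.0 RSDP structure. A minimal sketch of the search against a synthetic memory image (the signature and checksum rule are from the ACPI spec; this is illustrative Python, not firmware code):

            ```python
            def find_rsdp(mem: bytes):
                """Search a memory image for the ACPI RSDP: the signature
                "RSD PTR " on a 16-byte boundary, where the 20-byte ACPI 1.0
                structure sums to zero modulo 256."""
                sig = b"RSD PTR "
                for off in range(0, len(mem) - 20 + 1, 16):
                    if mem[off:off + 8] == sig and sum(mem[off:off + 20]) % 256 == 0:
                        return off
                return None

            # Build a synthetic memory image with an RSDP at offset 0x40:
            # signature (8) + checksum (1) + OEMID (6) + revision (1) + RSDT addr (4).
            rsdp = bytearray(b"RSD PTR " + b"\x00" + b"SYNTH " + b"\x00" + b"\x00" * 4)
            rsdp[8] = (256 - sum(rsdp) % 256) % 256  # fix up the checksum byte
            mem = b"\x00" * 0x40 + bytes(rsdp) + b"\x00" * 0x40
            print(hex(find_rsdp(mem)))  # 0x40
            ```

            With UEFI none of this scanning is needed: the RSDP pointer is simply published in the EFI configuration table.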

            1. 3

              Yes, they are put in place by system firmware.

              FDTs can also be already put in place by system firmware, however the reason it’s much more common to fiddle with them and ship them with the OS is that they tend to be a lot more tightly coupled to the OS. They aren’t abstract mostly-standardized descriptions of the system, they’re more like config files for Linux drivers. Like, even Android vendors these days do use FDT – with a truckload of completely custom nodes that mainline Linux (let alone a BSD) would have no idea what to do with.

              1. 6

                FDTs are actually supposed to be standardized and backwards/forwards compatible. OpenBSD uses the same FDTs as Linux to boot on Macs, for example. The “standard” is the DT documentation in the Linux tree, which is pretty tightly controlled.

                The reason they aren’t in practice is that vendors ship Linux forks with non-upstream changes, and those additions aren’t subject to the compatibility requirements or standardized, so the device trees become tightly coupled to those kernels.

          2. 2

            Last weekend I spent a lot of time making a small LCD work on my Raspberry Pi running NetBSD (from Rust no less!). Figuring out how to get SPI to work was really painful, but in the end it did work. During the week, I cleaned up the code and this morning I “merged” it (see video).

            So this weekend… well, I have family visiting, but if I have a chance, I’ll get to work on supporting the Pi Zero 2 W, whose form factor is going to be a better fit for this. I already got NetBSD to work a couple of weeks ago with unofficial changes, but I suspect not everything is going to be OK just yet. Or… I’ve been wanting to write an article on what I learned about hardware autoconfiguration and the Device Tree, so maybe I’ll do that instead.

            1. 2

              I mentioned that I would spend the weekend trying to make an LCD work from NetBSD on the Raspberry Pi. I did actually make it work after lots of painful troubleshooting and learning about GPIO, SPI, DTBs, and the like.

              So now that I know this whole thing can be made to work, this week I’ll “productionize” the code. It’s not difficult anymore – but it still requires quite a bit of code reorganization and cleanups. In the end I want to get to having a tiny RPI Zero 2W with the LCD in a case booting straight into EndBASIC in just a few seconds :)

              1. 1

                Try to make a tiny LCD work from NetBSD on a Raspberry Pi. I already had the code working from Linux, so I need to adjust the GPIO and SPI interfaces and hopefully it’ll work without much hassle. Ah, and from Rust!

                1. 3

                  Minor nitpick, but…

                  Same font, same colors, same… everything?

                  No, actually. They’re the same size/dimensions, but NetBSD console has a sans serif pixel font with dotted zeroes, and the endbasic console on the right there has a serif pixel font with slashed zeroes.

                  1. 4

                    Heh, you know… I copied the font data from NetBSD and, on the small screen I used, it looked “similar enough” so I didn’t give it a second thought and didn’t pay enough attention. But it looks like I copied the wrong font!

                  2. 2

                    I think you forgot to mention that it’s possible to actually run x11 on top of this frame buffer :-)

                    1. 2

                      Not directly, but I do mention the drivers for it at the end though :P

                      1. 7

                        Have you measured where the ‘x11 xorg startup tax’ actually comes from? I am in the process of instrumenting that for other reasons, but when I did it last a few years ago in a Linux context, it all boiled down to how it shells out and runs the keymap compiler(!) to get the keymap to seed to clients. This was assuming the desired modeset matched the one set through previous stages in boot, since that can impose seconds of blocking waiting for displays to continue being terrible.

                    2. 3

                      I’m wondering if I can use this example along with a Raspberry Pi and a 7” touchscreen to display temporary WiFi guest codes without relying on X11. It’s fairly straightforward to achieve using feh, X11, LightDM, and xinput, but without X11, I haven’t been able to display high-quality QR codes so far.

                      1. 7

                        For sure. The video at the end of the article is precisely showing this :) A Raspberry Pi 3 with a 7” touchscreen, and I’ve gotten boot times from power up to showing graphics down to about ~5 seconds. It should be easy to render the QR codes.

                        But actually, the point of creating the “dev box” I show at the end is to support use cases like this one. There is quite a bit of work to implement APIs in EndBASIC to interact with the host OS, but they are on my mind these days…

                        1. 3

                          I’d be very curious to see a writeup on your boot time optimizations. I have spent a good amount of time in that area with Pi’s, especially with Balena.io and Diet Pi. I eventually got into my own bare metal kernel development for some projects as opposed to stripping down Linux.

                          1. 4

                            I didn’t do anything fancy yet. I just compiled a slimmed down kernel, which skips some devices that take significant time during boot (WiFi is one of them; I want to convert that to a module later) and which also avoids the flickering of the Raspberry “BIOS” (because it can load the kernel image faster).

                            I also modified /etc/rc to start my program immediately without waiting for any of the other stuff in the system: I let file system checks and network initialization happen in the background. A big hack, but for now it’s great. NetBSD’s rcorder is nice, but it’s too slow and does not support any sort of parallelization, which is important when some services just take a long time to start. I’ve been tempted to replace it all with a custom “script” that just initializes the very few things I need instead.

                            Not as fast as it could be if it were bare metal, but I’ve gotten it to be fast enough. Combined with a splash screen to cover the kernel logs (which I removed from the recorded video), it almost feels as if the pause is intentional :)

                            1. 4

                              Nice, great info. I’m a longtime FreeBSD user, but the one machine I keep it on, a 90s era PC I keep around for format conversions & testing, is falling out of favor in a couple releases, so I have been meaning to spend more time with NetBSD. This gives me two reasons. I’m pretty familiar with the BSD-style startup.

                        2. 6

                          On Linux you also have the option of using cage, which is a Wayland compositor that gives apps a single fullscreen window whether they like it or not. It will run directly from the console with DRI (though you can also run it inside an existing Wayland or X session, which is useful for debugging).

                          Takes about half a second to start on a Pi 4, which isn’t quite as snappy as I hoped, but it’s better than X — and you get GPU acceleration within that context.

                          At least… when things work out right, but that’s a whole other story. Suffice it to say I’m not too happy with Raspberry Pi bookworm at the moment, but I’ve managed to find joy with Manjaro-ARM.

                          1. 3

                            There is DRM (not the bad, proprietary kind, but the Linux subsystem), which is pretty much meant to be the way to display stuff on Linux. I recommend it much more than the legacy framebuffers.

                            The mentioned cage is also a great direction, if you would rather use some graphical framework for the actual drawing.

                          2. 8

                            CPUs got a lot faster in the 32-to-64 bit transition, but that was mostly because of (a) adding more registers to the ISA (and for x86, transitioning to an ABI that passes args in registers), and (b) wider memory buses.

                            Neither of those applies to WASM. It just gets hit with the drawback of increased memory use because every pointer doubles in size.
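                            As a rough illustration of that drawback, here is the back-of-the-envelope arithmetic for a pointer-heavy structure under 32- vs 64-bit pointers (the node shape is a made-up example, and alignment padding is ignored):

                            ```python
                            def node_bytes(pointer_size: int, n_pointers: int = 2, payload: int = 8) -> int:
                                """Approximate size of one node of a pointer-heavy structure,
                                e.g. a binary tree node with two child pointers and a payload,
                                ignoring alignment padding."""
                                return n_pointers * pointer_size + payload

                            wasm32 = node_bytes(4)   # 16 bytes per node with 32-bit pointers
                            wasm64 = node_bytes(8)   # 24 bytes per node with 64-bit pointers
                            print(wasm64 / wasm32)   # 1.5: 50% more memory per node, from pointers alone
                            ```

                            The more pointer-dense the data (linked lists, trees, interpreters), the closer the real overhead gets to the full 2x.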

                            1. 7

                              Hmm, curious; I didn’t think the x86 32-to-64-bit transition came with a change in memory bus width, because the memory bus is mostly affected by cache line sizes, not register size. (The physical address bus did get a few bits wider than the 32-bit era’s 32 bits, or 36 with PAE, but that doesn’t affect performance.)

                              I looked around and I found a table of front-side bus performance covering the transition period. On both sides of the transition, Intel x86 FSBs were 64 bits wide with 4 transfers per clock. (The FSB connected the CPU to the northbridge which contained the DRAM controllers.)

                              Nehalem, the successor to the Core 2 chips, came along shortly after the 64-bit transition. It integrated the memory controllers onto the CPU, abolishing the northbridge and FSB. That probably did improve memory bandwidth significantly.

                              1. 2

                                I think the other big benefit of integrating the memory controller onto the CPU was a reduction in memory latency? I remember this being cited at the time as one of the reasons why Opterons were beating Xeons: AMD switched to on-chip memory controllers years earlier than Intel did.

                              2. 4

                                The earlier discussion on an article I wrote on this topic (32 vs. 64 bits and the consequences on performance) seems relevant here too: https://lobste.rs/s/a8klxl/costs_i386_x86_64_upgrade

                              3. 5

                                This year, I ran into a scenario where I wanted to run MySQL v5 and v8 side by side on my local dev machine - an M1 Macbook.

                                While I had MySQL v8 running through the official macOS distribution, by that point in time the official MySQL v5 distribution for macOS had stopped existing. I didn’t have much faith in Homebrew (which had also deprecated v5 anyway).

                                NetBSD’s package manager pkgsrc surprised me. I was able to get MySQL v5 built and running through pkgsrc on macOS aarch64 without a hitch. Impressive, considering that it was also my first encounter with the NetBSD ecosystem.

                                1. 3

                                  Hm wow, that is impressive … Yeah I was going to ask how the build.sh in this post relates to pkgsrc

                                  (https://github.com/NetBSD/src/blob/trunk/build.sh - 3000 lines of shell)

                                  https://pkgsrc.org/

                                  pkgsrc is a framework for managing third-party software on UNIX-like systems, currently containing over 26,000 packages. It is the default package manager of NetBSD and SmartOS

                                  i.e. do they share some logic?

                                  I remember seeing some tweets about building all packages on SmartOS and being pretty impressed. I think the equivalent of building all Debian packages is not nearly as nicely automated (though of course Debian has many more packages)

                                  1. 6

                                    Nope. pkgsrc shares no logic with NetBSD’s build system. (There are some packages inside pkgsrc that do use NetBSD’s own bsd.*.mk files, but the core infrastructure of pkgsrc does not.)

                                    1. 2

                                      Sorry, my comment is only tangentially related. I was heaping praise on my only point of contact with the NetBSD ecosystem: the pkgsrc package build system.

                                      1. 4

                                        I’d like someone to write a similar post about pkgsrc!

                                        I think it is somewhat similar to what Aboriginal Linux was supposed to be for Linux: http://landley.net/aboriginal/

                                        • cross compiling
                                        • limiting weird circular dependencies. You can only build Debian on Debian, Red Hat on Red Hat, Alpine on Alpine, etc.

                                        Looks like you can build NetBSD on an OS X system, and you can cross compile. Probably you can build SmartOS on a NetBSD system, etc.

                                        They probably solved a whole bunch of bootstrapping problems (and limited themselves to POSIX and such, which is hard)

                                        1. 4

                                          It’s not quite as portable but Chimera Linux’s cbuild tool can build or cross-compile packages on any Linux system with apk-tools. I use this to build packages on my Arch desktop destined for my servers running Chimera.

                                  2. 3

                                    The page doesn’t explain why the CPU needed a separate, incompatible “protect mode” to access more memory. I learned more about that from the following Wikipedia pages, which I have summarized.

                                    Real mode, also called real address mode, was once the only mode of Intel CPUs, so operating systems already supported it. In this mode, addresses always correspond to real locations in memory. For backwards compatibility, x86 CPUs still start up in real mode, though modern operating systems turn on protected mode when booting so as to be able to access more than 1 MiB of memory.

                                    Protected mode, also called protected virtual address mode, enables virtual memory, paging, and other features. The mode was backwards incompatible with real mode because it reinterpreted some bits of a memory address to point to memory indirectly via an entry inside a descriptor table. It also reinterpreted two bits as defining the privilege of the memory request.

                                    Protected mode was introduced in the Intel 80286 processor, the one mentioned in the article. The Wikipedia article on the processor has sections on its protected mode and its OS support.
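                                    The 1 MiB real-mode limit summarized above falls directly out of how addresses are formed: a 16-bit segment shifted left by 4 plus a 16-bit offset, giving 20 usable bits. A small sketch of the arithmetic (standard 8086 behavior, nothing OS-specific):

                                    ```python
                                    def real_mode_linear(segment: int, offset: int) -> int:
                                        """8086 real-mode address: segment * 16 + offset (a 20-bit-ish result)."""
                                        return (segment << 4) + offset

                                    # The highest address reachable with maximal segment and offset:
                                    top = real_mode_linear(0xFFFF, 0xFFFF)
                                    print(hex(top))        # 0x10ffef, slightly above 1 MiB
                                    print(top - 0x100000)  # 65519 bytes beyond 1 MiB (the basis of the HMA on 286+)
                                    ```

                                    On an 8086 those top addresses wrap around to low memory; on a 286 and later with the A20 line enabled they reach real memory just above 1 MiB, which is where the HMA trick comes from. Protected mode abandons this scheme entirely, which is why the two modes are incompatible.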

                                    1. 9

                                      May I suggest two articles I wrote at the beginning of the year that go deep into this topic? :)

                                      One is https://blogsystem5.substack.com/p/from-0-to-1-mb-in-dos and talks about the real mode limitations and difficulties mentioned in the article. The other is https://blogsystem5.substack.com/p/beyond-the-1-mb-barrier-in-dos and talks about protected mode and the DOS extenders also mentioned in the article. Hope they are a useful summary to later navigate other Wikipedia content!

                                      Also… “protect mode”? I had never heard anyone call it that way, ever. It has always been “protected mode” to me. (Sorry, had to mention it somewhere; not related to what you asked.)

                                      1. 4

                                        Also… “protect mode”? I had never heard anyone call it that way, ever.

                                        Me neither, and I’ve known about protected mode since the 90s.

                                        1. 3

                                          Also… “protect mode”? I had never heard anyone call it that way, ever.

                                          I have discovered this since sharing the post. :-( Apparently my shorthand form that I’ve been using for over a third of a century is Bad and Wrong and Incorrect. My mistake.

                                          1. 3

                                            Not your fault! The article is full of “protect mode” as well, so I assumed the title here was intentional. I suppose someone edited the article content before publishing it and “fixed” protected with protect?

                                            1. 3

                                              The article is full of “protect mode” as well

                                              Does a double take

                                              So it is.

                                              OK, so, I think I retract my apology, then.

                                              I was an early adopter of Windows – I used Windows 2 in production on my own work computer in 1988/1989, and I told my boss and my boss’ boss at work that Windows 3 was going to be huge and they should both stock up with as many copies as they could get, and that they should let me train the staff on Windows.

                                              They laughed at me. They did not.

                                              We sold out of all 17 (seventeen) copies of Windows 3.0 before 9AM on the day of release, and the company was flooded with enquiries. There was huge interest and nobody else other than me had ever used Windows as an environment. (We sold apps with Runtime Windows, such as Excel, Pagemaker, and the radical Omnis database, AFAIK the first GUI database for the PC, in which my company developed a bespoke app.) But the company regarded Windows as a joke product.

                                              After 6 months of overwork due to being the only person who could support all the customers with Windows 3.0, I quit.

                                              But, yes, at the time, around the start of the 1990s, that was a common term: Windows ran in “protect mode” and it was “protect mode” that let it use XMS as more program RAM and so on. Few other “protect mode” OSes existed: OS/2 1.x, SCO Xenix, and Novell Netware were about it.

                                              Yes, protected mode might have been more accurate, but it was known by the shorter name.

                                              This is not history to me. I am at work right now (and should not be commenting here) and this was an earlier stage of my job. I only switched to being a Linux guy in the 21st century. The first part of my career was as an expert Windows support guy.

                                              1. 4

                                                As another person who lived through those times, I agree that “protect mode” was commonly heard, though it was also common to hear “protected mode”.

                                                Not that this is by any means definitive, considering how common shortening of such things in a config file context is/was, but this conversation made me think of the OS/2 CONFIG.SYS setting: “PROTECTONLY=NO”. I always looked forward to the day that every app was built for OS/2 and I’d be able to flip that to YES. Still waiting!

                                                1. 3

                                                  I agree that “protect mode” was commonly heard,

                                                  Thank you!

                                                  Still waiting!

                                                  Tried ArcaOS? :-)

                                                  Honestly, it’s pretty good. I tried it.

                                                  It does SMP, it can run thousands of DOS and 16-bit Windows apps, dozens of OS/2 apps, and some Win32 and FOSS/xNix apps. It’s tiny and amazingly quick compared to any modern OS, and I very much include Linux in that. No Linux comes close in performance and about the only thing that even could compete is TinyCoreLinux running from RAM. Alpine Linux is tiny and runs well on early 64-bit kit such as a high-end Core 2 Duo, and ArcaOS absolutely stomps it. It boots faster, apps load faster, it’s more responsive, and it takes less space on disk or RAM.

                                                  An ArcaOS with PAE support in the kernel would deliver quite a lot of what I want from an OS even now.

                                                  I just don’t have much use for it… :-/

                                                  It’s blisteringly quick on low-end kit, but then, so is Windows XP, especially XP64, and XP has a lot more apps and is much easier to get running. It’s just horribly insecure.

                                                  1. 2

                                                    Lol I enjoyed the article, thanks. I bet OS/2 (er, ArcaOS) does indeed run pretty quick on modern hardware compared to how it ran on my IBM PS/2 Model 70 that packed an almost unimaginably large 6 MB of RAM!

                                                    I’ll always love OS/2, both for it being the first “real OS” that I ever ran (and igniting a life-long interest in systems software) and for the lessons I learned as a teenager watching people on Usenet fall further and further into denial that it was all over for it (because of Macro$oft and the corrupt Ziff-Davis, natch). Oh to go back to a time with such low-stakes internet brainrot.

                                                    1. 2

                                                      Thanks!

                                                      Yes, I agree.

                                                      I recently did something that I felt guilty about, because I enjoyed it too much: put XP on a recent machine and went online. It was so fast and responsive, and at that time, you could still get free XP antivirus. You can get tools to find drivers, modern browsers that can handle 202x social networks etc…

                                                      It is a forbidden pleasure.

                                                      By comparison with modern OSes, WinXP is sleek and fast. Especially XP64 in 8GB of RAM.

                                                      I reviewed MS Office 97 when it came out. I hated it: bloated, sluggish, buggy. Now, it’s tiny and fast. It runs perfectly on WINE, including installing the service releases. Time makes a lot of difference. But it’s one suite.

                                                      XP makes one look at modern Windows and Linux and realise how appallingly sluggish everything is today.

                                                      But ArcaOS flies on machines that are low-end for XP. It can use up to 64 cores, it can just about do wifi and multihead true-colour accelerated graphics, USB2, UEFI… it has browsers, it has the yum command and repos with FOSS ports. It is surprisingly capable for a late-1990s OS.

                                                      The only big thing it can’t do is access >4GB RAM, except for a big fast RAMdisk.

                                                      It reminded me why I thought XP was a bloated mess in 2001.

                                                2. 3

                                                  My father’s company got its first Windows 3 install bundled with an app. The vendor of a vector drawing program (I think it was called MetaDesign) decided it was cheaper to write a Windows 3 app and bundle a copy of Windows for DOS users than to write or license a vector drawing library and printer drivers for DOS. Before that, they used GEM, but once they had Windows installed it was easy to sell them more Windows apps.

                                                  I never ran 3.0 in protected mode though, only ever in real mode on an 8086 or in 386 Enhanced Mode on an 80386 or newer. I was quite happy with that, because the 80286’s segmentation model was the kind of thing you teach in computer architecture courses as a cautionary tale.

                                                  1. 3

                                                    bundle a copy of Windows for DOS users

                                                    This was a standard thing to do in the Windows 2 era. It was known as “runtime Windows” and the result was most of a Windows installation, but rejigged: there was no WIN.COM and instead users ran the binary of the app itself from DOS, which loaded Windows with the app as “shell”. Quit the app, Windows shut down.

                                                    It worked fine. I had a client in London, Mentorn – they’re still around – whose accounts department ran on Excel, but which didn’t use Windows for anything else. So it was Excel + runtime Windows.

                                                    Other apps that did this were Pagemaker and Omnis, the first Windows database.

                                                    Snag: it was the plain unaugmented Windows, not Windows/286 or Windows/386. So, it could only use 640kB of RAM and run in Real Mode.

                                                    If you installed several such apps, each got its own copy of Runtime Windows.

                                                    But if you then installed a full copy of standalone Windows (2 or later) on the machine, you could browse to the app’s directory, run the binary, and it ran normally under full Windows, including interop with other apps. The app was stock and it was Windows that was modified, AFAICT.

                                                    I do not recall ever seeing Runtime Windows 3. With Windows 2’s fugly MS-DOS Executive replaced by the perfectly usable Program Manager, Win3 made a quite decent DOS shell that users liked, and some used it with no other apps.

                                                    I never ran 3.0 in protected mode though

                                                    (!)

                                                    It was a thing.

                                                    Mainly for me on 386 machines with only 1MB of RAM. Win3 & 3.1 wanted 2MB minimum for 386 Enhanced Mode and if they had less they’d load in Standard Mode by default.

                                                    (286s were mostly too slow to comfortably run Windows. Indeed Windows 3 was a major driver of adoption of the 386 via the budget 386SX with a 16-bit bus.

                                                    ((Aside to the aside: clock-for-clock, the 80286 was quicker than the 80386. But the normal speed of 286 PCs was 8MHz to 10MHz, with a few at 12MHz. I know of a single 20MHz 286 machine that was ever made, by Dell. Many 10-12MHz 286s had slower RAM and so ran with wait states. CPU caches only became a thing with high-end 386DX machines and most 286s had no cache, so if the RAM couldn’t keep up, the CPU just had to go slower.

                                                    (((Most IBM 286 PS/2 models shipped with slow RAM and wait states: in other words, the 286-based PS/2s for which IBM crippled OS/2 1.x and destroyed its chances in the market were lousy slow 286s and unsuitable for running OS/2 1.x, which was about the most demanding 286 OS ever released, bigger and slower than Netware 2, or Xenix 286, or Concurrent DOS 286.)))

                                                    /* I am nesting way too deep here */

                                                    The 386SX ran from 16MHz up to 33 or even 40MHz in some late models. So, 386SX machines were in fact faster than 286s, not because the CPU was inherently quicker – it wasn’t – but because it was clocked faster.

                                                    Also, few non-IBM 286s had VGA. Windows on EGA wasn’t very pleasant.))

                                                    So, there were lots of decent 386 machines, both 386DX and some 386SX, that were specced to run DOS and only had a meg or two of RAM. I saw some with 1.5 or 1.6MB RAM: 640kB plus 1MB of extended memory. Odd, but cheap.

                                                    You could force them to run in Enhanced Mode. I forget the switch now. win /e or something. It worked. You got pre-emption of DOS apps in V86 mode “VMs”, you got virtual memory. It was dog slow but it worked. It wasn’t very stable but Windows 3 wasn’t very stable.

                                                    Secondly, when running under another 386 multitasker, Windows could only run in Standard Mode. So it ran in Standard Mode under DESQview/386, DESQview/X, Concurrent DOS 386, and a few other things.

                                                    So it was very much a thing and quite common.)

                                                    When I posted here on Lobsters in the past that Windows/286 ran in Standard Mode a lot of people got angry and contradicted me. It seems to me that there still is not a categorical, ex-cathedra statement of exactly how Windows 2 in 286 mode worked.

                                                    But Windows 3 in Standard Mode was quite good and made effective use of up to 15.6MB of XMS if you had it – on a 286.

                                                    1. 2

                                                      This was a standard thing to do in the Windows 2 era

                                                      It’s possible that I misremember. It could have been Windows 2. I think 3.0 was the first one that they rolled out company-wide.

                                                      It was a thing.

                                                      I know, I just never owned a 286, and the 386s I owned had 4 or 5 MiB of RAM, so there was no point in pretending to be a 286. I went straight from 3.0 in Real mode to 3.1 in Enhanced Mode.

                                                      Windows 3.0 ran on my Amstrad PC1640 HD20 (with the hard disk upgraded to a massive 40 MB!). With 640 KiB of RAM, you could run Windows and 1-2 apps. It had an EGA monitor, so you mostly ran one app maximised anyway, which reduced the value of multitasking.

                                                      1. 1

                                                        Windows/286

                                                        See here: https://winworldpc.com/product/windows-20/windows-286

                                                        Give it a try. As I recall, it is merely running in “real mode” with the HMA enabled, and with a special version of USER in the HMA. At least that is what the docs included with it say.

                                                        What Windows/286 does appear to have is an EMS 4.0 / EEMS-aware KERNEL, so it can possibly page below 640K if the hardware implements it, i.e. if the board, chipset, and memory cards are set for minimal base memory and EMS “backfill”.

                                                        Or at least that is the impression I got a number of years ago when I had a try at disassembling some of its KERNEL.* It looks like that was removed/replaced when the Windows 3.0 scheme with support for DPMI was eventually created, which would make sense, as it is simpler.

                                                        I’d suggest that you’d need to offer proof of Windows/286 operating in “standard”, i.e. 286 protected, mode, as that goes against everything I’ve seen written in the past. So maybe take those images from the site above and get them working in such a mode.

                                                        A place I worked at during 1988–89 had a copy of Windows/386 and Windows/286, and us developers had Acer 286 machines with (usually) 1MB of RAM and EGA (some lucky folks had 2MB). As I recall, none of us tried running those particular Windows versions. Trying Windows/386 would have been pointless (despite there being a handful of 386 machines), but we did give plain Windows a go. Yes, it was painful in EGA (or even CGA) mode.

                                                        On the other hand, Windows/286 knows how to use LIM 4.0 boards to store and swap executable code. Older versions didn’t even know how to use LIM EMS 3.0.

                                                        1. 2

                                                          Give it a try. As I recall, it is merely running in “real mode” with the HMA enabled, and with a special version of USER in the HMA.

                                                          Interesting stuff. Thanks, and for the PC Mag link.

                                                          I am not bothered enough to try to run up some kind of 286 PC emulator to try it out. I do not miss Windows 2.x at all and have no urge to return.

                                                          It’s very interesting to me, though, that pivotal parts of the development of IT in the 1980s and 1990s are being forgotten now… because without knowing this stuff, some things about how it is today make no sense at all.

                                                          This industry needs to know its history much better.

                                                3. 2

                                                  Thanks for sharing the post. I suggested a title change to “Protected…” and if a few others do as well, we can stop discussing your shorthand and it’ll be easier to stick to the substance of the article. I enjoyed the post you shared from landley.net as well.

                                                  I find this tremendously interesting, because my head was firmly in Mac space at this time, so I missed much of it.

                                              2. 3

                                                modern operating systems turn on protected mode when booting so as to be able to access more than 1 MiB of memory.

                                                These days, it’s already the firmware (UEFI) that enters protected mode. Protected mode is also a prerequisite for long mode (aka 64-bit mode, aka x86-64): it is the only mode from which long mode can be entered.

                                              3. 2

                                                Continuing with my quest to build a minimal bootable EndBASIC image for the RPI 3 I have lying around.

                                                So far my current prototype is based on NetBSD, which drove me to write a console backend that talks directly to the wscons framework (i.e. it renders directly to the framebuffer, bypassing the need for X11). With this, I can modify /etc/rc to start EndBASIC really early in the boot process and background the rest of the system initialization, giving me boot times in the ~seconds range. (It’s still not fast enough, but check the video from last week :D )

                                                I’m now fighting with the wskbd keymap and the switch I made from gcc to clang, but I should have a 3rd prototype built by the end of the weekend, which hopefully shows significantly-improved graphics rendering performance.

                                                1. 1

                                                  I wasted the weekend figuring out how to cross-build EndBASIC (a Rust project) for aarch64 with SDL support, and oh, targeting NetBSD. I eventually gave up on cross-building and, just now, reached the simplest solution, which is to use qemu and an old-fashioned expect script to drive the build. Not the fastest option (well… actually it’s faster than setting up a cross-build environment from scratch!), but certainly the easiest path. By the way, I did try buildroot too for Linux but hit some blockers there as well.

                                                  Anyway. The thing is: that’s just a precursor to finishing this idea I’ve had in mind to create a “boot to EndBASIC” minimal image for the Raspberry Pi. So hopefully I’ll get that idea to a decent state by the end of the long weekend :)

                                                  1. 2

                                                    I impulse-bought a huge NVMe upgrade for my server so I’ll see if I can replace the current small SSD it has (used only for ZFS ARC) to act as its main drive – and then repurpose the SSD to act as cache for the NAS.

                                                    I also impulse-bought a tiny HDMI display for the Raspberry Pi and I want to get back to my goal of having a bootable “box” that launches into EndBASIC with minimal overhead. So I hope to start toying with this idea. The thought of designing and 3D-printing a case also came to mind and I have no clue how to go about it yet, but that’s what I want to end up doing! Not this week for the latter though!

                                                    1. 12

                                                      Heh. It’s funny because I’m facing this “problem” right now. I’m in the middle of doing a “simple change” to the core of the EndBASIC interpreter and I’ve already gone down three different refactoring paths and hit dead ends in all of them “due to the borrow checker”.

                                                      Well, it turns out my “simple change” wasn’t so simple after all: not because of the borrow checker, but because it has rippling effects throughout the code base as it modifies various assumptions, and the change is actually a significant departure from how things used to work. The borrow checker is being painful to deal with, sure, but it is doing its job: it’s showing me that those assumptions exist and must be updated with a new design.

                                                      1. 2

                                                        Spend a little bit of time updating my “Demystifying secure NFS” post after the helpful advice from the previous thread here (https://lobste.rs/s/7klzqj/demystifying_secure_nfs). And then trying to pick up some work I was doing on EndBASIC late last month. Other than kids stuff, yard work, etc…

                                                        1. 6

                                                          It’s really interesting to see a bhyve management UI written in Lazarus. An unorthodox choice, but quite possibly the best choice for a desktop application in the current landscape, if there aren’t fast-moving external dependencies.

                                                          Now that ISC DHCP-everything is unmaintained, I also wonder if FreeBSD’s dhclient could be ported to other platforms…

                                                          1. 0

                                                            Looks nice, but weird choice indeed. I’d love to have such a management interface over the web though to remotely manage the VMs on my headless FreeBSD box.

                                                          2. 8

                                                            Great post! A couple of minor notes and some answers:

                                                            Kerberos is an authentication broker.

                                                            Nitpick: I’d classify it as an authentication service. A broker to my mind is responsible for managing the coordination of messages between other (often distributed) components, but doesn’t itself perform processing of the messages. Kerberos in contrast is a client/server model where the Kerberos server (KDC) processes Kerberos client requests.

                                                            Its goal is to detach authentication decisions between a client machine and a service running on a second machine, and move that responsibility to a third machine—the Kerberos Domain Controller (KDC).

                                                            You mean the Key Distribution Center (KDC). Domain controllers are strictly a Windows concept as part of an Active Directory deployment. Kerberos itself has no concept of domain controllers.

                                                            Kerberos claims to be an authentication system only, not an authorization system. However, the protocol I described above separates the TGT from the TGS, and this separation makes it sound like Kerberos could also implement authorization policies. Why doesn’t it do these?

                                                            The issuance of a TGT which is used by the client to request service tickets to specific service principals has several benefits over the client directly requesting service tickets without an initial TGT:

                                                            • The Kerberos client can discard the user’s password after obtaining the TGT. There’s no need for it to be retained in either memory or persistent storage, which has some clear security benefits. You sort of mentioned this in the article by correctly noting that the TGT means the user doesn’t get repeatedly prompted for their password on subsequent interactions with the KDC, but a client could still achieve that by caching the password in memory. The TGT makes even this (admittedly terrible) idea unnecessary.
                                                            • The Kerberos server (KDC) can verify the authenticity of the TGT instead of authenticating the user itself, i.e. it doesn’t need to perform the far more expensive user authentication process when issuing service tickets (the vast majority of tickets). If the TGT is valid, the KDC can trust the authenticity of its contents and simply issue the requested service ticket for the user named in the TGT. A single KDC may be processing a very large volume of requests, so avoiding the need to authenticate users more often than necessary has a material performance impact.
                                                            • As a supplement to the last point, authenticating a user is also much slower on the network, as it requires multiple round trips to the KDC. A service ticket request can be done in a single round trip, literally so in the case of UDP, or after the initial handshake in the case of TCP.

                                                            These are all performance and security reasons, but the resulting design doesn’t enable authorisation decisions to be made at the Kerberos level. The information required to make that decision doesn’t typically exist in Kerberos, and even if it did, it would significantly complicate the service ticket issuance flow.
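                                                            The split described in the bullets above can be sketched in miniature. The following toy Python model is my own illustration, not the real Kerberos wire protocol (real tickets are encrypted ASN.1 structures, not signed JSON, and all names here are invented): it shows the “authenticate once with a password, then only verify cheap signatures” idea behind the TGT.

                                                            ```python
                                                            # Toy model: the KDC checks the password once (slow path) and issues a
                                                            # signed TGT; later service-ticket requests only need a fast signature
                                                            # check on the TGT instead of re-authenticating the user.
                                                            import hashlib
                                                            import hmac
                                                            import json

                                                            KDC_KEY = b"kdc-long-term-secret"   # known only to the KDC
                                                            USERS = {"alice": "s3cret"}         # stand-in for the user database

                                                            def _sign(payload: dict) -> str:
                                                                blob = json.dumps(payload, sort_keys=True).encode()
                                                                return hmac.new(KDC_KEY, blob, hashlib.sha256).hexdigest()

                                                            def authenticate(user: str, password: str) -> dict:
                                                                """Slow path: verify the password, then issue a TGT."""
                                                                if USERS.get(user) != password:
                                                                    raise PermissionError("bad credentials")
                                                                tgt = {"user": user, "type": "TGT"}
                                                                return {"ticket": tgt, "sig": _sign(tgt)}

                                                            def issue_service_ticket(tgt_msg: dict, service: str) -> dict:
                                                                """Fast path: trust the TGT's signature; no password check."""
                                                                if not hmac.compare_digest(tgt_msg["sig"], _sign(tgt_msg["ticket"])):
                                                                    raise PermissionError("forged TGT")
                                                                st = {"user": tgt_msg["ticket"]["user"],
                                                                      "service": service, "type": "ST"}
                                                                return {"ticket": st, "sig": _sign(st)}

                                                            tgt = authenticate("alice", "s3cret")             # password used once
                                                            st = issue_service_ticket(tgt, "nfs/fileserver")  # no password involved
                                                            print(st["ticket"]["service"])                    # nfs/fileserver
                                                            ```

                                                            Note how the client can throw the password away right after the first call: every subsequent request only presents the TGT.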

                                                            Authorisation decisions instead tend to be made on user attributes which are stored in a directory service, usually based on a protocol like LDAP. Active Directory takes this approach where Kerberos is used for AuthN and LDAP is used for AuthZ. Technically, LDAP can be used for both, and some services use this approach for SSO, but then you lose the benefits Kerberos brings.

                                                            Something that is desirable is to include user information in the Kerberos service ticket which can be used by the authenticating Kerberos services to make authorisation decisions. Kerberos doesn’t have a standardised facility for this, so Microsoft extended Kerberos with a structure called the PAC (Privilege Attribute Certificate) that includes information about the user (e.g. their group membership as retrieved from LDAP at the time the TGT was issued). Kerberos services can examine the PAC to make authorisation decisions instead of performing more expensive lookups against a network service when that information is not available locally.
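                                                            A tiny Python sketch of the PAC idea, purely my own illustration (a real PAC is a Microsoft-defined binary structure carried inside the encrypted ticket; the names and dict layout here are invented): group membership is embedded in the ticket at issuance time, so the service can authorise locally.

                                                            ```python
                                                            # Stand-in for group data the KDC would fetch from LDAP at issuance time.
                                                            GROUPS = {"alice": ["staff", "nfs-users"]}

                                                            def issue_ticket_with_pac(user: str, service: str) -> dict:
                                                                # KDC side: look up the user's groups once and embed them.
                                                                return {"user": user, "service": service,
                                                                        "pac": {"groups": GROUPS[user]}}

                                                            def service_authorize(ticket: dict, required_group: str) -> bool:
                                                                # Service side: decide from the embedded PAC, no directory lookup.
                                                                return required_group in ticket["pac"]["groups"]

                                                            ticket = issue_ticket_with_pac("alice", "nfs/fileserver")
                                                            print(service_authorize(ticket, "nfs-users"))  # True
                                                            print(service_authorize(ticket, "admins"))     # False
                                                            ```

                                                            The point of the design is visible even in this toy: the authorisation check needs no network call, because the KDC already did the directory lookup when it issued the ticket.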

                                                            1. 2

                                                              Thanks for the detailed response! I’ll try to incorporate some of this into the text when I find a moment this week.

                                                            2. 1

                                                              Excellent article! I was wondering if you have evaluated how NFSv4 with krb5p encryption compares to Samba in terms of throughput, latency, and overall system resource usage. How do they perform in real-world situations relative to each other?

                                                              1. 1

                                                                I’ll include those measurements when I write the promised upcoming post comparing the DS923+ to FreeBSD :)

                                                                (I don’t have any numbers though yet. I tried to run bonnie++ earlier but it was infinitely slow over NFS…)

                                                              2. 1

                                                                I don’t know much about NFS, but I am planning to get the same Synology device. What amount of setup work is required to have a robust NAS system after acquiring the device? Looking at this guide, it seems like a lot of assembly is required. I thought Synology software would handle all this?

                                                                1. 2

                                                                  Setting the device up was trivial, and even the NFS bits could be easily configured through the UI. All of the complexity I wrote about comes from the KDC and the clients.

                                                                  And if you stay away from NFS, it’s much easier. The device seems solid to me.

                                                                  1. 1

                                                                    I have a DS923+ — it is very hand-holdy and quite hard to get wrong. This post is going off the rails (good thing! but yes).