Threads for jcrawfordor

  1. 1

    Tape backups aren’t dead, they’re just pining!

    Nah honestly I can see this being really useful for some workloads… Just not worth the effort for most users anymore.

    1. 12

      The problem with tape is the size of data required to make it cost effective. A disk is more expensive than tape of the same capacity but a lot cheaper than an equivalent tape drive. Last time I did the maths, 50TiB of backups was about the point when tape became cheaper. The article tells me that the point is now 165 TiB. That’s actually a bit misleading because it assumes a constant cost for the tape drive, but if you have 165 TiB of data on LTO-7 then you’re at almost 30 tapes and so you start to enter the territory where you want an autoloader of some form, which pushes the break-even point to 345 TiB (from the nice calculator in the article). And at that point you now have almost 60 tapes, so you probably want a larger autoloader.

      By the time you have 300+ TiB of storage, you’re also well into the category of wanting off-site storage, which changes things a bit. Tape does well in this regard because you can keep your tape library on site and ship tapes to an off-site storage facility. That all adds to your costs.

      The other comparison point is cloud storage. Cloud providers offer very cheap archive storage, which has similar characteristics to tape: cheap, write-mostly storage with very high (multi-hour) read latency, and with the off-site and append-only bits built into the offering from the start. That 345 TiB point costs around £7,000. That buys you about 788 TiB-years of archive storage in the cloud, so for the same price as the disks or tape solution you could make the whole thing (including the off-site storage) someone else’s problem for 2-3 years, at which point I’d expect prices to have come down.
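
      For anyone who wants to redo this with their own numbers, here is a rough sketch of the arithmetic in Python. Every price below is an illustrative assumption (picked to roughly match the article’s figures), not a quote, so plug in your own:

          # Rough break-even sketch; every price below is an assumption, not a quote.
          DISK_PER_TIB = 18.0         # GBP/TiB for SATA disks (assumed)
          TAPE_PER_TIB = 7.4          # GBP/TiB for LTO media (assumed)
          TAPE_DRIVE = 1750.0         # GBP for a bare LTO drive (assumed)
          AUTOLOADER = 3650.0         # GBP for a small autoloader (assumed)
          CLOUD_PER_TIB_MONTH = 0.74  # GBP/TiB/month for an archive tier (assumed)

          def break_even(fixed_cost):
              # Smallest capacity at which tape media + hardware undercuts plain disks.
              tib = 1
              while fixed_cost + tib * TAPE_PER_TIB > tib * DISK_PER_TIB:
                  tib += 1
              return tib

          print("break-even with a bare drive:", break_even(TAPE_DRIVE), "TiB")
          print("break-even with an autoloader:", break_even(AUTOLOADER), "TiB")

          budget = 7000.0   # roughly what the 345 TiB tape setup costs
          tib_years = budget / (CLOUD_PER_TIB_MONTH * 12)
          years = budget / (345 * CLOUD_PER_TIB_MONTH) / 12
          print(f"{budget:.0f} GBP buys ~{tib_years:.0f} TiB-years, i.e. ~{years:.1f} years of 345 TiB")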

      1. 1

        Thank you for the excellent summary of the problem! …and for doing all the math for me that I was about to do myself. <3

        I wonder if those cloud providers use tape for their archive storage products?

        1. 4

          I wonder if those cloud providers use tape for their archive storage products?

          I don’t think any of them talk publicly about what they use (and I don’t interact with the Azure Storage team so I have no direct knowledge of anything that we do). Project Pelican’s publications talk about targeting workloads that would traditionally use tape. Project Silica is aiming at replacing whatever we use now with laser-etched glass.

          The economics are quite different for the cloud. Most data written to archive storage is write-only. People use it for offline off-site backups and so you only ever need to read back if your primary storage fails and your on-site backups / redundancy fail. If you’ve got a local redundant storage system with snapshots then the kinds of failure that require you to go to archive storage are pretty catastrophic (e.g. your office burned down). If you’re using cool storage for a first tier of backups then you probably don’t need to go to archive storage for most of these. At a guess, 99% of data is never read back, but you don’t know which 1% is going to be needed. This means that something with a tape-like model of a very expensive reader and very cheap and durable storage units makes sense. Silica is aiming to use cheap sheets of glass for the long-term storage with a reader / writer that makes tape drives look cheap. The writer basically streams whatever’s written to it onto sheets of glass and they’re then stored very densely. The reader may have to wait for the right sheet of glass to be moved into the queue, and streams the contents off into hot / cool storage for remote access.

          This kind of model doesn’t make sense for anyone except a handful of companies. The fixed cost of building this infrastructure is huge, so you need to be storing many petabytes of data before the tiny marginal cost of an extra terabyte starts to matter. Big movie studios that want to archive all of their historical footage including the unedited bits are probably in that space, not many other companies.

          1. 1

            Project Silica is the coolest and most sci-fi thing I’ve ever heard that seems like a fundamentally terrible idea. XD Quartz glass will last basically forever, but sounds quite expensive and bulky compared to, say, metal foil or some kind of plastic. Great for really archival stuff like library stacks, but doesn’t seem terribly appealing for essentially-disposable write-once media in a data center.

            1. 3

              Glass microfiche used to be a very popular archival format (before it was replaced by plastic/gel microfiche, although the glass was more durable and continued to see use), so this sounds pretty practical to me. The media is inexpensive and there was equipment available in the ’70s to store it in a very compact way. Microfiche is really durable and very easy to handle, esp. compared to its competitor microfilm, so the library environment I worked in years ago preferred it since patrons were less likely to damage it. I can see it having the same advantages for automated handling.

              1. 1

                That’s interesting, I didn’t know that. I’ve used the plastic microfiche before but never seen glass.

              2. 1

                The glass is very cheap. On the order of $25 for a single piece. I can’t remember how many TiB they’re putting on one of those, but it’s a lot cheaper than tape with the same capacity. It’s then expected to be able to last for 100 years or more without any data loss (tape has layers of magnetic material next to each other, so gradually fades).

                Cloud archive storage is not essentially disposable. 99% of it is never read, but you never know which 99%. Long-term reliability is incredibly important.

                1. 1

                  I suppose the storage density is the question, yes. From one of the papers on the Project Silica website, “Glass: A New Media for a New Era?”:

                  In the first system we anticipate that in a volume equivalent to a DVD-disk we can write about 1 TB. The technology can potentially get to 360 TB.

                  If we choose a moderate number of, say, 100 TB/plate then that’s $0.25/TB; even at 10 TB/plate it still beats current tape. From the original article:

                  …a 12TB SATA drive costs around £18.00 per TB (at the time of writing), a LTO-8 tape that has the same capacity costs around £7.40 per TB (at the time of writing)

                  …Okay, they are aiming to cram a lot more data onto those things than I expected, I suppose. Unfortunately I don’t see any open publications on how well it’s going, the latest actual numbers I see are from a 2019 Ars Technica piece about them saving a 75 GB movie onto one plate. Not exactly thrilling. I do hope they make it work, though; quartz glass would make a great historical medium. If I recall correctly from The World Without Us, glass is one of the most environmentally-resistant things we make, and quartz glass is tougher.

                  1. 1

                    If we choose a moderate number of, say, 100 TB/plate then that’s $0.25/TB; even at 10 TB/plate it still beats current tape. From the original article:

                    The $25 plates are a lot bigger than a DVD. I think they’re about 30cm square and don’t have a hole in the middle for a spindle. A DVD is a 12cm diameter circle. The hole is 1.5cm, I think the gap around it adds up to 3cm, so the area is (6^2 * pi) - (1.5^2 * pi) cm^2, or around 106cm^2. The area of a Silica glass slab is around 8.5x that of a DVD, so at 100 TB per DVD-equivalent area they’d be just short of 1PB. At their target number that’d be about 3PB per 30cm x 30cm glass slab.
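
                    If it helps, here is the back-of-the-envelope bit written out (the 30cm-square plate size is my recollection, so treat it as an assumption):

                        import math

                        # DVD recordable area vs an assumed 30cm x 30cm Silica plate.
                        dvd_area = math.pi * (6.0**2 - 1.5**2)   # cm^2, 12cm disc minus the 3cm hub
                        plate_area = 30.0 * 30.0                 # cm^2, assumed plate size
                        ratio = plate_area / dvd_area            # ~8.5x

                        for tb_per_dvd_area in (1, 100, 360):    # figures quoted from the paper
                            print(f"{tb_per_dvd_area:>4} TB per DVD-area -> ~{tb_per_dvd_area * ratio:,.0f} TB per plate")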

                    Unfortunately I don’t see any open publications on how well it’s going, the latest actual numbers

                    This kind of project tends not to publish until it’s either shipping in a product or it’s cancelled. The last update I saw from the project was pretty impressive. The same team is also working on holographic storage, the assumption being a future storage hierarchy of flash or other persistent RAM for hot, holographic storage for cool, and glass for archive storage.

          2. 1

            Not sure if you factored this in, but tapes are suitable for cold storage whereas it is generally discouraged for hard drives. That brings down the cost if you consider it over decades, where hard drives would be constantly spinning (at least if you follow conventional advice).

        1. 1

          And on that topic, if you’re going to run a mail server and you’re not using a 100% virtual approach with everything stored in a relational database, it will be well worth your time to become an expert on Linux file permissions and the SetUID behavior of your MDA and, possibly, MTA, filter, etc if they interact with file storage. Many of the real-world problems you’ll run into will turn out to be related to file permissions and what UID a specific component of the mailserver is acting as when it takes a specific action. Depending on details of the setup, processing an individual incoming email often includes steps that run as a service user, and steps that run as the user the email is being delivered to.

          While I would agree with that, is there something specific that happened here? I can’t remember ever having to deal with permissions myself in 15 years of using files. The only time was when I switched providers and mail server (Debian/postfix to OpenBSD/opensmtpd, as well as some other parts). But that was just a backup and a chown on the new system. Very straightforward.

          But of course, when running a server, having a grasp of how permissions work is a good idea, and I’d never recommend anyone do otherwise. Also, obviously, understand the software you are running.

          I always ran without a db (if you don’t count the aliases db). For mail, going the very Unix approach feels like the right way. After all, most Unix-like OSs come with some integrated use of mail (status mails, cron mailing output, etc.), so you will often find mail for users even on non-mail servers. See OpenBSD’s welcome mail.

          One thing I’d like to add: have a good understanding of DNS. That helps a lot elsewhere too. Knowing about DNS PTR records is a good idea in general whenever you set up any kind of server, and knowing how SPF and DKIM work, rather than just blindly copy-pasting, is also a good idea.

          Obviously understanding what you set up is a good idea. Since mail doesn’t change a lot you likely won’t touch it much, so make sure to comment and document even the parts that seem obvious now. :-)

          1. 1

            The main situation where I’ve run into these problems has always been storing mail outside of user home directories - I recommend not doing that; the biggest upside is that it means pretty much everything “just works”, since most mail software is already set up to setuid to the recipient before trying to handle mail on disk, and expects all user config files to be in the user home and read via setuid. If you start changing that situation then you become a lot more likely to run into headaches where you have mail processing happening as ‘mail’ sometimes and as the user other times, mail directories and config files owned by different users, different chroot and user settings per service in the Postfix master.cf, and so on. Mostly this has come up because if you’re putting a lot of functionality in your IMAP server it can become desirable to not allow the user direct access to their own mail on disk, since they bypass whatever the IMAP server is doing and that can create inconsistency and headaches (this was much worse yet in the days of POP or, worse, the days of IMAP except for five users who have never reconfigured their client).

            Another adjacent source of headaches is chroot. Many distros distribute Postfix preconfigured to run under a chroot, which I think is a very good idea considering the huge attack surface with file I/O access in Postfix, but it does mean that some of your mail components need to run chrooted and others need to not be chrooted, and if you’re changing the config from defaults you will probably screw that up at least once and have a problem that’s annoying to diagnose.
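
            If you ever want to sanity-check which services are chrooted on a box, the chroot flag is the fifth column in master.cf, so a quick sketch like this will list them (the path is the usual default; adjust for your distro):

                # List Postfix services and their chroot flag from master.cf.
                # Column layout per the master(5) man page; '-' means the built-in default.
                from pathlib import Path

                for line in Path("/etc/postfix/master.cf").read_text().splitlines():
                    if not line or line.startswith("#") or line[0].isspace():
                        continue  # skip blanks, comments, and continuation lines
                    cols = line.split()
                    if len(cols) >= 5:
                        service, stype, chroot_flag = cols[0], cols[1], cols[4]
                        print(f"{service:<12} {stype:<6} chroot={chroot_flag}")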

          1. 4

            Even for popular combinations, there are multiple ways to architect the mail delivery, storage, and management process. For example, there are at least 4-5 distinct architectural options for configuring Postfix to make mail available to Dovecot. Different distributions may package these services pre-configured for one approach or the other, or with nothing pre-configured at all. In fact, mail servers are an area where your choice of distribution can matter a great deal.

            I’d love to read an outline of these options, along with a map of distributions which provide each option out of the box. Does such an outline exist?

            1. 3

              That’s a good question, if I can’t find one I might take it on. That 4-5 is a very rough estimate based on my recollections, but I think it’s more likely to be an underestimate than an overestimate. The biggest difference is whether or not you choose to use Dovecot as MDA: you can either have Postfix “deliver” mail and then Dovecot read the mailboxes created by Postfix, or you can have Postfix transfer email via socket to Dovecot for “delivery.” The latter tends to be preferred these days because it allows Dovecot to do a lot of optimization by having full ownership of the mailbox, including filtering and live indexing of inbound mail and IMAP push with less performance overhead. One reason you might not do it though is if you want to use a totally separate MDA for whatever reason (neither the one built into Postfix nor the one built into Dovecot, maybe procmail for compatibility with existing procmailrcs).

              An additional complication is that most MDAs support either “mbox” or “maildir” format for file system storage (maildir tends to be better for a few reasons, but there are catches), and some MDAs and IMAP servers support using a relational database instead (usually your best performance option, but you can imagine it gets weird when you have multi-MB email bodies). There’s also a Unix tradition of storing mbox/maildir as the user and in the user home directory, which has benefits but can make troubleshooting a bigger headache. If you NFS mount or otherwise centralize your home directories, users are 100% guaranteed to accidentally delete their whole maildir from time to time, leading to annoying backup recovery tickets. But moving these out of the user homedir tends to violate some expectations on the part of the mail software and has its own pains.

              Some of the details of these catches lead to the common situation on older mail setups where the user’s inbox is limited in size compared to other folders: it used to be very common in NFS environments to put the inbox, but only the inbox, in the user’s home dir in mbox format (because all of the Linux tooling expects it to be there), with all other folders stored on the mail server in maildir format outside of home directories. Fortunately I have not seen this in a while except for, oddly, Dreamhost, which I assume just hasn’t changed their mail setup for a good while. My memory of what exactly was the main pain point in moving the inbox is fuzzy now, but I know back when I ran mail we had already put out default mutt/pine configs to use IMAP rather than reading locally, so I think there was some other issue beyond that. I hazily think the details of the UID behavior of Procmail may have been involved.
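
              Incidentally, if you ever need to poke at both formats while troubleshooting, Python’s stdlib mailbox module reads either one through the same interface; a minimal sketch (the paths are made-up placeholders):

                  # Read the same messages from an mbox spool or a Maildir.
                  import mailbox

                  boxes = [
                      mailbox.mbox("/var/mail/alice", create=False),          # classic mbox spool (placeholder path)
                      mailbox.Maildir("/home/alice/Maildir", create=False),   # maildir layout (placeholder path)
                  ]
                  for box in boxes:
                      for msg in box:
                          print(msg.get("From"), "--", msg.get("Subject"))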

            1. 7

              Today, many email providers have some kind of method of contacting them, but I have never once received a response or even evidence of action due to one of these messages… both for complaints of abuse on their end and deliverability problems

              Strangely, one thing I was surprised about was how it was possible to get in touch with Postmasters when we had issues with blacklisting. It generally took a while to track down the proper way to contact them, but I think there’s only been one mailhost (Apple) who we never heard back from – although we were removed from their blacklist the day after. When blacklisted by Microsoft (Outlook, Hotmail, Live, etc.) I even got to chat with a helpful real person; an actual named person no less!

              1. 8

                Suppose it depends on the provider. I’ve had quick and helpful responses from Migadu when needed in the past.

                1. 2

                  This was my experience also. A couple of years ago, I was trying to change my Warby Parker password after a breach on their end. The process failed at the “send a nonce to Chris via email” step. I could see that the cause of the failure was that I hadn’t added warbyparker.com to my greylister, so I wasn’t going to get immediate delivery. Even after bypassing warbyparker.com I wasn’t getting the nonce. A letter to hostmaster@warbyparker.com got a response but I didn’t need glasses at the time. I honestly didn’t know it was fixed until today when I remembered as a result of this thread.

                2. 6

                  Maybe I’m just being in too bad of a mood! Most of this comment dates back to when I was working in university IT 6-7 years ago and spent a long time battling blacklisting problems. I don’t think I ever got responses from Google, and while they did start accepting mail from us again each time it was long enough later (a few days) that it was unclear whether someone had read my message or the blocklist entry had just hit an expiration date. The worst problems we had in the other direction were abuse coming from Google (spam and fraud email, calendar invites, and phishing via Google Forms) and I don’t think I ever saw any evidence of their acting on abuse complaints. We started blacklisting certain gmail addresses at the institutional level because they were sending so much mail to us that just had very long address lists pasted into the To/Cc. The irony is that SpamAssassin was picking this stuff up and we had to manually bump the domain reputation of gmail.com to avoid it learning to spamcan every marginal email coming from Google.

                  I have not seen this kind of problem since I worked there (although I haven’t run as large of a mailserver since), but I would assume Google has tightened up their controls at some point because it was remarkably brazen behavior to continue from a major email provider for, as I recall, around a year. I’d guess this was 2013 or so. This situation actually led somewhat directly to that institution switching to Google Workspace later because it fostered a perception that the in-house operation was incompetent, which is sort of ironic. It really got at one of those no-win problems in security: we were getting phished via Google Forms on a weekly basis, but whenever we tried taking active measures like dropping email with Google Forms links it turned into a huge upset and we had to back off. When I worked in incident response in 2015-2017, phishing via Google Forms continued to be a top-ten problem, but at that organization we had somewhat better tools available to combat it and ended up using our outbound web filters to inject a redirect to a warning page the user had to click through (we were fortunate enough to have TLS interception). Google now provides some additional measures to mitigate this problem but they of course require that you be a Google Workspaces customer. I assume they’ve also stepped up detection and enforcement a lot because in more recent cases where I’ve seen Google Forms phishing, the link has often been dead by the time I saw it. In any case the whole thing left me with a very negative perception of how Google runs their email operation (which was boosted when I was involved in the pre-sales process on the switch to G-Suite, which was amazingly incompetent).

                  And I shouldn’t let Google color my perception of the whole industry, my recollection is that Microsoft was pretty easy to get a hold of on hotmail/outlook.com issues, although I think we may have leveraged our existing relationship with them to get a better contact than what they have publicly.

                  With smaller organizations it’s always been a lot easier; of course, being a university, much of our interaction was with other universities, and postmaster@ almost always got a response from someone helpful, whether we were the source or the victim of the problem. Unfortunately this situation has become worse over the years as more and more institutions and businesses move over to cloud services that they have limited control over and knowledge of. In my current position, where I don’t even really deal with email, I’ve run into a few cases of “something is blocking email from you but we can’t figure out what.” It’s almost always a cloud email security gateway they use, and those have a tendency to be very opaque.

                  1. 1

                    Although we haven’t had any issues so far sending to Google, I don’t doubt for a second that they would be a pain to deal with. We manage several dozen Google Workspaces for clients and it is constantly causing bother.

                  2. 2

                    It’s also a good idea to have an alias for postmaster@ (root too; aliases tables often have that by default). There are many systems and people that default to sending there.

                  1. 4

                    The world of professional reference materials, for professions outside of programming, has always been an interesting one to me. In high school I took advanced chemistry classes and was exposed to the CRC Handbook, a hefty and well-organized book that’s intended to contain most of the information you would need day-to-day as a working chemist - formulae, physical properties, constants, etc. Similarly, but more in biochemistry and pharma, the Merck Index is a pretty famous resource with the general goal of allowing you to look up any chemical and find out its properties, including medical uses. Pharmacies usually have a copy (of course it’s software now) for the pharmacist to look up questions about interactions and the like.

                    Anyway, in medicine, the Merck Manual of Diagnosis and Therapy is a very old, many times revised reference that aims to fit your typical doctor use case - looking up symptoms to find a diagnosis and treatments. Nowadays there are several software-based options that are updated continuously, although the Merck Manual continues to be revised and reissued (and is available in software form including a mobile app).

                    The cool thing about these references of course is that they are not only comprehensive but also authoritative - the publishers put a lot of time and effort into following the state of research in order to provide the most current citations for all of their information. Google doesn’t offer this type of service, but in practice you can still often get to it via Google, since, despite the best efforts of shady SEO, googling medical symptoms still usually gets you results from either respected medical institutions or government agencies. These sites, like the Mayo Clinic online reference, do give you citations in the academic literature if you click through.

                    My point is that many of these references are 100 years old - respected professionals have been looking things up for a long time, because any meaningful profession encompasses more information than one person can remember. This seems even more true in fields like medicine and pharmacy where we would hope practitioners aren’t just trusting their memory!

                    1. 2

                      Relegendable caps can be surprisingly expensive, unfortunately; I usually order them from X-Keys even if not for an X-Keys device, just because they’re a reliable source at a reasonable price. Unfortunately they use a different style that isn’t as easy to hand-cut; I’d be curious if anyone knows of a source of the square ones that’s less ridiculous than US$4+ each.

                      As for macropads, while I don’t necessarily want to besmirch the new open-source designs, I’ve usually stuck to used industrial products. They tend to be cheaper and more durable. The Genovation CP-24 is a good choice that’s regularly available on eBay for very reasonable prices. If you are crafty you can fit a Raspberry Pi Zero or similar into some empty space in the back of the case to make a compact wall-mount controller (I use this arrangement for lighting scene selectors). Other brands to check for on used markets are Cherry (the actual POS and industrial control keyboards Cherry makes) and PrehKeyTec, although the PrehKeyTec products are dome switches unfortunately - they feel great new but the used ones are sometimes sadly squishy. A nice thing about Preh though is that they make a popular line of what I call “140% keyboards,” full-size 101-key keyboards with extra rows of macro keys. Almost all vendors other than Preh use Cherry switches. All of these are programmed on-board, meaning no software required for remaps/macros/layers, but the programming software does tend to be an eccentric Windows-only tool.

                      And of course X-Keys is a well-established manufacturer of “macro pads” that predates the macropad concept (goes back to train sim controllers, of all things, originally), and Kinesis Advantage keyboards are on-board programmed with multiple layers and so I use the left side of mine as a macro pad when on the numpad layer (which I trigger via foot pedal).

                      1. 3

                        The article seems to have a great deal of faith that a “scanner” will address the problem. This might be true, but the article really doesn’t give us the information to tell. PDF417 barcodes are commonly placed on drivers licenses in a number of countries but usually just contain a duplicate of the information on the face, which can be forged just as readily as the human-readable version. Part of this is because of technical difficulty in fitting a meaningful digital signature into a barcode of low enough density and size to still read easily. Unfortunately a lot of people don’t understand this, and companies market “scanner” apps to users like bouncers while implying that they provide strong protection against forgery. In the US, for example, they very much do not.

                        This raises the question of what is in the PDF417 barcode. It’s hard to be confident from the article because we only see examples and licenses from generators, and we can imagine that both might have invalid data for whatever reason. The first generated license, for example, has a barcode with visibly lower bit density than the others. It seems to contain a random number and nothing else, so presumably the author of this generator has just completely skipped trying to create a valid barcode.

                        The other examples, both the one from the government and the second (animated GIF) fake example, contain a short JSON payload. JSON is not optimal for this use due to the high overhead but I’ll allow that it made development easy and future modifications very flexible. It’s hard to say too much about the JSON payload as the keys are either obfuscated or Icelandic abbreviations, but we can see that on the official example “TGLJZW” is the document ID number. The faked example is missing this field entirely. The field “ELUM4L9” is blank in both. More interestingly, the official example barcode contains the key:value “CmFuZG”:“iI5DMxm9”, which purely by eye seems like it could be a very compact digital signature of some sort. The faked example is missing it.

                        So it’s possible that the licenses could be verified offline by use of the signature, although such a short signature presents a meaningful risk of brute-force attack. This is still the most that a “non-official” scanner implementation could do, and that’s assuming that I’m correct that it’s a compact signature and that the public key material is released.
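
                        For what it’s worth, if that really is a compact signature and the key material were ever published, the most a third-party scanner could do offline is something like the sketch below. The field name and the canonicalization are pure guesses on my part, so this only shows the shape of the check:

                            # Purely hypothetical offline check; the real field names, encoding,
                            # and signature scheme are not documented, so this is illustrative only.
                            import json

                            def verify_offline(barcode_text, verify_fn):
                                data = json.loads(barcode_text)
                                sig = data.pop("CmFuZG", None)   # guessed: the compact "signature" field
                                if sig is None:
                                    return False                 # the faked examples simply omit it
                                payload = json.dumps(data, sort_keys=True, separators=(",", ":")).encode()
                                return verify_fn(payload, sig)   # whatever scheme the issuer actually uses

                            # verify_fn would wrap the issuer's published key; without that, a scanner
                            # can only do plausibility checks on the visible fields.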

                        I find it more likely that the “scanner” here is assumed to be online, checking against the central database. This is a complex and risky proposition, so I can understand why its release has been much delayed. There are huge privacy implications to creating any kind of public or semi-public endpoint that allows for validation of driver’s license information, even if limited in scope, and you can virtually guarantee that it will be systematically abused for ID theft purposes.

                        1. 3

                          I’m a big fan of ZeroSSL for larger organizations for a lot of reasons. While LE is amazing at its mission of getting more of the internet on HTTPS, it lacks some of the features I think are well worth paying for. Having a REST API you can use to integrate internal tooling is really nice, allowing applications to request and manage their own certificates. It also offers email verification for certificates, which is great for applications where Let’s Encrypt’s lack of IP whitelisting for its validation traffic is a problem.

                          All that said, if your org uses LE extensively, as many do, I don’t think there is a real business use case for randomizing it. If LE is down for a long period of time, then you might need to switch, but it seems strange to optimize for that edge case.

                          1. 1

                            Does the email validation mean that you can get a cert with no A record and no DNS control?

                            1. 2

                              Yup! Let’s Encrypt didn’t want to deal with the headache of managing email at scale to automate that form of domain control, but there are a few RFC-standardized email addresses you can rely on, as zaynetro mentions. But the CA/Browser Forum baseline requirements only require (for “basic”/DV certs, anyways) that you prove you control a domain. There are lots of ways to do that, since that’s a social agreement.

                              1. 1

                                Sounds kind of crazy from the ACME perspective but email validation is acceptable to the CA/B baseline requirements and is basically the norm for DV certs for non-ACME issuers. The security implications aren’t great, and you need to make sure that e.g. no user can register one of the email addresses that’s acceptable to CA/B for this purpose, but it can be convenient for scenarios like issuing certificates for internal systems (not internet accessible) that use public domain names.

                                1. 1

                                  it can be convenient for scenarios like issuing certificates for internal systems (not internet accessible) that use public domain names

                                  I use DNS challenges for this purpose. Once I got tired of manually creating and cleaning the challenge-response records, I spent a few hours adapting one of the existing plugins to work with my DNS host.

                                  I like this better than injecting email into the process.
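
                                  For anyone curious what those plugins actually have to do: per RFC 8555 the DNS-01 response is just a TXT record at _acme-challenge.<domain> containing the base64url-encoded SHA-256 of the key authorization. A rough sketch (the token and thumbprint are made-up placeholders):

                                      # What a DNS-01 plugin publishes (RFC 8555, section 8.4).
                                      import base64
                                      import hashlib

                                      def dns01_txt_value(token: str, account_thumbprint: str) -> str:
                                          key_authorization = f"{token}.{account_thumbprint}"
                                          digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
                                          return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

                                      # The plugin creates the TXT record with this value, waits for
                                      # propagation, and cleans it up after validation.
                                      print(dns01_txt_value("made-up-token", "made-up-thumbprint"))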

                                2. 1

                                  Looks like it: https://help.zerossl.com/hc/en-us/articles/360058295354-Verify-Domains

                                  To verify your domains via email, first, select one of the available verification email addresses and make sure you have access to the associated email inbox. Typically, you will be able to choose between the following types of email addresses for your specific domain:

                                  admin@domain.com, administrator@domain.com, hostmaster@domain.com, postmaster@domain.com, webmaster@domain.com

                              1. 1

                                I got a new door installed in the shed, but I need to do most of the finish work in my free time this week - casing, painting, etc. A little nervous about doing the casing as I don’t have a great way to do the miter cuts, but it’s only a shed so it’s not a big deal if it’s a little rough. I’ve already redone the stucco around the outside but I didn’t quite get a perfect color match, so for a second coat I’m going to try to get it closer on.

                                I also need to get another blog post done this week, and for some reason this is already shaping up to be a mysterious bug week at work. Found two cases of “works for me but not for you” today that I need to dig into. One is an API call that… worked from Postman but not from curl? Either it’s something weird about headers and detecting encoding, or I just typo’d one of them and didn’t figure it out because it was late in the day.

                                1. 8

                                  I like this line of thought, and agree that the cloud is still in the poorly composing Multics stage (https://news.ycombinator.com/item?id=27903720)

                                  Though I think “the Unix philosophy” deserves some more analysis, since it encompasses several related but distinct design choices. The blog post mentions coding the perimeter of M data formats and N operations; it also mentions the C language as a narrow waist for portability. I would also add:

                                  • the syscall API and ABI – a narrow waist between applications and hardware. Notably, there is so much economic pressure here that Windows implemented POSIX in the 90’s, and it implemented Linux in the 2010’s with WSL (and I guess they did it again with WSL2 because the first cut was slow?)
                                    • The file system is an important special case of this. Notably, it’s suboptimal in many circumstances, like using NVMe hardware, but it’s still rational to have a “universal” API or lowest common denominator
                                  • The idea of file descriptors for ad hoc polymorphism. Related to this thread: https://lobste.rs/s/izpz2n/on_missed_opportunities_static_types (Types inhibit composition!)
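
                                  (A tiny illustration of that last bullet: one read loop works on a pipe, a socket, or a regular file, because the descriptor is the narrow waist.)

                                      # Same os.read() loop over three very different descriptors.
                                      import os
                                      import socket

                                      def drain(fd):
                                          chunks = []
                                          while True:
                                              data = os.read(fd, 4096)
                                              if not data:
                                                  return b"".join(chunks)
                                              chunks.append(data)

                                      r, w = os.pipe()                      # a pipe
                                      os.write(w, b"from a pipe")
                                      os.close(w)
                                      print(drain(r))

                                      s1, s2 = socket.socketpair()          # a socket
                                      s1.sendall(b"from a socket")
                                      s1.close()
                                      print(drain(s2.fileno()))

                                      fd = os.open("/etc/hosts", os.O_RDONLY)  # any readable file will do
                                      print(drain(fd))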

                                  So teasing apart these issues might enlighten us on how exactly to apply it to the cloud. The 3 sentences by McIlroy are important, but not the whole picture IMO. I’m thinking of framing it as “the Perlis-Thompson principle”, and “narrow waists”, though this thinking/writing is still in its early stages.

                                  I actually got an e-mail from Multics engineer Tom Van Vleck regarding my most recent blog post! That reply helped to shape my thinking, to the point where I’d say that a key part of the Unix philosophy is choosing the SECOND of these strategies:

                                  1. Static Data Types and Schemas (especially “parochial” types, as Rich Hickey puts it)
                                  2. Semi-structured text streams, complemented by regexes/grammars to recover structure

                                  I’d say that the current zeitgeist is biased toward #1, and Multics is more along the lines of #1. But the design that scales and evolves gracefully is actually #2! I know lots of people disagree with that, which is why I’m blogging about it. (It wouldn’t be interesting if everyone agreed.)


                                  Another part of the Unix philosophy is to be data-centric rather than code-centric. For the cloud, that means it should be protocol-centric, not service-centric. The CNCF diagram that people criticize is just a bunch of services with no protocols, which is exactly backwards IMO. It’s a brittle design that doesn’t evolve.

                                  Unix is of course data-centric, while proprietary OSes like Windows and iOS are very code-centric. Proprietary cloud platforms are code-centric for the same reasons.


                                  And there are a bunch of related arguments around codebase scaling and composition. This recent post by @mpweiher also references the Unix vs. Google video, and says that glue is O(N^2) between N features that interact:

                                  https://lobste.rs/s/euswuc/glue_dark_matter_software

                                  I watched the video again and it’s explicitly making the O(M + N) vs O(M * N) comparison. I think these are related issues but working through a bunch of examples might enlighten us.

                                  https://lobste.rs/s/euswuc/glue_dark_matter_software#c_sppff7


                                  I also very much like the microservices vs. auth/metrics/config/monitoring/alerting matrix in this post. I’ve definitely felt that, and it does seem to be a huge problem with the cloud.

                                  IMO we’re still missing the equivalent of an “ELF file” and “process” in the cloud. I think OCI is making progress in that area. Docker again makes the “mistake” of being code-centric rather than data-centric (in quotes because it was an intentional design decision).

                                  1. 3

                                    Interesting ideas!

                                    If we were to make a list of the “narrow, deep, and stable” interfaces that compose to make a Unix-like system, we could start with the ones you mention:

                                    • Syscalls (not so narrow these days, but they were)
                                    • File interface
                                    • Bytestream as data

                                    Those seem to work well together. Then I see a whole pile of “bolt on” interfaces that don’t compose nicely, sometimes overlap, and create a bunch of corner cases:

                                    • termios
                                    • Process groups
                                    • Signals
                                    • ioctls
                                    • IPC
                                    • fbdev
                                    • io_uring
                                    • epoll
                                    • kqueue … and a bunch more

                                    It’s like we’ve spent five decades accreting new interfaces that were wide but shallow, or wide and deep, but not converging on other narrow, deep, and stable interfaces.

                                    1. 4

                                      Yeah I think “narrow and deep” is referring to Ousterhout’s recent book? I remember he specifically uses the Unix file system API as one of his examples.

                                      I agree classic Unix adheres to the principle better. Linux is pretty messy, although often it gets functionality before other Unixes.

                                      I commented on that here: https://lobste.rs/s/kj6vtn/it_s_time_say_goodbye_docker#c_nbe2co

                                      So yeah I think we have sort of a “design deficit” now. Unix had good bones and we built on it for 50 years. But I think it’s probably time to do some rethinking and redesign to have a stable foundation for more evolution. The cloud is not in good shape now … and part of that is being built on unstable foundations (e.g. the mess of Linux container mechanisms)

                                    2. 2

                                      Hey Andy, Thanks for the detailed thoughts here. There are probably at least 5 more blog posts I need to write that you’ve touched upon in this comment :) Stay tuned!

                                      1. 1

                                        Unix is of course data-centric, while proprietary OSes like Windows and iOS are very code-centric.

                                        Can you expand on this? My experience with modern Windows (read: PowerShell and modern .NET services) is that most tasks require very little work to get done. Structured data can be sent trivially between hosts in the shell and acting on that structured data in the shell is nearly trivial as you don’t really need to write any glue code to pipe data from cmdlet to cmdlet. Tools like sed, awk, and bash feel positively archaic in comparison to PowerShell.

                                        1. 2

                                          Importantly, the claim isn’t that data-centric is better than code-centric along all dimensions! It’s a tradeoff. You can argue that the code-centric / API-centric Windows style is easier to use. Types do make things easier to use locally.

                                          What I’m arguing is that the code-centric design ends up with more code globally (something of a tautology :) ), and that is bad in the long run. And also that it creates problems of composition. You end up with quadratic amounts of glue code.

                                          Although I don’t have a reference for this, it seems obvious to me that a Unix system has less code than Windows and is “simpler” to understand (think something like xv6). It’s not necessarily easier to use. You can build a lot of things on top of it, and that has been done, from iOS/Android to embedded to supercomputers.

                                          For a concrete example of being data-centric, I’d use /etc/passwd vs. however Windows stores the user database. I assume it must be in the registry, but it’s not a stable format? You use some kind of C API or .NET VM API or PowerShell API to get at it?

                                          TAOUP has some comments on the /etc/passwd format: http://www.catb.org/~esr/writings/taoup/html/ch05s01.html#passwd

                                          Again I’m not claiming that it’s great along all dimensions, only that it’s minimal and simple to understand :) You can parse it from multiple languages. Although there are also libc wrappers because parsing in C is a pain. (Multiple ways of accessing it is a feature not a bug; that’s a consequence of being data-centric.)
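
                                          To make the “multiple ways of accessing it” point concrete, here is the same lookup done by parsing the file directly and by going through the libc wrapper (Python’s pwd module); just a small sketch:

                                              # Two ways at the same data: parse /etc/passwd yourself, or use the
                                              # libc wrapper (getpwnam), which also consults NSS sources.
                                              import pwd

                                              def from_file(username, path="/etc/passwd"):
                                                  with open(path) as f:
                                                      for line in f:
                                                          name, _pw, uid, gid, gecos, home, shell = line.rstrip("\n").split(":")
                                                          if name == username:
                                                              return {"uid": int(uid), "home": home, "shell": shell}
                                                  return None

                                              def from_libc(username):
                                                  entry = pwd.getpwnam(username)
                                                  return {"uid": entry.pw_uid, "home": entry.pw_dir, "shell": entry.pw_shell}

                                              print(from_file("root"))
                                              print(from_libc("root"))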

                                          I hope that helps; other questions are welcome and may help the blog posts on these topics :)

                                          1. 4

                                            I think the /etc/passwd example also highlights the limitations of this type of approach, though - in practice, on Linux, it is necessary to use higher-level APIs like PAM and NSS to interact with user information if you want to support other cases like LDAP or Kerberos user databases. This situation can become remarkably painful on Linux exactly because some applications are “too aware of the data” and make assumptions about where it comes from that don’t hold on all systems.

                                            The data-centric nature of Unix requires that applications have a deeper understanding of the actual data, e.g. the fact that while users/groups often come from /etc/passwd there are several other places they can come from as well. The more code-centric approach in Windows does a better (although not perfect) job of abstracting this so that application developers don’t need to worry about various system configurations.

                                            Or in short: while Linux has the simply structured /etc/passwd, interacting with it directly is almost always a bad idea. Instead you should probably use PAM, just like on Windows you would end up using SAM via various APIs. This feels like a fundamental limitation of a highly data-centric approach: it makes variation in the data source and format difficult to handle.

                                            1. 1

                                              Yeah PAM and NSS are interesting points. Same with the weirdness that seems to go on with DNS and name lookup these days. It’s mostly done through libc and plugins as far as I remember, and is far beyond /etc/hosts.

                                              Though again I’m not saying that the data-centric approach is cleaner or nicer to use! I’m saying it scales and evolves better :)

                                              If you’ve ever seen how Windows is used on say a digital sign or a voting machine, then that’s a picture of what I’m getting at. Windows is not very modular or composable. It’s mostly a big blob that you can take or leave. (I have seen and had success with COM; I’d say that’s more the exception than the rule.)

                                              If you need to use one PowerShell cmdlet then I believe you also need the whole .NET VM (and probably auto-updating, etc.)

                                              There are plenty of embedded devices (routers, things with sensors) that just use /etc/passwd. Ditto for containers. Those systems don’t use LDAP or Kerberos so the simple fallback is still used. I doubt you can make a “Windows container” as small as a Linux container, and that does matter for certain use cases.

                                              There is a just a lot of diversity to the use cases, and that involves some messiness. I may concede that the data-centric approach is harder to use; what I wouldn’t concede without more argument is that it’s a bad idea :) I’d actually say it’s less limited, but is possibly harder to use.


                                              I plan to write about these as two contrasting approaches to OS design:

                                              1. write typed wrappers or APIs for everything
                                              2. make the data formats more well-defined and improve parsing tools.

                                              Most people want #1 but I believe that #2 has desirable properties, including more graceful evolution and scaling.

                                              1. 1

                                                While your complaint about Windows not being very “subsettable” is true to a good extent, Microsoft does produce Windows Embedded and has invested significantly more in it over the past few years, relaunching it as Windows IoT. A minimum Windows IoT image is not nearly as compact in terms of storage as a minimal Linux image (they say you need 2GB of storage), but it does solve most of the classic failures of Windows on non-general devices by making nearly the entire operating system optional via a modular build. I haven’t dealt with Windows Embedded for some years but when I was doing some experimental work with XP-era Embedded, the network stack was an option you could leave out of your image, for example.

                                                The problem is that Windows Embedded and now Windows IoT see next to zero adoption, which I think reflects the motives of the companies that build these kinds of devices: they want a heavy, feature-complete, general-purpose operating system, because it’s easier to develop and test on those than it is on a minimal OS. Containers and the like have reduced the gap in ease of use here, but it still definitely exists; we’ve all dealt with at least the frustration of trying to figure out an issue on an embedded device only to discover it doesn’t have some tool we’ve come to expect, like sed. I think the Windows developer base has just become extremely used to all targets being complete systems they can TeamViewer to and poke around like their own laptop, which is why we still see billboards running Windows 10 Pro. I think a lot of MS’s strategy around PowerShell is trying to turn that ship around, for example with Windows Server now generally not having a GUI until you force it to install one (via a PowerShell session).

                                                I guess what I’m arguing is that the difference here is, in my opinion, less technical than it is cultural. There aren’t a lot of technical aspects of Windows that require that it be a more complex environment, but Windows is usually targeted by desktop developers who are only used to working with complete desktop systems. Embedded devices tended to end up with Linux because the open-source kernel could be built for unusual architectures, while containerization basically fell out of features of the Linux kernel that Microsoft failed to compete with—but these features are all modern additions to the kernel that use fairly structured APIs, like most newer additions.

                                                I don’t mean to be too argumentative; I think you do have a point that I agree with, I just think the actual situation is a lot blurrier than UNIX derivatives having gone one route and Windows the other - both platforms contain a huge number of counterexamples. A core part of the Windows architecture, the registry, is a well-structured data store. Linux GUI environments are just as dominated by API-mediated interactions as Windows, and the whole “everything is a file” concept usually ends when you go into graphics mode. Which perhaps goes to explain why all these graphics-centric applications like kiosks tend to be running Windows… Linux doesn’t really get you that many advantages in the graphical world if you want to have the comforts of modern GUI development, which tend to require bringing along the whole Gtk ecosystem of services and APIs if not something like Electron.

                                                1. 1

                                                  Hm yeah I don’t really see any disagreement here? The Windows Embedded / IoT cases seem to support my point.

                                                  The point is basically that Windows and Unix (and Multics and Unix) have fundamentally different designs, and this has big consequences. They scale, evolve, and compose differently because of it.

                                                  This is both technical and cultural. Being data-centric is one value / design philosophy that Unix has; another is using language-oriented composition (textual data formats and the shell).

                                                  I hope to elaborate a lot on the Oil blog, and will be interested in comments from people with Windows expertise. The SSH example I gave in another thread is interesting to me too (e.g. compare how Windows does it)

                                                  https://lobste.rs/s/wprseq/on_unix_composability#c_wjyjwq

                                      1. 17

                                        This article has everything: databases, RAD, different visions of computing as a human field of endeavor, criticism of capitalism. I commend it to everyone.

                                        1. 13

                                          Criticism of capitalism is, in theory, my main thesis, but I find it difficult to convey in a way that doesn’t get me a lot of angry emails with valid complaints, because the issue is complex and I can’t fully articulate it in a few paragraphs. But it is perhaps my core theory that capitalism has fundamentally diminished the potential of computing, and I hope to express that more in the future.

                                          1. 3

                                            But it is perhaps my core theory that capitalism has fundamentally diminished the potential of computing, and I hope to express that more in the future

                                            I am on a team that is making a documentary about the history of personal computing. One of the themes that has come out is how the kind of computing that went out to consumers in the early 80s on was fundamentally influenced by wider socioeconomic shifts that took place beginning in the 70s (what some call a “neoliberal turn”). These shifts included, but were not limited to, the elevation of shareholder primacy and therefore increased concentration on quarterly reports and short-termism.

                                            These properties were antithetical to those that led to what we would say were the disproportionate advances in computing (and proto-personal computing) from the 50s, 60s, and 70s. Up until the 80s, the most influential developments in computing research relied on long-term, low-interference funding – starting with ARPA and ultimately ending with orgs like PARC and Bell Labs. The structures of government and business today, and for the past few decades, are the opposite of this and therefore constitutionally incapable of leading to huge new paradigms.

                                            One more note about my interviews. The other related theme that has come out is that what today we call “end user programming” systems were actually the goal of a good chunk of that research community. Alan Kay in particular has said that his group wanted to make sure that personal computing “didn’t become like television” (ie, passive consumption). There were hints of the other route personal computing could have gone throughout the 80s and 90s, some of which are discussed in the article. I’d add things like Hypercard and Applescript into the mix. Both were allowed to more or less die on the vine and the reasons why seem obvious to me.

                                            1. 1

                                              These properties were antithetical to those that led to what we would say were the disproportionate advances in computing (and proto-personal computing) from the 50s, 60s, and 70s. Up until the 80s, the most influential developments in computing research relied on long-term, low-interference funding – starting with ARPA and ultimately ending with orgs like PARC and Bell Labs. The structures of government and business today, and for the past few decades, are the opposite of this and therefore constitutionally incapable of leading to huge new paradigms.

                                              This is something I’ve been thinking about a while - most companies are incapable of R&D nowadays; venture capital funded startups have taken a lot of that role. But they can only R&D what they can launch rapidly and likely turn into a success story quickly (where success is a monopoly or liquidity event).

                                            2. 3

                                              As with so many things. But I think mass computing + networking and our profession have been instrumental in perfecting capitalism.

                                              Given the values that were already dominating society, I think this was inevitable. This follows from my view that the way out is a society that lives by different values. I think that this links up with our regularly scheduled fights over open source licenses and commercial exploitation, because at least for some people these fights are at core about how to live and practice our craft in the world as it is, while living according to values and systems that are different from our surrounding capitalist system. In other words, how do we live without being exploited employees or exploiting others, and make the world a marginally better place?

                                              1. 2

                                                Complaints about the use of the word, and maybe calling you a socialist, or something?

                                                I wouldn’t do that to you, but I do have to “mental-autocorrect” capitalism into “We may agree software developers need salary and some SaaS stuff is useful, but social-media-attention-rent-seekers gain power, which sucks, so he means that kind of ‘capitalism’”.

                                                There should be a word, like cronyism is the right word for what some call capitalism, or a modifier like surveillance capitalism.

                                                1. 3

                                                  But I am a socialist. The problem is this: the sincere conviction that capitalist organization of global economies has diminished human potential requires that I make particularly strong and persuasive arguments to that end. It’s not at all easy, and the inherent complexity of economics (and thus of any detailed proposal to change the economic system) is such that it’s very difficult to express these ideas in a way that won’t lead to criticism around points not addressed, or of historic socialist regimes. So is it possible to make these arguments about the history of technology without first presenting a thorough summary of the works of Varoufakis and Wolff or something? I don’t know! That’s why I write a blog about computers and not campaign speeches or something. Maybe I’m just too burned out on the comments I get on the orange website.

                                                  1. 1

                                                    Sure, I appreciate that, though it might attract fewer bad actors if there were some thread of a synopsis to pull on instead of “capitalism”.

                                                    I think the problem is broad terms, because they present a large attack surface, though I do realize people will also attack outside the given area.

                                                    I’m also saddened by a lot of what’s going on in ICT, but I wouldn’t attribute it blindly to “capitalism”; then again, I don’t have all the vocabulary and summaries, if you will, to defend that position.

                                                    One person’s capitalism is different from another’s anyway, so the definitions must be laid out. Maybe all of Varoufakis isn’t needed every time?

                                                    Nor am I convinced we’ll reach anything better with socialist or other governmental interventions. An occasional good law may be passed, or money handouts that lead to goodness, but each of those will lose in the balance to detrimental handouts/malfeasance, corruption, unintended consequences, and bad laws.

                                                    Maybe some kind of libertarian-leaning world where people have a healthy dose of socialist values but are enlightened enough to practice them voluntarily?

                                                2. 1

                                                  I would love to see the hoops you jump through to express that. Honestly. It seems so alien to my worldview that anyone making that claim (beyond the silly mindless chants of children which I’m assuming is not the case here) would be worth reading.

                                                  1. 8

                                                    I’ve made a related argument before, which I think is still reasonably strong, and I’ll repeat the core of here.

                                                    In my experience, software tends to become lower quality the more things you add to it. With extremely careful design, you can add a new thing without making it worse (‘orthogonal features’), but it’s rare that it pans out that way.

                                                    The profit motive drives substantial design flaws via two key mechanisms.

                                                    The first is preventing someone from benefiting without paying (which usually means DRM, or keeping the interesting bits behind a network RPC); the second is preventing someone from churning to another provider (which usually means keeping your data in an undocumented or even obfuscated format, in the event it’s accessible at all).

                                                    DRM is an example of “adding a new thing and lowering quality”. It usually introduces at least one bug (the Sony rootkit fiasco, for instance).

                                                    Network-RPC means that when your network connection is unreliable, your software is also unreliable. Although I currently have a reliable connection, I use software that doesn’t rely on it wherever feasible.

                                                    Anti-churn (deliberately restricting how users use their own data) is why e.g. you can’t back up your data from google photos. There used to be an API, but they got rid of it after people started using it.

                                                    I’m not attempting to make the argument that a particular other system would be better. However, every human system has tradeoffs - the idea that capitalism has no negative ones seems ridiculous on the face of it.

                                                    1. 1

                                                      Those are shitty aspects of a lot of things, and those aspects are usually driven by profits, although not always in the way people think. I’ll bet dollars to donuts that all the export features Google removes are removed simply because they don’t want to have to support them. There is nothing Google wants less than to talk to customers.

                                                      But without the profit motive in the first place, none of these things would exist at all. The alternatives we’ve thought up and tried so far don’t lead to a world without DRM; they lead to a world where media is split into what the state approves (and nobody wants to copy) and what gets you a firing squad for possessing it, whether you paid or not.

                                                      1. 6

                                                        But without the profit motive in the first place, none of these things would exist at all.

                                                        It’s nonsensical to imagine a world lacking the profit motive without having any alternative system of allocation and governance. Nothing stable could exist in such a world. Some of the alternative systems clearly can’t produce software, but we’ve hardly been building software globally for long enough to have a good idea of which ones can, what kinds they can, or how well they can do it (which is a strong argument for the status quo).

                                                        As far as “made without the profit motive” goes, sci-hub and the Internet Archive are both pretty neat and useful (occupying different points on the legal-in-most-jurisdictions spectrum). I quite like Firefox, too.

                                                    2. 3

                                                      “Capitalism” is a big thing which makes it difficult to talk about sensibly, and it’s not clear what the alternatives are. That said, many of the important aspects of the internet were developed outside of commercial considerations:

                                                      • DARPA was the US military

                                                      • Unix was a budget sink because AT&T wasn’t allowed to go into computing, so they just shunted extra money there and let the nerds play while they made real money from the phone lines

                                                      • WWW was created at CERN by a guy with grant money

                                                      • Linux is OSS etc.

                                                      A lot of people got rich from the internet, but the internet per se wasn’t really a capitalist success story. At best, it’s about the success of the mixed economy with the government sponsoring R&D.

                                                      On the hardware side, capitalism does much better (although the transistor was another AT&T thing and NASA probably helped jumpstart integrated circuits). I think the first major breakthrough in software that you can really peg to capitalism is the post-AlphaGo AI boom, which was waiting for the GPU revolution, so it’s kind of a hardware thing at a certain level.

                                                      1. 2

                                                        I still disagree, but man it’s nice to just discuss this sort of thing without the name-calling and/or brigading (or worse) you see on most of the toobs. This sort of thing is pretty rare.

                                                      2. 2

                                                        Obviously not op, but observe the difference between the growth in the scale and distribution of computing power, and what has been enabled, over the last 40 years.

                                                        Business processes have been computerized and streamlined, entertainment has been computerized, and computerized communications, especially group communications like lobsters or other social media, have arrived. That’s not nothing, but it’s also nothing that wasn’t imaginable at the start of that 40-year period. We haven’t expanded the computer as a bicycle of the mind – consider simply how rarely the power of the computer in your hand is used to create art. I put that lack of ambition down to the need to intermediate, monetize, and control everything.

                                                        1. 1

                                                          And additionally, the pressure to drive down costs means we have much less blue-sky research and ambition; it also means that things are only done to the level where they’re barely acceptable. We see that right now with the security situation: everything is about playing whack-a-mole faster than the attackers, rather than investing either in comprehensive ahead-of-time security practices or in software that is secure by construction (whatever that would look like).

                                                      3. 1

                                                        What I used to tell them is that it’s basically a theory that says each person should be as selfish as possible: always trying to squeeze more out of others (almost always money/power), give less to others (minimize costs), and push as many problems as possible onto them (externalities).

                                                        The first directly leads to all kinds of evil, damaging behavior. There’s any number of schemes like rip-offs, overcharging, lockin, cartels, disposable over repairable, etc. These are normal, rather than the exception.

                                                        The second does the same every time cost-cutting pressure forces a company to damage others. I cite examples with food, medicine, safety, false career promises, etc. Companies also give less to stakeholders, where fewer people get rewarded, and get rewarded less, for the work they put in. You can contrast this with utilitarian companies like Publix, which gives employees benefits and private stock while the owners still got rich, or with companies that didn’t immediately lay off workers during recessions. An easy one most people can relate to is bosses, especially executives, paid a fortune to do almost nothing for the company versus the workers.

                                                        Externalities affect us daily. They’re often a side effect of the other two. Toxic byproducts of industrial processes are a well-known one. The pervasive insecurity of computers, from data loss to crippling DDoSes to infrastructure at risk, is almost always an externality, since the damage is someone else’s problem while preventing it would be the supplier’s. You can see how built-in the apathy is when the solution is permissively licensed, open-source, well-maintained software and they still don’t replace the vulnerable software with it.

                                                        Note: Another angle, using the game of Monopoly, was how first movers or just lucky folks get an irreversible, predatory advantage over others. Arguing to break that up is a little harder, though.

                                                        So, I used to focus on those points, illustrate alternative corporate/government models that do better, and suggest using or tweaking everything that already worked. Meanwhile, counter the abuse at the consumer level by voting with your wallet, passing sensible regulations anywhere capitalist incentives keep causing damage, and hitting them in court with damages bigger than prevention would have cost. Also, if going to court, I recommend showing how easy or inexpensive prevention actually was and asking the court to basically order it. Ask them to define the reasonable, professional standard as not harming stakeholders in as many cases as possible.

                                                        Note: Before anyone asks, I don’t have the lists of examples anymore, or they’re just inaccessible. A side effect of damaged memory is I gotta keep using it or I lose it.

                                                    1. 10

                                                      An interesting factor in the demise of the desktop database that wasn’t really mentioned in the otherwise excellent article:

                                                      If the premise of Access and the like is that you can port your existing (paper-based) business processes and records to it, well… the world has kinda run out of those things for the most part. We’ve run out of old organizations to computerize. New organizations are gonna start out, of course, by looking for pre-made tools rather than the more freeform ones. They don’t have processes yet, so they look for pre-made tools to discover which processes are common and which ones suit their needs. And I suppose it’s quite rare that they would end up in the spot where those tools don’t offer enough customization but it’s still way too early for fully custom development.

                                                      1. 7

                                                        This is a great thought and I think it is an important part of the puzzle. My father, who worked in corporate accounting, once made an observation along these lines: early in computerization the focus was on using computers to facilitate the existing process. Later on, roughly around the consolidation of the major ERPs (Oracle and SAP), it became the opposite: businesses adjusted their processes to match the computer system. You can either put a positive spin on this (standardization onto established best practices) or a negative spin (giving in to what the software demands). It’s also not completely true, as products like Oracle and SAP demonstrate with their massive flexibility.

                                                        But I think there’s an important kernel of truth: there’s not a lot of talk of “computerize the process” these days. Instead, it’s usually framed as “let’s get a new process based on a computer.” That means there’s fundamentally less need for in-house applications. I don’t think it’s clearly a bad thing either, but it definitely has some downsides.

                                                      1. 4

                                                        I get a 403 Forbidden.

                                                        1. 7

                                                          Me too now. As they themselves say:

                                                          As a Professional DevOps Engineer I am naturally incredibly haphazard about how I run my personal projects.

                                                          😁

                                                          1. 6

                                                            Sorry, I think my Private Cloud has a bad power supply which is having a knock-on effect of upsetting the NFS mounts on the webserver. I’m acquiring a replacement right now, and in the meantime I am going to Infrastructure-as-Code it by adding a cronjob that fixes the NFS mount.

                                                            1. 1

                                                              Me too.

                                                              You can use https://archive.md/lFfIn

                                                            1. 1

                                                              This article describes how DesqView works with XMS, but that implies a 286+ with 1Mb+ of memory. How did it perform task switching on an 8088 with 640Kb? I thought the MS-DOS task switcher was capable of swapping to disk, but that’s only feasible because it’s a full screen task switcher so having a noticeable delay while switching tasks is expected - a windowing environment presumably needs to be able to switch tasks in less than a second.

                                                              1. 2

                                                                Good question… memory management options on DOS machines are a topic that deserves a full post (I’ve put it on my list), but I probably elided a little too much from this message. A quick rundown on the topic: the 8086 through 80186 only had a 20-bit address bus and so could map 1MB, a good chunk of which was “reserved-ish” for hardware purposes, leaving 640k. The 286 had a bigger address bus and could support 16MB, the 386 even more, but this matters less than you would think in practice because virtually all DOS software at this point ran in real mode without the ability to use any of that extra memory. The solution was (skipping some complexity here) a device driver that used the MMU to remap sections of the past-1MB memory down into the real-mode address range for programs to use. And that’s what we usually call XMS, although there were other conventions as well.

                                                                That makes sense, but what about, say, the 8088? Well, it turns out the solution is basically the same: remap “extended” memory into the 640k standard range. But since the 8088 has only 20 address wires and no MMU, it can’t be done onboard. Instead, it was done off-die by an expansion card that basically intercepted the memory bus to remap different sections of expanded memory into a 64kb “window” in the real mode memory space. Running software had to hit specific interrupts to prompt the expanded memory card to change the mapping. Such controllers on ISA cards were pretty readily available in 4MB and 8MB sizes, and while I don’t know this for sure, I’m making a pretty confident assumption that DESQview on 8088 required one.
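
                                                                To make that concrete, here’s a minimal sketch (nothing DESQview-specific, and with error handling omitted) of what that interrupt-driven remapping looks like from a program’s point of view, assuming a 16-bit DOS C compiler such as Turbo C / Borland C and the standard LIM EMS INT 67h services:

                                                                  /* Map 64KB of expanded memory into the EMS page frame via INT 67h.
                                                                     Assumes an expanded memory manager (hardware board + driver on an
                                                                     8088, or an emulator like EMM386 on a 386) is already loaded. */
                                                                  #include <dos.h>
                                                                  #include <stdio.h>

                                                                  int main(void)
                                                                  {
                                                                      union REGS r;
                                                                      unsigned frame_seg, handle;

                                                                      r.h.ah = 0x41;              /* AH=41h: get page frame segment */
                                                                      int86(0x67, &r, &r);
                                                                      frame_seg = r.x.bx;

                                                                      r.h.ah = 0x43;              /* AH=43h: allocate pages          */
                                                                      r.x.bx = 4;                 /* four 16KB logical pages = 64KB  */
                                                                      int86(0x67, &r, &r);
                                                                      handle = r.x.dx;

                                                                      r.h.ah = 0x44;              /* AH=44h: map a logical page into the frame */
                                                                      r.h.al = 0;                 /* physical page within the frame (0-3)      */
                                                                      r.x.bx = 0;                 /* logical page within our allocation        */
                                                                      r.x.dx = handle;
                                                                      int86(0x67, &r, &r);

                                                                      /* The mapped 16KB is now ordinary real-mode memory at frame_seg:0000.
                                                                         Mapping a different logical page later "banks in" other memory
                                                                         without copying anything. */
                                                                      printf("EMS page frame at segment %04Xh\n", frame_seg);

                                                                      r.h.ah = 0x45;              /* AH=45h: release the handle */
                                                                      r.x.dx = handle;
                                                                      int86(0x67, &r, &r);
                                                                      return 0;
                                                                  }

                                                                The same INT 67h interface is what the hardware boards answered to, which is why software written against EMS generally didn’t care whether the remapping happened on an ISA card or inside a 386 memory manager.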

                                                                The standards for these get confusing due to competition and fast change… EMS (extended memory specification) is typically associated with these 8088 systems, but I think there were at least a few other less common standards, and IBM had XMA, which you could say is the ancestor of XMS. In late pre-286 machines there was sometimes an on-motherboard EMS controller that almost felt like an MMU. But in short, an “expanded memory manager”, defined broadly, was software on the 80386 but hardware on the 8088. Further complicating things, a lot of 80386 memory managers could emulate the old-style hardware memory page switching so that you could use software written against EMS. These didn’t play nicely with each other, so you might have to manually choose to allocate certain memory ranges to either extended memory or expanded memory, and trying to keep “extended” vs. “expanded” straight kind of typifies how annoying this must have been for users. I got a desktop technician certification on the very tail end of any of this being relevant and remember serious headaches over all of it.

                                                                DESQview was also limited on non-80386 processors, although I’m not sure of the exact issues. I know, for example, that later versions of DESQview were capable of windowing direct-drawing programs (e.g. VGA raster programs) on the 386 by using the MMU to remap the framebuffer, but could only run such programs full-screen on earlier processors.

                                                                1. 2

                                                                  For the top of the third paragraph, you mean expanded, right? (EMS == Expanded, XMS == Extended?)

                                                                  XMS worked by throwing the CPU into protected mode so it could address more memory temporarily, then thunking back to real mode. That’s what HIMEM.SYS was for. No remapping is needed here. But as you suggest, this isn’t possible on an 8088, since it has no protected mode or physical address space beyond 1MB.
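
                                                                  For contrast with EMS’s INT 67h interface, here’s a hedged sketch of how a real-mode program even locates the XMS driver, again assuming a 16-bit DOS C compiler such as Turbo C / Borland C with dos.h’s int86/int86x (the function numbers come from the XMS specification):

                                                                    /* Detect HIMEM.SYS (or another XMS driver) via the INT 2Fh multiplex
                                                                       interrupt and fetch its far-call entry point. */
                                                                    #include <dos.h>
                                                                    #include <stdio.h>

                                                                    int main(void)
                                                                    {
                                                                        union REGS r;
                                                                        struct SREGS s;

                                                                        r.x.ax = 0x4300;            /* XMS installation check */
                                                                        int86(0x2F, &r, &r);
                                                                        if (r.h.al != 0x80) {
                                                                            printf("No XMS driver installed\n");
                                                                            return 1;
                                                                        }

                                                                        segread(&s);
                                                                        r.x.ax = 0x4310;            /* get driver entry point in ES:BX */
                                                                        int86x(0x2F, &r, &r, &s);

                                                                        /* Unlike EMS, further XMS services are reached by far-calling this
                                                                           address with a function number in AH (e.g. AH=08h to query free
                                                                           extended memory), not by issuing another interrupt. */
                                                                        printf("XMS driver entry point at %04X:%04X\n", s.es, r.x.bx);
                                                                        return 0;
                                                                    }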

                                                                  Remapping memory down into real mode was possible on the 386, which had something resembling an MMU. That’s what EMM386 was for, which could use the CPU to implement EMS. As far as I know the 286 didn’t have enough support for remapping, but still had protected mode so could still use pure XMS.

                                                                  Hardware EMS existed but never seemed widespread. It was a way to upgrade an 8088 system without replacing it. But by the time 1Mb+ of memory started to be desirable, there was no reason to get an 8088, so at least in my neighborhood, these were rare. I had an 8088 laptop with 128Kb of EMS, but was never able to use it due to lack of a driver (sigh.)

                                                                  The way I read their literature, DESQview supported a pure 640Kb 8088 environment, no EMS. Quarterdeck were the masters of PC memory management in their day, and they may have had a lot of tricks I don’t know about. Obviously it’s possible to physically copy memory around within 640Kb, but because that memory is effectively a stack, the only way I can imagine this working is to reserve a swap area up front, allow applications to run in the remainder, and then copy applications from the remainder into the swap area. But that means the size of this region has to be statically provisioned before any application can run, which seems gross. Putting the swap area at the other end of RAM is dangerous since DOS can’t describe it as allocated and an app would just grow into it. Since it’s already a TSR, perhaps they intercepted calls to query memory size and just lie, and hope that programs won’t inadvertently use unprotected but theoretically unavailable addresses?

                                                                  1. 1

                                                                    Putting the swap area at the other end of RAM is dangerous since DOS can’t describe it as allocated and an app would just grow into it. Since it’s already a TSR, perhaps they intercepted calls to query memory size and just lie, and hope that programs won’t inadvertently use unprotected but theoretically unavailable addresses?

                                                                    This doesn’t sound right. DOS must know both where the beginning and end of allocatable memory lie; otherwise, it might try to overallocate on a computer that has only 512KB instead of 640. I recall that if you had a VGA card, there was a way to convince DOS to use video RAM beyond 640KB all the way to b800h where text mode memory starts, for an extra 96KB of conventional memory.

                                                                    1. 1

                                                                      DOS must know both where the beginning and end of allocatable memory lie; otherwise, it might try to overallocate on a computer that has only 512KB instead of 640.

                                                                      Right, I agree - that’s what I was trying to say in the last sentence. It has to know the end of memory, but it has no way to describe randomly allocated regions within the conventional area, because it was single-tasking, so memory was allocated as a stack. I don’t know what this call looks like or what semantics it has, though. Since application memory access isn’t really controlled by DOS, all this can be is a guideline: a program can just access an address and see what happens, and if it does, it’ll trash anything that’s there.

                                                                      Edit: Another possibility - did DOS start allocating at high addresses and work downwards? This would seem logical and would make using those addresses above 640Kb relatively straightforward, by just loading DOS at the desired address with no changes to applications. But if that happened, it really would be impossible to allocate memory at the “other end”, since the end of memory is a well-known value (zero.)
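
                                                                      For what it’s worth, DOS does track conventional memory as a chain of memory control blocks (one paragraph-sized header in front of every allocation), so the layout is at least inspectable. A sketch of walking that chain, using the well-known but undocumented INT 21h AH=52h “list of lists” pointer and assuming a 16-bit DOS C compiler such as Turbo C / Borland C:

                                                                        /* Walk the DOS memory control block (MCB) chain and print each block. */
                                                                        #include <dos.h>
                                                                        #include <stdio.h>

                                                                        int main(void)
                                                                        {
                                                                            union REGS r;
                                                                            struct SREGS s;
                                                                            unsigned mcb_seg;

                                                                            /* INT 21h AH=52h returns ES:BX -> the DOS "list of lists"; the word
                                                                               just below it (ES:[BX-2]) is the segment of the first MCB. */
                                                                            segread(&s);
                                                                            r.h.ah = 0x52;
                                                                            int86x(0x21, &r, &r, &s);
                                                                            mcb_seg = *(unsigned far *)MK_FP(s.es, r.x.bx - 2);

                                                                            for (;;) {
                                                                                unsigned char far *mcb = (unsigned char far *)MK_FP(mcb_seg, 0);
                                                                                unsigned owner = *(unsigned far *)(mcb + 1);   /* 0 = free block */
                                                                                unsigned size  = *(unsigned far *)(mcb + 3);   /* in paragraphs  */

                                                                                printf("MCB at %04Xh  owner=%04Xh  size=%u paragraphs%s\n",
                                                                                       mcb_seg, owner, size, owner == 0 ? " (free)" : "");

                                                                                if (*mcb == 'Z')        /* 'Z' marks the last block in the chain */
                                                                                    break;
                                                                                mcb_seg += size + 1;    /* header paragraph + data paragraphs    */
                                                                            }
                                                                            return 0;
                                                                        }

                                                                      Whether DESQview actually carved out a swap region this way is pure speculation, but the allocation chain itself is there to look at.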

                                                                      I recall that if you had a VGA card, there was a way to convince DOS to use video RAM beyond 640KB all the way to b800h where text mode memory starts, for an extra 96KB of conventional memory.

                                                                      I didn’t know this was possible with VGA. I remember doing it with CGA under OS/2, even to the point of running real mode Windows with 700Kb of conventional memory, although CGA is … CGA.

                                                                      Edit: Wikipedia suggests that this was possible on MDA/Hercules/CGA, but not with EGA; didn’t VGA include EGA compatibility?

                                                                      1. 1

                                                                        Memory for Hercules graphics is at B000h, so you can safely map memory into A000h-AFFFh.

                                                                        Color text mode memory is at B800h, so you can map memory below that as long as you stay out of graphics mode. That’s true for CGA, EGA, and VGA.

                                                                        What I remember doing (and I could be mistaken on this; it was a long time ago and I was in high school) is using VGA video memory at A000h while in text mode. But maybe I remember wrong and just mapped memory there with EMM386.

                                                                        To get DOS to use that memory as conventional memory you also have to relocate the 1KB EBDA.
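
                                                                        To make the B800h point concrete, here’s a tiny hedged sketch (assuming a 16-bit DOS C compiler providing MK_FP in dos.h) of why that range can’t be handed out as conventional memory while you’re in color text mode: writes to it land straight on the screen.

                                                                          #include <dos.h>

                                                                          int main(void)
                                                                          {
                                                                              /* Color text mode video memory starts at segment B800h: each cell is
                                                                                 a character byte followed by an attribute byte. (Monochrome adapters
                                                                                 use B000h instead, which is why they free up a different range.) */
                                                                              unsigned char far *video = (unsigned char far *)MK_FP(0xB800, 0);

                                                                              video[0] = 'X';     /* character in the top-left corner */
                                                                              video[1] = 0x0F;    /* attribute: white on black        */
                                                                              return 0;
                                                                          }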

                                                                    2. 1

                                                                      Oh yeah, I definitely confuse expanded and extended memory at least a couple of times every time I bring it up. That brochure is a great find; I too am surprised that it could run on an 8088 without an Intel Above Board or something. I’m probably going to spend some time messing around with DESQview myself in the future because I want to understand the later features better, so maybe I’ll figure something out… I can see it moving things around within the 640kb, but that would be really risky. It’s possible they relied only on disk swapping in that case. I previously wrote about Visi On, for which disk swapping was arguably a major weakness, but DESQview came a few years later and it was probably an easier sell by then.