1.  

    If you mean what my personal server is, it’s an IBM POWER6. I freely admit to being an outlier :)

    1.  

      I’m interested, what specs? and what OS do you use?

      1.  

        It’s a “baby” 2-way SMT-2, so four logical CPUs, 16GB of RAM, RAID, etc. I run AIX on it. Admittedly it’s starting to show its age on CPU-bound tasks, but as a server, it’s still doing well.

        I like the fact I can install PCI cards in it while it’s running and powered up.

    1. 7

      There’s some value in supporting odd platforms, because it exercises the portability of programs, like the endianness issue mentioned in the post. I’m sad that the endian wars were won by the wrong endian.

      1.  

        I’m way more happy about the fact that the endian wars are over. I agree it’s a little sad that it is LE that won, just because BE is easier to read when you see it in a hex dump.

        1.  

          Big Endian is easy for us only because we ended up with some weird legacy of using Arabic (right-to-left) numbers in Latin (left-to-right) text. Arabic numbers in Arabic text are least-significant-digit first. There are some tasks in computing that are easier on little-endian values, and none that are easier on big-endian, so I’m very happy that LE won.

          If you want to know the low byte of a little-endian number, you read the first byte. If you want to know the top byte of a little-endian number, you need to know its width. The converse is true of a big-endian number, but if you want to know the top byte of any number and do anything useful with it then you generally do know its width because otherwise ‘top’ doesn’t mean anything meaningful.
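
          A minimal C sketch of that asymmetry (the value is just an example, and the printed bytes assume a little-endian host):

          #include <stdint.h>
          #include <stdio.h>
          #include <string.h>

          int main(void) {
              uint32_t value = 0x11223344;
              uint8_t bytes[sizeof value];
              memcpy(bytes, &value, sizeof value);

              /* Little-endian: the low byte is always the first byte in memory. */
              printf("first byte: 0x%02x\n", bytes[0]);

              /* The top byte (or, on a big-endian machine, the low byte) can only be
                 found if you also know the width of the value. */
              printf("last byte:  0x%02x\n", bytes[sizeof value - 1]);
              return 0;
          }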

          1.  

            Likewise, there are some fun bugs only big endian can expose, like accessing a field with the wrong size. On little endian it’s likely to work with small values, but BE would always break.
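
            A hypothetical C illustration of that kind of bug (the field and its value are made up):

            #include <stdint.h>
            #include <stdio.h>
            #include <string.h>

            int main(void) {
                uint32_t field = 42;   /* the real field is 32 bits wide */
                uint16_t misread;      /* buggy code treats it as 16 bits */

                /* Copy only the first two bytes of the field. */
                memcpy(&misread, &field, sizeof misread);

                /* Little-endian: prints 42, so the bug hides until the value no longer
                   fits in 16 bits. Big-endian: prints 0 every time. */
                printf("%u\n", (unsigned)misread);
                return 0;
            }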

        2.  

          Apart from “network byte order” looking more intuitive to me at first sight, could you elaborate on why big endian is better than little endian? I’m genuinely curious (and hope this won’t escalate ;)).

          1. 10

            My favorite property of big-endian is that lexicographically sorting the encoded integers preserves the ordering of the numbers themselves. This can be useful in binary formats. And since you have to encode big-endian to get this property, a big-endian system can use those bytes as an integer directly, with no byte swapping.
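
            A small C sketch of that sort-order property (the 32-bit key width and the two values are only for illustration):

            #include <stdint.h>
            #include <stdio.h>
            #include <string.h>

            /* Encode a 32-bit value as four big-endian bytes. */
            static void put_be32(uint8_t out[4], uint32_t v) {
                out[0] = (uint8_t)(v >> 24);
                out[1] = (uint8_t)(v >> 16);
                out[2] = (uint8_t)(v >> 8);
                out[3] = (uint8_t)v;
            }

            int main(void) {
                uint8_t ka[4], kb[4];
                put_be32(ka, 258);   /* 00 00 01 02 */
                put_be32(kb, 513);   /* 00 00 02 01 */

                /* memcmp on the big-endian keys agrees with the numeric order.
                   The little-endian encodings (02 01 00 00 vs 01 02 00 00) would
                   compare the other way around. */
                printf("258 %s 513 byte-wise\n", memcmp(ka, kb, 4) < 0 ? "<" : ">=");
                return 0;
            }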

            Also, given that we write numbers with the most significant digits first, it just makes more “sense” to me personally.

            1. 5

              Also, given that we write numbers with the most significant digits first, it just makes more “sense” to me personally.

              A random fact I love: Arabic text is right-to-left, but writes its numbers with the same ordering of digits as Latin texts… so in Arabic, numbers are little-endian.

              1.  

                Speaking of endianness: in Arabic, relationships are described from the end farthest from you to the closest. If you were to naively describe the husband of a second cousin, instead of saying “my mother’s cousin’s daughter’s husband” you would say “the husband of the daughter of the cousin of my mother”. It makes the relationship insanely hard to hold in your head without a massive working memory (because you need to reverse it to actually grok it), but I always wonder if that’s because I’m not the most fluent Arabic speaker or if it’s a problem for everyone who speaks it.

                1.  

                  My guess is that it is harder for native speakers as well, but they don’t notice it because they are used to it. A comparable case I can think of is a friend of mine, a native German speaker, who came to the States for a post-doc. He commented that after speaking English consistently for a while, he realized that German two-digit numbers are needlessly complicated. “Three and twenty” is harder to keep in your head than “twenty-three” for the same reason.

                  1.  

                    German has nothing on Danish.

                    95 is “fem og halvfems” - “five and half-five”, where the final “five” refers to five twenties (100), and the “half” refers to half of 20, i.e. 10, giving 90.

                    It’s logical once you get the hang of it…

                    In Swedish it’s “nittiofem”.

            2.  

              Little-endian vs. big-endian has a good summary of the trade-offs.

              1.  

                I wondered this often and figured everyone just did the wrong thing, because BE seems obviously superior. Just today I’ve been reading RISC-V: An Overview of the Instruction Set Architecture and noted this comment on endianness:

                Notice that with a little endian architecture, the first byte in memory always goes into the same bits in the register, regardless of whether the instruction is moving a byte, halfword, or word. This can result in a simplification of the circuitry.

                It’s the first time I’ve noticed something positive about LE!

                1.  

                  From what I hear, it mostly impacted smaller/older devices with small buses. The impact isn’t as big nowadays.

                2.  

                  That was a bit tongue-in-cheek, so I don’t really want to restart the debate :)

                  1.  

                    Whichever endianness you prefer, it is the wrong one. ;-)

                    Jokes aside, my understanding is that either endianness makes certain types of circuits/components/wire protocols easier and others harder. It’s just a matter of optimizing for the use case the speaker cares about more.

                  2.  

                    Having debugged on big-endian for the longest time, I miss “sane” memory dumps on little-endian. It takes a bit more thought to parse them.

                    But I started programming on the 6502, and little-endian clearly makes sense when you’re cascading operations 8 bits at a time. I had a little trouble transitioning to the big-endian 16-bit 9900 as a result.

                  1. 21

                    I mostly agree with this, but it’s an interesting culture shift. Back when I started to become involved, open source was largely maintained by poor people. Niche architectures were popular because a student could get a decent SPARC or Alpha machine that a university department was throwing away. It wasn’t as fast as a decent x86 machine but it was a lot faster than an x86 machine you could actually afford. The best-supported hardware was the hardware that was cheap for developers to buy.

                      Over the last couple of decades, there’s been a shift towards people being paid to maintain open source things and a big split between the projects with big corporate-backed foundations behind them and the ones that are still maintained by volunteers. The Z-series experience from the article highlights this split. The perspective on this kind of thing is very different if you’re a volunteer who has been lovingly supporting Alpha for years versus someone paid by IBM to maintain Z-series support.

                      Even among volunteers there’s sometimes a tension. I remember a big round of applause for Marcel at BSDCan some years ago when he agreed to drop support for Itanium. He’d been maintaining it almost single-handedly, and there were a bunch of things that were Itanium-only and could be cleaned up once Itanium was removed. It’s difficult to ask a volunteer to stop working on a thing because it’s requiring other volunteers to invest more effort.

                    1. 9

                      Speaking from experience: PowerPC is in a bit of an interesting state regarding that, with varying factions:

                      • IBM selling Power systems to legacy-free Linux customers; they only support and would prefer you use 64-bit little endian
                      • IBM also has to support AIX and i, which are 64-bit big endian operating systems
                      • Embedded network equipment on 32-bit or crippled 64-bit cores
                      • Power Mac hobbyists with mostly 32-bit and some 64-bit equipment
                        • Talos users, with their fast, owner-controlled POWER9 systems; mostly little endian Linux users, some big endian

                        Sometimes drama happens because the “sell PPC Linux” people only care about ppc64le and don’t do any of the work (or worse, actively discontinue support) for 32-bit/BE/older cores (i.e. Go dropped all non-POWER8/9 support, which is like only supporting Haswell or newer), which pisses off not just hobbyists, but also the people who make deep embedded networking equipment or support AIX.

                      1.  

                          I am duty bound to link to https://github.com/rust-lang/rust/issues/59932. Rust currently sets the wrong baseline for the PowerPC systems used in embedded network equipment.

                        1.  

                          I resemble that remark, having a long history with old POWER, Power Macs, a personal POWER6 and of course several POWER9 systems. It certainly is harder to do development with all those variables, and the 4K/64K page size schism in Linux is starting to get worse as well. I think it’s good discipline to be conscious of these differences but I don’t dispute it takes up developer time. It certainly does for me even with my small modest side projects.

                        2. 6

                          The best-supported hardware was the hardware that was cheap for developers to buy.

                          That fact contributes to the shift away from non mainstream ISAs, I suppose? In this decade the hardware that is cheapest for developers to buy is the most mainstream stuff.

                            Nobody buys and discards large amounts of POWER or SPARC or anything. If you are a freegan and want to acquire computers only out of dumpster(s), what I believe you will find is going to be amd64, terrible Android phones and obsolete raspis? Maybe not even x86 - it’s on the cusp of being too old for organisations to be throwing it away in bulk. E.g. the Core 2 Duo was discontinued in 2012, so anyone throwing that away would be doing so on something like an 8-year depreciation cycle.

                          Just doing a quick skim of eBay, the only cheap non-mainstream stuff I can see is old g4 mac laptops going for about the price of a new raspi. Some of the PPC macs are being priced like expensive antiques rather than cheap discards. It looks like raspis are about as fast anyway. https://forums.macrumors.com/threads/raspberry-pi-vs-power-mac-g5.2111057/

                          It’s difficult to ask a volunteer to stop working on a thing because it’s requiring other volunteers to invest more effort.

                          Ouuuch :/

                          1.  

                            Nobody buys and discards large amounts of POWER or SPARC or anything.

                              You can (because businesses do buy them), but they’ll be very heavyweight 4U boxes at minimum. The kind that nerds like me would buy and run. They’re cheapish (like, $200 if you get a good deal) for old boxes, but I doubt anyone in poverty is getting one.

                            You’d also be surprised what orgs ewaste after a strict 5-10 year life cycle. I hear Haswell and adjacent business desktops/laptops are getting pretty damn cheap.

                            1.  

                              I got burned trying to get Neat Second Hand Hardware once in the late oughts. Giant UltraSPARC workstation, $40. Turns out that the appropriate power supply, Fiber Channel hard drives, plus whatever the heck it took to connect a monitor to the thing… high hundreds of dollars. I think I ended up giving it to a friend who may or may not have strictly wanted it, still in non-working condition.

                              1.  

                                They’re cheapish (like, $200 if you get a good deal) for old boxes

                                I was seeing prices about 2x higher than that skimming eBay. I also see very few of them - it’s not like infiniband NICs where there are just stacks and stacks of them. Either way, that’s much more money than trash amd64.

                                You’d also be surprised what orgs ewaste after a strict 5-10 year life cycle. I hear Haswell and adjacent business desktops/laptops are getting pretty damn cheap.

                                Sure. I was bringing up Core2Duo as a worst-case for secondhand Intel parts that someone would probably throw in a dumpster.

                                1.  

                                  Sure. I was bringing up Core2Duo as a worst-case for secondhand Intel parts that someone would probably throw in a dumpster.

                                  Yeah, absolutely. It freaks me out, but a Core 2 Duo is about a decade and a half old. It’s about as old as a 486 was in 2006. These are gutter ewaste, but they’re still pretty useful and probably the real poverty baseline. Anything older is us nerds.

                          1. 2

                            Heh, if only I could ever find a BeBox… :D The lights on the front (attempting to) show the utilization on the CPUs are fantastic.

                              Crypto Ancienne, mentioned in the article, is a fantastic piece of software. For example, I’ve used its provided carl program on a 68k non-turbo NeXTstation to load TLS-only sites with success. Some sites are even kind enough to not time it out!

                            1. 2

                              They really do reflect the state of the machine. When I’m running (the newly fixed) SheepShaver on it, they’re maxed out.

                            1. 3

                              My goscreen alias looks like this:

                              stty erase ^? ; screen -wipe ; screen -h 500 -d -R

                              i.e., wipe dead sessions, then reattach to any live session, creating a new one if necessary. If I get disconnected, I log back in, type goscreen and I’m back where I was.

                              1. 1

                                  Why do you need the:

                                stty erase ^?
                                

                                Exotic keyboard layout, so backspace doesn’t… Backspace?

                                1. 1

                                  No, it’s a Mac keyboard and Delete is in that position.

                                  1. 2

                                    ba dum tss

                                      When did they start doing that? I have a 2007 Mac keyboard and the key in the backspace position works as a backspace, as you would expect (though it is labeled “delete”).

                              1. 15

                                Floodgap has been hosted on my own hardware with a server-grade line since 2003. One of the bedrooms here is where the servers live. There are UPSes and a portable A/C, and continuous temperature and noise monitoring. I should do a post about that.

                                1. 10

                                    My daily driver is a Talos workstation, which is powerpc64 / powerpc64le, but I guess it doesn’t count since Rust is available for it.

                                  1. 2

                                    wow, those are not cheap. What made you buy one of those instead of a beefed up x86_64 workstation?

                                    1. 4

                                      I bought mine because I have more money than sense, I like PowerPC, and I wanted to support an alternative that actually is comparable in terms of performance. Plus, it’s not an exceptionally exotic architecture. People think it disappeared after the last G5 rolled off the production line and that’s hardly the case. It did largely disappear from the workstation market but it’s not that hard to bring sexy back.

                                      Your next question is why not buy ARM, and the simple reason is there wasn’t anything I liked as an ARM workstation at the time that was in that performance class, and anyway, ARM will survive just fine.

                                      1. 4

                                        I actually got it for free for development purposes, but I would have bought Blackbird anyway.

                                          I basically wanted FOSS firmware; I already used coreboot and didn’t like that AMD and Intel require blobs.

                                        1. 1

                                          Out of curiosity: How often do you run into problems where something does not work b/c you are on PowerPC?

                                          1. 7

                                            It really depends on what you want to run.

                                              If you pretty much use only FOSS software, you’re all set; proprietary things like Skype, Steam or Zoom won’t work, though.

                                              Obviously, that also rules out Wine, but there is a PPC64LE port in progress, which will be able to run amd64 Windows binaries when used with Hangover.

                                              There is also a box86 port in progress, which will help with Linux i386 binaries.

                                            This is all on Linux.

                                              If you use FreeBSD or OpenBSD, the experience is worse. On FreeBSD, there’s currently no graphics acceleration, but things are pretty much usable (I’m actually a FreeBSD ports dev).

                                            I have no info about OpenBSD, but I think they still don’t have Rust working, which rules out Firefox at the moment.

                                              TL;DR: It will work just great if you’re already on Linux and running everything FOSS.

                                    1. 19

                                      I work on porting software to IBM i; that’s a platform you might know better as OS/400. It’s popular with businesses, usually in the retail/financial/logistics/etc spaces. Those kinds of shops are usually pretty isolated from trends in the tech industry, but they’re everywhere. Chances are you’ve worked with or seen them and never really thought of it.

                                      Most of the software I target runs in the AIX compat layer; AIX itself is technically POSIX compliant, but it really stretches the boundaries of compliance. All that AIX stuff is PowerPC; the CPUs are actually relevant/competitive. Actual native software is even weirder and is basically EBCDIC WebAssembly, to tl;dr it.

                                      1. 2

                                        Do you work for IBM? What type of system do you run that on?

                                        Floodgap’s main server has been AIX (first on an Apple Network Server 500 and now on a POWER6) since its first existence, and I used to do work on a workstation with 3.2.5. There’s also a ThinkPad “800” and 860 around here. However, IBM’s kind of hostile to us AIX hobbyists and I dislike having to dig out an HMC to do any reconfiguration with the LPAR. And IBM i (and OS/400) are worse, given that the entire system is one big vendor lock-in.

                                        1. 4

                                          I don’t work for IBM. The box I use is a hosted LPAR on a POWER9.

                                          I never got into AIX except as a faster way to cross-compile; smit is a poor substitute for real administration on a 5250 (It’s still better than HP-UX though.). One dirty secret is as much as IBM wants you to use the HMC, you don’t really have to for single systems; due to the screaming of i users who don’t want to use VIOS, let alone an HMC, you can totally do basic administration without an HMC.

                                          I don’t know if there’s AIX hobbyists, but I’m involved with a community for i hobbyists.

                                          1. 1

                                            If there’s not an AIX hobby club, then let it begin with me. (Jokes aside, someone used to call themselves the “MCA Mafia.”) But how would you reconfigure RAM allocation and so forth? On this POWER6, ASMI doesn’t really have any options for that.

                                            smit happens, but smit is definitely better than sam, I agree!

                                            1. 1

                                              Oh, the Ardent Tool of Capitalism?

                                                It’s been a long while since I looked at ASMI. If you’re running i as the dom1, then i can actually act as a mini-VIOS, with some limitations (i.e. no SEAs; you have to bridge virtual ports to something).

                                        2. 2

                                          AS/400 is a neat system. Very high uptimes.

                                            Also, I was always impressed with AIX on RS/6000s or HP PA-RISC hardware (HP-UX was not so good).

                                        1. 7

                                          Typing this on a Talos II with Firefox 85 running in Fedora. There are examples of every one of those CPUs (except s390x, eheheh, and I’m working on landing a cheap Itanium) in this room, but the T2 is my daily driver. It has two 8-core POWER9 CPUs.

                                          Rust works fine on ppc64le, but there isn’t a Rust for 32-bit PowerPC on OS X, which is why TenFourFox won’t advance further (it’s possible with a lot of compromises to make Firefox 52 work, but not anything after Firefox 54). Maintaining toolchains sucks.

                                          1. 1

                                            wasm2c

                                            1. 2

                                              How would that help? I still have to compile the Rust to wasm, and on top of that right now wasm generally assumes little-endian and no version of PowerPC Mac OS X has thread-local storage. Something like mrustc might be better but it’s not really ready for primetime.

                                              1. 1

                                                  Rust to wasm is trivial. I know folks that used wasm to get modern LLVM-based code onto old Unix systems (AIX, Ultrix).

                                                mrustc was only designed to bootstrap the compiler. It can be used for other purposes, but that wasn’t its goal.

                                          1. 8

                                            Daily drivers for workstations? Yeah it’s pretty much x86/x86_64/ARM/ARM64, with a smattering of POWER and SPARC.

                                            Production systems? There’s a ton of IBM z in critical business environments. MIPS is popular as an embedded processor, especially in the networking space. POWER and SPARC are still around in the server space and it wasn’t that long ago that I remember large installations of PA-RISC. The M68k still floats around as an embedded or cheap processor.

                                            But for workstations, I’d be willing to guess you’d cover 99.5% of people with x86/x86_64/ARM/ARM64.

                                            (What’s gonna happen to HP-UX? There doesn’t seem to be any plans to port it to anything and the only supported architectures are both discontinued…)

                                              (There’s also the Loongson architecture that is required to be used in certain things in China, and is used in those places and nowhere else really. Same goes for a few very specific avionics/process control architectures in the rest of the world.)

                                            (Oh and the plethora of embedded processors with Harvard architectures or weird word sizes or what-have-you. It’s a whole different world there.)

                                            1. 4

                                              I have a big soft spot for PA-RISC. My first job out of college was on a K250 running 10.20 (I’m OLD! I’m SO OLD!).

                                              1. 6

                                                My youngest coworker at my current job asked me who Johnny Cash was. I died inside a little.

                                                1. 1

                                                    So, who is that?

                                                  1. 7

                                                    The inventor of money. That’s why we sometimes call it cash.

                                            1. 28

                                              MIPS is everywhere, still. Including in network gear, wireless, IoT, and other embedded applications.

                                              1. 8

                                                  This. While it seems to me that most high-end network gear is slowly migrating towards ARM, MIPS keeps turning up in odd places. I recently dug around in the weird world of handheld video game consoles designed to run emulators, and found this spreadsheet compiled by the fine folks here. I was surprised to see a relatively large number of CPUs with “XBurst” architecture, which is MIPS32 plus some DSP extensions.

                                                I have a friend who recently got an internship at a company to help optimize their AS/400-based database infrastructure, and it looks like the current IBM systems are still backwards-compatible with S/390 programs. So while you might not see s390 much it’s probably not going away quickly.

                                                I believe Alpha, PA-RISC and IA-64 are officially deprecated these days, so nobody is making new ones and nobody seems to want to. To my surprise, it appears that people are still manufacturing SPARC hardware though.

                                                1. 3

                                                  Mostly Fujitsu, but even they are doing more aarch64.

                                                  1. 3

                                                    it looks like the current IBM systems are still backwards-compatible with S/390 programs

                                                    My understanding is that IBM Z stuff today is extremely compatible with System/360 programs from the mid-’60s.

                                                    1. 2

                                                      So while you might not see s390 much it’s probably not going away quickly.

                                                      For legacy applications on MVS and friends, yeah, but IBM basically killed 31-bit Linux.

                                                      To my surprise, it appears that people are still manufacturing SPARC hardware though.

                                                      There’s still a market for legacy Solaris systems.

                                                      1. 1

                                                          How frequently are these legacy Solaris systems updated? How frequently are IBM Z systems updated? I heard (might be unsubstantiated) that some mainframes still run 20-year-old Perl, even though the OS gets updates.

                                                        1. 1

                                                            Depends how much they care; if they do, they’ll keep their ancient application running on newer Solaris on newer hardware (i.e. an M8).

                                                            The 20-year-old Perl makes me think you’re talking about USS on z/OS (aka MVS); that’s a world I know very little of.

                                                      2. 1

                                                        IBM i (née AS/400) is all on PowerPC these days. It’s a very different system from s390/mainframe/zOS.

                                                    1. 8

                                                      contrary to the README above, the repository contains no source code

                                                      It does! It’s not in the mainline source tree, but is tagged. Here is what seems to be the most recent version.

                                                      1. 4

                                                        “Note that a full binary cannot be generated from this source.”

                                                      1. 2

                                                        Reviewing documentation for work. Maybe getting a little more work done on the port of Crypto Ancienne to SunOS 4.1 now that I have gcc working properly again.

                                                        1. 9

                                                          It seems to have been part of a larger scale issue. I was also victimized; Floodgap was down for most of the night and part of the morning but I got control of the domain back. In my case it was social engineering, pure and simple; they gave fraudulent documentation to the rep and took over everything. A little more here if you’re interested: https://tenfourfox.blogspot.com/2021/01/floodgapcom-down-due-to-domain-squatter.html

                                                          1. 2

                                                            The name in particular is interesting. Wonder if they’re trying to ride Apple’s coattails.

                                                            1. 1

                                                              I’m not sure Gopher has the footgun problem you describe, or at least not in the same fashion, because how the resource is handled is inherent in the URL. If you’re handed a gopher URL with a 9 item-type, that immediately tips you off it’s a binary file.

                                                              1. 2

                                                                  Is it defined how Gopher clients handle executables? The answer is, it isn’t.

                                                                  Here is an excerpt as proof (https://tools.ietf.org/html/rfc1436, section 3.8):

                                                                Note that for type 5 or type 9 the client must be prepared to read until the connection closes. There will be no period at the end of the file; the contents of these files are binary and the client must decide what to do with them based perhaps on the .xxx extension.

                                                                Thus Gopher has the footgun problem too.

                                                              1. 1

                                                                Excellent work! Any thoughts about Tru64?

                                                                1. 1

                                                                    You can boot Tru64: https://github.com/lenticularis39/axpbox/wiki/Guest-support but it does not install yet, so only a sort of live environment works (the install environment). There are graphics as well: https://github.com/lenticularis39/axpbox/wiki/VGA#tru64-console

                                                                1. 15

                                                                  Regarding the gripe about Gemini - retrocomputing was never its goal. It was about reforming the browsing experience of the modern user, where code execution or unexpected downloads cannot happen behind your back. Guaranteed TLS was deemed table stakes - for each person who complains about it, there is another who would never touch Gemini if all/much of their browsing was trivially observable by third parties. Gemini was never intended to supplant gopher. The protocol author mentioned continues to maintain both gopher and gemini sites, and gopher would be the right choice when encryption is inappropriate, such as retrocomputing or amateur radio.

                                                                  1. 8

                                                                    From the Gemini FAQ:

                                                                    Gemini may be of interest to people who are: […]

                                                                    • Interested in low-power computing and/or low-speed networks

                                                                    So it does seem that there’s some tension there…

                                                                    1. 4

                                                                      I don’t quite know what to think of the TLS requirement in Gemini, either, but low-power computing and/or low-speed networks doesn’t necessarily mean old computers and networks. Modern low-power machines with low-speed connections can handle TLS just fine. See e.g. this thread: https://lists.orbitalfox.eu/archives/gemini/2020/002466.html for an older example of someone running a Gemini client on an ESP32.

                                                                      (Full disclosure: not under this alias – which, for better or for worse, I ended up using in some professional settings – but I am running a Gemini-related project. I have zero investment in it, it’s just for fun, and I was one coin toss away from using Gopher, I’m just sort of familiar with the protocol).

                                                                      1. 1

                                                                            Yes, I think it’s a relative statement as well. Low-power systems today are orders of magnitude more performant. The little 68030 I did some testing on takes over 20 seconds to complete a TLS 1.2 transaction, but even embedded systems that are a few years old will run rings around that.

                                                                        For retro systems, I still say Gopher is the best fit.

                                                                      2. 2

                                                                        Yes, contrasting this with @jcs’s post, it does look like a dichotomy.

                                                                      3. 5

                                                                        But then, why reinvent the wheel? Instead of implementing a whole new protocol, a more sensible decision would have been to simply develop a modern HTML 3.2 browser without the JS crap. Just freeze the pinnacle of HyperText before the web became the edge of Hell it is today.

                                                                        1. 4

                                                                          My memories of those days weren’t so halcyon, just table soup.

                                                                          1. 4

                                                                            See the Gemini FAQ section 2.5

                                                                            1. 2

                                                                              It’s because the point of Gemini is to be intentionally exclusionary.

                                                                            2. 2

                                                                              I agree about Gemini. The one thing I wish they had done differently is to use much, much simpler crypto for integrity and not bother about confidentiality. Pulling in TLS was a shame, as it missed out on a great opportunity.

                                                                              1. 2

                                                                                So what crypto, and what libraries for which languages exist for it? I ask because the wisdom is not to invent crypto, nor to implement it yourself.

                                                                            1. 5

                                                                              tldr; we screwed up security the last few times, but we think we got it right this time.

                                                                              1. 5

                                                                                A bit more privacy than security, because, yes, privacy is a spectrum and folks are continuously trying to improve 🙂

                                                                                1. 4

                                                                                  I do think it makes more sense than ESNI, which was just a patch. Encrypting the entire hello in retrospect simply covers everything.

                                                                                1. 2

                                                                                  Maybe it’s time to bring (back?) proxies that accept unencrypted HTTP/1.0 requests, negotiate a modern version of TLS with the destination and rewrite the HTML to allow for seamless navigation on older browsers.

                                                                                  1. 5

                                                                                    For occasional web browsing from OS 9, I have Squid running on a local server, acting as an HTTPS proxy. The client still connects over HTTPS, but the Squid server accepts the older protocol versions that the destination usually won’t.

                                                                                    1. 2

                                                                                      How do you have Squid configured? Is this using bumping?

                                                                                      1. 4

                                                                                        Yes, here’s the configuration that I got working. A lot of it is likely redundant

                                                                                    2. 2

                                                                                      Since legacy software shouldn’t be exposed to the wide Internet without at least some protective layer, I think HTTPS-to-HTTP proxies are a preferable option. There are some projects, though they aren’t as easy to use as I hoped.

                                                                                      A proxy server can also perform some other adjustments to make pages more accessible to legacy browsers, e.g. inject polyfills as needed.

                                                                                      1. 2

                                                                                        Or, use a period browser that can be taught to forward HTTPS on (disclaimer: my project, previously posted): https://oldvcr.blogspot.com/2020/11/fun-with-crypto-ancienne-tls-for.html