Threads for mechazoidal

  1. 2

    Alerted by this tweet to the discovery that a Spanish anime magazine distributed entire websites through its promotional CDs, I nearly spit out my drink when I found my own fandom screen-captures from nearly 30 years ago.

    1. 1

      I like how, 10 years ago, someone was looking for a specific version of a program and has now finally found it on DiscMaster; this tweet covers the saga.

    1. 9

      Interesting that they have 8-inch disks in stock. As I recall, these are still used in US nuclear launch control systems. I wonder if this company is the weakest link in US nuclear security. If so, that’s a somewhat terrifying thought.

      1. 19

        I mean, honestly, the best thing for the world would be if the nukes all just secretly didn’t work, but none of the other countries knew that, so I for one welcome our obsolete floppy drive overlords.

        1. 6

          On the subject,

          The United States has not detonated a nuclear weapon in thirty years.

          This raises a troubling question: do they still work?

          Because the only designs that have been subject to “real” quality control testing (by detonation of sample units) are more than 30 years old and thus have undergone radioactive decay, mechanical wear, maintenance and refurbishment activities, etc., and an increasing portion of the stockpile consists of designs that have never been tested, this assurance must make the directors a bit nervous. Our confidence in the nuclear stockpile today rests on subcritical testing, testing of individual components, and increasingly, computational modeling. It is exactly because of the challenge of stockpile assurance that a surprising portion of the world’s most powerful computers are owned and operated by the Department of Energy.

          https://computer.rip/2022-09-13-the-nevada-national-security-site-pt-2.html

          1. 5

            I mean, honestly, the best thing for the world would be if the nukes all just secretly didn’t work, but none of the other countries knew that

            That’d make a great science fiction short story. Nuclear war comes, but then all the scientists involved have to admit they’ve independently sabotaged their own arsenals to prevent armageddon.

            1. 4

              Dr. Strangelove with a conscience: “Of course, it makes no sense to actually fire the rockets, so they are all sabotaged.”

              1. 2

                Could also be a weird inverse-Hanlon twist … “No-one ever knew whether the failures were deliberate or accidental, but they saved humanity.”

            2. 3

              It’s more that if you can introduce a PDP-11 exploit on an unformatted 8” floppy that causes things to detonate in their silos then you may have some problems. I hope the DoE does something to new floppies to prevent anything malicious coming in, but I doubt that they have many people who understand security for these systems left. Most of them retired a while ago.

              1. 1

                If they don’t work, it will be in secret, regardless of whether anyone knows the truth - https://wikileaks.org/trident-safety/.

              2. 2

                Looks like the USAF finally moved to SSDs in 2019.

              1. 4

                Reposting from a work chat about this exact article …

                This actually feels eerily reminiscent of the machine code vs. assembler vs. high-level language debates in programming … “it’s not ‘real programming’ if you’re not programming on the silicon” vs. “programming is now guiding the machine tools to generate code”.

                “It’s not programming art if you’re letting a compiler AI model handle the details”.

                1. 2

                  It does seem to take skill to generate good AI images. I don’t need to watch 30 hours of Bob Ross to acquire those skills. Skills like brush strokes & color choices aren’t really important with this new medium, but there are clearly skills that I don’t yet possess. But I can also stumble into a semi-decent output without the skill every now and then, too.

                  1. 5

                    Prompt engineering is a genuinely tough thing. You really have to think outside the box and it can sometimes require a few rounds of iteration with manually scribbling on prior outputs to get what you want. It’s not going to make artists go away any time soon. I use a lot of AI-generated art on my blog but I pay a lot of money to commission artists for things like the stickers you see on the blog.

                    1. 1

                      The phrase “prompt engineering” threw me for a bit, but I like it!

                    2. 2

                      Between the article and this comment, I now wonder if we’re seeing the dawn of “Poser, but for 2D art”.

                      Which isn’t bad, since Poser is “good enough” for a lot of tasks, but hasn’t replaced professional 3D artists.

                  1. 6

                    Sadly, the “more magic” story seems to have been debunked by Tom Knight himself in 1996.

                    That is, the part about crashing the computer and not the actual switch itself.

                    1. 2

                      I must pedantically point out that just because it wasn’t installed for the reasons the story describes, doesn’t mean it didn’t behave as the story describes. XD

                      I recently worked on debugging what turned out to be a hardware issue. A microcontroller on an I/O board occasionally crashed and took other components with it, and the best way to trigger it intentionally was to touch the system’s power button. Not press, just touch. I think it was eventually found to be a process flaw: something in the way that the PCB and the parts on it were manufactured, soldered, cleaned with solvent, and assembled was damaging a shielding layer of epoxy or something on the board just enough to occasionally let current leak through to places it shouldn’t be going. The big capacitor that is your body touching the (metal) power button made enough electrons move around to crash the microcontroller. Apparently sometimes you didn’t even have to touch it, just get your finger nearby. As far as I know, the fix was to use a different solvent during board assembly.

                      1. 3

                        This reminds me of one of my favourite things about electronics. Sometimes an electronic engineer will design a circuit, test it, find out that it fails, try to debug it with an oscilloscope, find that it works reliably but only when the oscilloscope probes are in, and then finally give up and fix it permanently by adding a 22pF capacitor at each spot where the oscilloscope probes were connected, in order to simulate an oscilloscope being attached and thereby make the circuit behave how it did with the scope plugged in. :)
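                        A rough back-of-the-envelope calculation shows why a capacitance that small can matter: together with the node’s source impedance, it forms a low-pass filter right in the frequency range where many circuits operate. This is just an illustrative sketch; the 1 kΩ impedance is an assumed value, not something from the anecdote.

```python
import math

# Illustrative sketch: the effect of a scope-probe-sized capacitance on a
# circuit node. Only the 22 pF value comes from the anecdote above; the
# source impedance R is a made-up assumption for the sake of the example.
R = 1_000        # assumed node source impedance, in ohms
C = 22e-12       # capacitor value from the anecdote, in farads

# -3 dB cutoff frequency of the resulting RC low-pass filter:
f_cutoff = 1 / (2 * math.pi * R * C)
print(f"-3 dB cutoff: {f_cutoff / 1e6:.1f} MHz")  # ~7.2 MHz
```

                        At an assumed 1 kΩ, 22 pF moves the node’s frequency response around in the single-digit MHz range, which is plenty to change whether a marginal digital circuit works.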

                        1. 1

                          Yep, that sort of debugging happens everywhere. Building physical devices often has a “fit-and-finish” step, where you put everything together, try to make it work, and then by hand file down or smooth out any bits that don’t quite mesh together right. Getting your design and manufacturing process to the point where you don’t need this sort of manual fiddling all the time takes either a lot of hands-on experience or a lot of iteration or both. Sound familiar, software people designing complex systems?

                          But that sort of iteration is also necessary for automation and interchangeable parts to work, which lets you make majillions of those devices, and so the high up-front cost pays itself off by letting you scale up. The thing about software is that you can do that sort of iteration very fast and in very small pieces, and now with the internet you get the benefits almost instantly, and so the cost-benefit scale is very different.

                          1. 1

                            But that sort of iteration is also necessary for automation and interchangeable parts to work, which lets you make majillions of those devices, and so the high up-front cost pays itself off by letting you scale up

                            This is why I think of engineering as the process of paying a very expensive person to make something else cheaper. :)

                        2. 1

                          Oh, certainly! I found these new details a few weeks ago while looking up the story for our junior engineers, as this was always the classic lesson of “seemingly-unconnected devices can still influence each other at the electrical level”.

                          Or to put it another way, “next time, politely tell the internal helpdesk customer to turn off their desktop plasma globe when trying to use their company-issued yubikey”.

                        3. 1

                          That’s so cool that we can see the switch!

                          1. 2

                            Or from a different angle, tree-sitter:text-editors::llvm:compilers

                            1. 2

                              There is also zed which is still in early development. Written in Rust by the creators of Atom, the Real-Time collaboration looks pretty interesting!

                              1. 2

                                Ah, looks like the zed team also built tree-sitter?

                                1. 2

                                  Not only is the name almost the same; in fact, zee’s feature list ends with “a pragmatic editor, not a research endeavour into CRDTs”, which, I suspect, is a clear reference to zed.

                                  1. 2

                                    I took that as more of a dig at xi-editor, but I’ve known about that longer than I’ve known about zed.

                                2. 1

                                  IMO emacs is about the introspection and customization more than the keybindings (which is why I use evil-mode :) ).

                                  It’s definitely interesting that both helix and zee are terminal editors. I think that prevents you from doing a lot of ‘cool’ things like widgets with arbitrary graphics or embedding a markdown preview, but I think the only ‘reasonable’ choices for cross-platform are either Qt or Electron. And if you want people to be able to make widgets easily without having to learn a bespoke scripting system, you’re basically stuck with Electron. :/

                                1. 8

                                  So, as a weird aside: Kevin Mitnick is, to this day, widely hated on the WeLL, a BBS that is still around and that he hacked in 1995.

                                  1. 6

                                    I dreamt of being on the WeLL in the ’90s. I was too poor and too far away to have an account there, but I read about it in Wired and thought of how cool it would be.

                                    I ended up joining years later, just for fun, but ended up canceling a couple of months later as it ended up being too much money per month for what was essentially a mid-life crisis purchase. :)

                                    1. 5

                                      I think we read the same article. Did it lead with a WeLL member describing meeting his birth mother?

                                      Back then (this was when “The Web” was cool, kids) it was presented as “the text-only meeting place for the people who designed the Web”. IIRC Bruce Sterling was active there. Maybe still is.

                                      Even then, to me, it had the faint whiff of men in older middle age, in loose jeans and comfy sneakers. Maybe I was unfair.

                                      1. 2

                                        Sterling is indeed still active on the WeLL, he regularly sits in on the yearly State of the World discussions.

                                      2. 3

                                        I joined in the late 90s and still host the Linux conference there. It was far more vital back in the 90s and I can understand not being terribly impressed with it if you joined later. It is like a second home to me now though, and if you stick around the same people online for 25 years they eventually grow on you :)

                                        1. 1

                                          Heh. I may have to join again. Nostalgia is creeping up on me.

                                    1. 15

                                      You lost me at “the great work from homebrew”

                                      Ignoring UNIX security best practices of the last, I dunno, 30 or 40 years, then actively preventing people from using the tool in any fashion that might be more secure, and refusing to acknowledge any such concerns is hardly “great work”.

                                      I could go on about their abysmal dependency resolution logic, but really if the security shit show wasn’t enough to convince you, the other failings won’t either.

                                      But also - suggesting Apple ship a first party container management tool because “other solutions use a VM”, suggests that either you think a lot of people want macOS containers (I’m pretty sure they don’t) or that you don’t understand what a container is/how it works.

                                      The “WSL is great because now I don’t need a VM” is either ridiculously good sarcasm or, yet again, evidence that you don’t know how something works. (For those unaware, WSL2 is just a VM. Yes, it’s prettied up to make it more seamless, but it’s a VM.)

                                      1. 23

                                        I don’t know what’s SO wrong about Homebrew that every time it’s mentioned someone has to come and say that it sucks.

                                        For the use case of a personal computer, Homebrew is great. The packages are simple, it’s possible and easy to install packages locally (I install mine in ~/.Homebrew) and all my dependencies are always up to date. What would a « proper » package manager do better than Homebrew that I care about? Be specific please because I have no idea what you’re talking about in terms of security « shit show » or « abysmal » dependency resolution.

                                        1. 12
                                          • A proper package manager wouldn’t allow unauthenticated installs into a global (from a $PATH perspective) location.
                                          • A proper package manager wouldn’t actively prevent the user from removing the “WTF DILIGAF” permissions Homebrew sets and requiring authenticated installs.
                                          • A proper package manager that has some form of “install binaries from source” would support and actively encourage building as an untrusted user, and requiring authentication to install.
                                          • A proper package manager would resolve dynamic dependencies at install time not at build time.
                                          • A proper open source community wouldn’t close down any conversation that dares to criticise their shit.
                                          1. 11

                                            Literally none of those things have ever had any impact on me after what, like a decade of using Homebrew? I’m sorry if you’ve run into problems in the past, but it’s never a good idea to project your experience onto an entire community of people. That way lies frustration.

                                            1. 5

                                              Who knew that people would have different experiences using software.

                                              it’s never a good idea to project your experience onto an entire community of people

                                              You should take your own advice. The things I stated are objective facts. I didn’t comment on how they will affect you as an individual, I stated what the core underlying issue is.

                                              1. 6

                                                You summarized your opinion on “proper” package managers and presented it as an authoritative standpoint. I don’t see objectiveness anywhere.

                                            2. 3

                                              I don’t really understand the fuss about point 1. The vast majority of developer machines are single user systems. If an attacker manages to get into the user account it barely matters if they can or cannot install packages since they can already read your bank passwords, SSH keys and so on. Mandatory relevant xkcd.

                                              Surely, having the package manager require root to install packages would be useful in many scenarios but most users of Homebrew rightfully don’t care.

                                            3. 8

                                              As an occasional Python developer, I dislike that Homebrew breaks old versions of Python, including old virtualenvs, when a new version comes out. I get that the system is designed to always get you the latest version of stuff and have it all work together, but in the case of Python, Node, Ruby, etc. it should really be designed so that it gets you the latest point releases but leaves the 3.X versions to be installed side by side, since too much breaks from 3.6 to 3.7 or whatever.

                                              1. 8

                                                In my opinion, for languages that can break between minor releases you should use a version manager (Python seems to have pyenv). That’s what I do with Node: I use Homebrew to install nvm, and I use nvm to manage my Node versions. For Go, in comparison, I just use the latest version from Homebrew because I know their goal is backward compatibility.

                                                1. 5

                                                  Yeah, I eventually switched to Pyenv, but like, why? Homebrew is a package manager. Pyenv is a package manager… just for Python. Why can’t homebrew just do this for me instead of requiring me to use another tool?

                                                  1. 1

                                                    Or you could use asdf for managing python and node.

                                                  2. 7

                                                    FWIW I treat Homebrew’s Python as a dependency for other apps installed via Homebrew. I avoid using it for my own projects. I can’t speak on behalf of Homebrew officially, but that’s generally how Homebrew treats the compilers and runtimes. That is, you can use what Homebrew installs if you’re willing to accept that Homebrew is a rolling package manager that strives always to be up-to-date with the latest releases.

                                                    If you’re building software that needs to support a version of Python that is not Homebrew’s favored version, you’re best off using pyenv (with brew install pyenv) or a similar tool. Getting my teams at work off of brewed Python and onto pyenv-managed Python was short work that’s saved a good bit of troubleshooting time.

                                                    1. 2

                                                      This is how I have started treating Homebrew as well, but I wish it were different and suitable for use as pyenv replacement.

                                                      1. 2

                                                        asdf is another decent option too.

                                                      2. 5

                                                        I’m a Python developer, and I use virtual environments, and I use Homebrew, and I understand how this could theoretically happen… yet I’ve literally never experienced it.

                                                        it should really be designed so that it gets you the latest point releases but leaves the 3.X versions to be installed side by side, since too much breaks from 3.6 to 3.7 or whatever.

                                                        Yep, that’s what it does. Install python@3.7 and you’ve got Python 3.7.x forever.

                                                        1. 1

                                                          Maybe I’m just holding it wrong. :-/

                                                        2. 3

                                                          I found this article helpful that was floating around a few months ago: https://justinmayer.com/posts/homebrew-python-is-not-for-you/

                                                          I use macports btw where I have python 3.8, 3.9 and 3.10 installed side by side and it works reasonably well.

                                                          For node I gave up (only need it for small things) and I use nvm now.

                                                        3. 8

                                                          Homebrew is decent, but Nix for Darwin is usually available. There are in-depth comparisons between them, but in ten words or less: atomic upgrade and rollback; also, reproducibility by default.

                                                          1. 9

                                                            And Apple causes tons of grief for the Nix team every macOS release. It would be nice if they stopped doing that.

                                                            1. 2

                                                              I stopped using Nix on macOS after it started requiring a separate unencrypted volume just for Nix. Fortunately, NixOS works great in a VM.

                                                              1. 2

                                                                It seems to work on an encrypted volume now at least!

                                                          2. 4

                                                              I really, really hate how Homebrew never asks me for confirmation. If I run brew upgrade, it just does it. I have zero control over it.

                                                              I come from zypper and dnf, which are both great examples of really good UX. I guess if all you know is Homebrew or .dmg files, Homebrew is amazing. Compared to other package managers, it might even be worse than winget…

                                                            1. 2

                                                              If I run brew upgrade it just does it

                                                              … yeah? Can we agree that this is a weird criticism or is it just me?

                                                            2. 2

                                                              Overall I like it a lot and I’m very grateful brew exists. It’s smooth sailing the vast majority of the time.

                                                              The only downside I get is: upgrades are not perfectly reliable. I’ve seen it break software on upgrades, with nasty dynamic linker errors.

                                                              Aside from that it works great. IME it works very reliably if I install all the applications I want in one go from a clean slate and then don’t poke brew again.

                                                            3. 4

                                                              you think a lot of people want macOS containers (I’m pretty sure they don’t)

                                                              I would LOVE macOS containers! Right now, in order to run a build on a macOS in CI I have to accept whatever the machine I’m given has installed (and the version of the OS) and just hope that’s good enough, or I have to script a bunch of install / configuration stuff (and I still can’t change the OS version) that has to run every single time.

                                                              Basically, I’d love to be able to use macOS containers in the exact same way I use Linux containers for CI.

                                                              1. 1

                                                                Yes!!

                                                                  1. Headless macOS would be wonderful
                                                                2. Containers would be fantastic. Even without the docker-like incremental builds, something like FreeBSD jails or LXC containers would be very empowering for build environments, dev servers, etc
                                                                1. 1

                                                                  Containers would be fantastic. Even without the docker-like incremental builds, something like FreeBSD jails or LXC containers would be very empowering for build environments, dev servers, etc

                                                                    These days, Docker (well, Moby) delegates to containerd for managing both isolation environments and images.

                                                                  Docker originally used a union filesystem abstraction and tried to emulate that everywhere. Containerd provides a snapshot abstraction and tries to emulate that everywhere. This works a lot better because you can trivially implement snapshots with union mounts (each snapshot is a separate directory that you union mount on top of another one) but the converse is hard. APFS has ZFS-like snapshot support and so adding an APFS snapshotter to containerd is ‘just work’ - it doesn’t require anything else.
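                                                                    To make the snapshot-from-union-mounts point concrete, here is a hypothetical Python sketch (not containerd’s actual API, and plain directories standing in for mounts): each snapshot is an empty layer over its parent, writes land in the top layer, and reads fall through the stack.

```python
import os
import tempfile

# Hypothetical sketch of a layered snapshot, in the spirit of the
# union-mount construction described above (not containerd's real API).
class Snapshot:
    def __init__(self, parent=None):
        self.dir = tempfile.mkdtemp()   # this layer's private directory
        self.parent = parent            # the snapshot this one was taken from

    def write(self, name, data):
        # Writes always land in the topmost layer (copy-on-write).
        with open(os.path.join(self.dir, name), "w") as f:
            f.write(data)

    def read(self, name):
        # Reads resolve top-down through the layer stack, like a union mount.
        layer = self
        while layer is not None:
            path = os.path.join(layer.dir, name)
            if os.path.exists(path):
                with open(path) as f:
                    return f.read()
            layer = layer.parent
        raise FileNotFoundError(name)

base = Snapshot()
base.write("file.txt", "v1")
snap = Snapshot(parent=base)  # a snapshot: an empty layer over base
snap.write("file.txt", "v2")  # shadows base's copy; base is untouched
print(snap.read("file.txt"), base.read("file.txt"))  # v2 v1
```

                                                                    The converse direction is the hard one: given only snapshot primitives, you would have to materialize a merged view yourself, which is why the snapshot abstraction layers cleanly on union mounts but not vice versa.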

                                                                    If the OS provides a filesystem with snapshotting and an isolation mechanism, then it’s relatively easy to add a containerd snapshotter and shim to use them (at least in comparison with writing a container management system from scratch).

                                                                  Even without a shared-kernel virtualisation system, you could probably use xhyve[1] to run macOS VMs for each container. As far as I recall, the macOS EULA allows you to run as many macOS VMs on Apple hardware as you want.

                                                                  [1] xhyve is a port of FreeBSD’s bhyve to run on top of the XNU hypervisor framework, which is used by the Mac version of Docker to run Linux VMs.

                                                              2. 2

                                                                Ignoring which particular bits of Unix security practices is problematic? There are functionally no Macs in use today that are multi-user systems.

                                                                1. 3

                                                                      All of my Macs and my family’s Macs are multi-user.

                                                                  1. 2

                                                                        Different services in the OS run as different users. It is in general a good thing to run services with the minimal required privileges: different OS-provided services run with different privileges, different Homebrew services run with different privileges, etc. So reducing the blast radius is a pro even if there is only one human user, as there are often more users at once; just not all users are meatbags.

                                                                  2. 1

                                                                        I’ve been a Homebrew user since my latest Mac (2018), but on my previous one (2011) I used MacPorts. Given that you seem to have more of an understanding of what a package manager should do than I have, do you have any thoughts on MacPorts?

                                                                    1. 4

                                                                      I believe MacPorts does a better job of things, but I can’t speak to it specifically, as I haven’t used it in a very long time.

                                                                      1. 1

                                                                            Thanks for the response. It does seem like it’s lost its popularity, and I’m not quite sure why. I went with brew simply because it seemed to be what most articles/docs I looked at were using.

                                                                        1. 3

                                                                          I went with brew simply because it seemed to be what most articles/docs I looked at were using.

                                                                              Pretty much this reason. Homebrew came out when MacPorts still had source-only installs and some other subtle gotchas. Since then, those have been cleared up, but Homebrew had already snowballed into “it’s what my friends are all using”.

                                                                              I will always install MP on every Mac I use, but I’ve known I’ve been in the minority for quite a while.

                                                                          1. 1

                                                                            Do you find the number of packages to be comparable to brew? I don’t have a good enough reason to switch but would potentially use it again when I get another mac in the future.

                                                                            1. 3

                                                                                  I’ve usually been able to find something unless it’s extremely new, obscure, or has bulky dependencies like gtk/qt or specific versions of llvm/gcc. The other nice thing is that if the build is relatively standard, uses ‘configure’, or fits into an existing PortGroup, it’s usually pretty quick to whip up a local Portfile (they’re Tcl-based, so it’s easy to copy a similar package’s config and modify it to fit).

                                                                              Disclaimer: I don’t work on web frontends so I usually don’t deal with node or JS/TS-specific tools.

                                                                              1. 3

                                                                                    On MacPorts vs. Homebrew, I usually blame popularity first, and irrational fear of the term ‘Ports’ (as in “BSD Ports System”) second. On the second cause, a lot of people just don’t seem to know that what started off as a way to make ‘configure; make; make install’ more maintainable across multiple machines has turned into a binary package creation system. I don’t know anything about Homebrew, so I can’t comment there.

                                                                    1. 1

                                                                      Easing my work-related headache (DNS, yubikeys, and vault, oh my) with even more hours in Super Robot Wars 30.

                                                                      Stopping in at my friends’ new beer+cider taproom to be an early mug club member.

                                                                      Possibly another attempt at getting my auto-ripper FreeNAS jail to recognize CDs on insert, not just DVDs.

                                                                      1. 8

                                                                        Oh excellent, this is PragmatIC’s work. I first saw this in 2017 when they were still at the centimeter level with 10 layers. It’s not yet using conventional ink-printing technologies, but this is still promising.

                                                                        1. 1

                                                                          No work for a while after finishing a contract, and feeling motivated this week, so I’m picking up a pet project and learning Vue + Vuetify for the front end. I get really bogged down trying to make web things basically aesthetically pleasing, and I’ve never gotten over the CSS hump to where I’m proficient enough to do things quickly, so it’s often painful Getting Things Done. I’m hoping Vue + Vuetify will be a nice middle ground between customisability and speed of development.

                                                                          Also recently started taking ritalin. Any lobste.rs ADHD/ritalin/adderall peeps?

                                                                            1. 1

                                                                              I suggest you do not skip taking it at the regular times. It is so easy to see the mood changes!

                                                                              1. 1

                                                                                Super interested if you want to share more about that as I’m planning to go on and off it as I need to, but there’s a larger story there.

                                                                            1. 12

                                                                              Yahoo Pipes was a lot of fun. A highlight of the Web 2.0 “mash-up” era. I remember making a little homepage that pulled posts from a half-dozen different sources. Does anyone know what happened to Pipes after Yahoo shut it down? Has there been a Pipes revival somewhere?

                                                                              1. 4

                                                                                Isn’t that https://ifttt.com/ ?

                                                                                1. 9
                                                                                  1. 4

                                                                                    It’s been many years since I used IFTTT, but last I checked it was vastly simpler. You can’t do complex pipelines like the one shown here - it really is “if (this), then (that)” and you can’t chain them together.

                                                                                    I didn’t know about Yahoo Pipes while it was still up so I never got a chance to use it, but AFAICT the closest thing I’ve seen to it so far is Huginn.

                                                                                    1. 2

                                                                                      Huginn looks amazing! This could really help with a lot of work stuff where IFTTT just doesn’t have the flexibility or programmability that I need. Thanks!

                                                                                  2. 3

                                                                                    Along with huginn, riko is also on my list to try: pull-based and no web GUI, but self-contained with native RSS/Atom.

                                                                                  1. 11

                                                                                    As an aside: It is disturbing that we are at the point where we do need to fuzz instruction sets, and that we actually do find hidden instructions in them.

                                                                                    1. 8

                                                                                      Hahaha. Any self-respecting state-run cybersecurity agency will eventually have to ask itself: “Why don’t we see inside?” Followed by: “How do we check it’s safe?”

                                                                                      Interesting times, indeed.

                                                                                        1. 2

                                                                                          I suspect CPU manufacturers have been fuzzing instruction sets for a long time as part of their own verification processes.

                                                                                          1. 2

                                                                                            That’d be sad, as it should be the territory of formal verification.

                                                                                            1. 15

                                                                                              I’m sure formal verification is also used. But formal verification can only prove things which are wrong with some model of the system; only testing (including fuzz testing) can be used to find faults which exist in the physical implementation of the system but which aren’t in the formalized model. As an example, a formal model of DRAM would probably not capture the physics involved in the rowhammer attack. As another example, I once watched a talk where someone discussed a hardware product with some authentication mechanism; the programming was perfect and would only let users with a key through, but because it was a physical device, you could cause a brown-out at the exact right time to reset some memory cell and get access without a key; a formal model probably wouldn’t catch this either.

                                                                                              Then you have to consider the fact that CPU manufacturers are constantly trying to walk an extremely fine line where they have maximum performance with good reliability. This is an endeavor which is in large part physics rather than logic. You may have formally proven that the entire CPU works perfectly as long as your transistors work as ideal transistors, but your formal model can’t capture exactly at which point some electron quantum-tunnels past a barrier, or exactly in which cases you have a slight brown-out in one part of the chip which occasionally causes a transistor to switch slower than some other part of the system expected. These are things which can only be discovered through huge amounts of real-world testing combined with a good theoretical understanding of how these things work.

                                                                                              At least that’s my take on it. I don’t know any of this for sure, it just seems reasonable based on my limited knowledge. If I’m wrong, please correct me.

                                                                                        1. 1

                                                                                          A friend gave me a few surplus ones, they’re great and it’s easy to flash stock OpenWRT onto them. I’m using one in passthrough mode as a Wifi AP extender to my main OpenWRT router.

                                                                                          1. 2

                                                                                            First of all, most of our software technology was built in companies (or corporate labs) outside of academic Computer Science.

                                                                                            Is this true? This seems like a pretty wild claim.

                                                                                            The risk-aversion and hyper-professionalization of Computer Science is part of a larger worrisome trend throughout Science and indeed all of Western Civilization

                                                                                            Why specifically western civilization? I would say this is more of an issue of how capital functions, not because of Western Culture™️

                                                                                            1. 4

                                                                                              Bell Labs, Xerox PARC, SRI, BBN invented a pretty jaw-dropping swath of tech in the 60s and 70s: packet switched networks and the ARPAnet (together with MIT), Unix, the mouse and the GUI, online collaboration, text editing as we know it, bitmap displays, Ethernet, the laser printer, the word processor, file servers, (much of) object-oriented programming, digital audio and computer music (Max Matthews)…

                                                                                              On the hardware side, it was nearly all corporate — Bell Labs, TI, Fairchild, Intel… (though IIRC, Carver Mead at Caltech is the father of VLSI, and RISC was invented at UC Berkeley.)

                                                                                              1. 2

                                                                                                To back up snej’s comment, there are several great history books on this period:

                                                                                                Gertner’s “The Idea Factory” is also a good read, but focuses more on the overall history of Bell Labs.

                                                                                              1. 3

                                                                                                Whole lot of nostalgia this week. I was more into the Unreal Tournament mod scene, but it was the same process of discovery and a wide variety of tools. I’ve mentioned time and again that I learned Ruby in order to understand UnrealScript coroutines, which led to a professional Rails job years later.

                                                                                                I’d also concur that keeping a project silent “until it’s done” is the quickest way to ensure it never gets released (he says, while quietly looking at recently unearthed HDD archives).

                                                                                                1. 7

                                                                                                  Well, this is indeed a nice blast from the past I wasn’t expecting to see this evening!

                                                                                                  It’s only tangentially mentioned on the third page, but this is a game from ’00s indie darling ABA Games/Kenta Cho, and at the time it convinced college-age me that languages outside of C or C++ were just as capable of making fast graphical games.

                                                                                                  However this was right around the big split between D1/Phobos and D2/Tango, so I never really pursued it any further.

                                                                                                  1. 2

                                                                                                    Most typesetting language efforts seem to fail or be ignored because TeX (and its derivatives) is too pervasive, and so much has been built upon it that making sense of it all, let alone improving upon it, is a massive undertaking. How does the language work? Where even is the core of TeX in its codebase nowadays? Is it still written in WEB? I remember looking into it a few years ago and I could hardly find my way around.

                                                                                                    I saw a talk by somebody who was reimplementing the core algorithms of TeX in Clojure, and looking at his github profile now I can’t even find the repo anymore.

                                                                                                    1. 2

                                                                                                      There was a reimplementation of TeX in Java; now abandoned, unfortunately. I believe there was also some interest at one point in reimplementing it in OCaml.

                                                                                                      That said, I believe the modern TeX distributions are written in straight C, not WEB.

                                                                                                      1. 1

                                                                                                        I believe there was also some interest at one point in reimplementing it in OCaml.

                                                                                                        You mean Patoline? It seems to be functional but it doesn’t look very actively developed.

                                                                                                      2. 2

                                                                                                        There’s an interesting project called SILE. AFAIU, its author went the crazy hacker way, and basically butchered out a few core libraries from TeX, and glued them together with Lua, changing whatever else he wanted to his liking, attempting to make it simple and modern. The main reason why I think this project didn’t get much more popular (yet?), is that it still lacks math rendering support. I tried to contribute something in this area, but… kinda fizzled away after stumbling over some stupid integration issue that I didn’t have an idea how to resolve… I still have this somewhere on my TODO “bucket lists”, but no idea if I’ll manage to get back to it ever…

                                                                                                        1. 2

                                                                                                          I’ve been using Tectonic, which is a reimplementation of the web2c version (plus XeTeX) in Rust.

                                                                                                          1. 1

                                                                                                            It looks like it’s not a reimplementation, but just a wrapper? Does this get you anything compared with something like ConTeXt?

                                                                                                        1. 4

                                                                                                          I’d almost forgotten about the Alf language, and I’m extremely pleased to discover from this post that it has a production-capable successor. Kudos to the author for landing a job with it!

                                                                                                          1. 3

                                                                                                            Thanks! I thought at first maybe you meant the Algebraic Logic Functional programming language, a predecessor of the Curry language, of which I’m also a fan. But then I saw that you linked to the actual Alf in question. Happy to hear you know about it! I’m planning to write more about my experience with bmg and how to use it in production systems, in due time.

                                                                                                          1. 2

                                                                                                            The proposal outlined seems hugely over complicated for what is - as the commentary suggests - almost a complete non-issue. I’m surprised it’s generating so much discussion, unless I’m missing something important.

                                                                                                            1. 3

                                                                                                              It seems like a real issue to me. How are people supposed to connect to devices on their local network?

                                                                                                              The past solution has been to use their desktop’s, laptop’s, or smartphone’s web browser to make an HTTP connection, via mDNS or to the device’s IP address. This had the great advantages of being simple, universal, and non-expiring. Unfortunately, browsers are increasingly restricting HTTP. Currently, many browsers will show scary warnings every time you try to make such a connection. This is annoying to people who know what’s going on, and a total deterrent to people who don’t. In the near future, browsers may disallow HTTP altogether.

                                                                                                              I know of 3 other solutions which are compatible with current and near-future browser security requirements:

                                                                                                              1. Devices can create self-signed certificates, but those are going the same way as HTTP – scary warnings in the web browser, which may soon become total stonewalls. It also requires more software and hardware than plain HTTP, and might one day become obsolete when browsers stop trusting the device’s SSL version.

                                                                                                              2. The connection can go through infrastructure on the internet, which allows for servers with real domain names to obtain real, normal SSL certificates and connect to the user’s computer and to their network gadget. This has the downside of requiring that the networked gadget be permanently connected (exposed) to the internet, and that someone has to maintain infrastructure in order for the devices to work. In order to avoid creating a botnet or a pile of e-waste, someone has to own and maintain the software for the devices and the central server, and keep the update mechanism working nearly perfectly.

                                                                                                              3. Network-gadget creators (including open-source communities) can produce special-purpose software for connecting to the device, and require users to download and run that. This may or may not actually be secure, but it can be made to work (on supported devices with supported operating systems that the user has permission to install/run software on) without having to jump through hoops like messing with DNS settings or installing certificates.

                                                                                                              OpenWRT mailing list user abnoeh has proposed a new, hybrid solution. The OpenWRT organization would be granted a limited certificate-authority delegation from a sponsoring authority, like Let’s Encrypt. The scheme takes advantage of the fact that many (most?) OpenWRT devices are home routers: connected to the internet, and located between the open internet and the users who probably want to connect to/configure their router. The OpenWRT router is verifiably assigned a certificate for a specific subdomain of openwrt.org, and intercepts and responds to outgoing DNS and HTTP(S) requests to that domain, basically acting as an authorized man-in-the-middle attack on openwrt.org.

                                                                                                              This scheme has the upsides of being currently technically possible, and of working on many users’ devices without tweaking settings, installing custom software, or clicking through security scarewalls. It has the major downsides that it needs support from a certificate authority, that it or the browser vendors could invalidate the whole scheme at any time, and that openwrt.org needs to take on the responsibilities of a certificate authority. It is also not a general solution to the problem of interacting with devices on the local network, many of which also run OpenWRT. As the world is being populated by more and more local devices, it would be great if there were an agreed-upon solution that wasn’t “connect to a central server on the internet”.

                                                                                                              1. 1

                                                                                                                Devices can create self-signed certificates…scary warnings…

                                                                                                                People might be doing this wrong. The way I’ve found to make this work is to:

                                                                                                                1. Create your own cert authority.
                                                                                                                2. Create a cert with the CA and another key.
                                                                                                                3. Add your CA to your keychain or certificates directory.
                                                                                                                4. Add the cert and key to your server.
                                                                                                                5. Add the IP address of the device to /etc/hosts on all your devices, or do some DNS setup.

                                                                                                                There is no scary warning when you do it this way, as opposed to using a bare self-signed cert. Please note that des3 is deprecated (it still works), so look into current openssl options if you’re going to use this for anything beyond development purposes.

                                                                                                                Here are some things you can use:

                                                                                                                root_certificate_authority_creation.sh

                                                                                                                #!/usr/bin/env bash
                                                                                                                mkdir -p ~/ssl/ && cd ~/ssl/
                                                                                                                # Name the key rootCA.key: the signing script below expects it as -CAkey
                                                                                                                openssl genrsa -des3 -out rootCA.key 2048
                                                                                                                openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem
                                                                                                                

                                                                                                                create_root_self_signed_certificate.sh

                                                                                                                #!/usr/bin/env bash
                                                                                                                openssl req -new -sha256 -nodes -out coolbox.local.csr -newkey rsa:2048 -keyout coolbox.local.key -config <( cat coolbox.local.cnf )
                                                                                                                
                                                                                                                openssl x509 -req -in coolbox.local.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out coolbox.local.crt -days 500 -sha256 -extfile v3.ext
                                                                                                                

                                                                                                                coolbox.local.cnf

                                                                                                                [req]
                                                                                                                default_bits = 2048
                                                                                                                prompt = no
                                                                                                                default_md = sha256
                                                                                                                distinguished_name = dn
                                                                                                                
                                                                                                                [dn]
                                                                                                                C=US
                                                                                                                ST=California
                                                                                                                L=SF
                                                                                                                O=End Point
                                                                                                                OU=Testing Domain
                                                                                                                emailAddress=admin@coolboxbro.com
                                                                                                                CN = coolbox.local
                                                                                                                

                                                                                                                v3.ext

                                                                                                                authorityKeyIdentifier=keyid,issuer
                                                                                                                basicConstraints=CA:FALSE
                                                                                                                keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
                                                                                                                subjectAltName = @alt_names
                                                                                                                
                                                                                                                [alt_names]
                                                                                                                DNS.1 = coolbox.local
                                                                                                                DNS.2 = *.coolbox.local
                                                                                                                

                                                                                                                add_root_to_KeyChain.sh

                                                                                                                #!/usr/bin/env bash 
                                                                                                                # Uncomment the one you need and check the internet to make sure it's somewhat right
                                                                                                                # MacOS
                                                                                                                #security add-trusted-cert -k /Library/Keychains/System.keychain -d rootCA.pem
                                                                                                                #Debian
                                                                                                                #sudo cp foo.crt /usr/local/share/ca-certificates/foo.crt
                                                                                                                #sudo update-ca-certificates
                                                                                                                #windows, i think this works but I'm not on windows anymore.
                                                                                                                #certutil -addstore -f "ROOT" new-root-certificate.crt
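Not part of the recipe above, but a quick end-to-end sanity check of the same CA-plus-leaf flow: this throwaway script builds a test CA and a leaf certificate in a temp directory (no des3 passphrase, so it runs unattended; all file names and subjects are placeholders) and verifies the chain with `openssl verify`.

```shell
#!/usr/bin/env bash
# Throwaway end-to-end check of the CA -> leaf -> verify flow.
set -e
dir=$(mktemp -d)
cd "$dir"

# 1. Create a test CA (unencrypted key so the script runs unattended).
openssl genrsa -out rootCA.key 2048
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 30 \
    -subj "/CN=Test CA" -out rootCA.pem

# 2. Create a leaf key + CSR, then sign the CSR with the CA.
openssl req -new -sha256 -nodes -newkey rsa:2048 -keyout leaf.key \
    -subj "/CN=coolbox.local" -out leaf.csr
openssl x509 -req -in leaf.csr -CA rootCA.pem -CAkey rootCA.key \
    -CAcreateserial -days 30 -sha256 -out leaf.crt

# 3. Verify the chain; prints "leaf.crt: OK" on success.
result=$(openssl verify -CAfile rootCA.pem leaf.crt)
echo "$result"
```

If this prints `leaf.crt: OK`, the same commands with your real names (and a passphrase-protected CA key) should behave the same way.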
                                                                                                                
                                                                                                                1. 5

                                                                                                                  If I gave this list to my non-programmer friends, they would simply walk away, because this doesn’t mean anything to them.

                                                                                                                  Next step, once you get the CA going, is to generate a request, generate a certificate, then install said certificate on the IoT device you have, so you can then manage it via a web browser …

                                                                                                                  Look, I have set up my own CA to play around with this stuff. And so far, I have 8 scripts (totaling over 250 lines) to maintain it, and I don’t think I have it fully right.

                                                                                                                  1. 2

                                                                                                                    Seconding spc476’s comment: even with something like smallstep to streamline a local CA, it’s going to be beyond most non-programmers. And that’s before trying to roll it out to off-the-shelf IoT devices: see the Plex link in this thread.

                                                                                                                  2. 1

                                                                                                                    Your solution 1, the current situation, really isn’t a bad one in my opinion. Having to affirm that you trust a connection the first time you connect to a machine is a perfectly okay flow.

                                                                                                                    It would be nice if HTTPS didn’t tightly couple authentication and integrity. In this particular situation, where we’re probably navigating directly to a local IP address, we want the latter much more than the former, but nevertheless have to drag in the whole CA system.

                                                                                                                1. 1

                                                                                                                  This is part of General Magic’s pre-smartphone technology, which has been mentioned on lobsters here.

                                                                                                                  Fun fact: when I saw the GM documentary premiere at a local indie theater, Marc Porat himself was in attendance and did a Q&A afterwards.

                                                                                                                  He’s still convinced that agent-based programming should be researched further, but candidly admitted that the full implementation described here would be considered a virus (agents try to replicate themselves to servers and run code).

                                                                                                                  1. 1

                                                                                                                    The recent pushes in server-side WebAssembly remind me a little of this; I hope they take better notice of security research, as well as the ability to restrict resources beyond just security concerns (though that probably won’t solve the halting problem). Don’t want to risk an inverse of the JS model, where clients push untrusted code to servers.