Threads for tml

    1. 1

      I love seeing these posts about things that are just second nature to me, where someone new has just learned what is going on under the covers!

    2. 18

      The whole damn thing.

      Instead of having this Frankenstein’s monster of different OSs and different programming languages and browsers that are OSs and OSs that are browsers, just have one thing.

      There is one language. There is one modular OS written in this language. You can hot-fix the code. Bits and pieces are stripped out for lower powered machines. Someone who knows security has designed this thing to be secure.

      The same code can run on your local machine, or on someone else’s machine. A website is just a document on someone else’s machine. It can run scripts on their machine or yours. Except on your machine they can’t run unless you let them and they can’t do I/O unless you let them.

      There is one email protocol. Email addresses can’t be spoofed. If someone doesn’t like getting an email from you, they can charge you a dollar for it.

      There is one IM protocol. It’s used by computers including cellphones.

      There is one teleconferencing protocol.

      There is one document format. Plain text with simple markup for formatting, alignment, links and images. It looks a lot like Markdown, probably.

      Every GUI program is a CLI program underneath and can be scripted.

      (Some of this was inspired by legends of what LISP can do.)
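
      To make the “they can’t run unless you let them and they can’t do I/O unless you let them” idea above a bit more concrete, here is a minimal sketch of how such an opt-in grant might look. Every name in it is invented purely for illustration; nothing like this exists today.

          # Hypothetical sketch of "scripts can't run or do I/O unless you let them".
          # All names here are made up for illustration.
          from dataclasses import dataclass, field

          @dataclass
          class Grant:
              may_run: bool = False
              readable: set = field(default_factory=set)   # paths the script may read
              writable: set = field(default_factory=set)   # paths the script may write

          def run_remote_script(script, grant: Grant):
              if not grant.may_run:
                  raise PermissionError("user has not allowed this script to run")
              # The script only ever sees the I/O capabilities it was handed.
              return script(readable=grant.readable, writable=grant.writable)

          # A website's script gets no grant by default and stays inert;
          # here the user allows execution but no file access at all.
          run_remote_script(lambda **caps: print(caps), Grant(may_run=True))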

      1. 24

        Goodness, no - are you INSANE? Technological monocultures are one of the greatest non-ecological threats to the human race!

        1. 1

          I need some elaboration here. Why would it be a threat to have everyone use the same OS and the same programming language and the same communications protocols?

          1. 6

            One vulnerability to rule them all.

            1. 2

              Pithy as that sounds, it is not convincing to me.

              Having many different systems and languages, in order to get security by obscurity through many different sets of vulnerabilities, does not sound like a good idea.

              I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.

              1. 4

                It is not security through obscurity, it is security through diversity, which is a very different thing. Security through obscurity says that you may have vulnerabilities but you’ve tried to hide them so an attacker can’t exploit them because they don’t know about them. This works as well as your secrecy mechanism. It is generally considered bad because information disclosure vulnerabilities are the hardest to fix and they are the root of your security in a system that depends on obscurity.

                Security through diversity, in contrast, says that you may have vulnerabilities but they won’t affect your entire fleet. You can build reliable systems on top of this. For example, the Verisign-run DNS roots use a mixture of FreeBSD and Linux and a mixture of bind, unbound, and their own in-house DNS server. If you find a Linux vulnerability, you can take out half of the machines, but the other half will still work (just slower). Similarly, a FreeBSD vulnerability can take out half of them. A bind or unbound vulnerability will take out a third of them. A bind vulnerability that depends on something OS-specific will take out about a sixth.
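
                Rough arithmetic behind those fractions, assuming a hypothetical fleet split evenly across every OS/server combination (not necessarily the real layout):

                    # Hypothetical fleet: 2 OSes x 3 DNS servers, deployed in equal shares.
                    from itertools import product

                    fleet = list(product(["FreeBSD", "Linux"], ["bind", "unbound", "in-house"]))

                    def affected(pred):
                        """Fraction of the fleet knocked out by a vulnerability matching pred."""
                        return sum(1 for node in fleet if pred(node)) / len(fleet)

                    print(affected(lambda n: n[0] == "Linux"))         # 0.5  -- OS vulnerability
                    print(affected(lambda n: n[1] == "bind"))          # 0.33 -- DNS-server vulnerability
                    print(affected(lambda n: n == ("Linux", "bind")))  # 0.17 -- OS-specific bind vulnerability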

                This is really important when it comes to self-propagating malware. Back in the XP days, there were several worms that would compromise every Windows machine on the local network. I recall doing a fresh install of Windows XP and connecting it to the university network to install Windows update: it was compromised before it was able to download the fix for the vulnerability that the worm was exploiting. If we’d only had XP machines on the network, getting out of that would have been very difficult. Because we had a load of Linux machines and Macs, we were able to download the latest roll-up fix for Windows, burn it to a CD, redo the install, and then do an offline update.

                Looking at the growing Linux / Docker monoculture today, I wonder how much damage a motivated individual with a Linux remote arbitrary-code execution vulnerability could do.

                1. 1

                  Sure, but is this an intentional strategy? Did we set out to have Windows and Mac and Linux in order that we could prevent viruses from spreading? It’s an accidental observation and not a really compelling one.

                  I’ve pointed out my thinking in this part of the thread https://lobste.rs/s/sdum3p/if_you_could_rewrite_anything_from#c_ennbfs

                  In short, there must be more principled ways of securing our computers than hoping multiple green field implementations of the same application have different sets of bugs.

              2. 3

                A few examples come to mind though—Heartbleed (which affected anyone using OpenSSL) and Spectre (anyone using the x86 platform). Also, Microsoft Windows for years had plenty of critical exploits because it had well over 90% of the desktop market.

                You might also want to look up the impending doom of bananas, because over 90% of bananas sold today are genetic clones (it’s basically one plant) and there’s a fungus threatening to kill the banana market. A monoculture is a bad idea.

                1. 1

                  Yes, for humans (and other living things) the idea of immunity through obscurity (to coin a phrase) is evolutionarily advantageous. Our varied responses to COVID are one such immediate example. It does have the drawback that it makes it harder to develop therapies, since we see population specificity in responses.

                  I don’t buy that we need to employ the same idea in an engineered system. It’s a convenient, back-ported bullet-list advantage of having a chaotic mess of OSes and programming languages, but it certainly wasn’t intentional.

                  I’d rather have an engineered, intentional robustness to the systems we build.

                  1. 4

                    To go in a slightly different direction—building codes. The farther north you go, the steeper roofs tend to get. In Sweden, one needs a steep roof to shed snow buildup, but where I live (South Florida, just north of Cuba) building such a roof would be a waste of resources because we don’t have snow—we just need a shallow angle to shed rain water. Conversely, we don’t need codes to deal with earthquakes, nor does California need to deal with hurricanes. Yet it would be so much simpler to have a single building code in the US. I’m sure there are plenty of people who would love to force such a thing everywhere if only to make their lives easier (or for rent-seeking purposes).

                    1. 2

                      We have different houses for different environments, and we have different programs for different use cases. This does not mean we need different programming languages.

              3. 2

                I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.

                In principle, yeah. But even the best security engineers are human and prone to fail.

                If every deployment was the same version of the same software, then attackers could find an exploitable bug and exploit it across every single system.

                Would you like to drive a car whose engine model is known to blow up, killing everyone inside? If all cars are the same, they’ll all explode. We’d eventually move back to horse and buggy. ;-) Having a variety of cars helps mitigate the issues other cars have, while still having problems of its own.

                1. 1

                  In this heterogeneous system we have more bugs (assuming the same rate of bugs everywhere) and fewer reports (since there are fewer users per system) and a more drawn out deployment of fixes. I don’t think this is better.

                  1. 1

                    Sure, you’d have more bugs. But the bugs would (hopefully) be in different, distinct places. One car might blow up, another might just blow a tire.

                    From an attacker’s perspective, if everyone drives the same car and the attacker knows that the flaws of one car are reproducible with a 100% success rate, then the attacker doesn’t need to spend time/resources on other cars. The attacker can just rinse and repeat: all are vulnerable to the same bug, and all can be exploited in the same manner reliably, time after time.

                    1. 3

                      To go by the car analogy, the bugs that would be uncovered by drivers rather than during the testing process would be rare ones: say, hitting the gas pedal and the brake at the same time exposes a bug in the ECU that leads to total loss of power at any speed.

                      I’d rather drive a car a million other drivers have been driving than drive a car that’s driven by 100 people. Because over a million drivers it’s much more likely someone hits the gas and brake at the same time and uncovers the bug which can then be fixed in one go.
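
                      A quick back-of-the-envelope check on that intuition, with a made-up per-driver probability of ever hitting both pedals at once:

                          # Chance that at least one driver ever triggers the rare gas+brake bug,
                          # assuming (invented number) a 1-in-100,000 chance per driver.
                          p = 1e-5
                          for drivers in (100, 1_000_000):
                              print(drivers, 1 - (1 - p) ** drivers)
                          # 100        -> ~0.001   (the bug almost certainly ships unnoticed)
                          # 1,000,000  -> ~0.99995 (someone hits it, so it can be fixed for everyone)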

        1. 1

          Yes, that’s probably the LISP thing I was thinking of, thanks!

      2. 2

        I agree completely!

        We would need to put some safety measures in place, and there would have to be processes defined for how you go about suggesting/approving/adding/changing designs (that anyone can be a part of), but otherwise, it would be a boon for the human race. In two generations, we would all be experts in our computers and systems would interoperate with everything!

        There would be no need to learn new tools every X months. The UI would be familiar to everyone, and any improvements would be forced to go through human testing/trials before being accepted, since it would be used by everyone! There would be continual advancements in every area of life. Time would be spent on improving the existing experience/tool, instead of recreating or fixing things.

      3. 2

        I would also like to rewrite most stuff from the ground up. But monocultures aren’t good. Orthogonality in basic building blocks is very important. And picking the right abstractions to avoid footguns. Some ideas, not necessarily the best ones:

        • proven correct microkernel written in rust (or similar borrow-checked language), something like L4
        • capability based OS
        • no TCP/HTTP monoculture in networks (SCTP? pubsub networks?)
        • are our current processor architectures anywhere near sane? could safe concurrency be encouraged at a hardware level?
        • fewer walled gardens and less centralisation
        1. 2

          proven correct microkernel written in rust (or similar borrow-checked language), something like L4

          A solved problem. seL4, including support for capabilities.

          1. 5

            seL4 is proven correct by treating a lot of things as axioms and by presenting a programmer model that punts all of the bits that are difficult to get correct to application developers, making it almost impossible to write correct code on top of. It’s a fantastic demonstration of the state of modern proof tools, but it’s a terrible example of a microkernel.

            1. 2

              FUD unless proven otherwise.

              Counter-examples exist; seL4 can definitely be used, as demonstrated by many successful deployments.

              The seL4 foundation is getting a lot of high profile members.

              Furthermore, Genode, which is relatively easy to use, supports seL4 as a kernel.

      4. 2

        Someone wrote a detailed vision of rebuilding everything from scratch, if you’re interested. 1

        1. 11

          I never understood this thing.

          1. 7

            I think that is deliberate.

      5. 1

        And one leader to rule them all. No, thanks.

        1. 4

          Well, I was thinking of something even worse - design by committee, like for electrical stuff, but your idea sounds better.

      6. 1

        We already have this, dozens of them. All you need to do is point guns at everybody and make them use your favourite. What a terrible idea.

    3. 3

      I’ve been using variations on The Monopoly Interview for about 15 years now, ever since reading about it in Reg Braithwaite’s blog. The biggest thing I’ve found in making this successful is to clearly establish up-front: “This isn’t a test. This is a chance for you and me to have a conversation - I need to see how you think about a problem, and to see whether we’re a good fit as teammates.”

      A couple of changes I’ve made to the process laid out by Mr. Braithwaite:

      1. I make it a collaborative exercise - “Our boss/client/stakeholder has requested we deliver an online, multiplayer version of Monopoly. You and I are going to spend the next 20 minutes mapping out our initial response to the request.” And then you ACTUALLY work WITH them to build out an answer.

      2. I have filled multiple upper- and mid-management roles at my past 4 employers, so I frequently have to adapt the question to the role being interviewed for, as I am often interviewing across different disciplines. For example, for an SRE-type role, I often say “The client has built an online, multiplayer version of Monopoly and has engaged us to do the delivery of the application. Any day now, the client expects their good friend Justin Bieber to tweet out an endorsement; what do we need to know/do to prepare the application for this kind of flood of activity?” The question is amazingly flexible and so far I have been able to trivially adapt it to every role I’ve ever run an interview for - front-end engineering, back-end engineering, SRE, DevOps, QA, UI/UX/design, content marketing, management/mentoring, etc.

      3. Be prepared with other board games for people who are not familiar with Monopoly; I’ve found being familiar with at least the rules to Chess, Boggle, and Ludo (at least the variant popularized in the US as the children’s game “Sorry”) to be enough to cover every interview I’ve ever run since starting to use Monopoly.

      Not only have I personally found great success with the hires who have been through this interview process, but everyone who has been through it with me has reported finding it one of the best interviews they’ve ever had, and I know of at least 14 companies that now use it as their base interview process, either because I introduced it at that company or because previous hires of mine who moved on to other companies took it with them.

    4. 22

      After averaging north of 80 hours/week working in the tech industry for 20+ years, I am taking a month off of work to get some professional help. What I used to write off as a noble work ethic has developed into a serious mental health problem - I never feel like I dare take a day off, for fear employers will realize they never needed me anyway; the projects and relationships I’ve built up will collapse like a house of cards; my wife will stop loving me because I can no longer provide for our family; and, generally speaking, I will be exposed as a giant fraud who should never have been employed in this industry to begin with.

      The first task I’ve set myself for Tuesday morning is to identify a mental health professional who can help me work through this.

      1. 2

        Good job, and thank you for sharing! It’s incredibly valuable to have people see others engage with mental health issues as they would any other illness.

      2. 2

        That sounds like a tough situation. I’m glad you’ve taken the step to seek help.

        1. 2

          Thanks for the support. The hardest part about it is that this is so deeply embedded in my own self-definition that I’m really struggling to keep seeing it as a problem, even though when I process it completely logically it is obvious. I have been added to the waitlist for a few therapists in the area; it turns out that due to the pandemic almost everyone is booked weeks out.