Threads for 1amzave

  1. 5

    Company: Equinix Metal

    Company site: https://metal.equinix.com/

    Position: Senior Firmware Engineer

    Location: Remote (US/UK/EU preferred, other locations potentially considered)

    Description: Help develop OpenBMC for deployment on our bare-metal cloud servers! Past experience with Linux kernel/firmware/embedded development, electronics, and open-source community participation desired; see the posting for more information.

    Tech stack: OpenBMC – Yocto/OpenEmbedded, Linux kernel, u-boot, C, C++, occasional bits of Rust.

    Compensation: I don’t have any concrete numbers available, but it’s pretty competitive. US-based applicants may appreciate that health benefits have recently been expanded to cover travel and lodging.

    Contact: email <zweiss at equinix.com>, or find me (username zevweiss) on the OpenBMC Discord server (please do get in touch if you’ve applied or have any questions!).

    1. 1

      Nearly impossible to detect on the running server. However, if you have something like a Pi-hole looking for DNS exfiltration attempts, this becomes much easier to detect. It does require multiple layers of protection though, I’ll give it that.

      1. 2

        Since I haven’t seen any mention of it tampering with the kernel or hooking actual syscalls (as opposed to userspace syscall wrappers), it sounds like its concealment mechanisms should be pretty simple to bypass using statically-linked executables? (A static busybox build, say.)
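
        To illustrate (a sketch of the general technique, not code from this particular malware; the hooked call and hidden filename are made up): an LD_PRELOAD-style hook interposes on libc functions via the dynamic linker, which a statically-linked binary simply never consults.

        ```c
        /* hook.c: sketch of an LD_PRELOAD concealment shim (illustrative only).
         * Build: gcc -shared -fPIC hook.c -o hook.so -ldl
         * Use:   LD_PRELOAD=./hook.so ls    (a static busybox ls never loads it)
         */
        #define _GNU_SOURCE
        #include <dirent.h>
        #include <dlfcn.h>
        #include <string.h>

        struct dirent *readdir(DIR *dirp)
        {
            static struct dirent *(*real_readdir)(DIR *);
            struct dirent *ent;

            if (!real_readdir)
                real_readdir = (struct dirent *(*)(DIR *))dlsym(RTLD_NEXT, "readdir");

            /* Hide directory entries matching the (hypothetical) payload name. */
            while ((ent = real_readdir(dirp)) != NULL &&
                   strcmp(ent->d_name, "payload.so") == 0)
                ;
            return ent;
        }
        ```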

        1. 1

          This was my take. LD_PRELOAD wouldn’t work in the statically linked context.

        2. 1

          Or if you’re running in AWS there’s also their guardduty alert which I hope would pick it up: https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-ec2.html#backdoor-ec2-ccactivitybdns

          1. 1

            The grsecurity patchset includes a feature called Trusted Path Execution (TPE). It can integrate with the RTLD to completely mitigate LD_PRELOAD abuses. I’m working on implementing something similar in HardenedBSD this weekend. :-)

          1. 18

            As GitHub’s post notes, Atom is the thing that gave us Electron. That’s going to be with us a long time.

            1. 37

              But other than that, Mrs. Lincoln, how was the play?

              1. 15

                Know what’s cooler than pissing and moaning about Electron? Taking a web codebase and getting three desktop apps for cheap. I run a 2013 MBP and good Electron apps are fine. What the hell are people complaining about? Is this some sort of purity test?

                1. 11

                  For me, personally, many Electron apps are quite sluggish and far less resource-efficient than native apps (Discord, Skype, etc.).

                  1. 1

                    There’s definitely good and bad electron apps. Slack, VS Code and Spotify are very snappy and do enough stuff to justify the extra memory usage, while Discord and Signal are absolute turds.

                    At the end of the day the memory usage thing is a bit of a canard. I have a 2018 laptop with 16GB of RAM, regularly run Goland, PyCharm (multiple instances), Spotify, Slack, Firefox, Chrome, VS Code, Obsidian… and still have a few GB of RAM to spare.

                    1. 2

                      So in other words, running a bunch of Electron apps takes gigabytes upon gigabytes of memory and you’d be screwed if you had only 8GB.

                  2. 9

                    Doing something for cheap that degrades the user experience is cool for managers, but not for users. If good Electron apps run fine on a 2013 laptop, think of what bad Electron apps do on a 2008 laptop.

                    1. 1

                      I grew up in a developing country, and even I find it hard to shed a tear for those poor people using a laptop that is almost a decade and a half old.

                      1. 3

                        The slope of the Moore’s Law performance curve has leveled off significantly in the last ~10 years or so; the difference between a 2008 computer and a 2022 computer is a lot smaller than a 1994 computer and a 2008 computer. If it works well enough (or perhaps, if the only reason it wouldn’t work well enough is shitty, resource-gluttonous software), why spend money and create more e-waste replacing it?

                    2. 3

                      Maybe desktop applications are not desirable. Maybe Web applications are an anti-pattern. It’s not a purity test, but a realization that we made an architectural wrong turn.

                      1. 3

                        If you’re against desktop and web applications then are you for a new platform? How does it differ from the web beyond just being better somehow?

                        1. 1

                          Maybe the concept of “application” – the deliverable, the product, the artifact which is sold to consumers – is incorrect.

                          At a minimum, we could imagine replacing the Web with something even more amenable to destroying copyright and empowering users of Free Software. A content-addressed system is one possible direction.

                      2. 2

                        Electron apps tend to crash my Wayland desktop if they’re not updated. They rarely are, like Discord’s Electron hasn’t been updated for ages.

                        Sure there are always ways to skirt around the issue, but we have a lot of resources yet most of them are spent on apps that run worse. Often those apps use Electron.

                        We shouldn’t have worse resource usage and waste of energy just because some managers think it’s cheap on the short run.

                        1. 1

                            Nobody’s forcing you to use those apps. Just take your money somewhere else and notify those managers that they are losing your business because they use Electron.

                          1. 6

                            Pretty much everyone is forcing me to use Slack, though. To the point where it was actually a significant factor in finally deciding I had to buy a new computer a couple years ago, with as much RAM as I could fit in it so that Slack didn’t make the fan practically lift it off the desk. Yeah there’s a web client but it doesn’t have all the integrations and blah blah. And I’d say 90% of the companies & projects I’ve worked with & on in the last 3-4 years have required me to use it, no matter how much I don’t like it.

                            1. 6

                              I am forced to use Discord if I want the community around my games to thrive.

                              I’m locked in to these systems, that’s the whole point of making them like this.

                              Same with Slack, but for work.

                                Edit: Not to forget all my friends who use Discord, and good luck trying to convince them to use any alternatives.

                          2. 2

                            Yeah

                          3. 0

                            Sparks - So Tell Me Mrs. Lincoln Aside From That How Was The Play? (Official Audio) https://youtu.be/OuHGmtdJrDM?list=OLAK5uy_ntUoHXUt38rtp3L91dpdq-n7l776TF0nE

                          4. 1

                              Node-webkit existed and was a big deal before that. Electron is not the breakthrough piece of software many are assuming it is.

                            1. 2

                              As did Microsoft’s HTA in 1998, but in the end, it was Electron that got mass adoption.

                          1. 6

                            Nice. For a while I used DNS TXT records as an alternative to twitter, though it fell into disuse and I recently abandoned it for a honk instance.

                            Also in the category of DNS-based hacks: iodine (IP-over-DNS tunnel).

                            1. 1

                              Hah, coincidentally, I too recently switched to a honk instance from Pleroma. Great stuff.

                            1. 16

                              In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs. Then Node started warning me about security problems in some of those libraries. I ended up taking some time finding alternative packages with fewer dependencies.

                              On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t. It’s cool to look at how tiny and efficient code can be — a Scheme interpreter in 4KB! The original Mac OS was 64KB! — but yowza, is it ever difficult to code that way.

                              There was an early Mac word processor — can’t remember the name — that got a lot of love because it was super fast. That’s because they wrote it in 68000 assembly. It was successful for some years, but failed by the early 90s because it couldn’t keep up with the feature set of Word or WordPerfect. (I know Word has long been a symbol of bloat, but trust me, Word 4 and 5 on Mac were awesome.) Adding features like style sheets or wrapping text around images took too long to implement in assembly compared to C.

                              The speed and efficiency of how we’re creating stuff now is crazy. People are creating fancy OSs with GUIs in their bedrooms with a couple of collaborators, presumably in their spare time. If you’re up to speed with current Web tech you can bring up a pretty complex web app in a matter of days.

                              1. 24

                                I don’t know, I think there’s more to it than just “these darn new languages with their package managers made dependencies too easy, in my day we had to manually download Boost uphill both ways” or whatever. The dependencies in the occasional Swift or Rust app aren’t even a tenth of the bloat on my disk.

                                It’s the whole engineering culture of “why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application, and then implement your glorified scp GUI application inside that, so that you never have to learn anything other than the one and only tool you know”. Everything’s turned into 500megs worth of nail because we’ve got an entire generation of Hammer Engineers who won’t even consider that it might be more efficient to pick up a screwdriver sometimes.

                                We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t

                                That’s the argument, but it’s not clear to me that we haven’t severely over-corrected at this point. I’ve watched teams spend weeks poking at the mile-high tower of leaky abstractions any react-native mobile app teeters atop, just to try to get the UI to do what they could have done in ten minutes if they’d bothered to learn the underlying platform API. At some point “make all the world a browser tab” became the goal in-and-of-itself, whether or not that was inefficient in every possible dimension (memory, CPU, power consumption, or developer time). It’s heretical to even question whether or not this is truly more developer-time-efficient anymore, in the majority of cases – the goal isn’t so much to be efficient with our time as it is to just avoid having to learn any new skills.

                                The industry didn’t feel this sclerotic and incurious twenty years ago.

                                1. 7

                                  It’s heretical to even question whether or not this is truly more developer-time-efficient anymore

                                  And even if we set that question aside and assume that it is, it’s still just shoving the costs onto others. Automakers could probably crank out new cars faster by giving up on fuel-efficiency and emissions optimizations, but should they? (Okay, left to their own devices they probably would, but thankfully we have regulations they have to meet.)

                                  1. 1

                                    left to their own devices they probably would, but thankfully we have regulations they have to meet.

                                    Regulations. This is it.

                                        I’ve long believed that this is very important in our industry. As earlier comments say, you can make a complex web app after work in a weekend. But then there are people, in the above-mentioned auto industry, who take three sprints to set up a single screen with a table, a popup, and two forms. That’s after they’ve pulled in the entire internet’s worth of dependencies.

                                        On the one hand, we don’t want to be gatekeeping. We want everyone to contribute. When dhh said we should stop celebrating incompetence, the majority of people around him called this gatekeeping. Yet when we see or say something like this - don’t build bloat, or something along those lines - everyone agrees.

                                        I think the middle line should be somewhere in between. Let individuals do whatever the hell they want. But regulate “selling” stuff for money or advertisement eyeballs or anything similar. If an app is more than X MB (some reasonable target), it has to get certified before you can publish it. Or maybe only if it’s a popular app. Or, if a library is included in more than X apps, then that lib either gets “certified”, or further apps using it are banned.

                                        I am sure that is a huge, immensely big can of worms. There will be many problems there. But if we don’t start cleaning up the shit, it’s going to pile up.

                                        A simple example - if controversial - is Google. When they start punishing a web app for not rendering within 1 second, everybody on the internet (that wants to be at the top of Google) starts optimizing for performance. So, it can be done. We just have to set up - and maintain - a system that deals with the problem… well, systematically.

                                  2. 1

                                    why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application

                                        Yeah. One of the things that confuses me is why apps bundle a browser when platforms already come with browsers that can easily be embedded in apps. You can use Apple’s WKWebView class to embed a Safari-equivalent browser in an app that weighs in at under a megabyte. I know Windows has similar APIs, and I imagine Linux does too (modulo the combinatorial expansion of number-of-browsers times number-of-GUI-frameworks).

                                    I can only imagine that whoever built Electron felt that devs didn’t want to deal with having to make their code compatible with more than one browser engine, and that it was worth it to shove an entire copy of Chromium into the app to provide that convenience.

                                    1. 1

                                      Here’s an explanation from the Slack developer who moved Slack for Mac from WebKit to Electron. And on Windows, the only OS-provided browser engine until quite recently was either the IE engine or the abandoned EdgeHTML.

                                  3. 10

                                    On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                                    The problem is that your dependencies can behave strangely, and you need to debug them.

                                    Code bloat makes programs hard to debug. It costs programmer time.

                                    1. 3

                                      The problem is that your dependencies can behave strangely, and you need to debug them.

                                      To make matters worse, developers don’t think carefully about which dependencies they’re bothering to include. For instance, if image loading is needed, many applications could get by with image read support for one format (e.g. with libpng). Too often I’ll see an application depend on something like ImageMagick which is complete overkill for that situation, and includes a ton of additional complex functionality that bloats the binary, introduces subtle bugs, and wasn’t even needed to begin with.
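
                                          As a sketch of how small the lighter-weight dependency can be (assuming libpng’s standard read API; error handling abbreviated), reading just a PNG’s dimensions needs nothing beyond libpng itself:

                                          ```c
                                          /* pngdim.c: print a PNG's dimensions using only libpng.
                                           * Build: gcc pngdim.c -o pngdim -lpng
                                           */
                                          #include <png.h>
                                          #include <stdio.h>

                                          int main(int argc, char **argv)
                                          {
                                              if (argc != 2) return 1;
                                              FILE *fp = fopen(argv[1], "rb");
                                              if (!fp) return 1;

                                              png_structp png = png_create_read_struct(PNG_LIBPNG_VER_STRING,
                                                                                       NULL, NULL, NULL);
                                              png_infop info = png_create_info_struct(png);
                                              if (setjmp(png_jmpbuf(png))) {  /* libpng reports errors via longjmp */
                                                  fclose(fp);
                                                  return 1;
                                              }

                                              png_init_io(png, fp);
                                              png_read_info(png, info);
                                              printf("%lu x %lu\n",
                                                     (unsigned long)png_get_image_width(png, info),
                                                     (unsigned long)png_get_image_height(png, info));

                                              png_destroy_read_struct(&png, &info, NULL);
                                              fclose(fp);
                                              return 0;
                                          }
                                          ```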

                                    2. 10

                                      On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                                          The problem is that computational resources vs. programmer time is just one axis along which this tradeoff is made: some others include security vs. programmer time, correctness vs. programmer time, and others I’m sure I’m just not thinking of right now. It sounds like a really pragmatic argument when you’re considering your costs, because we have been so thoroughly conditioned into ignoring our externalities. I don’t believe the state of contemporary software would look like it does if the industry were really in the habit of pricing in the costs incurred by others in addition to their own, although of course it would take a radically different incentive landscape to make that happen. It wouldn’t look like a code golfer’s paradise, either, because optimizing for code size and efficiency at all costs is also not a holistic accounting! It would just look like a place with somewhat fewer data breaches, somewhat fewer corrupted saves, somewhat fewer watt-hours turned into waste heat, and, yes, somewhat fewer features in the cases where their value didn’t exceed their cost.

                                      1. 7

                                        We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t

                                            But we aren’t. Because modern resource-wasteful software isn’t really released quicker. Quite the contrary: there is so much development overhead that we don’t see those exciting big releases anymore, with a dozen features everyone loves at first sight. They release new features in microscopic increments, so slowly that hardly any project survives 3-5 years without becoming obsolete or out of fashion.

                                            What we are trading is quality for quantity. We lower the skill and knowledge barrier so much, to accommodate millions of developers who “learned how to program in one week”, and the results are predictably what this post talks about.

                                        1. 6

                                          I’m as much against bloat as everyone else (except those who make bloated software, of course—those clearly aren’t against it). However, it’s easy to forget that small software from past eras often couldn’t do much. The original Mac OS could be 64KB, but no one would want to use such a limited OS today!

                                          1. 5

                                            The original Mac OS could be 64KB, but no one would want to use such a limited OS today!

                                            Seems some people (@neauoire) do want exactly that: https://merveilles.town/@neauoire/108419973390059006

                                            1. 6

                                              I have yet to see modern software that is saving the programmer’s time.

                                              I’m here for it, I’ll be cheering when it happens.

                                              This whole thread reminds me of a little .txt file that came packaged into DawnOS.

                                              It read:

                                              Imagine that software development becomes so complex and expensive that no software is being written anymore, only apps designed in devtools. Imagine a computer, which requires 1 billion transistors to flicker the cursor on the screen. Imagine a world, where computers are driven by software written from 400 million lines of source code. Imagine a world, where the biggest 20 technology corporation totaling 2 million employees and 100 billion USD revenue groups up to introduce a new standard. And they are unable to write even a compiler within 15 years.

                                              “This is our current world.”

                                              1. 11

                                                I have yet to see modern software that is saving the programmer’s time.

                                                People love to hate Docker, but having had the “pleasure” of doing everything from full-blown install-the-whole-world-on-your-laptop dev environments to various VM applications that were supposed to “just work”… holy crap does Docker save time not only for me but for people I’m going to collaborate with.

                                                Meanwhile, programmers of 20+ years prior to your time are equally as horrified by how wasteful and disgusting all your favorite things are. This is a never-ending cycle where a lot of programmers conclude that the way things were around the time they first started (either programming, or tinkering with computers in general) was a golden age of wise programmers who respected the resources of their computers and used them efficiently, while the kids these days have no respect and will do things like use languages with garbage collectors (!) because they can’t be bothered to learn proper memory-management discipline like their elders.

                                                1. 4

                                                      I’m of the generation that started programming at the tail end of Ruby and Objective-C, and I would definitely not call this the golden age; if anything, looking back at this period now, it looks like a mid-slump.

                                                2. 4

                                                  I have yet to see modern software that is saving the programmer’s time.

                                                  What’s “modern”? Because I would pick a different profession if I had to write code the way people did prior to maybe the late 90s (at minimum).

                                                  Edit: You can pry my modern IDEs and toolchains from my cold, dead hands :-)

                                            2. 6

                                                    Node is an especially good villain here because JavaScript has long specifically encouraged lots of small dependencies and has little to no stdlib, so you need a package for nearly everything.

                                              1. 5

                                                It’s kind of a turf war as well. A handful of early adopters created tiny libraries that should be single functions or part of a standard library. Since their notoriety depends on these libraries, they fight to keep them around. Some are even on the boards of the downstream projects and fight to keep their own library in the list of dependencies.

                                              2. 6

                                                We’re trading CPU time and memory, which are ridiculously abundant

                                                CPU time is essentially equivalent to energy, which I’d argue is not abundant, whether at the large scale of the global problem of sustainable energy production, or at the small scale of mobile device battery life.

                                                for programmer time, which isn’t.

                                                      In terms of programmer-hours available per year (which of course unit-reduces to active programmers), I’m pretty sure that resource is more abundant than it’s ever been at any point in history, and only getting more so.

                                                1. 2

                                                  CPU time is essentially equivalent to energy

                                                  When you divide it by the CPU’s efficiency, yes. But CPU efficiency has gone through the roof over time. You can get embedded devices with the performance of some fire-breathing tower PC of the 90s, that now run on watch batteries. And the focus of Apple’s whole line of CPUs over the past decade has been power efficiency.

                                                  There are a lot of programmers, yes, but most of them aren’t the very high-skilled ones required for building highly optimal code. The skills for doing web dev are not the same as for C++ or Rust, especially if you also constrain yourself to not reaching for big pre-existing libraries like Boost, or whatever towering pile of crates a Rust dev might use.

                                                  (I’m an architect for a mobile database engine, and my team has always found it very difficult to find good developers to hire. It’s nothing like web dev, and even mobile app developers are mostly skilled more at putting together GUIs and calling REST APIs than they are at building lower-level model-layer abstractions.)

                                                2. 2

                                                        Hey, I don’t mean to be a smart-ass here, but I find it ironic that you start your comment blaming “high-level languages with package systems” and immediately admit that you blindly picked a library for the job and that you could solve the problem just by “taking some time finding alternative packages with fewer dependencies”. That doesn’t sound like a problem with either the language or the package manager, honestly.

                                                  What would you expect the package manager to do here?

                                                  1. 8

                                                          I think the problem actually lies with the language in this case. JavaScript has such a piss-poor standard library and dangerous semantics (that the standard library doesn’t try to remedy, either) that sooner rather than later you will have a transitive dependency on isOdd, isEven and isNull, because even those simple operations aren’t exactly simple in JS.

                                                          Despite being made to live in a web browser, the JS standard library has very few affordances for working with things like URLs, and despite being targeted toward user interfaces, it has very few affordances for working with dates, numbers, lists, or localisations. This makes dependency graphs both deep and filled with duplicated effort, since two dependencies in your program may depend on different third-party implementations of what should already be in the standard library, themselves duplicating what you already have in your operating system.

                                                    1. 2

                                                            It’s really difficult for me to counter an argument that is basically “I don’t like JS”. The question was never about that language; it was about “high-level languages with package systems”, but your answer hyper-focuses on JS and does not address languages like Python, for example, which is a “high-level language with a package system” that also has an “is-odd” package (which honestly I don’t get what that has to do with anything).

                                                      1. 1

                                                        The response you were replying to was very much about JS:

                                                        In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs.

                                                        For what it’s worth, whilst Python may have an isOdd package, how often do you end up inadvertently importing it in Python as opposed to “batteries-definitely-not-included” Javascript? Fewer batteries included means more imports by default, which themselves depend on other imports, and a few steps down, you will find leftPad.

                                                        As for isOdd, npmjs.com lists 25 versions thereof, and probably as many isEven.

                                                        1. 1

                                                          and a few steps down, you will find leftPad

                                                          What? What kind of data do you have to back up a statement like this?

                                                                You don’t like JS; I get it, I don’t like it either. But the unfair criticism is what really rubs me the wrong way. We are technical people; we are supposed to make decisions based on data. But comments like this, which just generate division without the slightest resemblance of a solid argument, do no good to a healthy discussion.

                                                                Again, none of the arguments are true for JS exclusively. Python is batteries included, sure, but it’s one of the few. And you conveniently leave out of your quote the part where OP admits that with a little effort the “problem” became a non-issue. And that little effort is what we get paid for; that’s our job.

                                                    2. 3

                                                      I’m not blaming package managers. Code reuse is a good idea, and it’s nice to have such a wealth of libraries available.

                                                      But it’s a double edged sword. Especially when you use a highly dynamic language like JS that doesn’t support dead-code stripping or build-time inlining, so you end up having to copy an entire library instead of just the bits you’re using.

                                                    3. 1

                                                      On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                                                      We’re trading CPU and memory for the time of some programmers, but we’re also adding the time of other programmers onto the other side of the balance.

                                                      1. 1

                                                        I definitely agree with your bolded point - I think that’s the main driver for this kind of thing.

                                                        Things change if there’s a reason for them to be changed. The incentives don’t really line up currently to the point where it’s worth it for programmers/companies to devote the time to optimize things that far.

                                                        That is changing a bit already, though. For example, performance and bundle size are getting seriously considered for web dev these days. Part of the reason for that is that Google penalizes slow sites in their rankings - a very direct incentive to make things faster and more optimized!

                                                      1. 1

                                                              The logging code records a thread ID, but I don’t see any mention of what use (if any) the trace replay makes of it – using it to model concurrency could make the replay more accurate for lock contention and such, but then you also get into questions of how the performance differences between different allocators affect that concurrency (two allocations that were concurrent with the original allocator might not have ended up that way with another).

                                                        1. 3

                                                                My experience is exactly the opposite. We all know the plural of anecdote isn’t data, but still. I had EXT3/4 partitions suffer all kinds of power loss and hardware failures, and they were always recoverable.

                                                                But that one time I decided to go with XFS for the root partition, a power loss event killed it instantly. The data partition, which was EXT3, just needed a routine fsck. I haven’t used XFS since then: once bitten, twice shy, you know.

                                                          I still haven’t tried BTRFS, so I can’t say anything on that subject yet.

                                                          1. 2

                                                            Out of curiosity, how long ago was this? I know XFS used to have pretty significant reliability issues in the past, but I’ve been using it nowadays for quite a while without issues.

                                                            1. 1

                                                                    That XFS incident was in 2008 or so, a very long time ago. None of my friends use XFS, so I had no way to know whether it had improved, and after that event I didn’t feel like trying it again without solid proof that it had. :)

                                                              1. 1

                                                                      Ah interesting. I know by around 2012 a lot of major improvements had either very recently been, or were soon to be, pushed into XFS (https://xfs.org/images/d/d1/Xfs-scalability-lca2012.pdf), which included the addition of checksums on metadata. I do also know that it had a strong tendency to lose data on power loss in the past, but as some very anecdotal evidence, I’ve been using it for a few years now on my personal system, and it’s endured at least several dozen forced shutdowns without data loss.

                                                            2. 2

                                                              Indeed – this post and the ensuing discussion reinforces my belief in my Grand Unifying Theory of Filesystems.

                                                              1. 1

                                                                Paraphrase:

                                                                For all filesystems there exists a user that says “$fs ate my data”

                                                            1. 1

                                                              “any kernel version” That’s probably not true.

                                                              1. 3

                                                                If you take a maximally-literal interpretation, sure, it’s not going to work on HURD or FreeBSD or Linux 2.4, but I think it would be fair to interpret it as meaning it will work with any kernel for which that could reasonably be expected (i.e. a recent-ish Linux with CONFIG_IO_URING=y).
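
                                                                        (As a quick sketch of how one might verify that expectation at runtime, assuming your libc headers define __NR_io_uring_setup: ENOSYS from io_uring_setup is what you’d get on a kernel without it.)

                                                                        ```c
                                                                        /* uring-probe.c: does the running kernel support io_uring? */
                                                                        #include <errno.h>
                                                                        #include <linux/io_uring.h>
                                                                        #include <stdio.h>
                                                                        #include <string.h>
                                                                        #include <sys/syscall.h>
                                                                        #include <unistd.h>

                                                                        int main(void)
                                                                        {
                                                                            struct io_uring_params params;
                                                                            memset(&params, 0, sizeof(params));

                                                                            int fd = syscall(__NR_io_uring_setup, 4, &params);
                                                                            if (fd >= 0) {
                                                                                puts("io_uring available");
                                                                                close(fd);
                                                                            } else if (errno == ENOSYS) {
                                                                                puts("no io_uring (kernel too old or CONFIG_IO_URING=n)");
                                                                            } else {
                                                                                printf("io_uring_setup failed: %s\n", strerror(errno));
                                                                            }
                                                                            return 0;
                                                                        }
                                                                        ```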

                                                              1. 5

                                                                I enjoyed reading this, though I’m not sure I agree entirely with its conclusions.

                                                                Turn it off, and turn it on again. Anything else is less principled.

                                                                I think it depends on the particulars of the situation. If you’re talking about a running process that’s discovered some internal error and can be easily and quickly restarted, yeah, crashing and restarting it probably makes sense. But if restarting is expensive (e.g. 10+ minutes of downtime for a bare-metal server reboot, or worse, the same across an entire cluster) and there’s a fairly simple/obvious fix that can be applied, why not do that?

                                                                        It seems like the reasoning in the article is founded on (what I see as) an overly optimistic picture of how well-understood your system’s state really is even when it appears to be functioning as intended. A running process, let alone an entire server or cluster, has many, many bits of state – the subset of those that its authors are aware of and thoroughly understand is a tiny fraction of the whole. Even before any bugginess has (detectably) reared its head, there’s a gigantic iceberg of subsurface state that we just kind of assume is in alignment with the tip of it that we can see. Subtle non-determinism can creep in from all sorts of places and manifest in that hidden state, from ASLR at the OS level to temperature-dependent differences in how many cycles it takes a PLL in your DRAM controller to lock when it comes out of reset (I’ve learned from experience that it’s entirely possible to run the exact same sequence of instructions from system power-on and get different behavior from one run to the next). There is no Mozart; it’s always jazz.

                                                                (I should clarify that this isn’t to say we shouldn’t strive to understand our systems and their states as thoroughly as possible, I just think it’s fair to acknowledge that that understanding is always going to be less than absolutely complete.)

                                                                1. 2

                                                                          I’m not sure you disagree with the author. If I’m right about the article’s implicit assumptions and yours, I think we all agree that a system that is functioning as intended is very much capable of concealing latent dysfunction, and it’s only when an error actually occurs that we are informed of the fact that there is a disconnect between our mental model of how the system should behave vs. how it’s actually behaving.

                                                                          But that’s the point: so long as the system is behaving as expected, even if we assume a priori that at least one such possible error state exists, we cannot know its specific nature until it rears its head (assume that we’ve exhausted every avenue for static analysis available to us, since none of those can save us if our spec is incorrect). Once we observe such an error, it’s incumbent on us to investigate its causes and expand our knowledge of how the true system state evolves, even if full knowledge of that evolution will always elude us.

                                                                          Crash-only behavior is valuable both because it surfaces those error states quickly and because, to your concern, it is the strategy that demands the least of us in terms of knowing precisely the ideal state of the system, the current state of the system, and what a viable path between those two might be. So then: because a crash-only strategy is the most resilient to imperfect knowledge, systems should be designed in the first place to minimize the expense of pursuing that strategy.

                                                                1. 1

                                                                  Mildly amusing:

                                                                  Being beautiful is not something I would say about an algorithm.

                                                                  …and in the video linked from the article:

                                                                  The algorithm still works. […] That is just so beautiful.

                                                                  1. 14

                                                                    I’m very curious how these companies address the fact that there are countries where smartphones are not universally owned (because of cost, or lack of physical security for personal belongings).

                                                                    1. 8

                                                                      At least Microsoft has multiple paths for 2FA - an app, or a text sent to a number. It’s hard to imagine them going all in on “just” FIDO.

                                                                      Now, as to whether companies should support these people - from a purely money-making perspective, if your customers cannot afford a smartphone, maybe they’re not worth that much as customers?

                                                                      A bigger issue is if public services are tied to something like this, but in that case, subsidizing smartphone use is an option.

                                                                      1. 24

                                                                        if your customers cannot afford a smartphone, maybe they’re not worth that much as customers?

                                                                                I had a longer post typed out, and I don’t think at all you meant this, but at a certain point we need to not think of people as simply customers, and begin to think that we’re taking over functions typically subsidized or heavily regulated by the government, like phones or mail. It was not that long ago that you could share a phone line (telcos were heavily regulated) with family members or friends when looking for a job or waiting to be contacted about something. Or pay bills using the heavily subsidized USPS. Or grab a paper to go through the classifieds to find a job.

                                                                        Now you need LinkedIn/Indeed, an email address, Internet, your own smartphone, etc. to do anything from paying bills to getting a job. So sure if you’re making a throwaway clickbait game you probably don’t need to care about this.

                                                                                But even on this very website: do we want someone who is not doing so well financially to be deprived of keeping up with news in their industry, or someone too young to have a cellphone to be barred from participating? I don’t think it is a god-given right, but the more people are denied access to things you or I have access to, the greater the divide becomes. Someone might have a laptop but no internet, with only the ability to borrow a neighbor’s wifi. Similarly, a family of four might not have a cell phone for every family member.

                                                                        I could go on but like discrimination or dealing with people of various disabilities it is something that’s really easy to forget.

                                                                        1. 15

                                                                          I should have been clearer. The statement was a rhetorical statement of opinion, not an endorsement.

                                                                          Viewing users as customers excludes a huge number of people, not just those too poor to have a computer/smartphone, but also people with disabilities who are simply too few to economically cater to. That’s why governments need to step in with laws and regulations to ensure equal access.

                                                                          1. 11

                                                                                    I think governments often think about this kind of accessibility requirement exactly the wrong way around. Ten or so years ago, I looked at the costs that were being passed on to businesses and community groups to make buildings wheelchair accessible. It was significantly more than the cost of buying everyone with limited mobility a motorised wheelchair capable of climbing stairs, even accounting for the fact that those were barely out of prototype and had a cost that reflected the need to recoup the R&D investment. If the money spent on wheelchair ramps had been invested in a mix of R&D and purchasing of external prosthetics, we would have spent the same amount and the folks currently in wheelchairs would be fighting crime in their robot exoskeletons. Well, maybe not the last bit.

                                                                            Similarly, the wholesale cost of a device capable of acting as a U2F device is <$5. The wholesale cost of a smartphone capable of running banking apps is around $20-30 in bulk. The cost for a government to provide one to everyone in a country is likely to be less than the cost of making sure that government services are accessible by people without such a device, let alone the cost to all businesses wanting to operate in the country.

                                                                            TL;DR: Raising people above the poverty line is often cheaper than ensuring that things are usable by people below it.

                                                                            1. 12

                                                                              Wheelchair ramps help others than those in wheelchairs - people pushing prams/strollers, movers, emergency responders, people using Zimmer frames… as the population ages (in developed countries) they will only become more relevant.

                                                                              That said, I fully support the development of powered exoskeletons to all who need or want them.

                                                                              1. 8

                                                                                        The biggest and most expensive problem around wheelchairs is not ramps, it’s turning space and door sizes. A wheelchair is wider (especially the battery-driven ones you are referring to) and needs more space to turn around than a standing human. Older buildings often have pathways and doors that are too narrow.

                                                                                        Second, all the wheelchairs and exoskeletons here would need to be custom, making them inappropriate for short-term disability or smaller issues, like walking problems that only need crutches. All that, while changing the building (or building it right in the first place) is as close to a one-size-fits-all solution as it gets.

                                                                                1. 5

                                                                                  I would love it if the government would buy me a robo-stroller, but until then, I would settle for consistent curb cuts on the sidewalks near my house. At this point, I know where the curb cuts are and are not, but it’s a pain to have to know which streets I can or can’t go down easily.

                                                                                2. 7

                                                                                  That’s a good point, though I think there are other, non-monetary concerns that may need to be taken into account as well. Taking smartphones for example, even if given out free by the government, some people might not be real keen on being effectively forced to own a device that reports their every move to who-knows-how-many advertisers, data brokers, etc. Sure, ideally we’d solve that problem with some appropriate regulations too, but that’s of course its own whole giant can of worms…

                                                                                  1. 2

                                                                                    The US government will already buy a low cost cellphone for you. One showed up at my house due to some mistake in shipping address. I tried to send it back, but couldn’t figure out how. It was an ancient Android phone that couldn’t do modern TLS, so it was basically only usable for calls and texting.

                                                                                    1. 2

                                                                                            Jokes aside - it is basically a requirement in a certain country I am from; if you get infected by Covid you get processed by the system, and outdoor cameras are monitored so you don’t go outside. But to be completely sure you’re staying at home during recovery, it is mandatory to install a government-issued application on your cellphone/tablet that tracks your movement. There are also official check-ups several times per day at random hours, where you verify your location over video call in said app.

                                                                                            If you fail to respond in time, or geolocation shows you left your apartment, you’ll automatically get a hefty fine.

                                                                                            Now, you say, it is possible to just tell them “I don’t own a smartphone” - then you’ll get a cheap but working government-issued Android tablet, or at least you’re supposed to; as with lots of other things, “the severity of the laws is compensated by their optionality”, so quite often the devices don’t get delivered at all.

                                                                                            By law you cannot decline the device - you’ll get fined, or they promise to bring you to the hospital as a mandatory measure.

                                                                                  2. 7

                                                                                          Thank you very much for this comment. I live in a country where “it is expected” to have a smartphone. The government is making everything into apps which are only available on the Apple App Store or Google Play. Since I am on social welfare I cannot afford a new smartphone every 3-5 years, and old ones are not supported, either by the app stores or by the apps themselves.

                                                                                    I have a feeling of being pushed out by society due to my lack of money. Thus I can relate to people in similar positions (larger families with low incomes etc.).

                                                                                    I would really like more people to consider that not everybody has access to new smartphones or even a computer at home.

                                                                                    I believe the Internet should be for everyone not just people who are doing well.

                                                                                3. 6

                                                                                  If you don’t own a smartphone, why would you own a computer? Computers are optional supplements to phones. Phones are the essential technology. Yes, there are weirdos like us who may choose to own a computer but not a smartphone for ideological reasons, but that’s a deliberate choice, not an economic one.

                                                                                  1. 7

                                                                                    In the U.S., there are public libraries where one can use a computer. In China, cheap internet cafés are common. If computer-providing places like these are available to non-smartphone-users, that could justify services building support for computer users.

                                                                                    1. 1

                                                                                          In my experience growing up in a low-income part of the US, most people there now have only smartphones. Most folks there use laptops only in office or school settings. It remains a difficulty for those going to college or getting office jobs. It was the same when I was growing up there, except there were no smartphones, so folks had flip phones. Parents often try to save up to buy their children nice smartphones.

                                                                                      I can’t say this is true across the US, but for where I grew up at least it is.

                                                                                      1. 1

                                                                                        That’s a good point, although it’s my understanding that in China you need some kind of government ID to log into the computers. Seems like the government ID could be made to work as a FIDO key.

                                                                                        Part of the reason a lot of people don’t have a computer nowadays is that if you really, really need to use one to do something, you can go to the library to do it. I wonder though if the library will need to start offering smartphone loans next.

                                                                                      2. 5

                                                                                        How are phones the “essential technology”? A flip phone is 100% acceptable these days if you just have a computer. There is nothing about a smartphone that’s required to exist, let alone survive.

                                                                                          A computer, on the other hand (which a smartphone is a poor approximation of), is borderline required to access crucial services outside of phone calls and direct visits. “Essential technology” is not a smartphone.

                                                                                        1. 2

                                                                                            There’s very little (outside work) that I can do on a computer but can’t do on a phone. IRC and image editing, basically. Also editing blog posts, because I do that in the shell.

                                                                                          I am comfortable travelling to foreign lands with only a phone, and relying on it for maps, calls, hotel reservations, reading books, listening to music…

                                                                                          1. 1

                                                                                              Flip phones were all phased out years ago. I have friends who deliberately use flip phones. It is very difficult to do unless you are ideologically committed to it.

                                                                                          2. 3

                                                                                            I’m curious about your region/job/living situation, and what it is about them that makes phones “the essential technology”. I barely need a phone to begin with, not to mention a smartphone. It’s really only good as car navigation and an alarm clock to me.

                                                                                            1. 1

                                                                                              People need other people to live. Most other people communicate via phone.

                                                                                              1. 1

                                                                                                It’s hardly “via phone” if it’s Signal/Telegram/FB/WhatsApp or some other flavor-of-the-week instant messenger. You can communicate with them on your PC just as well.

                                                                                                1. 4

                                                                                                  I mean I guess so? I’m describing how low income people in the US actually live, not judging whether it makes sense. Maybe they should all buy used Chromebooks and leech Wi-Fi from coffee shops. But they don’t. They have cheap smartphones and prepaid cards.

                                                                                                  1. 2

                                                                                                    You cannot connect to WhatsApp via the web interface without a smartphone running the WhatsApp app, and Signal (which does not have this limitation) requires a smartphone as the primary key, with the desktop app only acting as a subkey. I think Telegram also requires a smartphone app for initial provisioning.

                                                                                                    I think an Android emulator might be enough, if you can manually relay the SMS code from a flip phone.

                                                                                              2. 2

                                                                                                Your reasoning is logical if you’re presented with a budget and asked what to buy, but purchasing does not happen in a vacuum. You may inherit a laptop, borrow a laptop, or no longer be able to afford a month-to-month cell phone bill. Laptops also have a much longer life cycle than phones.

                                                                                                1. 4

                                                                                                  I’m not arguing that this is good, bad, or whatever. It’s just a fact that in the USA today if you are a low income person, you have a smartphone and not a personal computer.

                                                                                            1. 1

                                                                                              Heh, this is a neat hack.

                                                                                              I would expect booting the same filesystem twice would lead to massive filesystem corruption or kernel panics as two kernels write to the same block device at once, arguing over metadata updates.

                                                                                              In most cases this is true, though ext4 has an opt-in multiple-mount protection (MMP) feature that can safeguard against this.
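
                                                                                              If you want to play with it, it’s roughly this (the device path is just a placeholder, and the filesystem needs to be unmounted first):

                                                                                              ```
                                                                                              tune2fs -O mmp /dev/sdb1                    # enable multiple-mount protection
                                                                                              tune2fs -E mmp_update_interval=5 /dev/sdb1  # optionally tune how often the MMP block is refreshed (seconds)
                                                                                              dumpe2fs -h /dev/sdb1 | grep -i mmp         # the superblock should now show the MMP fields
                                                                                              ```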

                                                                                              1. 1

                                                                                                This might work to protect the filesystem against complete corruption but things at the VFS layer and above (including userspace code) will make assumptions about exclusive access. For example, if you lock a file in exclusive mode, then you expect that nothing else will modify the blocks while you’re doing so. The MMP mode in ext4 won’t protect against this because the locks are purely a kernel construct, they aren’t reflected in the FS.
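
                                                                                                As a concrete sketch (paths hypothetical): the advisory lock below exists only in the kernel that takes it, so a second kernel with the same device mounted has no idea it’s there:

                                                                                                ```
                                                                                                # Hold an exclusive advisory lock while appending; nothing about this
                                                                                                # lock is recorded in the on-disk filesystem structures.
                                                                                                flock -x /mnt/shared/data.txt -c 'echo update >> /mnt/shared/data.txt'
                                                                                                ```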

                                                                                                Some filesystems are designed explicitly to support a SAN model, where multiple machines have an iSCSI (or similar) block device and share the same machinery. Often, this involves some other out-of-band communication for things like locking, to avoid needing a round-trip through persistent storage (think: the NFS lock manager, but at a lower level, so you lock a set of blocks rather than a range in a file).

                                                                                                1. 1

                                                                                                  This might work to protect the filesystem against complete corruption but things at the VFS layer and above (including userspace code) will make assumptions about exclusive access. For example, if you lock a file in exclusive mode, then you expect that nothing else will modify the blocks while you’re doing so. The MMP mode in ext4 won’t protect against this because the locks are purely a kernel construct, they aren’t reflected in the FS.

                                                                                                  That sounds like a fairly different mechanism (locks and such) than how I’m pretty sure ext4 MMP works – if you try to mount a filesystem that’s already mounted elsewhere it just fails the second mount. I suppose in the event of wildly screwy clocks that could let something slip through, though after looking into the code I see there’s also periodic runtime checking while mounted (with the usual panic/continue/remount-ro handling options) as an additional safeguard.

                                                                                              1. 6

                                                                                                100 versions later

                                                                                                This seems to be playing a little loose with the facts. At some point Firefox changed its versioning system to match Chrome, I assume so that it wouldn’t sound like Firefox was older or behind Chrome in development. Firefox did not literally travel from 1.0 to 100, so it probably has either fewer or more than 100 versions, depending on how you count. UPDATE: OK, I was wrong, and that was sloppy of me; I should have actually checked instead of relying on my flawed memory. There have in fact been at least 100 versions of Firefox, probably more, and it’s not misleading to say there are 100 versions if there are more than 100.

                                                                                                That said, this looks like a great release with useful features. Captions for picture-in-picture video seem helpful, and I’m intrigued by “Users can now choose preferred color schemes for websites.” On Android, they finally have HTTPS-only mode, so I can ditch the HTTPS Everywhere extension.

                                                                                                1. 6

                                                                                                  Wikipedia lists 100 major versions from 1 to 100.

                                                                                                  https://en.m.wikipedia.org/wiki/Firefox_version_history

                                                                                                  What did happen is that Mozilla adopted a 4-week release cycle in 2019, while Chrome was on a 6-week cycle until Q3 2021.

                                                                                                  1. 4

                                                                                                    They didn’t change their version scheme, they increased their release cadence.

                                                                                                    1. 7

                                                                                                      They didn’t change their version scheme

                                                                                                      Oh, but they did. In the early days they used a more “traditional” way of using the second number, so we had 1.5, and 3.5, and 3.6. After 5.0 (if I’m reading Wikipedia correctly) they switched to increasing the major version for every release regardless of its perceived significance. So there were in fact more than 100 Firefox releases.

                                                                                                      https://en.wikipedia.org/wiki/Firefox_early_version_history

                                                                                                      1. 3

                                                                                                        I kinda dislike this “bump major version” every release scheme, since it robs me of the ability to visually determine what may have really changed. For example, v2.5 to v2.6 is a “safe” upgrade, while v2.5 to v3.0 potentially has breaking changes. Now moving from v99 to v100 to v101, well, gotta carefully read release notes every single time.

                                                                                                        Oracle did something similar with JDK. We were on JDK 6 for several years, then 7 and then 8, until they ingested steroids and now we are on JDK 18! :-) :-)

                                                                                                        1. 7

                                                                                                          Sure for libraries, languages and APIs, but Firefox is an application. What is a breaking change in an application?

                                                                                                          1. 4

                                                                                                            I got really bummed when Chromium dropped the ability to operate over X forwarding in SSH a few years ago, back before I ditched Chromium.

                                                                                                            1. 1

                                                                                                              Changing the user interface (e.g. keyboard shortcuts) in backwards-incompatible ways, for one.

                                                                                                              And while it’s true that “Firefox is an application”, it’s also effectively a library with an API that’s used by numerous extensions, which has also been broken by new releases sometimes.

                                                                                                              1. 1

                                                                                                                My take is that it is the APIs that should be versioned because applications may expose multiple APIs that change at different rates and the version numbers are typically of interest to the API consumers, but not to human users.

                                                                                                                I don’t think UI changes should be versioned. Just seems like a way to generate arguments.

                                                                                                            2. 6

                                                                                                              It doesn’t apply to consumer software like Firefox, really. It’s not a library for which you care if it’s compatible. I don’t think version numbers even matter for consumer software these days.

                                                                                                              1. 5

                                                                                                                Every release contains important security updates. Can’t really skip a version.

                                                                                                                1. 1

                                                                                                                  Those are all backported to the ESR release, right? I’ve just noticed that my distro packages that; perhaps I should switch to it as a way to get the security fixes without the constant stream of CADT UI “improvements”…

                                                                                                                  1. 2

                                                                                                                    Most. Not all, because different features and such. You can compare the security advisories.

                                                                                                              2. 1

                                                                                                                Oh, yeah, I guess that’s right. I was focused in on when they changed the release cycle and didn’t think about changes earlier than that. Thank you.

                                                                                                          1. 1

                                                                                                            I wonder if a similar setup is possible with QEMU?

                                                                                                            1. 2

                                                                                                              It’s definitely possible to run qemu/kvm VMs with storage on raw block devices, so I’d certainly expect so.
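
                                                                                                              Untested sketch, but something along these lines should do it (device path and memory size are placeholders):

                                                                                                              ```
                                                                                                              # Boot a VM whose disk is a raw block device on the host.
                                                                                                              qemu-system-x86_64 \
                                                                                                                -enable-kvm \
                                                                                                                -m 2G \
                                                                                                                -drive file=/dev/sdb,format=raw,if=virtio
                                                                                                              ```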

                                                                                                              1. 2

                                                                                                                I’ve used this trick in the past testing lilo tweaks to make sure I’d still have a bootable machine.

                                                                                                            1. 8

                                                                                                              This looks like a pretty cool tool.

                                                                                                              Though while it’s completely tangential, because this particular README was full of them, I have to say I’m really not a fan of the recent trend toward screen captures as looping GIFs in READMEs. The pace is always wrong (usually too fast for a person unfamiliar with the tool to follow what’s going on), there’s no way to pause or adjust the speed, and it’s often fairly non-obvious where the loop ends and starts. I usually end up feeling that the presentation as a whole would be much better if they were simply removed, and better still if replaced by a well-written verbal description or shell transcript.

                                                                                                              1. 3

                                                                                                                Totally agree, and I think asciinema is a good alternative for terminal captures. Also, a mixture of images and descriptions is very nice.
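
                                                                                                                The basic workflow is roughly (filename arbitrary):

                                                                                                                ```
                                                                                                                asciinema rec demo.cast    # record a terminal session to a local file
                                                                                                                asciinema play demo.cast   # replay it in the terminal
                                                                                                                ```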

                                                                                                              1. 8

                                                                                                                it’s widely recommended as best practice that all scripts should start by enabling this

                                                                                                                Sounds like it’s time for another periodic reminder that while pipefail can indeed be useful, blindly slapping it on every shell script in sight without consideration of the case-by-case particulars is not necessarily a good idea.
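
                                                                                                                The classic example (GNU tools assumed): a pipeline that does exactly what you want still “fails” under pipefail, because the early-exiting consumer kills the producer with SIGPIPE.

                                                                                                                ```
                                                                                                                #!/bin/bash
                                                                                                                set -o pipefail

                                                                                                                # We only want the first match, so grep exits as soon as it finds one.
                                                                                                                # yes(1) then dies of SIGPIPE (status 141), and pipefail reports the
                                                                                                                # whole pipeline as failed even though it behaved exactly as intended.
                                                                                                                yes hello | grep -m1 hello
                                                                                                                echo "pipeline status: $?"   # 141 with pipefail, 0 without
                                                                                                                ```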

                                                                                                                1. 1

                                                                                                                  Title should have (2016)

                                                                                                                  1. 3

                                                                                                                    Most users of this site can suggest a title change or addition. If enough do, the change is automatically applied.

                                                                                                                    1. 2

                                                                                                                      Oh, neat, thanks - I think I missed the button showing up.

                                                                                                                    2. 2

                                                                                                                      Is the implication that the integer overflow situation in Rust has changed since then? If so, pointers to more up-to-date info would be cool.

                                                                                                                      1. 1

                                                                                                                        No, it’s just common lobste.rs practice.

                                                                                                                    1. 5

                                                                                                                      I dunno, this seems like a somewhat intractable problem of trying to precisely and concisely describe motley assemblages of software components (“operating systems”) by whether or not they belong to some fairly broadly-defined category. You can certainly draw some semi-arbitrary boundaries and do that, but what does it ultimately achieve?

                                                                                                                      For example, I run Void. It’s available in both glibc- and musl-based flavors. If I swap out one libc for the other, does it suddenly become a meaningfully different OS? I’d argue no, for most practical purposes. In a situation where the distinction is likely to be relevant, I can clarify by stating whether it’s “Void with musl” or “Void with glibc”, but it’s probably just as frequent (if not more so) that I’d need to clarify “Void with Xmonad” vs. “Void with KDE” or whatever else.

                                                                                                                      1. 2

                                                                                                                        It’s an interesting article with what seems to be a worthwhile thrust, but I’m struggling a bit to understand where the author is trying to take it.

                                                                                                                        So, yes, modern computers are in and of themselves distributed systems with a myriad of buses, multi-core processors, and arbitrarily complex software systems governing their operation. That in and of itself is an almost miraculous thing worth pondering.

                                                                                                                        But it seems like from there the author is asserting that we don’t currently have good abstractions to help us write potentially globe-spanning distributed systems, and this is where they and I part ways.

                                                                                                                        I work for a Major Cloud Provider, and we operate crazy-scale, globe-spanning distributed systems as a way of life. We’re practically awash in abstractions that help us achieve this. They all have different characteristics depending on precisely what they’re trying to achieve and how they’re choosing to present it, and no doubt, as with everything, they need to iterate and evolve.

                                                                                                                        So, if what they’re really saying is “We need to continue innovating better abstractions that makes running distributed systems easy and safe” then I’m in violent agreement :)

                                                                                                                        1. 8

                                                                                                                          I work for a Major Cloud Provider, and we operate crazy scale globe spanning distributed systems as a way of life. We’re practically awash in abstractions that help us achieve this.

                                                                                                                          Yes, but those don’t abstract away the distributed nature of the system. That’s why they differ from the abstraction in an individual computer.

                                                                                                                          1. 2

                                                                                                                            There’s a challenge there though, right? Human brains have an incredibly difficult time visualizing parallel tasks.

                                                                                                                            Look at the utter sh** show any kind of threaded programming has been for the last 30 years even when we know there are far better models like Actors available the whole time.

                                                                                                                            How does one both enable people to build reliable distributed systems that work and NOT hide their distributed nature?

                                                                                                                            1. 3

                                                                                                                              How does one both enable people to build reliable distributed systems that work and NOT hide their distributed nature?

                                                                                                                              I think you may have misunderstood me. Today’s large-scale distributed abstractions don’t hide the distributed nature of the system, and, ostensibly, they’re used to build reliable systems.

                                                                                                                              That differs from the abstraction in an individual computer, which does hide the distributed nature of the system, and also, ostensibly, is used to build reliable systems.

                                                                                                                              Both kinds of abstractions, “hiding” and “non-hiding”, are heavily used. (I haven’t said anything about whether one is better than the other.)

                                                                                                                              1. 2

                                                                                                                                How does one both enable people to build reliable distributed systems that work and NOT hide their distributed nature?

                                                                                                                                I think we already have some great abstractions around distributed systems in both theory and practice, especially if you work on a big cloud. However once it comes to an individual computer, we pretend that everything is working synchronously. I think we could author our systems much more effectively by designing with distributed systems abstractions from the beginning.

                                                                                                                                1. 1

                                                                                                                                  Ah you’re absolutely right. We’re still building systems using an architecture designed in the 50s and 60s when computers were implemented using vacuum tubes, paper tape, and raw grit :)

                                                                                                                                  I do have to wonder though: if we start thinking about building computers based on non-von-Neumann architectures, will humans actually be able to reason about them and their internal workings?

                                                                                                                                  I’d argue that even WITH an essentially serial architecture, most people, myself included, can’t even begin to truly wrap our brains around everything inside their modern computer.

                                                                                                                                  It’s one of the reasons I so very much enjoy working with 8 bit era systems like my Atari 800XL. You really CAN understand everything about the machine from ‘tail to snout’ as they say :)

                                                                                                                                  1. 3

                                                                                                                                    Ah you’re absolutely right. We’re still building systems using an architecture designed in the 50s and 60s when computers were implemented using vacuum tubes, paper tape, and raw grit :)

                                                                                                                                    Most modern CPUs have a Harvard architecture up to L2 or L3.

                                                                                                                                    1. 2

                                                                                                                                      Thanks for that. TIL!

                                                                                                                                      https://en.wikipedia.org/wiki/Harvard_architecture

                                                                                                                                      I’d not heard of the Harvard architecture, but reading about it, the contention issues it addresses are certainly widely felt whenever you talk about performance.

                                                                                                                                      1. 1

                                                                                                                                        While separate L1 instruction and data caches are ubiquitous, yes, I think that’s largely an implementation detail due to circuit-design constraints – they’re still the same address space. Some CPUs, e.g. x86, will even enforce coherence between them, so a store instruction to an address that happens to be present in the I-cache will invalidate that line (though others require manual I-cache invalidation for things like JITs and self-modifying code).

                                                                                                                                  2. 2

                                                                                                                                    Look at the utter sh** show any kind of threaded programming has been for the last 30 years even when we know there are far better models like Actors available the whole time.

                                                                                                                                    Better for what? There are plenty of problems where threads are preferable to actors. There’s a reason threads were invented in the first place, despite already having processes which can communicate through message passing.

                                                                                                                                    1. 3

                                                                                                                                      I would be open to hearing this story. The version I learned is that threads arose because disk I/O was expensive, and sharing the CPU with threads could allow a program to simultaneously wait for a disk and run a computation. Today, we have better options; we can asynchronously manage our I/O.

                                                                                                                                      1. 2

                                                                                                                                        Actors are an abstraction implemented on top of threads. (I have implemented Actors myself.)

                                                                                                                                        Concurrency by means of multiple communicating OS processes tends to be inefficient, because processes are expensive to create and slow to context-switch between. Messaging is expensive too. So lightweight threads in a process were a performance boost as well as easier to use.

                                                                                                                                        The advantage of actors is they’re much easier to reason about. But I agree with you that in some cases it’s simpler to use threads as your model and just deal with mutexes.

                                                                                                                                        1. 4

                                                                                                                                          in some cases it’s simpler to use threads as your model and just deal with mutexes.

                                                                                                                                          Also, in a lot of the (IMO) good use cases for threads, “just deal with mutexes” is barely even a concern. If you are processing a bunch of independent units of work with little or no shared state, threads make the flow of control really easy to reason about and the classic pitfalls rarely come up.

                                                                                                                                          This is arguably the situation for a pretty big percentage of multithreaded programs. For example, .NET or Java web services, where there are large numbers of framework-managed threads active and there is shared state under the covers (database connection pools, etc.) but the vast majority of the day-to-day application logic can be correctly and safely written with zero attention paid to thread-related issues.

                                                                                                                                          1. 1

                                                                                                                                            True dat, but what you’re describing is also a good fit for Actors or similar abstractions. Even the “under the covers” part.

                                                                                                                                            1. 1

                                                                                                                                              Exactly. A lot of network services that work on stateless protocols (looking at you, SIP and NNTP…) essentially have no state to mutate except for a backend database. Threads are easy abstractions because you can partition incoming work into exactly the thread/db connection pool patterns that you talk about.

                                                                                                                                              For more stateful work (say, working on some form of distributed cache), the thread pool model can prove to be much more complicated.

                                                                                                                                          2. 1

                                                                                                                                            So, you’re right. My comment was off the cuff and not particularly articulate.

                                                                                                                                            What I was getting at is that there was a time, mostly the early-ish Java era, when legions of work-a-day business programmers were exposed to threads and concurrency in contexts with which they were unfamiliar.

                                                                                                                                            They then proceeded, by and large, to make a giant cock-up of it, because without some helpful abstractions to moderate them, threads are so powerful that they give people more than the rope they need to hang themselves.

                                                                                                                                            Later on, the Java community reacted to this and introduced things like ThreadPoolExecutor and ThreadGroup (I think those were the names; my Java is rusty), which helped people use threads in ways that were much easier to reason about, and they had a much better time.

                                                                                                                                            But you’re right, in the hands of a capable programmer with the right experience, threads are an incredibly powerful tool with tremendous potential.

                                                                                                                                            1. 2

                                                                                                                                              FWIW the abstraction we arrive at doesn’t have to be threads, actors, or anything else. I’m personally very sympathetic to capability-passing-style API boundaries (and I know a few folks here are also 😛), but what I’m trying to get at is that the current abstractions we have in systems design are purely trying to hold onto the old synchronous model of computing. It’s not a thoughtful abstraction to help tame the complexity of modern computers; it’s literally just an accident of history. And it leaves out a lot of important things you could do with an API that does anything except pretend that every action is synchronous.

                                                                                                                                    1. 3

                                                                                                                                      I’m not really seeing the appeal of a program that (deep breath) makes most non-English text unreadable, and replaces perfectly cromulent characters like quotes, dashes, non-breaking spaces, currency symbols, mathematical symbols and emoji with a bunch of octal(!) junk.

                                                                                                                                      It’s an interesting historical example of a utility that was useful in the days when Unix belonged to American guys with VT-52s, but no one should be letting museum pieces like isgraph do their work for them nowadays.

                                                                                                                                      1. 3

                                                                                                                                        I had to check the timestamp of the post to make sure it wasn’t written in 1995…

                                                                                                                                        1. 3

                                                                                                                                          And since it doesn’t appear to treat a literal backslash in its input any differently from any other “normal” ASCII character, it also produces ambiguous output – something I’d very much have expected a tool like that to want to avoid.