Threads for michiel

  1. 2

    A Logitech Trackman Wheel (white with a red ball, and I think it was a PS/2 model) was the first mouse I bought for myself. Its successor, a USB Trackman Wheel, is getting a bit long in the tooth now. It’s a shame Logitech doesn’t make wired models anymore.

    1. 3

      This is exactly why alphanumeric senders are not allowed in the USA and Canada.

      1. 3

        It is absolutely insane that that is not the case everywhere else. This is such an obvious attack vector out there in the open that it even undermines other basic security practices.

        Here we are in 2022, and a trivial spoof like this is still totally open. Even email doesn’t have this problem anymore with the advent of better anti-spam services.

        1. 2

          Sender ID probably predates SMS verification by a decade, if not more. So who is ‘insane’ here, the operators that allow alphanumeric sender ID, or the engineers who designed a second factor based on assumptions that only hold inside the US?

          1. 1

            I don’t understand what you mean in the second part of your question. But setting the sender ID, even if only numerical like it was a couple of decades ago, was always meant to be a technology actively and strictly moderated by operators.

            It was never the case that anyone with a phone could easily spoof a number. And you couldn’t just put any number there. Of course, if the call comes from abroad, then it falls outside the realm of the operator enforcement, but that is what country codes are for.

            That one can so easily get a message bearing the exact name of a known bank from hard-to-track senders is absolutely a case of a poor setup.

            Countries should enforce registration and verification procedures to use the ID. As it is, people are unprotected.

      1. 14

        What surprised me about Tainter’s analysis (and I haven’t read his entire book yet) is that he sees complexity as a method by which societies gain efficiency. This is very different from the way software developers talk about complexity (as ‘bloat’, ‘baggage’, ‘legacy’, ‘complication’), and made his perspective seem particularly fresh.

        1. 31

          I don’t mean to sound dismissive – Tainter’s works are very well documented, and he makes a lot of valid points – but it’s worth keeping in mind that grand models of history have made for extremely attractive pop history books, but really poor explanations of historical phenomena. Tainter’s Collapse of Complex Societies, while obviously built on a completely different theory (and one with far less odious consequences in the real world), rests on the same kind of scientific thinking that brought us dialectical materialism.

          His explanation of the evolution and the eventual fall of the Roman Empire makes a number of valid points about the Empire’s economy and about some of the economic interests behind the Empire’s expansion, no doubt. However, explaining even the expansion – let alone the fall! – of the Roman Empire strictly in terms of energy requirements is about as correct as explaining it in terms of class struggle.

          Yes, some particular military expeditions were specifically motivated by the desire to get more grain or more cows. But many weren’t – in fact, some of the greatest Roman wars, like (some of) the Roman-Parthian wars, were not driven specifically by Roman desire to get more grains or cows. Furthermore, periods of rampant, unsustainably rising military and infrastructure upkeep costs were not associated only with expansionism, but also with mounting outside pressure (ironically, sometimes because the energy per capita on the other side of the Roman border really sucked, and the Huns made it worse on everyone). The increase of cost and decrease in efficiency, too, are not a matter of half-rational historical determinism – they had economic as well as cultural and social causes that rationalising things in terms of energy not only misses, but distorts to the point of uselessness. The breakup of the Empire was itself a very complex social, cultural and military story which is really not something that can be described simply in terms of the dissolution of a central authority.

          That’s also where this mismatch between “bloat” and “features” originates. Describing program features simply in terms of complexity is a very reductionist model, which accounts only for the difficulty of writing and maintaining it, not for its usefulness, nor for the commercial environment in which it operates and the underlying market forces. Things are a lot more nuanced than “complexity = good at first, then bad”: critical features gradually become unneeded (see Xterm’s many emulation modes, for example), markets develop in different ways and company interests align with them differently (see Microsoft’s transition from selling operating systems and office programs to renting cloud servers) and so on.

          1. 6

            However, explaining even the expansion – let alone the fall! – of the Roman Empire strictly in terms of energy requirements is about as correct as explaining it in terms of class struggle.

            Of course. I’m long past the age where I expect anyone to come up with a single, snappy explanation for hundreds of years of human history.

            But all models are wrong, only some are useful. Especially in our practice, where we often feel overwhelmed by complexity despite everyone’s best efforts, I think it’s useful to have a theory about the origins and causes of complexity, even if only for emotional comfort.

            1. 6

              Especially in our practice, where we often feel overwhelmed by complexity despite everyone’s best efforts, I think it’s useful to have a theory about the origins and causes of complexity, even if only for emotional comfort.

              Indeed! The issue I take with “grand models” like Tainter’s and the way they are applied in grand works like Collapse of Complex Societies is that they are ambitiously applied to long, grand processes across the globe without an exploration of the limits (and assumptions) of the model.

              To draw an analogy with our field: IMHO the Collapse of… is a bit like taking Turing’s machine as a model and applying it to reason about modern computers, without noting the differences between modern computers and Turing machines. If you cling to it hard enough, you can hand-wave every observed performance bottleneck in terms of the inherent inefficiency of a computer reading instructions off a paper tape, even though what’s actually happening is cache misses and hard drives getting thrashed by swapping. We don’t fall into this fallacy because we understand the limits of Turing’s model – in fact, Turing himself explicitly mentioned many (most?) of them, even though he had very little prior art in terms of alternative implementations, and explicitly formulated his model to apply only to some specific aspects of computation.

              Like many scholars at the intersections of economics and history in his generation, Tainter doesn’t explore the limits of his model too much. He came up with a model that explains society-level processes in terms of energy output per capita and upkeep cost and, without noting where these processes are indeed determined solely (or primarily) by energy output per capita and upkeep cost, he proceeded to apply it to pretty much all of history. If you cling to this model hard enough you can obviously explain anything with it – the model is explicitly universal – even things that have nothing to do with energy output per capita or upkeep cost.

              In this regard (and I’m parroting Walter Benjamin’s take on historical materialism here) these models are quasi-religious and are very much like a mechanical Turk. From the outside they look like history masterfully explaining things, but if you peek inside, you’ll find our good ol’ friend theology, staunchly applying dogma (in this case, the universal laws of complexity, energy output per capita and upkeep cost) to any problem you throw its way.

              Without an explicit understanding of their limits, even mathematical models in exact sciences are largely useless – in fact, a big part of early design work is figuring out what models apply. Descriptive models in humanistic disciplines are no exception. If you put your mind to it, you can probably explain every Cold War decision in terms of Vedic ethics or the I Ching, but that’s largely a testament to one’s creativity, not to their usefulness.

            2. 4

              Furthermore, periods of rampant, unsustainably rising military and infrastructure upkeep costs were not associated only with expansionism, but also with mounting outside pressure (ironically, sometimes because the energy per capita on the other side of the Roman border really sucked, and the Huns made it worse on everyone).

              Not to mention all the periods of rampant rising military costs due to civil war. Those aren’t wars about getting more energy!

              1. 1

                Tainter’s Collapse of Complex Societies, while obviously based on a completely different theory (and one with far less odious consequences in the real world) is based on the same kind of scientific thinking that brought us dialectical materialism.

                Sure. This is all about a framing of events that happened; it’s not predictive, as much as it is thought-provoking.

                1. 7

                  Thought-provoking, grand philosophy has long been a part of philosophy, but it became especially popular (some argue that it was Francis Bacon who really brought forth the idea of predicting progress) during the Industrial Era with the rise of what is known as the modernist movement. Modernist theories often differed but frequently shared a few characteristics, such as grand narratives of history and progress, definite ideas of the self, a strong belief in progress, a belief that order was superior to chaos, and often structuralist philosophies. Modernism had a strong belief that everything could be measured, modeled, categorized, and predicted. It was an understandable byproduct of a society rigorously analyzing its surroundings for the first time.

                  Modernism flourished in a lot of fields in the late 19th early 20th century. This was the era that brought political philosophies like the Great Society in the US, the US New Deal, the eugenics movement, biological determinism, the League of Nations, and other grand social and political engineering ideas. It was embodied in the Newtonian physics of the day and was even used to explain social order in colonizing imperialist nation-states. Marx’s dialectical materialism and much of Hegel’s materialism was steeped in this modernist tradition.

                  In the late 20th century, modernism fell into a crisis. Theories of progress weren’t bearing fruit. Grand visions of the future, such as Marx’s dialectical materialism, diverged significantly from actual lived history and frequently resulted in a multitude of horrors. This experience was repeated by eugenics, social determinism, and fascist movements. Planck and Einstein challenged the neat Newtonian order that had previously been conceived. Gödel’s incompleteness theorems showed us that there are statements whose validity we cannot evaluate. Moreover, many social sciences that bought into modernist ideas, like anthropology, history, and urban planning, were having trouble making progress that agreed with the grand modernist ideas that guided their work. Science was running into walls as to what was measurable and what wasn’t. It was in this crisis that postmodernism was born, when philosophers began challenging everything from whether progress and order were actually good things to whether humans could ever come to mutual understanding at all.

                  Since then, philosophy has mostly abandoned the concept of modeling and left that to science. While grand, evocative theories are having a bit of a renaissance in the public right now, philosophers continue to be “stuck in the hole of postmodernism.” Philosophers have raised central questions about morality, truth, and knowledge that have to be answered before large, modernist philosophies gain hold again.

                  1. 3

                    I don’t understand this, because my training has been to consider models (simplified ways of understanding the world) as only having any worth if they are predictive and testable i.e. allow us to predict how the whole works and what it does based on movements of the pieces.

                    1. 4

                      You’re not thinking like a philosopher ;-)

                      1. 8

                        Models with predictive values in history (among other similar fields of study, including, say, cultural anthropology) were very fashionable at one point. I’ve only mentioned dialectical materialism because it’s now practically universally recognized to have been not just a failure, but a really atrocious one, so it makes for a good insult, and it shares the same fallacy with energy economic models, so it’s a doubly good jab. But there was a time, as recent as the first half of the twentieth century, when people really thought they could discern “laws of history” and use them to predict the future to some degree.

                        Unfortunately, this has proven to be, at best, beyond the limits of human understanding and comprehension. This is especially difficult to do in the study of history, where sources are imperfect and have often been lost (case in point: there are countless books we know the Romans wrote because they’re mentioned or quoted by ancient authors, but we no longer have them). Our understanding of these things can change drastically with the discovery of new sources. The history of religion provides a good example, in the form of our understanding of Gnosticism, which was forever altered by the discovery of the Nag Hammadi library, to the point where many works published prior to this discovery and the dissemination of its text are barely of historical interest now.

                        That’s not to say that developing a theory of various historical phenomena is useless, though. Even historical materialism, misguided as it was (especially in its more politicized formulations), was not without value. It forced an entire generation of historians to think more about things that they never really thought about before. It is certainly incorrect to explain everything in terms of class struggle, competition for resources and the means of production, and the steady march from primitive communism to the communist mode of production – but it is also true that competition for resources and the means of production were involved in some events and processes, and nobody gave much thought to that before the disciples of Marx and Engels.

                        This is true here as well (although I should add that, unlike most materialistic historians, Tainter is most certainly not an idiot, not a war criminal, and not high on anything – I think his works display an unhealthy attachment to historical determinism, but he most certainly doesn’t belong in the same gallery as Lenin and Mao). His model is reductionist to the point where you can readily apply much of the criticism of historical materialism to it as well (which is true of a lot of economic models if we’re being honest…). But it forced people to think of things in a new way. Energy economics is not something that you’re tempted to think about when considering pre-industrial societies, for example.

                        These models don’t really have predictive value and they probably can’t ever gain one. But they do have an exploratory value. They may not be able to tell you what will happen tomorrow, but they can help you think about what’s happening today in more ways than one, from more angles, and considering more factors, and possibly understand it better.

                        1. 4

                          That’s something historians don’t do anymore. There was a period where people tried to predict the future development of history, and then the whole discipline gave up. It’s a bit like what we are witnessing in the Economics field: there are strong calls to stop attributing predictive value to macroeconomic models because after a certain scale, they are just over-fitting to existing patterns, and they fail miserably after a few years.

                          1. 1

                            Well, history is not math, right? It’s a way of writing a story backed by a certain amount of evidence. You can use a historical model to make predictions, sure, but the act of prediction itself causes changes.

                      2. 13

                        (OP here.) I totally agree, and this is something I didn’t explore in my essay. Tainter doesn’t see complexity as always a problem: at first, it brings benefits! That’s why people do it. But there are diminishing returns and maintenance costs that start to outstrip the marginal benefits.

                        Maybe one way this could apply to software: imagine I have a simple system, just a stateless input/output. I can add a caching layer in front, which could win a huge performance improvement. But now I have to think about cache invalidation, cache size, cache expiry, etc. Suddenly there are a lot more moving parts to understand and maintain in the future. And the next performance improvement will probably not be anywhere near as big, but it will require more work because you have to understand the existing system first.
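
                        To make that concrete, here is a minimal sketch (the function, the TTL, and the size limit are all made up for illustration):

                          import time

                          def compute(x):
                              # The original system: stateless input/output, trivially correct.
                              return x * x  # stand-in for an expensive pure computation

                          _cache = {}                # new moving part: shared mutable state
                          CACHE_TTL_SECONDS = 60     # expiry policy: how stale is acceptable?
                          CACHE_MAX_ENTRIES = 1024   # size policy: when do we evict, and what?

                          def compute_cached(x):
                              now = time.monotonic()
                              hit = _cache.get(x)
                              if hit is not None and now - hit[1] < CACHE_TTL_SECONDS:
                                  return hit[0]              # fresh entry: cache hit
                              if len(_cache) >= CACHE_MAX_ENTRIES:
                                  oldest = min(_cache, key=lambda k: _cache[k][1])
                                  del _cache[oldest]         # crude eviction policy
                              value = compute(x)
                              _cache[x] = (value, now)       # stale data lingers up to the TTL
                              return value

                        One small ‘improvement’, and suddenly there are three policies – expiry, eviction, invalidation – to reason about.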

                        1. 2

                          I’m not sure it’s so different.

                          A time saving or critically important feature for me may be a “bloated” waste of bits for somebody else.

                          1. 3

                            In Tainter’s view, a society of subsistence farmers, where everyone grows their own crops, makes their own tools, teaches their own children, etc. is not very complex. Add a blacksmith (division of labour) to that society, and you gain efficiency, but introduce complexity.

                        1. 4

                          The parallel between societies and software is a great find! The big thing that I disagree with though is:

                          and a fresh-faced team is brought in to, blessedly, design a new system from scratch. (…) you have to admit that this system works.

                          My experience is the opposite. No customer is willing to work with a reduced feature set, and the old software has accumulated a large undocumented set of specific features. The new-from-scratch version will have to somehow reproduce all of that, all the while having to keep up with patching done to the old system that is still running as the new system is under development. In other words, the new system will never be completed.

                          In short, we have no way to escape complexity at all. Once it’s there, it stays. The only thing we can do to keep ourselves from collapse as described in the article is avoid creating complexity in the first place. But as I think is stated correctly, that is not something most organisations are particularly good at.

                          1. 11

                            No customer is willing to work with a reduced feature set…

                            Sure they are, because the price for the legacy system keeps going up. They eventually bite the bullet. That’s been my experience, anyway. The evidence is that products DO actually go away; in fact, we complain about Google doing it too much!

                            Yes, some things stay around basically forever, but those are things that are so valuable (to someone) that someone is willing to pay dearly to keep them running. Meanwhile, the rest of the world moves on to the new systems.

                            1. 3

                              Absent vandals ransacking offices, perhaps this is what ‘collapse’ means in the context of software: the point where its added value can no longer fund its maintenance.

                              1. 1

                                Cost is one way to look at it, but it’s much harder to make this argument in situations like SaaS. The cost imposed on the customer is much more indirect than when it’s software the customer directly operates. You need to have a deprecation process that can move customers onto the supported things in a reasonable fashion. When this is done well, there is continual evaluation to reduce the bleeding from new adoption of a feature that’s going away while migration paths are considered.

                                I think the best model for looking at this overall is the Jobs To Be Done (JTBD) framework. Like many management tools, it can actually be explained to a software engineer on a single page rather than requiring a book, but people like to opine.

                                You split out the jobs that customers need done, which are sometimes far removed from the original intent of a feature. These can then be mapped onto a solution, or the solution can be re-envisioned. Many people don’t get to the bottom of the actual job the customer is currently doing, and then they deprecate with alternatives that only partially suit the task.

                              2. 4

                                My experience is the opposite. No customer is willing to work with a reduced feature set

                                Not from the same vendor. But if they’re lucky enough not to be completely locked in, once the first vendor’s system is sufficiently bloated and slow and buggy, they might be willing to consider going to the competition.

                                It’s still kind of a rewrite, but the difference this time is that one company might go under while another rises. (If the first company is big enough, they might also buy the competition…)

                              1. 9

                                When you tell them the original game Elite had a sprawling galaxy, space combat in 3D, a career progression system, trading and thousands of planets to explore, and it was 64k, I guess they HEAR you, but they don’t REALLY understand the gap between that, and what we have now.

                                Hi! I’m a young programmer. When someone says “this game had x, y, and z in (N < lots) bytes”, what I hear is that it was built by dedicated people working on limited hardware who left out features and polish that are often included in software today, didn’t integrate it with other software in the ecosystem that uses freeform, self-describing formats requiring expensive parsers, and, most importantly, took a long time to build and port.

                                Today, we use higher-level languages which give us useful properties like:

                                • portability
                                • various levels of static analysis
                                • various levels of memory safety
                                • scalability
                                • automatic optimization
                                • code reuse via package managers

                                and the tradeoff there is that less time is spent in manual optimization. It’s a tradeoff, like anything in engineering.

                                1. 8

                                  While I’m curious about the free-form, self-describing formats you’re talking about (and why their parsers should be so expensive), cherry-picking from your arguments, there are a lot of interesting mismatches between expectations and reality:

                                  which I hear is that it was built by dedicated people (…) and most importantly took a long time to build and port the software.

                                  Elite was written by two(!) undergraduate students, and ran on more, and more varied, CPU architectures than any software developed today. It’s true that the ports were complete rewrites, but if Wikipedia is correct, these were single-person efforts.

                                  • various levels of static analysis
                                  • various levels of memory safety
                                  • code reuse via package managers

                                  Which are productivity boosters; modern developers should be faster, not slower than those from the assembly era.

                                  • scalability

                                  Completely irrelevant for desktop software, as described in the article.

                                  • automatic optimization

                                  If optimization is so easy, why is software so slow and big?

                                  My personal theory is that software development in the home computer era was sufficiently difficult that it demotivated all but the most dedicated people. That, and survivor bias, makes their efforts seem particularly heroic and effective compared to modern-day, industrialized software development.

                                  1. 7

                                    My personal theory is that software development in the home computer era was sufficiently difficult that it demotivated all but the most dedicated people. That, and survivor bias, makes their efforts seem particularly heroic and effective compared to modern-day, industrialized software development.

                                    I tend to agree. How many games like Elite were produced, for example? Also, how many epic failures were there? I’m not saying I know the answers, I just don’t think the debate is productive without them. Pointing to Elite and saying “software was better back then” is just nostalgia.

                                    Edit: Another thought: how much crap software was created with BASIC for specific purposes that we’ve long since forgotten about?

                                    1. 2

                                      I’m curious about the free-form, self-describing formats you’re talking about (and why their parsers should be so expensive)

                                      I’m mostly talking about JSON. JSON is, inherently, a complex format! It requires that you have associative maps, for one thing, and arbitrarily large ones at that. Interoperating with most web APIs requires unbounded memory.

                                      You respond to my assertion that building software in assembly on small computers requires dedication by saying:

                                      Elite was written by two(!) undergraduate students,

                                      But then say:

                                      My personal theory is that software development in the home computer era was sufficiently difficult that it demotivated all but the most dedicated people.

                                      It seems like you agree with me here. Two undergraduates can be dedicated and spend a lot of time on something.

                                      [scalability is] Completely irrelevant for desktop software, as described in the article.

                                      No, it’s not. Scalability in users is irrelevant, but not in available resources. Software written in Python on a 32-bit system can easily be run on a 64-bit one with all the scaling features that implies. There are varying shades of this; C and Rust, for instance, make porting from 32 to 64 bit easy, but not trivial, and assembly makes it a gigantic pain in the ass.
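
                                      For instance, a minimal illustration – the same Python source reports the build’s native width and runs unchanged on either:

                                        import struct, sys

                                        print(struct.calcsize("P") * 8)  # native pointer width: 32 or 64
                                        print(sys.maxsize)               # 2**31 - 1 or 2**63 - 1 accordingly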

                                      Which are productivity boosters; modern developers should be faster, not slower than those from the assembly era.

                                      I don’t agree. These are not productivity boosters; they can be applied that way, but they are often applied to security, correctness, documentation, and other factors.

                                      1. 1

                                        I’m mostly talking about JSON. JSON is, inherently, a complex format! It requires that you have associative maps, for one thing, and arbitrarily large ones at that. Interoperating with most web APIs requires unbounded memory.

                                        JSON is mostly complex because it inherits all the string escaping rules from JavaScript; other than that, SAX-style parsers for JSON exist, they’re just not commonly used. And yes, theoretically, I could make a JSON document that just contains a 32GB long string, blowing the memory limit on most laptops, but I’m willing to bet that most JSON payloads are smaller than a kilobyte. If your application needs ‘unbounded memory’ in theory, that’s a security vulnerability, not a measure of complexity.

                                        (And JSON allows the same key to exist twice in a document, so associative maps are not a good fit)
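
                                        A quick sketch of both points – duplicate keys and SAX-style events (ijson is a third-party streaming parser; the rest is stdlib):

                                          import io, json

                                          doc = '{"a": 1, "a": 2}'
                                          print(json.loads(doc))                          # {'a': 2} - the last key silently wins
                                          print(json.loads(doc, object_pairs_hook=list))  # [('a', 1), ('a', 2)] - the raw pairs

                                          # SAX-style parsing keeps memory bounded regardless of document size:
                                          import ijson  # third-party package
                                          for prefix, event, value in ijson.parse(io.BytesIO(b'{"k": [1, 2]}')):
                                              print(prefix, event, value)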

                                        It seems like you agree with me here. Two undergraduates can be dedicated and spend a lot of time on something.

                                        But it also puts a bound on the ‘enormous effort’ involved here. Just two people with other obligations, just two years of development.

                                        No, it’s not. Scalability in users is irrelevant, but not in available resources. Software written in Python on a 32-bit system can easily be run on a 64-bit one with all the scaling features that implies. There are varying shades of this; C and Rust, for instance, make porting from 32 to 64 bit easy, but not trivial, and assembly makes it a gigantic pain in the ass.

                                        As someone who has spent time both porting C code from 32 bit to 64 bit, and porting Python2 string handling code to Python3 string handling code, I’d say the former is much easier.

                                        And that’s part of my pet theory for why modern software development is so incredibly slow: a lot of effort goes into absorbing breaking changes from libraries and language runtimes.
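
                                        To be concrete about the Python 2 to Python 3 string handling mentioned above, a tiny sketch of the kind of change involved:

                                          # Python 3 separates bytes from text; code that mixed them freely
                                          # under Python 2 needs explicit decode/encode at every I/O boundary.
                                          data = b"caf\xc3\xa9"          # e.g. bytes read from a socket or file
                                          text = data.decode("utf-8")    # the step Python 2 let you leave implicit
                                          assert text == "café"
                                          assert text.encode("utf-8") == data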

                                        1. 3

                                          You’re moving the goalposts. My initial point was that, given some source code and all necessary development tools, it’s far easier to expand a Python or Lua or Java program to use additional resources – such as, but not limited to, bus width and additional memory – than an equivalent assembly program. You’re now talking about something else entirely: problems with dependencies and the constant drive to stay up to date.

                                          my pet theory for why modern software development is so incredibly slow: a lot of effort goes into absorbing breaking changes from libraries and language runtimes.

                                          I agree with you here, but it’s a complete non-sequitur from what we were talking about before. It’s at least as hard, if not harder, to port an assembly program to a new operating system, ABI, or processor as it is to port a Python 2 program to Python 3.

                                          1. 1

                                            You’re moving the goalposts. My initial point was that, given some source code and all necessary development tools, it’s far easier to expand a Python or Lua or Java program to use additional resources - such as but not limited to bus width and additional memory - than an equivalent assembly program.

                                            That is most definitely true. I actually think the use of extremes doesn’t make this discussion any easier. I don’t think anyone wants to go back to assembly programming. But at the same time there’s obviously something wrong if it takes 200 megabytes of executables to copy some files.

                                            1. 3

                                              But at the same time there’s obviously something wrong if it takes 200 megabytes of executables to copy some files.

                                              What’s wrong, exactly? The company providing the service was in business at the time of the rant, and there’s no mention of files being lost.

                                              The only complaint is an aesthetic one. Having 200MB of executables to move files feels “icky”.

                                              1. 2

                                                There are externalities to code bloat, in the form of e-waste (due to code bloat obsoleting less powerful computers), and energy use. It’s not very relevant in the case of one 200MB file transfer program, but over an industry, it adds up horribly.

                                                1. 4

                                                  Agreed. These externalities are not taken into account by producers or most consumers. That said, I think there are more important things to focus on before one gets to software bloat: increased regulation regarding privacy and accessibility among them.

                                  1. 8

                                    I suppose it depends on the company, time, and luck, and “YMMV” as always. However, my experience working in staff roles was quite miserable, and many of my friends had the same experience.

                                    Your manager may report to the COO (or the CEO in smaller companies), but it may not mean anything for either of you. If executives see you as a cost center that steals money from the real business, you will have to fight tooth and nail to keep your department funded. You may not even win: at quite a few places I’ve seen, such internal departments were staffed mainly by inexperienced people who would leave for a better job as soon as they could find one. But when disaster happens, you will be blamed for everything.

                                    I’m pretty sure there are companies that don’t mistreat their staff IT personnel, but no assumption is universal.

                                    1. 9

                                      IME: the harder it is for execs to see that “person/group X does their job, which directly leads to profit”, the more of an uphill battle it is. Even a single hop can have a big effect: note the salary differences between skilled salespeople and skilled engineers.

                                      1. 5

                                        Can confirm. This is particularly challenging for “developer experience” or “productivity” teams, where all of the work is definitionally only an indirect contribution to the bottom line—even if an incredibly important and valuable one.

                                        1. 2

                                          Gotta be able to sell everything you do. It’s hard when metrics are immaterial, but in those specific areas you have to be showing “oh, I save business line X this many person-hours daily/weekly/etc.” constantly in order to advance.

                                          1. 5

                                            As an idea that sounds good, but in practice no one knows how to even estimate that in a lot of categories of important tech investment for teams like mine. I have spent a non-trivial amount of time with both academic and industrial literature on the subject, and… yeah, nobody knows how to measure or even guesstimate this stuff in a way that I could remotely sign my name to in good conscience.

                                        2. 1

                                          note the salary differences between skilled sales people and skilled engineers.

                                          The latter usually have a higher salary or total compensation so I’m not sure if I understood your point. Maybe sales make more in down-market areas of the industry that don’t pay more than $100k for programmers if they can help it?

                                          1. 5

                                            $100k for programmers exists in the companies that have effectively scaled up their sales pipeline. Most programmers work on some kind of B2B software (like the example in the article, internal billing for an electricity company), where customers don’t number in the millions, engineer salaries have five digits, and trust me, their package can’t touch the compensation of the skilled sales person who manages the expectations of a few very important customers.

                                            1. 3

                                              I can confirm that I have never worked for companies where the sales people were paid less than the engineers. At least not to my knowledge.

                                              In fact, in most companies I worked for, the person I reported to had a sales role.

                                              1. 2

                                                I think a good discriminant for this might be software-as-plumbing vs. software-is-the-product. I suspect SaaS has driven down the cost of a lot of glue-type stuff like this.

                                          2. 5

                                            I’ve had exactly the opposite experience. Being in staff roles has been the most enjoyable because we could work on things that had longer term payoffs. When I’ve been a line engineer we weren’t allowed to discuss anything unless it would increase revenue that quarter. The staff roles paid slightly less but not too much less.

                                            1. 2

                                              I had a similar experience. I worked on a devops team at a small startup, and we did such a good job that when covid hit and cuts needed to be made, our department was first on the chopping block. I landed on my feet just fine, finding a job that paid 75% more (and have since received a promotion and a couple of substantial raises), but I was surprised to learn that management may keep a floundering product/dev org over an excellent supporting department (even though our department could’ve transitioned to dev and done a much better job).

                                            1. 17

                                              On the one hand, I totally get the value of a lack of a build step. Build steps are annoying. On the other hand, authoring directly in HTML is something I am perfectly happy to do as little of as possible. It’s just not a pleasant language to write in for any extended amount of time!

                                              1. 20

                                                I’m pretty convinced that Markdown is the local maximum for the “low effort, nice looking content” market.

                                                1. 10

                                                  Agreed. ASCIIDoc, reStructuredText, LaTeX, and other more-robust-than-Markdown syntaxes all have significantly more power but also require a great deal more from you as a result. For just getting words out, Markdown is impressively “good enough”.

                                                  1. 4

                                                    I can never remember Markdown syntax (or any other wiki syntax for that matter), while I’m fairly fluent in HTML, and I’m not even a frontend dev. HTML also has the advantage that if some sort of exotic markup is necessary, you know it’s expressible, given time and effort.

                                                    1. 7

                                                      That’s fine, because Markdown allows embedded HTML [1]

                                                      About the only thing that’s a bit obtuse is the link syntax, and I’ve gladly learned that to not have to manually enclose every damn list with <ul> or <li> tags.

                                                      [1] at least Gruber’s OG Markdown allowed it by default, and I recently learned CommonMark has an “unsafe” mode to allow it too.

                                                      1. 11

                                                        The trick to remember how to do links in Markdown is to remember that there are brackets and parentheses involved, then think what syntax would make sense, then do the opposite.

                                                        1. 4

                                                          For reference: a Markdown [link](https://example.com)

                                                          Elaboration on the mnemonic you describe

                                                          I thought like you when I first started learning Markdown:

                                                          • Parentheses () are normal English punctuation, so you would intuitively expect them to surround the text, but they don’t.
                                                          • Square brackets [] are technical symbols, so you would intuitively expect them to surround the URL, but they don’t.

                                                          However, I find “don’t do this” mnemonics easy to accidentally negate, so I don’t recommend trying to remember the order that way.

                                                          Another mnemonic

                                                          I think Markdown’s order of brackets and parentheses is easier to remember once one recognizes the following benefit:

                                                          When you read the first character in […](…), it’s clear that you’re reading a link. ‘[’ is a technical symbol, so you know you’re not reading a parenthetical, which would start with ‘(’. Demonstration:

                                                          In this Markdown, parentheticals (which are everywhere) and
                                                          [links like these](https://example.com) can quickly be told
                                                          apart when reading from left to right.
                                                          
                                                          Why not URL first?

                                                          Since you wrote that Markdown does “the opposite”, I wonder if you also intuitively expect the syntax to put the URL before the text, like in [https://www.mediawiki.org/wiki/Help:Links MediaWiki’s syntax] (actual link: MediaWiki’s syntax). I never found that order intuitive, but I can explain why I prefer text first:

                                                          When trying to read only the text and skip over the URLs, it’s easier to skip URLs if they come between grammatical phrases of the text (here), rather than interrupting a (here) phrase. And links are usually written at the end of phrases, rather than at the beginning.

                                                          1. 2

                                                            Well I’ll be damned. That completely makes sense.

                                                            I do, however, wonder whether this is a post-hoc rationalization and the real reason for the syntax is much dumber.

                                                          2. 3

                                                            Hah. The mnemonic I use is that everyone gets the ) on the end of their wiki URLs fucked up by Markdown… because the () goes around the URL, therefore it’s []().

                                                            1. 2

                                                              This is exactly what I do. Parens are for humans, square brackets are for computers, so obviously it’s the other way around in markdown.

                                                            2. 3

                                                              A wiki also implies a social contract about editability. If my fellow editors have expressed that they’re uncomfortable with HTML, it’s not very polite of me to use it whenever I find Markdown inconvenient.

                                                              1. 1

                                                                Of course. I was replying in context of someone writing for themselves.

                                                            3. 3

                                                              This is interesting: I’ve heard that same experience report from a number of people over the years so I believe it’s a real phenomenon (the sibling comment about links especially being the most common) but Markdown clicked instantly for me so I always find it a little surprising!

                                                            I have hypothesized that it’s a case of (a) not doing it in a sustained way, which of course is the baseline, and (b) something like syntactical cross-talk from having multiple markup languages floating around; I took longer to learn Confluence’s wiki markup both because it’s worse than Markdown and because I already had Markdown, rST, and Textile floating around in my head.

                                                              I’m curious if either or both of those ring true, or if you think there are other reasons those kinds of markup languages don’t stick for you while HTML has?

                                                              1. 2

                                                                I’m not Michiel, but for me, it’s because HTML is consistent (even if it’s tedious). In my opinion, Gruber developed Markdown to make it easier for him to write HTML, and to use conventions that made sense to him for some shortcuts (the fact that you could include HTML in his Markdown says to me that he wasn’t looking to replace HTML). Markdown was to avoid having to type common tags like <P> or <EM>.

                                                              For years I hand-wrote the HTML for my blog (and for the record, I still have to click the “Markdown formatting available” link to see how to make links here). A few years ago I implemented my own markup language [1] that suits me. [2] My entries are still stored as HTML. That is a deliberate decision so I don’t get stuck with a subpar markup syntax I later come to hate. I can change the markup language (I’ve done it a few times already) and if I need to edit past entries, I can deal with the HTML.

                                                                [1] Sample input file

                                                                [2] For instance, a section for quoting email, which I do quite often. Or to include pictures in my own particular way. Or tabular data with a very light syntax and some smarts to generate the right class on <TD> elements consisting of numeric data (so they’re flush right). Stuff like that.

                                                                1. 2

                                                                Yeah, with Markdown, I often accidentally trigger some of its weird syntax. It needs a bunch of arbitrary escapes, whereas in HTML you can get away with just using &lt;. Otherwise, it is primarily just those <p> tags that get you; the rest are simple or infrequent enough not to worry about.

                                                                Whereas again, with Markdown, it is too easy to accidentally write something it thinks is syntax and break your whole thing.

                                                                  1. 1

                                                                    Yes, I’ve found that with mine as well.

                                                                  2. 1

                                                                    I don’t mean this as an offense, but I did a quick look at your custom markup sample and I hated pretty much everything about it.

                                                                    Since we’re all commenting under a post from someone that is handwriting HTML, I think it goes without saying that personal preferences can vary enormously.

                                                                  Update: I don’t hate the table syntax, and, although I don’t particularly like the quote syntax, having a specific syntax for it is cool and a good idea.

                                                                    1. 1

                                                                    Don’t worry about hating it—even I hate parts of it. It started out as a mash-up of Markdown and Org mode. The current version I’m using replaces the #+BEGIN_blah #+END_blah with #+blah #-blah. I’m still working on the quote syntax. But that’s the thing—I can change the syntax of the markup, because I don’t store the posts in said markup format.

                                                                  3. 2

                                                                  You’re absolutely right, and so is spc476; HTML has a regular syntax. Even if I’ve never seen the <aside> tag, I can reason about what it does. Escaping rules are known and well-defined. If you want to read the text, you know you can just ignore anything inside the angle brackets.

                                                                    Quick: in Markdown, if I want to use a backtick in a fixed-width span, do I have to escape it? How about an italic block?

                                                                    This would all be excusable if Markdown was a WYSIWYG plain-text format (as per Gruber’s later rationalisation in the CommonMark debate). Then I could mix Markdown, Mediawiki, rST and email syntax freely, because it’s intended for humans to read, and humans tend to be very flexible.

                                                                    But people do expect to render it to HTML, and then the ambiguities and flexibility become weaknesses, rather than strengths.

                                                                2. 2

                                                                  ASCIIDoc

                                                                While I agree about the others, I fairly strongly disagree about AsciiDoc (in the asciidoctor dialect). When I converted my blog from md to adoc, the only frequent change was the syntax of links (in adoc, the URL goes first). Otherwise, Markdown is pretty much valid AsciiDoc.

                                                                  Going in the opposite direction would be hard though — adoc has a bunch of stuff inexpressible in markdown.

                                                                I am fairly certain that, purely as a language, adoc is far superior for authoring HTML-shaped documents. But it does have some quality-of-implementation issues. I am hopeful that, after it gets a standard, things on that front will improve.

                                                                  1. 1

                                                                  That’s helpful feedback! It’s lumped with the others in my head because I had such an unhappy time trying to use it when working with a publisher[1] a few years back; it’s possible the problem was the clumsiness of the tools more than the syntax. I’ll have to give it another look at some point!

                                                                    [1] on a contract they ultimately dropped after an editor change, alas

                                                                3. 4

                                                                  Agree, I’ve been using it a ton since 2016 and it has served me well. I think it’s very “Huffman coded” by people who have written a lot. In other words, the common constructs are short, and the rare constructs are possible with embedded HTML.


                                                                  However I have to add that I started with the original markdown.pl (written ~2004) and it had some serious bugs.

                                                                  Now I’m using the CommonMark reference implementation and it is a lot better.

                                                                  CommonMark is a Useful, High-Quality Project (2018)

                                                                It has additionally standardized how HTML is embedded within Markdown, which is useful, e.g.

                                                                  <div class="">
                                                                  
                                                                  this is *markdown*
                                                                  
                                                                  </div>
                                                                  

                                                                  I’ve used both ASCIIDoc and reStructuredText and prefer markdown + embedded HTML.

                                                                  1. 3

                                                                  I tend to agree, but there’s a very sharp usability cliff in Markdown if you go beyond the core syntax. With GitHub-flavoured Markdown, I can specify the language for a code block, but if I write ‘virtual’ inline then there’s no consistent syntax to specify that it’s a C++ code snippet and not something else where the word ‘virtual’ is an identifier and not a keyword. I end up falling back to things like Liquid or plain HTML. In contrast, in LaTeX I’d write \cxx{virtual} and define a macro elsewhere.

                                                                    I wish Markdown had some kind of generic macro definition syntax like this, which I could use to provide inline domain-specific semantic markup that was easier to type (and use) than <cxx>virtual</cxx> and an XSLT to convert it into <code style="cxx">virtual</code> or whatever.
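
                                                                  Something in the spirit of this hypothetical preprocessor, run over the text before the Markdown renderer, is what I have in mind (the \cxx macro, the regex, and the generated markup are all made-up illustrations):

                                                                    import re

                                                                    # Hypothetical macro table: \cxx{virtual} -> semantic inline markup.
                                                                    MACROS = {
                                                                        "cxx": lambda arg: '<code class="language-cpp">' + arg + '</code>',
                                                                    }

                                                                    def expand_macros(text):
                                                                        # Expand \name{arg} occurrences before handing the text to Markdown.
                                                                        def repl(m):
                                                                            name, arg = m.group(1), m.group(2)
                                                                            return MACROS[name](arg) if name in MACROS else m.group(0)
                                                                        return re.sub(r"\\(\w+)\{([^}]*)\}", repl, text)

                                                                    print(expand_macros(r"In C++, \cxx{virtual} is a keyword."))
                                                                    # -> In C++, <code class="language-cpp">virtual</code> is a keyword.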

                                                                    1. 3

                                                                    I agree. What sometimes makes me a bit sad is that Markdown has a feature the others lack: you can write it as a nice-looking plain text document as well, one you might just output on the terminal, for example.

                                                                    It kind of has that nicely formatted plain-text email style, also with the alternative syntax for headings.

                                                                      Yet when looking at READMEs in many projects it is really ugly and hard to read for various reasons.

                                                                      1. 4

                                                                        The biggest contributor there in my experience (and I’m certainly “guilty” here!) is unwrapped lines. That has other upsides in that editing it doesn’t produce horrible diffs when rewrapping, but that in turn highlights how poor most of our code-oriented tools are at working with text. Some people work around the poor diff experience by doing a hard break after every sentence so that diffs are constrained and that makes reading as plain text even worse.

                                                                        A place I do wrap carefully while using Markdown is git commit messages, which are basically a perfect use case for the plain text syntax of Markdown.

                                                                        1. 1

                                                                        I honestly don’t care that much about the diffs? I always wrap at around 88/90 (Python’s black’s default max line length), and diffs be damned.

                                                                        I also pretty much NEVER have auto-wrap enabled, especially for code. I’d rather suffer the horizontal scroll than have the editor lie about where the newlines are.

                                                                    2. 4

                                                                      It’s not just that they’re annoying, computing has largely been about coping with annoyances ever since the Amiga became a vintage computer :-). But in the context of maintaining a support site, which is what the article is about, you also have to deal with keeping up with whatever’s building the static websites, the kind of website that easily sits around for like 10-15 years. The technology that powers many popular static site generators today is… remarkably fluid. Unless you want to write your own static site generator using tools you trust to stay sane, there’s a good chance that you’re signing up for a bunch of tech churn that you really don’t want to deal with for a support site.

                                                                      Support sites tend to be built by migrating a bunch of old pages in the first two weeks, writing a bunch of new ones for the first two months, and then infrequently editing existing pages and writing maybe two new pages each year for another fifteen years. With most tools today, after those first two or three honeymoon years, you end up spending more time just keeping the stupid thing buildable than actually writing the support pages.

                                                                      Not that writing HTML is fun, mind you :(.

(Please don’t take this as a “back in my day” lament. A static site generator that lasts 10 years is doable today and really not bad at all – how many tools written in 1992 could you still use in 2002, with good results, not as an exercise in retrocomputing? It’s not really a case of “kids these days ruined it” – it’s just that time scales are like that ¯\_(ツ)_/¯ )

                                                                      1. 1

                                                                        Heh. I was using an editor written in 1981 in 2002! [1] But more seriously, I wrote a static site generator in 2002 that I’m still using (I had to update it once in 2009 due to a language change). On the down side, the 22 year old codebase requires the site to be stored in XML, and uses XSLT (via xsltproc) to convert it to HTML. On the plus side, it generates all the cross-site links automatically.

                                                                        [1] Okay, it was to edit text files on MS-DOS/Windows.
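
For anyone curious what that pipeline looks like without shelling out to xsltproc, here’s a hedged equivalent in Python using the third-party lxml package; the file names are placeholders:

```python
from lxml import etree

transform = etree.XSLT(etree.parse("site.xsl"))   # hypothetical stylesheet
page = etree.parse("page.xml")                    # hypothetical source page
with open("page.html", "wb") as out:
    out.write(etree.tostring(transform(page), pretty_print=True))
```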

                                                                      2. 2

I find that writing and editing XML or HTML isn’t so much of a pain if you use some kind of structural editor. I use tagedit in Emacs along with a few snippets / templates, and IMO it’s pretty nice once you get used to it.

                                                                      1. 8

I remember my first contact with SELinux around 2007, when I wanted to use Xen virtualization on Fedora.

I put the ISO images and system ‘disks’ in my /home/vermaden dir, pointed the Xen configs at those files, and tried to start the machine.

I could not. I only got Permission Denied errors. Nothing more. I checked all the chmod(8)/chown(8) permissions but still no luck.

After losing half a day searching for a solution on the net, I found out that the default SELinux policy requires all of these files to live under the /var/lib/xen/images path … and the SELinux Permission Denied error tells you NOTHING about this. It’s just shit. I avoid it like all the other shitty ‘technologies’.

                                                                        1. 14

The system call that opens the file has no way to report why access was denied; it can’t distinguish ordinary permissions from MAC. journald has made this a bit easier by displaying the program’s error log next to the audit messages. However, even if you realize that it’s SELinux, there’s still no easy path or documentation on how to properly resolve the problem.

                                                                          1. 14

                                                                            There’s a related problem, which is the inverse of your specific case and which has been the root cause of some Chrome vulnerabilities on Linux.

                                                                            • If you get the policy wrong by making it too restrictive (your case), debugging the root cause is hard.
                                                                            • If you get the policy wrong by making it too permissive, you don’t get error messages of any kind, you just get security vulnerabilities.

                                                                            The root cause of both of these is that the policy is completely decoupled from the application. SELinux is designed as a tool for system administrators to write policies that apply globally to software that they’re managing but it’s used as a tool for software to enforce defence-in-depth sandboxing policies. Capsicum is a much better fit for this (the Capsicum code for the Chrome sandbox was about a 10th of the SELinux code, failed closed, and was easier to debug) but the Linux version was never upstreamed.

                                                                            1. 11

                                                                              But as the article expresses, system administrators typically don’t feel in control of SELinux policies. I think this is an agency problem. The developers are most familiar with the needs and potential vulnerabilities of a program. The administrators are most aware of the requirements the users of the software have. But the policies are written by the distributor (Fedora/Red Hat), who is aware of neither.

                                                                              The usability of SELinux isn’t great either (and as a developer I much prefer a capabilities-based system), but I think that’s almost secondary to the way it is used in practice.

                                                                              1. 7

And app documentation generally says nothing about the environment the app is expected to run in; it says “Linux” but never goes into more depth: which directories it reads from and writes to by default, which ENV variables it requires, and so on.

None of that is ever documented in any program I’ve found (that I remember). Shoot, just getting the list of ports a network app listens on, for firewall rules, can be like pulling teeth sometimes.

                                                                                It’s super hard to write a reasonable SELinux policy without this information, so you run stuff like audit2allow and just hope for the best, and then randomly flip on extra permissions here and there until it seems to run, call it good and move along. To do it right you need to have developer experience and sysadmin experience and be willing to do deep dives for every installed instance.

                                                                                I’m a fan of Capsicum, and the pledge stuff that OpenBSD is doing, as at least the developers have a much better chance of getting it right @ runtime.

                                                                                1. 5

                                                                                  Another thing developer-managed capabilities facilitate is dynamically dropping them at runtime. The administrator has disabled the admin interface? Drop network privileges. Did you finish reading your config file? The developer knows the process never needs to read another file again.

                                                                                  On the other hand, these policies are not easy to audit. They sit embedded in the code, opaque to administrators and auditors. Knowing what they are requires you either trust the developer, or have access to the source code.

                                                                                  SELinux is a good policy system for an ecosystem where there’s an adversarial relation between the people implementing the software, and the people who run it. I don’t think it’s a natural fit for most FLOSS operating systems.
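
As a concrete example of that drop-privileges-as-you-go pattern, here’s a hedged sketch that calls OpenBSD’s pledge(2) through ctypes; it only runs on OpenBSD, and the config path is made up:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def pledge(promises: str) -> None:
    """Restrict this process to the named promise sets (OpenBSD only)."""
    if libc.pledge(promises.encode(), None) != 0:
        raise OSError(ctypes.get_errno(), "pledge failed")

config = open("/etc/example.conf").read()  # hypothetical config file
pledge("stdio inet")  # config is read: keep stdio and sockets, drop file opens
```

An auditor still has to read the source to know the policy is there, which is exactly the trade-off described above.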

                                                                                2. 3

So to summarize: no one likes SELinux because it’s hard for everyone.

                                                                              2. 5

                                                                                I only got Permission Denied errors. Nothing more.

This is the typical “first contact” with SELinux. You might be super well versed in Linux/Unix security, with years of experience across several distros, but if you’ve never used a system with SELinux (i.e. Red Hat), this is what you’ll see, and it’s absolutely maddening. None of your regular Linux skills and knowledge transfer to an SELinux-enabled Linux, and the errors make no sense. And to ask someone to spend weeks or months studying this crap that’s typically only used in the context of one distro? I don’t think so.

                                                                                1. 1

Try putting some libvirt virtual machine images outside of /var/lib/libvirt/images if you have AppArmor enabled (for example on Debian). Great fun ahead. I can understand that pain; it’s not only SELinux :/

                                                                                1. 1

Another concern: will the identifier ever be visible to (non-technical) end users? In most languages that use the Latin alphabet, vulgar words and slurs tend to be brief, so the chance of a randomly generated NanoID identifier sounding inappropriate in some language seems decidedly non-zero.

Of course, as I understand it, Chinese speakers occasionally use Arabic numerals to write words that sound like the equivalent numbers spoken in Chinese, so maybe there’s no ID generation system that’s completely safe.

                                                                                  1. 2

                                                                                    Could pare down the set of letters like what Multics did.

                                                                                    1. 1

This is a concern I have too. The thing @calvin refers to says this about Multics:

reduced the alphabet to sixteen characters to eliminate the possibility of obscenities: all vowels were removed, “v” because you can use it to look like an “u”, and “f”, of course, and “y” because it’s like a vowel, and 2 others.

But what were the two others?

Anyway, the above is one approach: alphabet limiting. The other approach, given randomly generated IDs or keys, is to just filter them against a list of words you don’t want; since the IDs are random, you can always generate a new one.
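
For the filter-and-regenerate approach, a minimal Python sketch; the alphabet, ID length, and the tiny blocklist here are assumptions for illustration, and a real deployment would use a proper multi-language wordlist:

```python
import secrets

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"
BLOCKLIST = {"ass", "fck"}  # hypothetical; substitute a real wordlist

def safe_id(length: int = 12) -> str:
    """Generate a random ID, retrying until no blocklisted word appears."""
    while True:
        candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if not any(word in candidate for word in BLOCKLIST):
            return candidate

print(safe_id())  # e.g. 'q7x2m0kd41pz'
```

Since each retry is independent and blocklist hits are rare, the expected number of retries stays very close to one.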

                                                                                      1. 1

Is this a serious concern, or just being overly cautious? Has it ever caused real trouble? (Unless of course it gets blown out of proportion.)

                                                                                      1. 1

                                                                                        Some thoughts to that:

                                                                                        Universal Clipboard

That’s an anti-feature to me that KDE Connect may provide, but I also disable it on Windows.

                                                                                        Safari Tab Groups

                                                                                        Don’t know, I’ve been a tab group/managing/.. user, but since I’ve migrated away and just close everything at the end, I’m a much happier person. AFAIK Opera does some of this.

                                                                                        AirPlay

                                                                                        Points for that, though it’s still a proprietary system that costs extra money on every device that can support it. Let’s see about multi-DPI after wayland finally settled.

                                                                                        Apple Maps

                                                                                        Google maps/OSM?

                                                                                        You can still boot Linux on them

Well, you can, but you won’t ever upgrade any drivers; for all we know you’ll have to dual-boot until the end of time.

                                                                                        Costs

I did actually think about buying an Apple laptop after the recent M1 success. Too bad they do actually ship only 8GB of hard soldered RAM for everything under 2000€. So you get a beast of a processor, but don’t try opening too many tabs (electron apps) or actually compiling on it, because then you’ll trash your hard soldered SSD over time? So I went with a Lenovo Yoga; that way I’m also sparing another 800+€ for an Apple tablet or a ton of adapters. It’s a shame, to be honest.

                                                                                        1. 10

                                                                                          I’ve migrated away and just close everything at the end, I’m a much happier person

                                                                                          I respect that, but for my use cases, I need this stuff. I have a lot of long-standing research and projects going. For example, in 2021 I was researching cancer treatments for a family member so they could make an informed decision about their options. I have a few improvements I want to contribute to VirtualBox, and the relevant docs are in a tab group, safe and waiting for me when I’m ready to tackle them. Et cetera.

                                                                                          after wayland finally settled.

                                                                                          “Initial release: September 30, 2008”

                                                                                          Not sure if Wayland reminds me more of Duke Nukem Forever (15 years of development hell later, it’s released but a disappointment) or Windows Vista (“WinFS! Palladium! Avalon! Well ok, none of those, but isn’t that new theme pretty!”).

                                                                                          Google maps/OSM?

                                                                                          I did mention Marble in my article. The point was to have a desktop app that performed these functions instead of relying on a Web site. I really dislike using applications in a browser and should probably write that up later.

                                                                                          you’ll have to dual-boot

                                                                                          I’m not sure if by “upgrade any drivers” you meant firmware. Asahi is looking at making firmware updates possible from Linux since they are packaged in a standard format (.scap file) and you can already upgrade them manually using a Mac CLI command (bless). Otherwise there is no reason you have to run the Mac OS on an M1 Mac, Asahi can be the only OS on the drive (though right now you probably wouldn’t like it).

                                                                                          8GB

                                                                                          …is enough for my Mum to do eight Safari tabs + Photoshop + Telegram on her MBA, but I concede it’d be really nice to have slotted RAM. Unfortunately there are reasons they removed them; I remember the PBG4 slot failure fiasco, and it does drive up cost, thermals, dimensions, etc. Not that I like the idea, but I do understand the point.

                                                                                          1. 6

                                                                                            I have a lot of long-standing research and projects going

Bookmarks seem to be the right feature for this use-case.

                                                                                            1. 11

                                                                                              Bookmarks don’t preserve history. It is possible to use bookmarks in a similar fashion, but I have never been as productive with bookmarks as I am with tab groups.

                                                                                              1. 4

Preach! I find I think in terms of space. Tabs exist in space. Bookmarks do not. I can find a container or a tab, but bookmarks? Five years later, as I’m cleaning out my bookmarks, I remember how useful that one would have been.

                                                                                                1. 1

                                                                                                  I use Pinboard, with the archiving feature; I can search by tags I’ve applied or some text within the documents. It’s pretty useful!

                                                                                                2. 3

For some reason, as soon as tabs became a thing I basically stopped using bookmarks completely. I feel like what I really want is a queue of “this looks interesting” things that slowly dies off if I don’t look at it… kind of like tabs: they stay open until I get so annoyed by all of them that I just close them, but they work great for keeping around the stuff I might want to read a bit later.

                                                                                                  1. 1

                                                                                                    I use Reading List for that, but yeah, before I had a Reading List this was another use case for tabs.

                                                                                              2. 2

                                                                                                Which Wayland implementation is being discussed?

                                                                                                I’ve been using one for years and I’m pretty happy with it.

                                                                                                Between TV, home display, work display, and internal display, there’s 4 different DPIs/scaling factors going on, and it seems to work just fine?

                                                                                                1. 5

                                                                                                  Wayland implementations are at that critical stage between “works on my machine” and “works on everyone’s machine”. Mine’s pretty well-behaved, non-nvidia, three year-old hardware, and all Wayland compositors I’ve tried break in all sorts of hilarious ways. Sway is basically the only one that’s usable for any extended amount of time but that’s for hardc0re l33t h4x0rz and I’m just a puny clicky-clicker.

                                                                                                2. 1

Wayland seems to be coming to the next stable Kubuntu release, which makes it “production ready” for me. But I can totally understand the sentiment (I’m sitting at a full-HD + 4K screen pair on Windows for the multi-DPI scaling).

Fair point for desktop-app Maps; I guess I’m just used to the web version now.

Regarding driver updates you’re right, I misremembered something. What does annoy me, though, is that you have to use a bunch of binary blobs that you have to live-download from Apple (or, in a future Asahi release, first put on a USB stick). That feels like the driver blobs on Android custom ROMs and isn’t necessary on any of my Intel laptops.

For my daily workload 8GB of RAM isn’t enough, although I’m doing more than office/browsing.

                                                                                                3. 9

That’s an anti-feature to me that KDE Connect may provide, but I also disable it on Windows.

                                                                                                  Well, it’s a useful feature for a lot of people that use multiple apple devices, including myself.

                                                                                                  8GB of hard soldered RAM

                                                                                                  Soldered on RAM is likely the future everywhere, not due to price, but due to engineering and physical constraints. If you want to increase performance, it has to come closer to the die.

                                                                                                  So you get a beast of a processor, but don’t try opening too many tabs (electron apps) or actually compiling on it, because then you’ll trash your hard soldered SSD over time?

                                                                                                  This seems to be a bit of a straw man. I haven’t had any issues with swap over the last year and a half of daily driving a MacBook air. Admittedly, it’s 16GB rather than 8GB.

                                                                                                  I agree with the rest of your points, for the most part.

                                                                                                  1. 1

                                                                                                    it’s 16GB rather than 8GB

                                                                                                    And that’s my point. I’m fine with 16GB, but 8 isn’t enough if I open my daily workload. (Note though that I was apparently wrong, I’d have gotten a decent machine for 1500€ apparently.)

                                                                                                  2. 4

                                                                                                    8GB of hard soldered RAM

                                                                                                    Trying to compare on specs like that misses the forest for the trees IMO, the performance characteristics of that RAM are so different to what we’re used to, benchmarks are the only appropriate way to compare. The M1 beats my previous 16GB machine in every memory-heavy task I give it, if non-swappable RAM is the price I pay for that, I’ll gladly pay it.

                                                                                                    1. 1

That’s very interesting. Looking at my usual memory usage, an IDE + VM + browser easily go over 8GB of RAM. Then add things like rust-analyzer or AI completion and you’re at 12GB. I’m not sure swapping handles that well.

                                                                                                    2. 3

                                                                                                      Too bad they do actually ship only 8GB of hard soldered RAM for everything under 2000€.

That’s not true. A MacBook Air with the M1, 16GB of RAM and the base 256GB SSD costs 1.359€. Selecting a more reasonable 1TB SSD will set you back 1.819€. You can always buy a 2nd-choice/refurbished model for 100+€ less. Also, one should consider that the laptop will hold a lot of its value and can be sold easily in a couple of years.

                                                                                                      1. 2

                                                                                                        only 8GB of hard soldered RAM for everything under 2000€

                                                                                                        I’m shocked it’s that expensive in Europe. My M1 Air with maxed out GPU (8-core) and RAM (16 GB), as well as 1 TB SSD, was only $1650 (~1500€).

                                                                                                        1. 7

                                                                                                          It’s not that expensive. E.g. in Germany, the prices are currently roughly:

                                                                                                          1. 2

Wait, what? I did go to Alternate (which is also a certified repair shop) and I looked on apple.com and couldn’t find that. Even now when I go to apple.com I get a listing saying “up to 16GB”, then click on “buy now” and get exactly two models with 8GB of RAM. Oh wait, I have to change to 14 inches for that o_O

Anyway, if I could, I’d edit my comment, because apparently I wasn’t searching hard enough…

Edit: for 16GB RAM and a 512GB SSD you’re at a minimum of 1450€ (1.479€ on Alternate), which is still far too much in my opinion. And 256GB won’t cut it for my workload, sadly.

                                                                                                            1. 3

Wait, what? I did go to Alternate (which is also a certified repair shop) and I looked on apple.com and couldn’t find that. Even now when I go to apple.com I get a listing saying “up to 16GB”, then click on “buy now” and get exactly two models with 8GB of RAM. Oh wait, I have to change to 14 inches for that o_O

You click Buy, then select the base model with 8GB RAM, and then you can configure the options: 8 or 16GB of RAM, 256GB of storage all the way up to 2TB, the keyboard layout, etc. No need to change to the 14” Pro.

                                                                                                              For 16GB RAM, 512GB SSD you’re at a minimum of 1450 (1.479 on alternate), which is still far too much in my opinion.

                                                                                                              You are moving the goal posts. You said that a MacBook with 16GB costs more than 2000 Euro, while it actually starts at 1200 Euro.

                                                                                                              which is still far too much in my opinion.

                                                                                                              Each to their own, but MacBooks retain much more value. I often buy a new MacBook every 18-24 months and usually sell the old one at a ~400 loss. That’s 1.5-2 years of a premium laptop for 200-300 Euro per year, which is IMO a very good price. If I’d buy a Lenovo for 1200-1300 Euro, it’s usually worth maybe 300-400 Euro after two years.

                                                                                                              1. 3

                                                                                                                The trick is to buy these Lenovos (or some other undesirable brand) second hand from the people who paid for the new car smell, and get 5-8 more years out of them.

                                                                                                                1. 5

                                                                                                                  The trick is to buy these Lenovos (or some other undesirable brand) second hand from the people who paid for the new car smell, and get 5-8 more years out of them.

                                                                                                                  I can understand that approach, it is much more economically viable. But at least on the Apple side of things, there have been too many useful changes the last decade or so to want to use such an old machine:

                                                                                                                  • Retina display (2012)
                                                                                                                  • USB-C (2016), they definitely screwed that up by removing too many ports too early, but I love USB-C otherwise: I can charge through many ports, get high-bandwidth Thunderbolt, DP-Alt mode, etc.
                                                                                                                  • External 4K@60Hz screens (2015?)
                                                                                                                  • Touch ID (2016)
                                                                                                                  • T2 secure enclave (2017)
                                                                                                                  • M1 CPU (2020)
                                                                                                                  • XDR display (2021)

                                                                                                                  These changes have all been such an improvement of computing QoL. Then there are many nice incremental changes, like support for newer, faster WiFi standards.

                                                                                                                  1. 2

                                                                                                                    So very much this.

                                                                                                                    My approach in recent years has been:

                                                                                                                    Laptops are old Thinkpads, upgraded with more RAM and SSDs. Robust, keyboards are best of breed, screens & ports are adequate, performance is good enough.

                                                                                                                    Phones are cheap Chinese things, usually for well under £/$/€ 200. Bonuses: dual SIM, storage expansion with inexpensive µSD card, headphone socket; good battery life.

                                                                                                                    Snags: fingerprint sensors and compass are usually rubbish; 1 OS update ever; no 3rd-party cases or screen protectors. But I don’t mind replacing a £125-£150 phone after 18mth-2Y. I do mind replacing a £300+ phone that soon (or if it’s stolen or gets broken).

                                                                                                                    1. 2

                                                                                                                      I think we’re the same person. Phones seem to last about two years in my hands before they develop cracks and quirks that make them hard or impossible to use, regardless of whether it’s a “premium” phone or the cheapest model.

I wish this weren’t the case, but economically, the cheapest (‘disposable’) Chinese phones offer the best value for money, even better than most realistic second-hand options that can run LineageOS.

                                                                                                                      1. 2

                                                                                                                        :-D

                                                                                                                        Exactly so. I have had a few phones stolen, and I’ve broken the screens on a few.

                                                                                                                        It gives me far fewer stabbing pains in the wallet to crack the glass on a cheapo ChiPhone than it did on an £800 Nokia or even a £350 used iPhone. (Still debating fixing the 6S+. It’s old and past it, but was a solid device.)

                                                                                                                        My new laptop from $WORK is seriously fast, but it has a horrible keyboard and not enough ports, and although it does work with a USB-C docking station, it looks like one with the ports I need will cost me some 50% of the new cost of the laptop itself. >_<

                                                                                                                        1. 1

I just bought a refurbished iPhone SE1 for 100€ to replace my old SE, which had a cracked screen, a dead battery and a glitchy Lightning port. Fixing all that would probably have cost as much. The SE still runs the latest iOS version and has a headphone jack.

                                                                                                                    2. 1

                                                                                                                      Thanks for the help with that website.

                                                                                                                      You are moving the goal posts

My price error does lower the bar of entry a lot, true. I could just stop writing now and pretend that 1500 would be the ideal price and that I’m regretting not buying it. But 1500 is still a lot of money when you can get something very similar, with more features, for less. I was able to buy a convertible from Lenovo for 1200 that has more ports, a replaceable SSD, comes with a high-end Ryzen, and has supported Linux (and Windows) since day 1 (so it is not a glorified Android tablet).

                                                                                                                      I often buy a new MacBook every 18-24 months and usually sell the old one at a ~400 loss.

I’m running phones for 8 years, laptops for 6+ years, desktops for 10 (with some minor upgrades). I wouldn’t want to invest that much time into buying a new one and selling the old one. But I can see your point; you’re essentially leasing Apple hardware.

                                                                                                                      it’s usually worth maybe 300-400 Euro after two years

If you’re trying to always buy the newest thing available, fair. I’m trying to run these things for a long time, because I hate switching my setup all the time and I like being environmentally friendly.

                                                                                                                      Each to their own

I agree, but now I can see where the difference in our preferences comes from, and I think that’s a worthwhile insight.

                                                                                                              2. 1

Note, though, that I’m not really commenting on the OS aspect; it can be Linux or Windows, I use both equally. And if it weren’t for those 8GB of RAM, I’d have bought an Apple laptop last week.

                                                                                                              1. 3

The major JS engines also do the latin-1 optimization, partly for space, but also for performance.

                                                                                                                1. 2

                                                                                                                  Python as of 3.3 does something similar: all strings in Python have fixed-width storage in memory, because the choice of how to encode a string in memory is done per-string-object, and can choose between latin-1, UCS-2, or UCS-4.
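
A quick way to see this per-string choice (PEP 393, the “flexible string representation”) is to compare sizes; the exact header overheads vary by CPython version, but the per-character cost comes out to roughly 1, 1, 2, and 4 bytes:

```python
import sys

samples = {
    "ascii":  "a" * 1000,   # 1 byte per char
    "latin1": "é" * 1000,   # still 1 byte per char
    "ucs2":   "λ" * 1000,   # 2 bytes per char (BMP, non-latin-1)
    "ucs4":   "😀" * 1000,  # 4 bytes per char (astral plane)
}
for name, s in samples.items():
    print(f"{name}: {len(s)} chars, {sys.getsizeof(s)} bytes")
```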

                                                                                                                  1. 7

Before 3.3, Python had to be compiled for either UCS-2 or UCS-4, leading to hilarious “it works on my machine” bugs.

                                                                                                                    And let’s not forget MySQL, which has a utf8 encoding that somehow only understands the basic multilingual plane, and utf8mb4, which is real utf-8.

                                                                                                                    1. 7

                                                                                                                      And let’s not forget MySQL, which has a utf8 encoding that somehow only understands the basic multilingual plane, and utf8mb4, which is real utf-8.

                                                                                                                      The more I hear about MySQL the more scared I get. Why is anyone using it still?

                                                                                                                      1. 5

                                                                                                                        Because once upon a time it was easier than PostgreSQL to get started with, and faster in its default, hilariously bad configuration (you could configure it not to be hilariously bad, but then its performance was worse).

                                                                                                                        And then folks just continued using it, because it was the thing they used.

                                                                                                                        I still cringe when I see a project which supports MySQL, or worse only MySQL, but it is a mostly decent database today, if you know what you are doing and how to avoid its pitfalls.

                                                                                                                        1. 1

                                                                                                                          I still cringe when I see a project which supports MySQL, or worse only MySQL, but it is a mostly decent database today, if you know what you are doing and how to avoid its pitfalls.

I’ve probably only heard of MySQL’s warts and footguns, and little of its merits. On the other hand, I’ve self-hosted WordPress for a great number of years, so It Has Worked On My Machine(tm).

                                                                                                                        2. 4

Because you’re only hearing about the warts; it’s legacy, now-deprecated stuff that was kept around for all the people who don’t want their existing systems broken. Otherwise it works perfectly fine.

Edit: you could probably ask the same about Windows, looking at WTF-8.

                                                                                                                          1. 2

                                                                                                                            Legacy and/or confusion

                                                                                                                            1. 1

I’m no fan of MySQL, but Postgres also has some awful warts. Today I found a query that took 14s as planned by the planner, or 0.2s if I turned off nested-loop joins. There’s no proper way to hint the planner for just that join; I have to turn off nested loops for the whole query.

                                                                                                                            2. 3

                                                                                                                              Another thing about pre-3.3 Python is that “narrow” (UCS-2) builds broke the abstraction of str being a sequence of code points; instead it was a sequence of code units, and exposed raw surrogates to the programmer (the same way Java, JavaScript, and other UTF-16-based languages still commonly do).
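
For a concrete picture of what those languages (and narrow Python builds) expose, here’s a sketch in modern Python that reconstructs the UTF-16 code units for an astral-plane character:

```python
s = "😀"                              # U+1F600, outside the BMP
units = s.encode("utf-16-le")         # the code units UTF-16 languages see
print(len(s))                         # 1 -> code points (modern Python)
print(len(units) // 2)                # 2 -> code units (Java/JS .length)
print([hex(int.from_bytes(units[i:i + 2], "little"))
       for i in range(0, len(units), 2)])
# ['0xd83d', '0xde00']: the raw surrogate pair a narrow build handed you
```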

                                                                                                                              1. 2

                                                                                                                                basic multilingual

It’s still at most 3 bytes per character, making for even more fun, as it first looks like it’s working.
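
A minimal Python check of where that 3-byte limit bites; anything in the BMP fits, but astral-plane characters need 4 bytes and only utf8mb4 can store them:

```python
for ch in ("é", "€", "漢", "😀"):
    print(ch, len(ch.encode("utf-8")), "bytes")
# é 2 bytes, € 3 bytes, 漢 3 bytes -> fine in MySQL's "utf8"
# 😀 4 bytes                       -> needs utf8mb4
```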

                                                                                                                              2. 2

                                                                                                                                Interesting. Why did they not choose UTF-8 instead of latin-1?

                                                                                                                                1. 2

The idea is to only use it for strings that can be represented with one-byte characters, so UTF-8 doesn’t gain you anything there. In fact, UTF-8 can only represent the first 128 characters with one byte, whereas latin-1 obviously represents all 256 characters in one byte (although whether CPython in particular still uses latin-1 for \u0080-\u00FF, I’m not sure; it’s a little more complicated internally due to compat with C extensions and such).

                                                                                                                                  1. 2

                                                                                                                                    Like the other commenter said: efficiency. UTF-8 (really, ASCII at that point) in the one-byte range only uses 7 bits and so can only encode the first 128 code points as one byte, while latin-1 uses the full 8 bits and can encode 256 code points as one byte, giving you a bigger range (for Western European scripts) of code points that can be represented in one-byte encoding.

                                                                                                                                    1. 1

Because it’s easier to make UTF-16 from latin-1 than from UTF-8. Latin-1 maps 1:1 to the first 256 code points, so you just insert a zero after every byte. UTF-8 requires bit-twiddling.

And these engines can’t just use UTF-8 for everything, because constant-time indexing into UTF-16 code units (and surrogates) has been accidentally exposed in public APIs.
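
The zero-padding trick is easy to verify in a few lines of Python (any text drawn from the first 256 code points works):

```python
s = "déjà vu"
widened = bytes(b for byte in s.encode("latin-1") for b in (byte, 0))
assert widened == s.encode("utf-16-le")  # latin-1 plus zero padding IS UTF-16
```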

                                                                                                                                1. 9

                                                                                                                                  I recall someone pointing me at a study a long time ago that looked at people who self-identified as lucky. The biggest correlating factor between them was that they’d taken opportunities when presented. This ties in very well with that: if you have no skills in an area, then you will ignore opportunities in that space. If you have a small amount of skill in the area then you’re in a better position to take an opportunity that would improve those skills. Do that a few times and you have a very broad set of skills and now you’re able to take a lot more opportunities.

                                                                                                                                  1. 2

                                                                                                                                    I recall someone pointing me at a study a long time ago that looked at people who self-identified as lucky. The biggest correlating factor between them was that they’d taken opportunities when presented.

                                                                                                                                    Did it also look at people who self-identified as unlucky? Because this kind of study would obviously be very susceptible to survivorship bias.

                                                                                                                                    1. 3

                                                                                                                                      It did, though I don’t remember in detail. One of the interesting things that I do remember is that people who identified as lucky weren’t successful significantly more often, they’d quite often take an opportunity, fail, and move onto the next thing. One of the important things there is the ability to fail, which is probably the best definition of privilege that I’ve seen: the ability to fail and not have it significantly impact your future in a negative way.

                                                                                                                                      1. 2

                                                                                                                                        Humans expect too much of themselves. The best solitary hunters are animals like lions or peregrine falcons, who only succeed approximately one in five hunts or one in two dives respectively. Even in groups, animals like wild dogs do not succeed on every hunt. Privilege isn’t in failing, or in recovering from failure, but in the unrealistic expectation of unbroken strings of success.

                                                                                                                                  1. 2

                                                                                                                                    Coding interviews are necessary because there are too many non-coders out there pretending to be coders. You need to make the candidates write code.

Having said that, a 1-hour coding interview in the style of “show me that you can fetch and show some data from a public API” is probably the right size, as it exercises enough of the day-to-day practice. Anything beyond that (especially take-home exercises) is stretching it.
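
For a sense of scale, here’s a sketch of that kind of exercise in Python; the endpoint (GitHub’s public user API) and the printed fields are just example choices:

```python
import json
import urllib.request

def fetch_user(login: str) -> dict:
    """Fetch a user record from GitHub's public API."""
    url = f"https://api.github.com/users/{login}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    user = fetch_user("octocat")
    print(f"{user['login']}: {user.get('name')}, {user['public_repos']} public repos")
```

An hour is enough to write this, handle an error case or two, and talk through the choices, which is the point.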

                                                                                                                                    1. 10

                                                                                                                                      Coding interviews are necessary because there are too many non-coders out there pretending to be coders.

                                                                                                                                      I have run many, many interviews at multiple companies. I have yet to encounter the mythical “non-coder” trying to trick the company into hiring them. As far as I can tell, the whole idea is based on a literal vicious cycle where anyone who fails an interview is deemed to be utterly unable to do any coding, and that’s used as justification for ratcheting up the difficulty level, which results in more people failing and being deemed unable to code, which justifies ratcheting up the difficulty…

                                                                                                                                      And that’s not just my personal opinion/anecdote: this online interviewing platform has a decent sample size and says:

                                                                                                                                      As you can see, roughly 25% of interviewees are consistent in their performance, but the rest are all over the place. And over a third of people with a high mean (>=3) technical performance bombed at least one interview.

In the interviews they didn’t do well in, they were probably labeled as unqualified, incapable, etc. – but in the rest of their interviews they were top performers.

                                                                                                                                      1. 7

                                                                                                                                        I have run many, many interviews at multiple companies. I have yet to encounter the mythical “non-coder” trying to trick the company into hiring them.

                                                                                                                                        Lucky you. I haven’t run that many interviews, and so far I can remember a few cases:

• A junior finishing a degree from a degree mill, trying to pass a 5 million banking system off as his own. Upon further investigation, his contribution to the code base was around 20 changes to README files.
• An HR woman wanting to join because HR is so easy that she expected to pick up programming just by working with us.
• Applicant-provided code that looked good, but the code written in the on-site interview was very different (way lower quality). Upon further investigation, it turned out the provided sample was written by her boss, not actually by the applicant.
• 20 years of experience on the CV, but able to do only what the IDE offers as buttons/dropdowns. Unable to debug the SOAP authentication in their company’s API because there is no button for it.

Sure, some of these could work as programmers if you lower the requirements enough. But without a technical interview where they get to write some code, or explain some code they wrote in the past, you wouldn’t find out.

And yes, I also have hired people without degrees that were able to demo me something they built. I had to teach them later working in teams, agile, git, github and other stuff, but they still did good Node.js and Mongo work.

                                                                                                                                        It’s very hard to completely bomb an interview if you can write some code. You can write wrong abstractions, bad syntax, no tests, and that is still so much better than not being able to open an IDE, or not being able to open a terminal and type ls.

                                                                                                                                        1. 6

have hired people without degrees who were able to demo something they built for me. I later had to teach them about working in teams, agile, git, GitHub and other things,

This doesn’t seem degree-related, since degree work won’t teach you how to work in teams, or how to use agile, git, or GitHub either.

                                                                                                                                          1. 1

                                                                                                                                            I have run many, many interviews at multiple companies. I have yet to encounter the mythical “non-coder” trying to trick the company into hiring them.

I’ve seen it happen; I’ve worked with someone who managed to stay at a company for several years even though it was an open secret that he couldn’t code without pairing (he was tolerated because he was likeable, and took care of a lot of the non-programming-related chores).

                                                                                                                                            I’ve also seen people strut into an interview with great confidence, only to write code that didn’t remotely resemble the syntax of the language they were supposedly proficient in. If this was due to anxiety, they certainly didn’t show it.*)

                                                                                                                                            I don’t think it’s common enough to warrant all the anxiety about “fake programmers”, but it’s not completely imaginary.

                                                                                                                                            *) I live in a country where software shops are among the few white-collar employers that’ll happily accept people who don’t speak the native language. That might mean we get more applicants for whom software development wasn’t exactly their life’s calling.

                                                                                                                                            1. 1

                                                                                                                                              work at a company for several years, even though it was an open secret that they couldn’t code without pairing

                                                                                                                                              I would hope that after years of pairing they started being able to code… Otherwise I blame their pairing partners.

                                                                                                                                              1. 1

                                                                                                                                                I don’t know how the situation arose, but by the time I got there, he was already known as the guy who did the “chores”, and avoided pairing to - presumably - reduce the chance of being found out. This all changed when new management came in, which implemented some - very overdue - changes.

                                                                                                                                                Maybe the takeaway should be that there might be room for “non-coders” in development teams, but the situation as it was can’t have been easy on anybody.

                                                                                                                                          2. 3

                                                                                                                                            Coding interviews are necessary because there are too many non-coders out there pretending to be coders. You need to make the candidates write code.

                                                                                                                                            I can’t speak for all jurisdictions, but this feels a little overblown to me, at least in the Australian market.

If someone manages to sneak through and it turns out they have no experience and can’t do the job, you can dismiss them within the 6-month probation period.

                                                                                                                                            Yes, you would have likely wasted some time and money, but it shouldn’t be significant if this is a disproportionately small number of candidates.

                                                                                                                                            1. 2

The thing is, the culture in the US is quite different: some companies will fire people willy-nilly, which is bad for morale, and other companies are very hesitant to fire anyone for incompetence, because by firing someone you leave them and their family without access to health care (and it’s bad for morale). Either way, you do actually want to make good decisions most of the time when hiring.

                                                                                                                                          1. 8

                                                                                                                                            Designing, building, managing, and growing an Open Source project demonstrates all the skills required of a Senior Software Engineer. That means maintainers can access $150k–300k++/year compensation packages.

Just between us, that’s 3x-6x the going rate east of Germany. Even more so east of Poland.

A lot of open source apparently happens because in the US there is enough capital for people to take very long sabbaticals and/or for companies to take on a couple more people than strictly necessary. I am very happy about that, because it means people around me here in Czechia can produce high-quality solutions for local needs even on a measly $40k.

On the other hand, most local talent gets vacuumed up by transnationals who pay $50k or even $60k and then use it to deliver services to the US market very inefficiently, slaving away on legacy systems with zero passion, just to pay for a very expensive mortgage.

If you really want to make sure the open source you so desperately depend on survives, you could potentially sponsor 3-6x the number of well-qualified maintainers in central/eastern Europe for the same price.

                                                                                                                                            1. 7

people around me here in Czechia can produce high-quality solutions for local needs even on a measly $40k.

                                                                                                                                              Yes, life in Czechia is also cheaper than in most US cities, and there are well educated engineers everywhere. :)

If you really want to make sure the open source you so desperately depend on survives, you could potentially sponsor 3-6x the number of well-qualified maintainers in central/eastern Europe for the same price.

The idea isn’t to take over projects with cheap labor, but to fund the people who created something in the first place.

                                                                                                                                              1. 3

The idea isn’t to take over projects with cheap labor

Which feeds back into the issue. US / western EU engineers will produce some more open source much more expensively, while engineers elsewhere will waste their talents cheaply taking care of legacy software stacks for DHL or Accenture clients.

                                                                                                                                                Also:

                                                                                                                                                who created something in the first place

                                                                                                                                                There is no need to fund anyone after they did something, is there? For that you could have some sort of award system. Allocate some funds for projects that especially helped you in a given year and then let your developers vote on distribution.

                                                                                                                                                1. 4

Software is never done; it needs regular maintenance, hence the funding suggestion.

                                                                                                                                                  1. 2

                                                                                                                                                    Which feeds back to the issue. US / western EU engineers will produce some more open source much more expensively while engineers elsewhere will waste their talents cheaply taking care of legacy DHL or Accenture clients’ software stacks.

                                                                                                                                                    Well. Cheap labor isn’t about software or open-source. Open-source software developers are just realizing that they’ve been working for free, and looking for a way out. If you tell them “there’s no way out, we’ll just replace you with cheaper coders from $somewhere”, I’m not sure they’ll like your solution.

                                                                                                                                                    Do you think that it all comes down to “throw money at the project and find the cheapest way to get the work done”? That’s how we get maquiladoras where I live (and elsewhere).

                                                                                                                                                    There is no need to fund anyone after they did something, is there? For that you could have some sort of award system. Allocate some funds for projects that especially helped you in a given year and then let your developers vote on distribution.

                                                                                                                                                    Sarcasm. :)

Yes, that could work. Managing funds requires a legal entity in most parts of the world, though, and raises a lot of challenges around organizing people, making them agree on things, paying them, taxes, legal obligations (like health insurance), etc. That’s also why funds are easier to share within a structure you already know.

                                                                                                                                                2. 2

Yeah, I find the pay suggestion interesting because of how widely pay varies around the world. The author also says:

                                                                                                                                                  you should target figures between 25% and 100% of a SWE compensation package

                                                                                                                                                  $150k/yr is 100% of a senior SWE where I live in the US (Ohio), which makes the high end of his range 200%. For this Google developer author, $150k/yr is probably 15-25% of a senior SWE.
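Spelling out that arithmetic (a throwaway sketch; the $150k local figure is the Ohio number above, and the Google-author estimate is this comment’s own guess):

```python
# The article's suggested range, compared against a local senior salary.
local_senior = 150_000             # senior SWE comp in Ohio, per above
for figure in (150_000, 300_000):  # the article's $150k-300k range
    print(f"${figure:,} is {figure / local_senior:.0%} of a local senior package")
# -> $150,000 is 100% ...  $300,000 is 200% ...
```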

There are engineers just as competent outside of California, and I’m sure the engineers here in Ohio, or where you live in Czechia, are just as good.

                                                                                                                                                  1. 2

                                                                                                                                                    Just between us, that’s 3x-6x going rate east of Germany. Even more east of Poland.

                                                                                                                                                    Compare phk’s funding drive. More than 15 years old by now, but he wanted a little less than $60k - and he’s a uniquely skilled software engineer by most people’s standards: https://people.freebsd.org/~phk/funding.html

The FAANGs - for now - make such profits that they have the luxury of ignoring the labour market, and of paying 3-6x rates for engineers who have demonstrated some unique talent - like authoring an open source project, or living in Silicon Valley. We’ll see how long that lasts.

                                                                                                                                                    1. 2

The numbers in the article are not FAANG SV numbers; see the footnote: https://words.filippo.io/pay-maintainers/#fn1

                                                                                                                                                      1. 3

                                                                                                                                                        True, but it’s the 90th percentile developer salary of three cities with a notoriously high cost-of-living, which is then treated as the lower end of the range. I think it’s no surprise that the author works for a FAANG - very few people would find this kind of calculation indicative of actual salary expectations.

                                                                                                                                                        1. 4

                                                                                                                                                          Author here 👋 Note that I use the 90th percentile of all SWE salaries as indicative of the salary of a senior SWE. In my experience, $300k is actually very wrong on the low side for the non-FAANG NYC market, so that suggests the other numbers are conservative too. (Plausible explanations include: less than 10% of engineers are senior, and senior engineers don’t post their salary to levels.fyi. I believe both are true.)

It’s true that NYC, Berlin, and London have a high cost of living, but salary is not based on cost of living. That’s one of the most amazing things companies have managed to convince engineers of. Do you think lawyers get paid based on cost of living? Post-pandemic, the market is flooded with remote positions that will pay the same in Berlin, Hamburg, and Leipzig.

                                                                                                                                                          1. 3

                                                                                                                                                            Author here 👋 Note that I use the 90th percentile of all SWE salaries as indicative of the salary of a senior SWE. In my experience, $300k is actually very wrong on the low side for the non-FAANG NYC market, so that suggests the other numbers are conservative too. (Plausible explanations include: less than 10% of engineers are senior, and senior engineers don’t post their salary to levels.fyi. I believe both are true.)

                                                                                                                                                            I have my doubts about levels.fyi being a representative sample of the market; I’ve noticed that very “boring” companies that are nevertheless major employers (like Atos, CapGemini in the EU, Cognizant in the US) don’t seem to have a lot of data points. Not to mention the complete absence of the thousands of small businesses that employ the bulk of the work force.

                                                                                                                                                            You’re also implicitly defining “senior” to mean someone who commands a salary in the 90th percentile - in the context of this discussion, I’m fine with that.

                                                                                                                                                            I think your point about salary makes sense if you read it as “Some OSS maintainers make north of $500k/yr - don’t expect them to quit their day job for $50k/yr.” $1000/month is a “thank you” in the US, but - last I checked - still decent money for a junior software developer in Eastern Europe. Precisely because it’s a world-wide job market, and there’s a wide divergence in salaries, it makes sense to calibrate people’s expectations.

                                                                                                                                                            For most companies, it makes a lot more financial sense to pay their employees to work on the project (giving them the added benefit of control and in-house expertise) than it is to pay 90th percentile money to someone external. Which is the very thing you’d like to avoid.

                                                                                                                                                  1. 23

                                                                                                                                                    Every generation has to discover which operating systems cheat on fsync in their own way.

                                                                                                                                                    1. 7

                                                                                                                                                      Hello, $GENERATION here, does anyone have historical examples or stories they’d be willing to share of operating systems cheating on fsync?

                                                                                                                                                      1. 14

Linux only started doing a full sync on fsync in 2008. It’s not so much “cheating” (POSIX explicitly allows the behavior) as it is “we’ve been doing it this incomplete way for so long that switching to doing things the correct way will cripple already-shipping software that expects a fast fsync”. Of course, the longer you delay changing the behaviour, the more software exists that depends on the perf of an incomplete sync…

The real issue marcan found isn’t that the default behaviour on macOS is incomplete. It’s that performing a full sync on Apple’s hardware isn’t just slow compared to other NVMe storage; the speed is at the level of spinning disks.
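The difference is visible at the syscall level. A minimal sketch in Python (file name invented): on macOS, a plain fsync() may leave the data sitting in the drive’s volatile write cache, and you need the separate F_FULLFSYNC fcntl - the slow path being measured here - to push it to stable storage:

```python
import fcntl
import os

fd = os.open("journal.dat", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"commit record\n")

if hasattr(fcntl, "F_FULLFSYNC"):
    # macOS: fsync() only hands data to the drive; F_FULLFSYNC also asks
    # the drive to flush its own cache to permanent storage (the slow path).
    fcntl.fcntl(fd, fcntl.F_FULLFSYNC)
else:
    # Linux (since 2008): fsync() issues the drive cache flush itself.
    os.fsync(fd)

os.close(fd)
```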

                                                                                                                                                        1. 2

                                                                                                                                                          It’s funny, I have 25+ years of worrying about this problem but I don’t have a great reference on hand. This post has a bit of the flavor of it, including a reference from POSIX (the standard that’s useful because no one follows it) http://xiayubin.com/blog/2014/06/20/does-fsync-ensure-data-persistency-when-disk-cache-is-enabled/

The hard part is that the OS might have done all it can to flush the data, but it’s much harder to make sure every piece of hardware truly committed the bits to permanent storage. And don’t get me started on networked filesystems.

                                                                                                                                                          1. 2

Don’t forget SCSI/EIDE disks cheating on flushing their write buffers, as documented here. So even when your OS thinks it’s done an fsync, the hardware might not have. It’s one of the earliest examples I remember, but I’m sure this problem goes back to the 90s. I also remember reading about SGI equipping their drives with an extra battery so they could finish pending flushes.

                                                                                                                                                            1. 1

I remember the ZFS developers (in the early 2000s, in the Sun Microsystems days maybe?) complaining about this same phenomenon when they loudly claimed “ZFS doesn’t need a fsck program”. Someone managed to break ZFS in a way that made a fsck program necessary for repair, because their drives didn’t guarantee writes on power-off the way they claimed to.

                                                                                                                                                        1. 11

                                                                                                                                                          You could argue that $7/mo isn’t the most expensive thing in the world, but paying $7/mo for doing essentially nothing is quite expensive.

                                                                                                                                                          I spent a couple more hours getting the subdomain for the widget set up correctly with Route 53 and ACM, and wiring that up to the API Gateway custom domain configuration.

                                                                                                                                                          I’m not hating, we’ve all been there - I just hope all parties were happy with trading a couple hours of developer time for the same cost as hosting the thing for 10 years ;)

                                                                                                                                                          1. 2

It’s a bit weird that there’s no “grandfather clause” where data gathered before the introduction of the GDPR is exempt from explicit consent. But I do remember that one fear when it was implemented in Sweden was that scofflaws and trolls would tie up government agencies from day 1 with more or less frivolous attempts to “get ’em” for violating the GDPR.

                                                                                                                                                            1. 13

                                                                                                                                                              This is why there was such a long introductory period. You had a couple of years before the GDPR came into effect to contact everyone about whom you were storing PII and request consent.

I am not sure that this is actually a compliant implementation. You have to provide a mechanism for withdrawing consent as well as for granting it, and individuals can require that you delete all PII associated with them. Holding their email address without the opt-in flag would put you in violation.

If you have any mechanism for adding people that isn’t their direct submission of their email address, then you need to retain some hashes to prevent you from accidentally adding them back. I came across this case in the context of a college, which has a legitimate-interest rationale for keeping the names of alumnae, but which needed to ensure that those who had opted out of having their contact information stored never had contact information added as a result of merging the alumnae list with some other public databases.
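For illustration, a minimal sketch of that suppression-hash idea (the salt, names, and in-memory set are all hypothetical, and whether a salted hash is “anonymous enough” under the GDPR is a question for your lawyers):

```python
import hashlib

SALT = b"per-deployment secret"  # hypothetical; store it outside the CRM

def suppression_key(email: str) -> str:
    # Normalize so "Jane@Example.com " matches "jane@example.com",
    # then hash so the address itself is no longer retained.
    normalized = email.strip().lower().encode("utf-8")
    return hashlib.sha256(SALT + normalized).hexdigest()

# Kept after erasure: hashes of addresses that must never be re-added.
suppressed = {suppression_key("alumna@example.com")}

def may_import(email: str) -> bool:
    # Check merged/public lists against the suppression set before adding.
    return suppression_key(email) not in suppressed
```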

                                                                                                                                                              1. 2

                                                                                                                                                                Thank you for raising this point and of course, you are correct. The current solution is by no means perfect. We’ve sort of solved the first half of the issue, getting the opt-in action logged somewhere and surfaced in the CRM. The opt-out flow is currently fairly rocky — the person could either navigate to the consent form again and rescind their consent, or get in touch with the company and ask for their consent to be revoked, or indeed to have their PII deleted.

                                                                                                                                                                Still, I’m surprised that the CRM software does not handle this. It would be such a value-add to have this functionality built-in, compliant and correct.

                                                                                                                                                                1. 2

                                                                                                                                                                  Still, I’m surprised that the CRM software does not handle this. It would be such a value-add to have this functionality built-in, compliant and correct.

                                                                                                                                                                  I’m a bit surprised at that too. I’m pretty sure it’s been a thing that the Dynamics 365 marketing stuff has been shouting about for a while. One of the advantages of SaaS-type CRM offerings (versus on-premises offerings) is that the seller, as well as the user, has responsibilities under the GDPR and so has a much bigger incentive to care about compliance.

                                                                                                                                                                  By the way, did you check for Schrems II compliance? It looks as if your hosting provider is in the US, which may be a problem.

                                                                                                                                                              2. 9

                                                                                                                                                                Aside from what david mentions, for a lot of things, you had to get consent before the GDPR.

A particularly visible example is newsletters. You already had to use and present an opt-in before the GDPR. What the GDPR did in that area was introduce enforcement that has teeth and hurts.

                                                                                                                                                                1. 2

                                                                                                                                                                  I don’t think this article has anything to do with the GDPR at all. Opt-in for e-mail marketing is regulated by the e-Privacy directive.

                                                                                                                                                                  1. 2

Thanks for this explanation. I was eternally grateful not to have to deal with this stuff when it was coming down the pipe.

                                                                                                                                                                  2. 3

That would’ve caused a lot of companies to start selling/acquiring marketing data like crazy, in order to be “grandfathered in”.

                                                                                                                                                                  3. 1

                                                                                                                                                                    Haha — indeed. Everyone was happy with the deal in this case. :-)

                                                                                                                                                                  1. 5

It’s harder today to find a car (even used one) that is reliable, cheap in maintenance and also with engine not crippled by for example EURO 7…

Ah yes, because emission standards on ICE vehicles are checks notes bad.

                                                                                                                                                                    1. 4

Ah yes, because emission standards on ICE vehicles are checks notes bad.

EURO 7 is about more than just emissions standards, and there’s plenty about it that’s objectionable, but since it won’t go into effect until 2025, the assertion that it has made it hard to find good used cars seems a bit overly dramatic.

                                                                                                                                                                    1. 14

                                                                                                                                                                      I think a lot of the issues that commenters here and on HN have with the author’s viewpoint can be more easily explained by the author being a (patriotic) citizen of the Russian Federation. The “threat model” is different than from a website publisher in the West.

                                                                                                                                                                      1. 6

                                                                                                                                                                        This comes across here:

                                                                                                                                                                        But do you remember that any of CA authorities included in OS can MitM my domains anyway (by definition)? Well, partly you can prevent that for some software by using CAA DNS records, where you explicitly tell which CA authorities are authorized to issue certificates for given domains. Specifying LE in CAA means that I authorize noone to issue certificates for my domains, except for US-based forces. That is something I will never do, being the citizen of completely independent jurisdiction. I am not a traitor.

                                                                                                                                                                        The implicit assumption here is: you have a threat model that includes nation-state actors and, specifically, the US as a nation-state actor.

                                                                                                                                                                        Here’s the brutally honest solution to this problem: Give up. If the US government decides that you are a person of interest, you have much worse problems than their ability to MITM your web site. If the NSA is attacking your client devices, for example, then they are going to be able to compromise your infrastructure. If you have another nation state on your side then you may be able to find some hardened infrastructure but otherwise LE is not likely to be the weakest link in your chain.

                                                                                                                                                                        LE, CAA records, and DNSSEC between them give you solid protection against passive adversaries and restrict the set of people that can do active MITM attacks significantly. It’s not perfect but nothing is in cybersecurity. It at least restricts the set of people who can attack your connections to:

                                                                                                                                                                        • People that can compromise either endpoint, including compromising the device(s) that you log into your server with.
                                                                                                                                                                        • People with access to the LE signing key.
                                                                                                                                                                        • People who find a weakness in the TLS implementation that you or a client is using.

Of the three, the people with access to the LE signing key are probably the ones I would worry about the least. Especially since any attempt to do this will likely show up in the CT log: either they won’t put the certificates in there (if Chrome sees a cert issued by LE that isn’t in the CT log, that’s grounds for LE to be removed as a trusted root, so that’s a very high risk for LE), or they will, and then you have an auditable log documenting the attack. Again, this is assuming a nation-state adversary that uses legal or extra-legal means to compromise LE.

                                                                                                                                                                        It’s likely to be more of a problem for availability but if the US decides that you’re a person of interest then they have far easier ways of taking your site offline. A DDoS from a botnet is harder to attribute accurately than a LE-based compromise. So is bribing someone in the datacenter that hosts your server to connect a USB stick full of malware to your machine (or, if it’s a VM, just adding it directly to your disk).

                                                                                                                                                                        TL;DR: If the kinds of attackers who can attack you via LE are in your threat model and attacking you via LE is their easiest path, then you must have amazing security already. For those of us who don’t work for intelligence agencies, this is an incredibly misplaced priority.

                                                                                                                                                                        Oh, and if you think having something controlled in a foreign jurisdiction in your TCB makes you a traitor, then I have news for you: You are a traitor. It is pretty much impossible to be connected to the Internet and not have at least one part of your hardware / software supply chain that comes from a jurisdiction that isn’t your own, whatever country you’re from.
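(Incidentally, checking what CAA policy a domain actually advertises is easy to script. A minimal sketch using the third-party dnspython package; the domain and the sample output are placeholders:)

```python
import dns.resolver  # third-party package: dnspython

# CAA records name the CAs that are authorized to issue certificates
# for a domain; no CAA records means any trusted CA may issue.
for rdata in dns.resolver.resolve("example.com", "CAA"):
    print(rdata.to_text())
# e.g.  0 issue "letsencrypt.org"
```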

                                                                                                                                                                        1. 11

                                                                                                                                                                          I think the threat isn’t the US government taking interest in him specifically, it’s the US and its allies using their political and economic clout to deny access to services to people living in certain jurisdictions, which has happened many times before.

Whatever your political beliefs, if you happen to live in one of the countries on the State Department’s list of bad actors, you have to anticipate trouble, and I can imagine a certain degree of cynicism towards a “safe” authentication infrastructure that can be taken away at a moment’s notice.

                                                                                                                                                                          1. 1

                                                                                                                                                                            I think the threat isn’t the US government taking interest in him specifically, it’s the US and its allies using their political and economic clout to deny access to services to people living in certain jurisdictions, which has happened many times before.

                                                                                                                                                                            If they want to do this, then there are much easier ways for them to do so than attacking Let’s Encrypt. Unless he has deep pockets or is behind someone like CloudFlare, a DDoS on his systems is by far the easiest attack. If they don’t care about attribution then leaning on allies to tamper with routes, block DNS entries, and so on are all much easier than compromising Let’s Encrypt. If you have mitigations against all of these things, then start worrying about Let’s Encrypt.

                                                                                                                                                                            1. 1

This is not how US trade embargoes have worked in the past, as you can see from GitHub’s adventures with users from Iran and the Crimea. The State Department doesn’t order a DDoS. But it threatens companies with severe repercussions if they continue trading with these areas. And suddenly your GitHub account is blocked, and you find that Let’s Encrypt no longer wants to renew your certificates.

                                                                                                                                                                              Why would you make your operations dependent on an entity that might suspend your service at a moment’s notice?

                                                                                                                                                                          2. 6

                                                                                                                                                                            I actually think this is extraordinarily bad advice, especially for people not strongly connected to the US (edit: I say this with the greatest respect for your opinions and I’m a fan of your comments here on the red site). The US (and other nation state actors) even though well resourced do not operate without constraints.

                                                                                                                                                                            The purpose of security is to raise the cost of an attack beyond the level at which the actors in your threat model actually want to do it.

                                                                                                                                                                            If we assume that Let’s Encrypt (LE) is very much in the pocket of a threat actor with the abilities of a major government, and that actor is not the Russian Federation or one of their buddies, and our principals are roughly speaking SME-level resourced actors, with a normal relationship to the Russian state; then -

LE/our threat actor have at least some capabilities that can be cheaply deployed, in the sense that they consume very few human resources and have very little opportunity cost. The question is really what those capabilities are, what costs the threat actor is willing to bear, and which threat actors you’re worried about.

For example, our threat actor, if they are the US, probably has the ability on some time scale to get a fake certificate signed by almost any CA, at a certain level of effort and cost.

Our law-abiding Muscovite businessman is much more exposed to the Russian Federation’s capabilities than to those of the US, so choosing a Russia-based CA probably doesn’t increase their exposure to moderately motivated Russian state action; but it might increase their exposure to local criminal action, whereas a US-based CA increases their exposure to US state and US-based actors. Which is worse is for them to decide.

                                                                                                                                                                            Edit: Also, LE have deep visibility into what exists even on private networks if those machines are directly using LE; and visibility into common control.

                                                                                                                                                                            Edit: to be clear even if we assume Let’s Encrypt is owned by the NSA it’s not necessarily a bad idea to use it in place of no encryption.

                                                                                                                                                                            1. 1

LE/our threat actor have at least some capabilities that can be cheaply deployed, in the sense that they consume very few human resources and have very little opportunity cost

                                                                                                                                                                              There are three things LE can do:

                                                                                                                                                                              • Refuse to respond to ACME requests for you and prevent you from getting a certificate.
                                                                                                                                                                              • Issue a false cert and not put it in the CT log.
                                                                                                                                                                              • Issue a false cert and put it in the CT log.

The first of these is fairly low cost, but even with a different trusted CA, any adversary that can attack the route between you and your CA can do this. And if Let’s Encrypt were seen to be doing it, there would be a big risk to their reputation if they were caught.

                                                                                                                                                                              The second of these is very dangerous for Let’s Encrypt. Anyone who is attacked will have (at least transiently) a copy of the certificate signed by Let’s Encrypt. Anyone who records this cert can present it as evidence of tampering. The existence of this certificate is sufficient to have the Let’s Encrypt root certificate being removed from the trusted set (issuing a cert without adding it to the CT log is sufficient grounds for this). Doing this even once is an existential risk to Let’s Encrypt.

                                                                                                                                                                              The third is probably more dangerous for Let’s Encrypt. There’s a public log of the certs that are signed and if they’ve signed a cert for your domain that wasn’t yours then it’s easy to see. Again, if you can demonstrate that a cert was issued by someone else then this can cause LE to be dropped from the trusted root set. Again, this is an existential risk to Let’s Encrypt.

So, of the three things that a malicious Let’s Encrypt can do to you, the only one that wouldn’t directly risk LE’s continued existence is limited to a DoS. If you’re doing weekly renewals of your cert, then you get three weeks’ notice that this is happening. That should be sufficient to move to an alternative provider. Again, if you’re worried about denial of service, then you need to think about all of the other ways that someone else could impact availability, and there are many of these. You can purchase an off-the-shelf botnet that will run a DDoS that can take out most cheap VPSes for a few dollars.
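(And “demonstrating that a cert was issued by someone else” is something you can automate by watching the CT logs for your own domains. A rough sketch against crt.sh’s JSON interface; treat the endpoint and field names as assumptions based on its public docs:)

```python
import requests  # third-party package

# Ask crt.sh, a search frontend over the CT logs, for every logged
# certificate matching a domain.
resp = requests.get(
    "https://crt.sh/",
    params={"q": "example.com", "output": "json"},
    timeout=30,
)
for cert in resp.json():
    print(cert["not_before"], cert["issuer_name"], cert["common_name"])
# Any issuer or name you never authorized is evidence of mis-issuance.
```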

                                                                                                                                                                              1. 1

                                                                                                                                                                                I think that’s fair and disposes of low cost attacks. I still don’t think it means people shouldn’t bother including nation state actors (even the US) in their threat model.

                                                                                                                                                                          3. 1

                                                                                                                                                                            I would be interested to read an explanation of this threat model – can you point at one?

                                                                                                                                                                            [Not an explanation of threat models in general, I’ve got that.]

                                                                                                                                                                          1. 8

                                                                                                                                                                            Does this make anyone else think of OLE objects, but for the web?

                                                                                                                                                                            1. 7

                                                                                                                                                                              The web is slowly rediscovering the utility and power of 90s GUIs. I think it’s no accident that Joel Spolsky is throwing his weight behind it.

                                                                                                                                                                              1. 6

Sort of. But OLE and ActiveX were mostly meant to be composed by developers to assemble custom GUIs. These blocks are different in that end users compose them to build their content. (Again, like OpenDoc.)

                                                                                                                                                                                1. 7

OLE was also meant to let you embed one kind of document in another. KParts in KDE, and some of the GNOME stuff as well, were all aiming at the same thing. It was a big deal in the 1990s, and I’d be glad to have a lot of it back again.

                                                                                                                                                                                  1. 9

                                                                                                                                                                                    These things were all killed by security. With Etoile, we wanted to push even further in that direction but hit the same issue: you want to embed executable content from a third party but you don’t want to trust that third party. Java has shown that complex language-based sandboxing doesn’t work. MMU-based isolation is too expensive if you want to scale it up. WebAssembly doesn’t give confidentiality guarantees in the presence of speculative side channels (a wasm program can leak all of the contents of memory in the enclosing process) but that might not matter if it’s not allowed to make network connections. This is one of the reasons that I started working on CHERI: to be able to build systems where you could run many small untrusted components in a single address space without security problems. Any decade now…

                                                                                                                                                                                    1. 4

                                                                                                                                                                                      Java has shown that complex language-based sandboxing doesn’t work.

                                                                                                                                                                                      Could you elaborate on that?

                                                                                                                                                                                      1. 13

Sure. JVM security depends on every single part of the Java language being implemented correctly. A bug in any part leads to a sandbox escape. A single bit flip can let you escape, so any kind of memory corruption can typically be turned into a sandbox escape. The JVM itself is a hugely complex piece of software. It must be written in an unsafe language because it’s doing intrinsically unsafe things to implement the safe-language abstract machine. It must be bug-free for the security invariants to hold.

                                                                                                                                                                                        1. 2

                                                                                                                                                                                          Interesting, that makes sense. Thank you for expanding on your comment!

                                                                                                                                                                                      2. 1

When everything was local on your desktop and you installed software from a (hopefully) trusted source, those issues weren’t as big. More innocent times…

                                                                                                                                                                                        I hadn’t seen the CHERI stuff before. That’s really cool! It reminds me vaguely of some of the things the old Burroughs large machines did with typed memory. If I had a reason to justify playing with some of those new boards…

                                                                                                                                                                                        1. 1

When everything was local on your desktop and you installed software from a (hopefully) trusted source, those issues weren’t as big. More innocent times…

                                                                                                                                                                                          Unfortunately, that was never the case. You’d get a document on a floppy disk, it contained an embedded object, the object contained malware, and now you were infected. The malware would then embed itself in all outbound documents. They propagated a lot more slowly than when the Internet came along but getting a virus on a floppy was pretty common in the ’80s and ’90s.

                                                                                                                                                                                          I hadn’t seen the CHERI stuff before. That’s really cool! It reminds me vaguely of some of the things the old Burroughs large machines did with typed memory. If I had a reason to justify playing with some of those new boards…

The B5500 was certainly one of my inspirations for CHERI, and I think several of the other folks are fans of that architecture.

                                                                                                                                                                                1. 4

                                                                                                                                                                                  I’m tickled that it uses AA NiMH batteries.

                                                                                                                                                                                  I’ve read about the old Tandy Model 100 from the early 1980s, which ran on AA batteries. Apparently they were somewhat popular with journalists, because one could write and either save to a cassette or hook up to a modem and upload to the office. They could run for a day or so on a fully charged set of batteries. That concept always fascinated me.

                                                                                                                                                                                  I doubt you could get anywhere near that sort of battery life with a modern Linux device, because of all the stuff that it does in the background.

                                                                                                                                                                                  1. 5

                                                                                                                                                                                    My M1 MacBook Air lasts all day, especially under light-ish usage, and having a computer with that kind of battery life has definitely been a game changer! I usually don’t even bring a charger with me when I leave the house any more.

                                                                                                                                                                                    1. 5

                                                                                                                                                                                      I had several PalmOS devices – a PalmPilot Pro, a III, a Handspring Visor – and they all ran on 3 AAA batteries for about a month of use. Then I got a Treo, which had a rechargeable battery pack and a cell voice/data modem.

As I recall, the upgraded III was basically on par with a Macintosh SE/30: a 32-bit Motorola 68K-series CPU, 2 MB of RAM – which was static, used for both working memory and storage – a greyscale screen with just a little less resolution than the Mac’s monochrome screen, and a serial port and an IrDA port.

There’s not much that a Linux box has to be doing all the time. 8 to 10 processes, mostly idle, is what you get at boot time. Write software with an eye towards power efficiency and you can do lots of useful stuff on constrained hardware.

                                                                                                                                                                                      1. 2

                                                                                                                                                                                        Actually the (pre-Lithium) Palms all took two AAA batteries. And yeah, they would run for weeks. And that was with keeping the DRAM alive 24/7 so that your data wouldn’t be lost.

                                                                                                                                                                                        Palm III series had between 2 and 8 MB of RAM depending on which model, a 16MHz 68k, and 160x160 LCD. Later models on the same architecture went as far as 33MHz CPUs and 16MB of RAM, and some devices had color and/or higher-res screens, although that became more common once they went ARM.

                                                                                                                                                                                        A semi-forgotten Palm device is the AlphaSmart Dana, which takes a 2001-era Palm (33MHz DragonBall, 16MB of RAM) and puts it in a laptop-ish form-factor with a real keyboard, and widens the screen to 560x160 (though apps not written specifically for it run in the center 160x160). One model even had WiFi.

                                                                                                                                                                                      2. 4

                                                                                                                                                                                        I owned an Amstrad NC100 for a while. Never put it to any serious use, but it was great - acceptable keyboard, all-day battery life from AAs, and PCMCIA card support.

                                                                                                                                                                                        https://duncan.bayne.id.au/photos/Retro_Computers/Amstrad_NC-100_and_case_Original.jpg

                                                                                                                                                                                        1. 4

                                                                                                                                                                                          The Psion 5 series (and its descendants) of the late nineties could also get a day out of a set of AAs, and could be made to run Linux. They had great keyboards, too.

                                                                                                                                                                                          1. 5

I had a Series 3, which got 2-4 weeks of moderate use out of a pair of AAs. The crappy battery life in comparison was the thing that put me off ever getting a Series 5. The Series 3 had quite similar specs to the original IBM PC. It used a RAM disk for most persistent storage (it also had a little lithium battery that would protect the RAM if the AAs ran out, or while you were changing them).

                                                                                                                                                                                            It was a fantastic machine. I wrote a load of essays for school on it and also learned a lot about how to write terrible code (it had a built-in compiler for a BASIC-like language called OPL). I probably used it more than my desktop. In some respects, computers are like cameras: the best one is the one you have access to. The Psion fitted in my jacket pocket and so was with me all of the time.

I had an RS-232 adaptor for mine that let me copy files to a big computer easily, so I could write things in the simple word processor (which wasn’t WYSIWYG, though it could do some styling and, I think, export to RTF) and then spell-check and format them on a desktop. (The word processor used around 10 KiB of RAM, most of which was the open document – it couldn’t fit a spell-checking dictionary in that space. I think the version for the larger 3a or 3c might have had one.)

                                                                                                                                                                                            There’s a DOS emulator for the Series 3a, which runs well in DOSBox. If you tweak the ini file, you can get it to use a full 640x480 screen. I still use it periodically because I prefer the 3a’s spreadsheet to anything produced subsequently for simple tasks.

                                                                                                                                                                                            1. 3

In retrospect, all of these were pleasant devices to use, and they have stood the test of time very well. The use of AA batteries also gives them a kind of longevity that I doubt modern devices will have.

                                                                                                                                                                                              1. 2

I think I got mine in 1993. The mother of a rich friend had upgraded to the 3a and sold hers quite cheaply (I think it was £120? They were £250 at launch). It came with the spreadsheet on a ROM disk and I also bought a flash SSD (I can’t remember if it was 128 KiB or 256 KiB). The flash disk was a single cell, so you could store files there but you couldn’t reclaim space until you did a complete erase. I mostly used it to store text adventures from the Lost Treasures of Infocom (which I think I still have somewhere, on 5.25” floppies. Unfortunately, I haven’t seen any off-the-shelf USB 5.25” floppy drives. At some point, I’ll have to find an early Pentium that still has the right controller).

I don’t remember when I stopped using it. I was definitely using it on a daily basis in 1998. It might have died around then. I don’t remember using it at university when I went in 2000. For the amount of use and abuse it got (it was carried around in the pocket of a teenage boy for 5 years), the purchase price was incredibly low. I don’t think I’ve owned a pocket-sized device that’s been as useful since then.

                                                                                                                                                                                                I did manage to get on the Nokia 770 open source developers programme a few years later. Nokia gave a 2/3 discount on these machines to a load of people who were doing open source work. Unfortunately, a machine running Linux and X11 in 64MiB of RAM with no swap was… not a great experience. It was fine running vim in a full-screen xterm, anything else and the OOM killer would come along. The OOM killer’s policy was to kill the largest application, which usually meant the app with the most unsaved data. Or, if you were really unlucky, the X server. I used it with a ThinkOutside folding keyboard (which I still have and which still works well) to write a load of articles and a few book chapters. It wasn’t nearly as versatile as the Psion though.

                                                                                                                                                                                                My phone is now something on the order of three orders of magnitude more powerful than the Psion but I don’t find I use it as much as I used the Psion. I wouldn’t write a 3,000 word doc on my phone with the on-screen keyboard but I did that several times on the Psion with its built-in keyboard without any problems.

                                                                                                                                                                                                1. 1

These days the ‘test of time’ is probably considered a bug.

• electronic devices come with non-replaceable batteries
• Android phone manufacturers pride themselves on ‘two years of OS upgrades’ as the ‘limit’, while companies like Slack rapidly discontinue support for 4+ year old OS versions, so that ‘business users’ keep buying new devices every 2-3 years.
• this practice of ‘2-3 year’ usage seems to pervade almost every sector of manufacturing. Economic growth is linked to the sale of ‘new things’, not to the maintenance or upgradeability of the old. The ‘quality of architecture or design’ is measured not by how long those decisions last, but by how easily they can be changed.
                                                                                                                                                                                                  1. 4

                                                                                                                                                                                                    iOS devices seem to have a much longer update lifetime. The iPhone 6 (released 2014) seems to still get OS security updates and the 6S (2015) can run the latest OS. LineageOS now does OTA updates, so (after the initial, quite painful, install which requires unlocking bootloaders and doing things that can potentially brick the device) it’s quite easy to get third-party OS support for a lot of devices. I’m using a OnePlus 5T (2017) and it happily runs Android 11 via LineageOS (presumably it will support 12 at some point, it usually takes a few months for a new AOSP release to make it into LineageOS).

                                                                                                                                                                                                    The EU is currently in the process of rolling out labelling requirements that will mandate device manufacturers commit up-front to how long they’ll provide security updates and the maximum interval between vulnerability disclosure and patch for Internet-connected devices. This should help the incentives a bit.

                                                                                                                                                                                                    Software for the Psion Series 3 was mostly delivered on ROM, a few things were provided on floppy disks and required you to own the serial adaptor so that you could copy them into the (scarce) RAM or external flash disks. There was a (quite small) print catalogue of all of the available software. I never had a software update for any of the software that I ran on my Series 3.

                                                                                                                                                                                                    1. 2

                                                                                                                                                                                                      My recent experience with Android:

                                                                                                                                                                                                      • Bought a flagship LG phone at the end of 2016 with Android 6
                                                                                                                                                                                                      • 2017 Received Android 7 update
                                                                                                                                                                                                      • 2021 LG exited Mobile business. As part of the exit, they stopped providing the free bootloader unlocking service (discontinued Dec 2021) and OS upgrades.
• 2022 I desperately need to install Slack on my phone. Slack stopped supporting Android 7 in August 2021; it now requires Android 8 and up (and they disable the ability to use the app from a mobile browser too!)
• Now I cannot unlock the LG bootloader, therefore cannot even try to upgrade the phone, therefore cannot install Slack. Therefore, I need a new device.

On a separate occasion, I recently had to throw away expensive Bluetooth headsets because the batteries could no longer hold a charge. Last year I had to do the same with HP tablets whose non-replaceable batteries no longer held a charge.

                                                                                                                                                                                                      I am not even talking about appliances where doors break, etc.

There seems to be something in the global manufacturing stance, the financial incentives, and the environmental non-concern that allows and actively promotes this constant ‘replace-the-whole-item’ mentality. At least that’s how it feels to me.

I am now looking for an 8-9 inch Windows tablet that I can carry around and have Slack on – instead of Android – but most major manufacturers have stopped making Windows tablets (because, at least up to Windows 10, the tablet UI and the CPU/battery-life ratios are subpar).

What you mentioned about labelling in the EU makes good sense, but it’s probably not anywhere near enough.

I hope that, in general, the longevity of devices, appliances and other consumables receives significant attention from policy makers across the world. It seems that leaving it to manufacturers and financial systems did not produce reasonable outcomes…

                                                                                                                                                                                                      1. 3

                                                                                                                                                                                                        I’m in a similar situation. I accidentally bought a Prime “exclusive” Moto G6 back in 2018. Well, my GF bought it for me with my money and didn’t read the fine print. You can’t unlock the bootloader on this particular phone through Motorola, because it was sold as an Amazon Prime “exclusive”. It hasn’t gotten an update since April of 2020.

I’d love to install a custom ROM on this. I greatly extended the life of a previous Android device by installing CyanogenMod on it several years back. But I can’t, because I don’t control what I supposedly own. The whole situation is utterly ludicrous.

                                                                                                                                                                                                        1. 2

                                                                                                                                                                                                          To add irony to insult and injury, I chose the Moto G6 primarily to avoid yet another user-hostile anti-feature popularized by Apple: the lack of a 3.5 mm headset jack.

                                                                                                                                                                                                        2. 3

                                                                                                                                                                                                          I had an Asus Transformer Prime TF700, ran the stock firmware, and then some corruption in the flash caused it to get stuck in a boot loop. I never unlocked the bootloader and apparently I can’t do that without the device in a bootable state, so it became a paperweight. From that experience, I learned that the first thing that I do with an Android device is unlock the bootloader and replace the firmware with LineageOS.

                                                                                                                                                                                                          The problem in the Android ecosystem is the way that the incentives are aligned. If you buy an iPhone, Apple makes money in two ways:

                                                                                                                                                                                                          • The iPhone has a reasonably large markup.
                                                                                                                                                                                                          • They take a 30% cut of every app you install.

                                                                                                                                                                                                          This means that they have an incentive to keep devices supported because if you can’t run new apps then you won’t buy more apps. A lot of people also sell their iPhones every 1-2 years to buy the new flagship ones and the people who buy the second-hand ones often couldn’t afford a new one. Apple still gets revenue from the second-hand sales.

With the Android ecosystem, the first of these goes to the device manufacturer, the second to Google. This means that, once a device has shipped, there’s no incentive for the manufacturer to do anything, and the sooner the device stops working the sooner they’ll buy another one. I proposed a simple fix for this to the Android security team about 8 years ago, when they were complaining about hardware vendors not deploying security updates: divert 5-10% of app-sale and ad revenue to the device vendor for every app that’s purchased on a device with the latest OS and all security patches installed. If your handset is fully up to date, the manufacturer gets 5-10% of the revenue (Google gets 20-25%); if it isn’t, then Google gets the full 30%.
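
Purely as an illustration of the incentive structure (the function and numbers below are hypothetical, taken from the proposal above, not from anything Google ships):

    /* Toy model of the proposed revenue split. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        double vendor; /* handset vendor's share of the platform cut */
        double google; /* platform's share */
    } Split;

    Split revenue_split(bool fully_patched, double vendor_share) {
        const double platform_cut = 0.30;
        Split s;
        /* The vendor earns its share only while the device runs the
         * latest OS with all security patches installed. */
        s.vendor = fully_patched ? vendor_share : 0.0;
        s.google = platform_cut - s.vendor;
        return s;
    }

    int main(void) {
        Split patched = revenue_split(true, 0.10);
        Split stale = revenue_split(false, 0.10);
        printf("patched: vendor %.0f%%, Google %.0f%%\n",
               100 * patched.vendor, 100 * patched.google);
        printf("stale:   vendor %.0f%%, Google %.0f%%\n",
               100 * stale.vendor, 100 * stale.google);
        return 0;
    }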

                                                                                                                                                                                                  2. 2

I bought the Planet Gemini phone, which is in the Psion form factor. It’s a great little computer, but a rather expensive phone.

                                                                                                                                                                                                    1. 2

Thanks for mentioning this. The Astro Slide 5G interests me (although I would prefer an 8 inch device).

How good are they with long-term support (e.g. OS updates, unlocking, battery replacement, etc.)?

I like my devices to last 1 year for every $100 spent (or much better than that). So $800 means at least 8 years to me.

                                                                                                                                                                                                      1. 2

The Gemini had a couple of updates, but it is currently on Android 8.1.0. The boot loader allows you to use your own OS, and you can boot multiple ROMs. I think if you wanted to get 8 years out of it you might need something like Sailfish OS… I keep planning on playing with postmarketOS on it.

                                                                                                                                                                                                2. 2

My Newton MessagePad 2000 ran quite well on 4 AA batteries. Not as long as a PalmPilot device, but long enough.

                                                                                                                                                                                                  The father of a very close friend of mine was a journalist using one of those Tandy models. He would save the articles to little cassette tapes and express mail them to the newsroom from the field.