Threads for x64k


    Steele suggests that the trick in balancing size against utility is that a language must empower users to extend (if not change) the language, by adding words and perhaps by adding new rules of meaning.

    I don’t buy this. Authors generally don’t extend e.g. English with new words or grammar in order to write a novel. Programmers generally don’t need to extend a programming language with new words or rules in order to write a program.

    A programming language, like a spoken/written language, establishes a shared lexicon in which ideas can be expressed by authors and widely understood by consumers. It’s an abstraction boundary. If you allow authors to mutate the rules of a language as they use it, then you break that abstraction. The language itself is no longer a shared lexicon, it’s just a set of rules for arbitrarily many possible lexicons. That kind of defeats the purpose of the thing! It’s IMO very rare that a given work, a given program, benefits from the value this provides to its authors, compared to the costs it incurs on its consumers.

    1. 13

      Programmers generally don’t need to extend a programming language with new words or rules in order to write a program.

      Maybe you would disagree, but I would argue that functions are essentially new words that are added to your program. New rules do seem to be a bit less common though.


        This would be my interpretation as well. Steele defines a language as approximately “a vocabulary and rules of meaning”, and throughout his talk he clearly treats defining types and functions as adding to that vocabulary. My (broad) interpretation of a generalized language is based on the idea that the “language” itself is really just the rules of meaning, and all “vocabulary” or libraries are equal, whether “standard”, “prelude”, or defined by the user.


          I understand the perspective that functions (or classes, or APIs, or etc.) define new words, or collectively a new grammar, or perhaps a DSL. But functions (or classes, or APIs, or etc.) are obliged to follow the rules of the underlying language(s). So I don’t think they’re new words, I think they’re more like sentences, or paragraphs.

        2. 11

          Authors generally don’t extend e.g. English with new words or grammar in order to

          FWIW a thing that shitposters on Tumblr have in common with William Shakespeare is the act of coining new vocabulary and novel sentence structure all the time. :)


            See: 1984, full of this


              Also, Ulysses no?


            The author is saying that simple systems need this extensibility to be useful. English is far from small. Even most conventional languages are larger than the kernel languages under discussion, but the chief argument for kernel languages is their smallness and simplicity. And those languages are usually extended in various ways to make them useful.

            I would say, and I think that the author might go there too, that certain libraries that rely heavily on “magic” (typically ORMs) also count to some degree as language extensions. ActiveRecord and Hibernate, for instance, use functionality that is uncommonly used even by other practitioners in their respective languages.


              The article is about languages, right? Languages are a subset of systems. The abstraction established by a language is by definition a shared context. A language that needs to be extended in order to provide value doesn’t establish a shared context and so isn’t really a language, it’s a, like, meta-language.


                Not really.

                It’s about using languages as a case study in how systems can be of adequate or inadequate complexity for the tasks they enable their users to address, and how, if a tool is “simple” or perhaps inadequately complex, the net result is not that the resulting system is “simple” but, as @dkl speculates above, that the complexity which the tool or system failed to address has to live somewhere else and becomes user friction or – unaddressed – lurking unsuitability.

                This creates a nice concept of “leverage”: how a language or system allows users to address an adequate level of complexity (or fails to do so). It also raises the question of how you can measure and compare complexity in more meaningful and practical terms than making aesthetic assessments. I want to say more about both of these later.



                  I suppose I see languages as systems that need to have well-defined and immutable “expressibility” in order to satisfy their fundamental purpose, which isn’t measured in terms of expressive power for authors, but rather in terms of general comprehension by consumers.

                  And, consequently, that if a language doesn’t provide enough expressivity for you to express your higher-order system effectively, the solution should be to use a different language.

                  Reasonable people may disagree.


                    Consider that each project (or team) has slightly different needs, yet there aren’t that many slightly different language dialects out there (thankfully!) that add just the one or two little features these projects happen to need. Sometimes people build preprocessors to work around one particular lack of expressibility in a language (for one well-known example that’s not an in-house-only development, see yacc/bison). That’s a lot of effort and produces its own headaches.

                    Isn’t it better to take a good language that doesn’t need all that many extensions, but allows enough metaprogramming to allow what your team needs for their particular project? This allows one to re-use the 99% existing knowledge about the language and the 1% that’s different can be taught in a good onboarding process.

                    Nobody in their right mind is suggesting that projects would be 50% stock language, 50% custom extensions. That way lies madness. And indeed, teams with mostly juniors working in extensible languages like CL will almost inevitably veer towards metaprogramming abuse. But an experienced team with a good grasp of architecture that’s eloquent in the language certainly benefits from metaprogrammability, even if it’s just to reduce the amount of boilerplate code. In some inherently complex projects it might even be the difference between succeeding and failing.


                      I do think the industry tends to be dominated by junior-friendly programming languages, as if the main concern was more about increasing headcount than it was about expressing computation succinctly and clearly.


              If you allow authors…

              This is an important point. You are allowing authors to add new syntax, not requiring it. You can do most of the same tricks in Common Lisp or Dylan that you can do in Javascript or Kotlin, by passing funargs etc., it’s just that you also have the ability to introduce new syntax, within certain clearly defined bounds that the language sets.

              Just as you have to learn the order of arguments or the available keywords to a function, Lisp/Dylan programmers are aware that they have to learn the order of arguments and which arguments are evaluated for any given macro call. Many of them look exactly like functions, so there’s no difference. (I like that in Julia, as opposed to Common Lisp and Dylan, macro calls must start with the special character “@”, since it makes a clear distinction between function calls and macro calls. But I don’t know much about Julia macros.)
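              As a rough sketch of the funargs point above – in Python rather than Lisp, purely as an illustrative analogue – a control construct can be built from ordinary functions if the caller wraps each branch in a thunk explicitly; a macro’s job is to insert that wrapping invisibly at the call site (`my_if` is a hypothetical name for this sketch):

```python
# Hypothetical "my_if" control construct built from plain functions
# (funargs). Without macros, the caller must delay evaluation by hand
# with lambdas; a Lisp macro would insert that delay automatically.

def my_if(condition, then_thunk, else_thunk):
    """Evaluate exactly one of the two branches, chosen by condition."""
    return then_thunk() if condition else else_thunk()

# Only the taken branch runs: the division by zero is never evaluated.
result = my_if(True, lambda: "safe", lambda: 1 / 0)
print(result)  # -> safe
```

              The call site betrays the trick (the explicit `lambda:`), which is exactly the noise that macro syntax removes.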

              A programming language, like a spoken/written language, establishes a shared lexicon…

              Yes, and I don’t believe macros change this significantly, although this depends to a certain extent on the macro author having a modicum of good taste. The bulk of Common Lisp and Dylan macros are one of two flavors:

              1. “defining macros” – In Common Lisp these are usually named “defsomething” and in Dylan “define [adjectives] something”. When you see define frame <chess-board> (<frame>) ... end you know you will encounter special syntax, because define frame isn’t part of the core language. So you go look it up just as you would look up a function whose arguments you don’t know.

              2. “with…” or “…ing” macros like “with-open-file(…)” or “timing(…)”

              These don’t change the complexity of the language appreciably in my experience and they do make it much more expressive. (Think 1/10th to 1/15th the LOC of Java here.)

              Where I believe there is an issue is with tooling. Macros create problems for tooling, such as including the right line number in error messages (since macros can expand to arbitrarily many lines of code), stepping or tracing through macro calls, recording cross references correctly for code browsing tools, etc.
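              For readers without a Lisp background, Python’s context managers are a loose analogue of the second flavor; this sketch of a hypothetical timing construct (the name mirrors the macro mentioned above, not any real library) shows the shape:

```python
# Loose Python analogue of a Lisp "with-..."/"...ing" macro: a context
# manager wraps a body with setup and guaranteed teardown. "timing" is
# a hypothetical name chosen to mirror the macro flavor discussed above.
import time
from contextlib import contextmanager

@contextmanager
def timing(label):
    start = time.perf_counter()
    try:
        yield  # the body of the "with" block runs here
    finally:
        print(f"{label}: {time.perf_counter() - start:.6f}s")

with timing("sum"):
    total = sum(range(1000))  # timed body
```

              Unlike a macro, this stays entirely within the language’s existing function-call and evaluation rules, which is arguably the point being debated in this thread.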


                Do you not see code that defines new syntax as categorically different than code which uses defined syntax?

                What is a language if not a well-defined grammar and syntax?

                I don’t see how something which permits user modification of its grammar/syntax can be called a language. It’s a language construction set, maybe?


                  This seems founded in the idea that if you just know the language, you can look at code and understand what it does. But this is always convention. For instance, consider the classic C issue of “who owns the pointer passed to a function?” Even with an incredibly simple language, there’s ambiguity about the conventions surrounding a library call – do you need to call free()? Will the function call free()? Can you pass a stack pointer, or does it have to be heap-allocated? And so on. More powerful type systems can move more information into the type itself, but more powerful types tend to be included in more powerful languages. For instance, in D, if you pass an expression to a function and the parameter is marked as lazy, the expression may actually be evaluated any number of times – though it’s usually zero or one – and you have no idea when the evaluation takes place. So just from looking at a function call, foo(bar), it may be that bar is evaluated before foo, or during foo, or multiple times during foo, or never.

                  Now a macro could do worse, sure, but to me it’s a difference of degree, not kind. There’s always a spectrum, and there’s always ambiguity, and you always need to know the conventions. Every library is a grammar.
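                  The lazy-parameter ambiguity described above can be sketched in Python, using explicit thunks where D’s lazy would hide them: from the call site alone you cannot tell how many times, if ever, the argument expression runs (foo_never and foo_twice are hypothetical names for illustration):

```python
# From a call site like foo(bar) with a lazy parameter, the argument
# may be evaluated zero, one, or many times. Python makes the thunk
# explicit; D's `lazy` would hide it at the call site.

calls = 0  # counts how many times the argument expression ran

def bar():
    global calls
    calls += 1
    return calls

def foo_never(thunk):
    # Ignores its argument: the expression is evaluated zero times.
    return None

def foo_twice(thunk):
    # Forces its argument twice: the expression runs two times.
    return thunk() + thunk()

foo_never(bar)   # bar's body never runs
foo_twice(bar)   # bar's body runs twice
print(calls)  # -> 2
```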


                    I would accept that Common Lisp can be called a language construction set but I don’t see how it’s useful or accurate to say it’s not a language.


                  Authors generally don’t extend e.g. English with new words or grammar in order to write a novel.

                  Well, no, but it helps that the English language already has innumerable – legion, one might say – words; an English dictionary is a bountiful, abundant, plentiful universe teeming with sundry words which others have already invented, not always specifically for writing a whole novel, often just in order to get around IRL. It’s less obvious with English because it’s not very agglutinative, but it has been considerably extended over time.

                  Language changes not only through the addition of new words and rules but also through other words and rules falling out of use (e.g. in English, the plural present form of verbs lost its inflection by the 17th century or so), so it’s not quite fair to say that modern English is “bigger” than some of its older forms. But extension is a process that happens with spoken languages as well.

                  It’s also super frequent in languages that aren’t used for novels and everyday communication, too. At some point I got to work on some EM problems with a colleague from a Physics department and found that a major obstacle was that us engineers and them physicists use a lot of different notations, conventions, and sometimes even words, for exactly the same phenomenon. Authors of more theoretical works regularly developed their own (very APL-like…) languages that required quite some translation effort in order to render them into a comprehensible (and extremely verbose) math that us simpletons could speak.

                  (Edit: this, BTW, is in addition to all that stuff below that @agent281 is pointing out – I think the context of the original talk is relevant here.)


                    If I had to guess, I would think that this was a nod to the Scheme programming language from Steele’s point of view (or Common Lisp, but that one is quite large in comparison). It’s quite easy to extend Scheme that way, and he does have a history with Lisp-family languages. But that’s just a guess.

                  1. 7

                    In fairness the engine and all the other required Serenity libraries were already ported to Linux. Andreas built a Qt wrapper around it in <2hrs, which is still impressive.

                    1. 5

                      It’s not only a testament to how bloody awesome Andreas is but also to how good both the work and the porting effort behind the libraries are. More often than not, if you take some random library ported from Linux to OpenBSD and try to write a simple Qt wrapper around it, it takes two hours just to get past the easily-reproducible segfaults.

                    1. 18

                      I wonder why growth is trending down (not that that’s necessarily a bad thing).

                      I’ve made some good friends on here, and when I post my own content, I’ve always had great interactions/feedback too!

                      1. 44

                        I love Lobsters and have been a member for 8 years. The comments ahead are just my personal experience, but maybe there are others who feel the same way. At some point (I think about 2 years ago) topics about tech culture and society started to be removed by moderators, and I started to participate less and less. Which is insane, since the reason Lobsters started was that HN banned the creator and HN was doing some funny moderation (fun history!). One reason I loved Lobsters was its careful use of moderation, instead relying on having a solid group of users vetted by others in the community.

                        So, this stronger moderation against topics related to culture and society that intersect with tech made me lose interest. Given all the crazy things happening in the world today, to believe that tech is isolated from the world is naive and ultimately creates a bubble culture. What’s the point of loving technology if it can’t be applied to real-world problems? So over time I started to lose some interest in content on Lobsters as it seemed less relevant to my life. Maybe the content is changing again? I don’t know, as I haven’t really participated as much.

                        The community here is strong and I hope for another strong 10 years. I just hope people learn that tech is useless independent of helping people. Code that doesn’t run, that doesn’t solve problems, is just a statue. Beautiful to look at and appreciate, but not much else.

                        1. 55

                          I feel the opposite. The American culture wars are exhausting.

                          I am glad this place is peaceful.

                          1. 20

                            I find the culture war exhausting too, but I also feel it’s mostly fake. That it’s mostly manufactured by the media and big voices on social media. Notice I didn’t say anything about any culture war, but that’s where you went. Isn’t that weird? Something is wrong with our discourse. I’m talking about software solving the real problems we have in society (hunger, homelessness, global warming, ecological collapse, energy, the prison system, education, war, inequality, gun violence). The culture war is manufactured, in my opinion (puts on tin foil hat), to distract us from the real problems.

                            Computers are literally man’s greatest invention. They can save us from meaningless labor and enhance our minds. They aren’t a bicycle for the mind, but a rocket ship. My worry is we are wasting it. We shouldn’t take computers for granted. It won’t take much to forget how to make them.

                            1. 15

                              I’m talking about software solving the real problems we have in society (hunger, homelessness, global warming, ecological collapse, energy, the prison system, education, war, inequality, gun violence).

                              Do we need software to fix any of those problems? Aside from global warming / ecological collapse at least? We (as a society) have the wealth to fix these issues, it is mostly the lack of consensus / political will to do so. And the main thing standing in the way are certain wealthy actors and interest groups. They are interested in their own profits first and foremost, and control of society via marginalization or outright oppression of minorities and destruction of democratic systems and discourse.

                              We can use software on the margins to try to educate people, and show how they are being manipulated. But it doesn’t seem like enough.

                              1. 3

                                Like a virus, computers are now in everything. You eat today? Computers were involved. It’s not so much that they can fix any of those problems (I would argue they accelerate some, like global warming – Google is proud that they increase waste and energy use through all of society); it’s that if they aren’t part of the solution, then they are part of the problem. So we either fix it, or get rid of their usage. Since they are such a powerful tool for productivity, it seems to me we can use them to accelerate solutions instead of accelerating problems.


                                  Like a virus, computers are now in everything.

                                  Then, in my view, it makes all the more sense to have a place where we can discuss the science, art and craft of technology away from the divisiveness that’s tearing our society apart.

                                  I’m not suggesting that this is a monastery, but monasteries existed to keep the barbarians out and knowledge in when the dark ages fell. I see communities like this serving a similar purpose.


                                    Except it was in the monasteries where truth died. The “dark ages” were nothing like you describe. I suggest reading Debt: The First 5000 Years by David Graeber. Eratosthenes figured out the circumference of the earth, and over a thousand years later we had Christopher Columbus, who thought the world was much smaller. Yajnavalkya postulated that the earth revolved around the sun, yet the monasteries promoted an earth-at-the-center vision.

                                    We need a functioning civilization to keep knowledge being passed through one generation to the next. Now that we are facing threats to organized human life at an unprecedented scale, there will be no ‘safe place’ to teach people how to build computers without civilization wide support. Computers are just too complex.

                                    Also imagine the rest of society thinking “Wow, we have these amazing tools called computers that can solve our problems, but the folks who design and build them, the elite who use these tools, want nothing to do with our problems. Want to ignore them because they are too disturbing and annoying to the experts”.


                                      Good on you for fighting the good fight. I’ll just be over here hacking around with old computers and trying to stay healthy long enough to retire and enjoy life a bit :)


                                        You evoke an interesting thought and bring up a good point. There are millions of programmers. But most programmers have little say in what they actually build as they work for large companies. That’s because, while programmers are paid well relative to the rest of society, they often own very little of their work.

                                        The responsibility I am talking about falls on those that do have a say in what is built. Many of the leaders are former programmers themselves. But even among programmers there is a class divide. Those that don’t have a say in what is built don’t have the responsibility I speak of. Maybe we need more people owning their work.

                              2. 9

                                The culture war is manufactured. It is also a real, serious problem. One of the reasons there are so many wars is that they can be started unilaterally.

                                To the point at hand, though, do you think discussion about “culture and society” on Lobsters solves any of those problems? I associate these kinds of topics with Lobsters’ turning into a little hackernews, in which the same handful of political arguments are rehashed and people are generally horrible to each other. I don’t think the tech industry at large is going to discover, for instance, the concept of professional ethics through comment threads here.

                                I think the reason we can be civilised here is that we find technology neat; it’s a thing we have in common, and the reddit-style discussions work reasonably well for that. When we debate bigger things the medium shows its weaknesses. For one thing, while a lot of the strictly computery posts exist in some sort of niche, articles about society have much more direct political implications, and tend to elicit some sort of opinion for pretty much everyone. It’s also much harder to stay calm when discussing something that matters.

                                I’ve argued, often and animatedly, that political content shouldn’t be on Lobsters. I have several reasons for this, and I hope I’ve explained one of them, but just as important is… politics. I think being exposed to the sort of environment I see on political threads here makes people worse, or at least marginalises those who are most inclined to be nice. In theory diversity of opinion might expose people to new ideas, but in practice people pretty much always go home thinking exactly what they thought yesterday, only more so. I’d be all in favour of your position if I’d ever seen any evidence that debating important things leads to people becoming more conscientious about those things.

                                I appreciate this is a bit of a ramble, but one last thing: why would we expect anything else? You say that believing tech is isolated from the world creates a bubble culture. But Lobsters is a bubble in its purest form already. Most tech workers and enthusiasts, especially in America, exist in a relatively narrow social stratum; it’s hard to find a demographic distinction in which the field doesn’t exhibit strong bias. I have my doubts about the comment-section free-for-all as a vehicle for social change, but even if it could work, we’d need to be more connected to the rest of society in order to have any chance of deciding what technology’s place in it ought to be.

                                1. 7

                                  You raise a lot of good issues here. But I feel maybe I wasn’t clear enough. I don’t want random discussions about culture and politics. Twitter already exists. I care about the intersection of technology and society. I think those discussions are important to have and Lobsters used to have them. Then those seemed to have gone away and I lost some interest.

                                  We know that people in technology are usually horrible at social issues, partly because we get people who prefer certainty. The certainty of the machine. I was one of those people. We have great comfort talking about frameworks, programming languages, and reverse engineering old hardware. We like our safe space.

                                  I have my doubts about the comment section free-for-all as a vehicle for social change, but even if it could work, we’d need to be more connected to the rest of society in order to have any chance of deciding what technology’s place in it ought to be.

                                  I don’t have this view of Lobsters as a vehicle for social change. It’s not. Social change will come either way, and we can talk about how technology is involved, or we can ignore it and treat Lobsters as a sort of comfort food. That’s totally acceptable. It’s just less interesting for me, and that’s why I responded to the ‘why has growth stalled’ comment.

                                2. 1

                                  Notice I didn’t say anything about any culture war but that’s where you went. Isn’t that weird?

                                  Seems very telling to me and makes the user come off as a troll. Somehow having concern = culture war? Or caring about a topic = virtue signaling? There’s no authenticity to users like that. They can’t imagine a world where people are caring or concerned about things bigger than themselves.

                                3. 6

                                  I had to filter out the culture tag for the sake of my sanity.

                                  As much as I love reading this site, there’s something about the influx of certain topics, the style of conversation, and so on that – for lack of a better word – triggers me. I have to restrain myself from getting involved, yet I know nothing good can follow from participating.

                                  Few of us are in a position to really effect change, and online discussion (esp. heated) is a net-negative substitute.

                                  This is probably still true for culture stories, but I don’t wanna go look in that dark corner.

                                  Everything else I love, thank you and keep it up for many decades!


                                    I too find the forever culture war exhausting, and treat tech and, by extension lobsters, as a kind of haven where I can think about fun, inspiring things I might want to build or ideas I can grow from.

                                    There is a time and a place for everything, and there are a bazillion fora for discussing that stuff. IMO it doesn’t need to be here.

                                  2. 29

                                    As someone who also subscribes to the (glibly described as) “everything is politics” philosophy, I am still for removing a lot of the “culture” articles. The main reason is that these discussions are already happening elsewhere (for example HN). Society existing everywhere doesn’t mean that we have to discuss society everywhere.

                                    The secondary reason is that there is a general idea for what is on topic, and that is “can this make you a better programmer”. I think that makes some stuff about community management (like CoC discussions for prog languages) on topic, but loads of things that end up getting removed fall far from this goal.

                                    A tertiary reason (something that happens in rant-tagged articles as well): when those articles don’t get pulled down, we end up with the same 5 people yelling at each other saying the exact same things over and over again. There is a clear vibe from some people to want to pull discussions into the same stump speech.

                                    I do think that when there isn’t a forced segue, discussion about society still happens in the comments section. And it stays reasoned. But at least personally, I don’t need every social space to turn into debate club. Lobsters isn’t the only place on the internet.

                                    1. 14

                                      I’m a relative newcomer but I appreciate the fact that discussions tend to be limited to things that have some form of objective evaluation criteria. When someone presents a technical project, I can evaluate it against my requirements. I can have a discussion about why my requirements are different from yours and whether my requirements are not actually solving my underlying problem. I almost certainly have a load of biases around why my requirements ended up being that shape but they’re generally not things that I have particularly strong beliefs about and, if I do, those beliefs are very unlikely to be core to my self image.

                                      When we discuss politics or culture, you and I may have very different ideas about what an ideal society looks like, and very strong beliefs about that shape derived from things that are at the core of our self-identity. If those happen to line up, then we can have a rational discussion about whether a particular policy advances our society towards that goal (though, often, we don’t really have enough data to make a good assessment). If we have conflicting goals for society, then discussing how to reconcile them in a public forum is hard, and maintaining an inclusive culture while those discussions are happening is even harder.

                                      I enjoy discussing politics, even with folks that disagree with me, but I don’t enjoy doing it on the Internet because it’s incredibly easy for things to be quoted out of context or misinterpreted. I’m glad that this is a place where we can put those discussions on one side and engage on other issues.

                                      1. 5

                                        I am torn on this matter, not the least because the one time when I broke my “no politics here” it quickly went sideways and not all in a good way, and it left a bit of a sour taste in my mouth, mostly because, justified or not, I really didn’t want to have a flamewar in an otherwise really civil place.

                                        So on the one hand I think it’s useful, but also healthy and important for a community to be able to discuss things that its members consider important, even if they’re not exactly the reason why we’re all here.

                                        This is probably a holdover of mine from the old days, when any forum, no matter what its primary topic was, also had a “General Discussion” section. A good chunk of it was flamewar but to me, a non-native English speaker at the end of the world, technologically speaking, those things were eye-opening in many ways. Even the things I actively disagreed with. They were useful for me in tech, not just in general. Without them, I’d be largely ignorant to the social, political and economical trends that shape the tech world of tomorrow, and I’d be entirely lost in this sea of information. I also think they were healthy: in my experience, tech communities that do not engage in these exercises and cannot vent on non-technical topics will eventually vent on technical topics, and will eventually cluster around narrow niches with categorical and harsh adepts who produce a lot of judgement but don’t really move the boundary of technology any further. Once they devolve into that, communities aren’t too fun to hang out in anymore, and get an expiration date, too.

                                        Usefulness and healthiness aside, I really wish I could talk about a whole bunch of non-tech things with many of you here. There are people here whose work I admire and I’m sure the original approaches that makes their software so good has also produced a lot of other ideas worth hearing.

                                        But on the other hand the single-section, tag-based, up/down-vote structure is really inadequate for this. Even if the front-page doesn’t promote controversy, the sheer volume of material that can be tagged culture is overwhelming, it’s a category that’s ripe for self-promotion, and it’s a field that’s really inviting for bike shedding while waiting for shit to compile. Unless it’s confined to a separate section, it tends to push out technical content which, in turn, tends to push out technical people.

                                        The section-less structure also means that these things inevitably make it to the front page. On old phpBB boards you could often have civil discussions in the Linux section while also shitposting in the General Discussion section, as long as general awfulness was dealt with via the ban hammer. But here these aren’t separate sections, and shitposting inevitably spreads.

                                        It’s also a very wide umbrella. culture is equally well applied to an article about the political views of early demosceners – which, even though it’s technically politics, I’d really be super curious to read about – and to an employer branding piece about how a company contributes to Rust projects which, after years of exposure to corporate hiring machines, makes me want to puke halfway through the title.

                                        Honestly, the only tag I really dislike is practices, probably because I got a bad case of burnout from over-practice-ising a while back and eww. Ultimately, I left culture unfiltered, but I don’t think we need more of it.

                                        1. 3

                                          At some point (I think about 2 years ago) topics about tech culture and society started to be removed by moderators

                                          I wonder why this was put into place if the discussions were fine. (I only joined a little over a year ago, so I can’t really speak much on this except that I’m curious as to why these posts started being removed.)

                                          1. 12

                                            It’s still a great community. I wouldn’t have buyer’s remorse. It just changed over time to something less interesting to me. Part of it is the new moderation that came with new management. They wanted to narrow the focus of the site. I can’t say that’s why growth started trending downwards, but that downward trend coincides with what I felt. So take it with a grain of salt. I was just highlighting something that might have had an impact.

                                            1. 4

                                              Where else do you get your dose of interesting discussions?

                                              100% I feel the same way too but it’s only made me take time off to reconsider my approach to the website. At the end of the day you either decide to work with it or not.

                                              I’ve begun to vet my posts via lobsters IRC first. Maybe lobsters needs an initial “post filter”? i.e. if a post is thumbs-upped by a member of certain activity and age, it gets listed?
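                                              A rough sketch of what that “post filter” might look like; every name and threshold below is invented for illustration, not an actual Lobsters mechanism:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Member:
    joined: date
    comment_count: int  # stand-in for "activity"


def can_vouch(m: Member, today: date,
              min_age_days: int = 365, min_comments: int = 50) -> bool:
    # A member qualifies as a "filter" voter once they are old and
    # active enough; both thresholds are made up for this sketch.
    age_days = (today - m.joined).days
    return age_days >= min_age_days and m.comment_count >= min_comments


def post_is_listed(vouchers: list[Member], today: date) -> bool:
    # A queued submission goes live as soon as any qualified member
    # gives it a thumbs-up.
    return any(can_vouch(m, today) for m in vouchers)
```

                                              The interesting design question is the thresholds: set them too high and the queue stalls, too low and the filter is just a one-vote delay.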

                                              1. 3

                                                It’s strange but I am finding the best conversations I have are with individuals in private settings. Nice to know the IRC is active. Maybe I should try to hop in. Thanks!

                                            2. 5

                                              The discussions were emphatically not fine, hence the purging efforts by both moderators and the community.

                                              1. 4

                                                Popcorn tech.

                                                You know what’s sad? I tried submitting topics that were incredibly technical. Bleeding edge tech. Nothing, no traction. For example, topics dealing with quantum computing, cryptography, etc.

                                                It’s almost like people don’t want to talk about technology specifically. They want pop-technology, or popcorn tech. Compare the level of technical discussion here to, say, lambda-the-ultimate (is that still around?).

                                                But it’s better here than HN and reddit! So that’s a win.

                                                1. 11

                                                  It’s almost like people don’t want to talk about technology specifically. They want pop-technology, or popcorn tech. Compare the level of technical discussion here to, say, lambda-the-ultimate (is that still around?).

                                                  I think people want to talk about things that they can meaningfully engage with. I’m interested in reading about quantum computing, for example, but I have literally nothing useful to contribute on the subject. You seem to have invited quite a few folks to join, perhaps if you reached out to some physicists then you’d find the audience contained more people who were able to meaningfully contribute on those subjects.

                                                  I’m happy to engage on a range of deeply technical topics here (language design, compiler internals, OS / hypervisor internals, CPU architecture and microarchitecture, capability systems, and so on), and I will on most of those subjects. Quite a few of them have very few comments because there are not very many folks here that share that interest. That doesn’t mean that they’re shallow, it just means that they’re experts in different things to me. I’ve had a few comments where I’ve either been the only person commenting or one of a small set, yet had some very high moderation totals (so other members are happy that I posted, even if they don’t feel that they have anything to add), or where other folks have let me know that they’re grateful for the explanation (often folks who are not members here, but still read the site). Similarly, there have been other threads where I’ve read everything, clicked the up-vote button on some fantastic explanations and clarification, and yet had nothing worthwhile to add myself.

                                                  1. 5

                                                    I’d like to take this opportunity to thank you for your clear comments regarding the dark recesses of C/C++. Even though it is far from my area of expertise you usually manage to make me feel I understand them better.


                                                      Agree wholeheartedly with these conclusions. Also, your posts are reliably interesting, always extremely informed, carefully considered and well worth reading. Thanks!

                                                  2. 2

                                                    Ah, the parent made it seem like the discussions were fine (or didn’t really have a stance on that, I guess I was assuming that).

                                                2. 2

                                                  I did propose a while back that such posts be on topic with their own tag. The few responses to the proposal were overwhelmingly negative. I think it’s fair to say there is not a pent up hunger for that sort of thing to be on topic here.


                                                  I think the quality is going down. Many submissions are borderline spam, or yet another basic howto on something that, if you were interested, you could find on your favorite search engine in seconds. The comment sections are more and more frequently filled with “me too” style comments (including “I love this” and “Great Work!”, which is nice of them, but doesn’t add anything; you could have just upvoted), or with disagreements of little to no merit, and I more and more often read comments from people who didn’t go beyond the headline. And on the technical side there’s quite a bit of objectively wrong information in both articles and comments.

                                                  And then with more people I think there simply ends up being a lot more bikeshedding, which I assume is pretty natural as websites grow. And with sites of this style in particular, the most visible things will be meritless “motherhood statements” that people can easily agree with and that are hard to criticize.

                                                  Don’t get me wrong, luckily none of these is really dominant, it’s just that it seems to be increasing and can be off-putting when there’s randomly multiple cases of this.

                                                  Comparing it to HN, which I actually switched to because of that problem, nowadays they are on equal footing, even though the groups of people, interests, etc. are somewhat different.

                                                  I also wonder how Drew DeVault’s ban, and the ban of links to his blog, affected things, but I don’t want to open that topic.

                                                  Anyways, with that said I am really happy about the “Saves” I’ve collected over the years. A lot of them also for the comment sections. So thanks to everyone for that! :)

                                                  1. 5

                                                    I broadly agree with your concerns. Some observations (from my viewpoint):

                                                    • DDV and others made merry use of us as a marketing channel, which is shitty behavior. We still have some folks who do the same thing, and one of the side-effects is that open lively debate is the first casualty when hucksters just want a clean, attractive billboard for their wares. (See also similar patterns on other UGC platforms that bow to advertisers.)
                                                    • We do seem to have a lot of “motherhood”/“underpants” threads. I’m unsure if it is significantly worse than a few years ago, but it has been a thing I have noticed.
                                                    • “me too” comments are cancer, but there’s also the other orange-site disease of subthreads just totally derailing into detailed discussions of things that have little to do with the original article. Both are bad.
                                                    • A lot of our internal mechanisms for dealing with stuff have gone away over the years; the community has become increasingly hostile to anyone pointing out decorum violations, our moderation is effectively just pushcx, and community-led attempts to fix process issues (as evidenced by the meta tag) seem to have dropped off. I think that is the true existential threat to Lobsters right now.

                                                      the other orange-site disease of subthreads just totally derailing into detailed discussions of things that have little to do with the original article

                                                      A back-of-the-envelope sketch of a solution to that issue would be an increasing time limit imposed on replying to a comment, based on its depth in a thread.

                                                      Tweaks are needed; maybe if you’re a first-time commenter in a thread you don’t get a time limit on the first reply.

                                                      I think this would address the case where 2 people just really really want to be right and keep replying to each other.
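                                                      That back-of-the-envelope sketch could look something like this; the exponential curve and the one-day cap are arbitrary choices for illustration:

```python
def reply_cooldown_minutes(depth: int, first_reply_in_thread: bool = False) -> int:
    """Minutes a user must wait before replying at a given comment depth.

    The curve (doubling per level) and the one-day cap are assumptions
    for this sketch; the point is only that deeper subthreads cost more.
    """
    if first_reply_in_thread or depth == 0:
        return 0  # top-level comments and thread newcomers are unthrottled
    return min(2 ** depth, 24 * 60)  # 2 min at depth 1, capped at one day
```

                                                      Two people trading replies ten levels deep would each be waiting about 17 hours per message (2**10 = 1024 minutes), which mostly ends the exchange on its own.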

                                                    2. 6

                                                      comparing [Lobsters and Hacker News] nowadays, they are on equal footing, even though groups of people, interests, etc. are somewhat different.

                                                      I vehemently disagree.

                                                      This is a listing of the top scored and commented submissions so far this year, from Lobsters, HN, and /r/programming on Reddit.


                                                      I count 8 submissions from the 25 top scored submissions on HN that are on topic for Lobsters. The rest are (US) political or business news. From the 25 top commented, none are on topic for this site.

                                                      Not having to daily wade through that dreck (especially without the help of tags) is what makes this place so much better than HN.


                                                        As mentioned, that is why I switched to HN. I don’t mean to make this a competition though. It’s just something I’ve noticed, and I wanted to share these things as a form of constructive criticism. I think Lobsters does really well, well enough for me to spend time writing comments after all. ;)

                                                        I assume it also very much depends on the time (weekend, weekdays, American, European daytime, …), as well as how you use the websites.

                                                        Also I am not sure if overall top scores are the best measurement. I go there on a regular basis and care more about what I see then, rather than about the highest overall scores over the course of many months. Getting very high scores is a mixture of topics being low-entry-level enough, posting them at the right time, and various other factors.

                                                        Also my view obviously is very subjective, in that I remembered HN as worse when I opened it up a couple of times lately, at moments when I was just a bit disappointed by the front page of Lobsters. So there’s obviously a bit of bias there.

                                                        Looking at the top ones I think actually gives all these sites less reputation than they deserve, with Lobsters clearly winning though.

                                                        I agree that tags help. However I am a bit paranoid about filtering sometimes, because for most tags I could imagine there’s stuff I’d find interesting. Still, it’s certainly a big plus.

                                                    3. 3

                                                      If by “growth” you mean a heuristic capturing overall combined site activity of existing and new users, I would postulate a causal relationship from the re-opening of alternative activities otherwise prevented during pandemic conditions (prior to general vaccine availability) and the lagging consequences from the unwinding of pandemic-related isolation trauma.

                                                      • Active Users by month remained high through 2020 Q4 before trending generally downward in 2021 and 2022. It seems that Comments Posted and Votes Cast also follow this trend.
                                                      • New Users by month began trending generally downward earlier, around the beginning of the pandemic. Where would existing users be meeting new users to invite in 2020 Q2? It seems that Stories Submitted also follows this trend. How many stories are driving users’ excitement for discussion while still being on topic to this site in 2020 Q2 and Q3?

                                                      I have made no attempt at scientific rigor in this assessment; this is chart eyeballing & back-of-napkin thinking.

                                                      1. 3

                                                        It seems to follow the development of covid? It would make sense if people started using lobsters more at the start, but over time became bored of all the digital stuff when they couldn’t do IRL things.

                                                        1. 2

                                                          I have joined in the last year. I haven’t invited anyone else yet that I (a) knew well enough, and (b) thought would be a good fit for this site.


                                                            I think maybe there was a turning point where people started vetting their invites a bit more carefully.

                                                            This is all anecdotal but I remember a bunch of strife around people mis-using flags and downvotes when we had it, and there was some discussion around some folks who were seen as not participating in a way many of us found benefited the community.

                                                            (Yes I know such distinctions are a VERY slippery slope. Community is a delicate flower. I’m super grateful ours continues to thrive.)

                                                          1. 18

                                                            Agreed on everything but Copilot. The freedom to study how the software works is a fundamental attribute of free software. Learning is not covered by the GPL’s requirements. Copilot sometimes copypastes code (honestly - who doesn’t) but broadly it learns. This is entirely in keeping with open source.

                                                            If we’re gonna set a standard that you can’t use things you learnt in software under an open-source license when writing commercial software, we might as well shutter either the entire software industry or the entire open-source movement, because literally everybody does that. It’s how brains work!

                                                            And of course, it’s not like being off Github is gonna prevent MS from feeding your project into Copilot 2.

                                                            1. 64

                                                              Copilot does not learn.

                                                              Like all of these neural network “AIs”, it’s just a pattern recognition system that launders the work of many humans into a new form, which the corporation can profit from but the humans cannot. It’s piracy for entities rich enough to train and operate such an AI, and unethical enough to acquire the training data, but you or I would still be punished for pirating from the corporation. Whether or not it is legal is irrelevant to me (I’m in favor of abolishing copyright), but we must recognize the increasing power imbalance between individuals and corporations such “AI” represents.

                                                              Copilot understands nothing of what it writes. It learns nothing and knows nothing. It is not sentient or alive, no matter how tempting it is to anthropomorphize it.

                                                              1. 15

                                                                I think “pattern recognition system that launders the work of many humans into a new form” is just a rude way to phrase “learning.”

                                                                Define “understands.” Define “knows.” I think transformers derive tiered abstract patterns from input that they can generalize and apply to new situations. That’s what learning is to me.

                                                                1. 19

                                                                  IMHO it’s perilous and not quite fair to decide what a machine should be allowed to do and not to do by semantic convention. “Machine learning” was one uninspired grant writer away from going down in history as, say, “statistically-driven autonomous process inference and replication”, and we likely wouldn’t have had this discussion, because anything that replicates code is radioactive for legal teams.

                                                                  Copilot is basically Uber for copy-pasting from Stack Overflow. It’s in a legally gray area because the legal status of deriving works via statistical models is unclear, not because Microsoft managed to finally settle the question of what constitutes learning after all. And it’s probably on the more favourable side of gray shades because it’s a hot tech topic so it generates a lot of lobbying money for companies that can afford lawyers who can make sure it stays legally defensible until the next hot tech topic comes up.

                                                                  Also, frankly, I think the question of whether what Copilot does constitutes learning or not is largely irrelevant, and that the question of whether Copilot-ing one’s code should be allowed is primarily rooted in entitlement. Github is Microsoft’s platform so, yes, obviously, they’re going to do whatever they can get away with on it, including things that may turn out to be illegal, or things that are illegal but will be deemed legal by a corrupt judge, or whatever. If people don’t want $evil_megacorp to do things with their code, why on Earth was their code anywhere near $evil_megacorp’s machines in the first place?

                                                                  This cannot be a surprise to anyone who’s been in this field for more than a couple of years. Until a court rules otherwise, “fair” is whatever the people running a proprietary platform decide is fair. If anyone actually thought Github was about building a community and helping people do great things together or whatever their mission statement is these days, you guys, I have a bridge in Manhattan, I’m selling it super cheap, the view is amazing, it’s just what you need to take your mind off this Copilot kerfuffle, drop me a line if you wanna buy it.

                                                                  (Much later edit: I know Microsoft is a hot topic in FOSS circles so just to be clear, lemme just say that I use Github and have zero problem with Copilot introducing the bugs that I wrote in other people’s programs :-D).

                                                                  1. 1

                                                                    If machine learning was called “data replication”, it would be misnamed. And if it was called “pattern inference”, it would just be a synonym for learning… I wouldn’t care about Codex if I thought it was just a copypaste engine. I don’t think it is, though. Does it occasionally copypaste? Sure, but sometimes it doesn’t, and those are the interesting cases for me.

                                                                    I don’t think this at all comes down to Github being Microsoft’s platform so much as Github being the biggest repo in one place.

                                                                    I’m not at all defending Microsoft for the sake of Microsoft here, mind. I hate Microsoft and hope they die. I just think this attack does not hold water.

                                                                    1. 7

                                                                      If machine learning was called “data replication”, it would be misnamed.

                                                                      I beg to differ! Machine learning is a misnomer for statistically-driven autonomous process inference and replication, not the other way ’round!

                                                                      I’m obviously kidding but what I want to illustrate is that you shouldn’t apply classical meaning to an extrapolated term. A firewall is neither a wall nor is it made of fire, and fire protection norms don’t apply to it. Similarly, just because it’s called machine learning doesn’t mean you should treat it as human learning and apply the same norms.

                                                                      1. 2

                                                                        I don’t think machine learning learns because it’s called machine learning, I think it learns because pattern extraction is what I think learning is.

                                                                        1. 6

                                                                          I realize that. I want to underline that, while machine learning may be superficially analogous to human learning, just like a firewall is superficially analogous to a wall made of fire, it does not mean that it should be treated the same as human learning in all regards.

                                                                          1. 2

                                                                            I don’t think it should be treated the same as human learning in all regards either. I think it’s similar to human learning in some ways and dissimilar in others, and the similarities are enough to call it “learning”.

                                                                  2. 18

                                                                    The standard philosophical definition of knowledge is a justified true belief. Copilot and other AIs make the belief part problematic, so bracket that. But they don’t justify things well at all. Justification is a social process of showing why something is true. The AIs sound like total bullshit artists when asked to justify anything. I don’t think Copilot “knows” things any more than a dictionary does, yet.

                                                                    1. 2

                                                                      Putting aside Gettier cases, that’s not what I understand “justified” to mean. You just need to have a reason for holding the knowledge. With AI, reinforcement learning is the justification.

                                                                      The point of “justified belief” is just that it’s not knowledge if you just guess that it’s raining outside, even if it is in fact raining.

                                                                      1. 8

                                                                        The definition that @carlmjohnson is quoting is Plato’s and ever since Plato put it forth, knowledge theorists have been bickering about what “justified” means. The history of ideas after the age of Boethius or so isn’t quite my strong point so I’ll leave that part to someone else but FWIW most classical definitions of justification either don’t readily apply to reinforced learning, or if they do, it fails them quite badly.

                                                                        That being said, if you want to go forth with that definition, it’s very hard to frame a statistical model’s output as belief in the first place, whether justified or not. Even for the simplest kinds of statistical models (classification problems with binary output – yes/no) it’s not at all clear how to formulate what belief the model possesses. For example, it’s trivial to train a model to recognize if a given text is an Ancient Greek play or not. But when you feed it a piece of text, the question that the model is “pondering” isn’t “Is this an Ancient Greek play?”, but “Should I say yes?”, just like any other classification model. If subjected to the right laws and statements, a model that predicts whether a statement would cause someone to be held in contempt of the court might also end up telling you if a given text is an Ancient Greek play with reasonable accuracy. “Is this an Ancient Greek play?” and “Is this statement in contempt of the court?” are not equivalent questions, but the model will happily appear to answer both with considerable accuracy.

                                                                        The model is making an inference about the content (“This content is of the kind I say yes to/the kind I say no to”), but because the two kinds cannot be associated with a distinct piece of information about the subject being fed to the model, I don’t think it can be said to constitute a belief. It’s not a statement that something is the case, because it’s not clear what it asserts to be the case or not: there are infinitely many different classification problems that a model might turn out to solve satisfactorily.

                                                                        1. 4

                                                                          In Greek, “justified” was some variation on “logos”: an account. Obviously everyone and their Buridan’s ass has a pet theory of justification, but I think it’s fair to interpret Plato’s mooted definition (it’s rejected in the dialogue IIRC!) as being “the ability to give an account of why the belief is true”. This is the ability which Socrates finds that everyone lacks, and why he says he knows that he knows nothing.

                                                                          1. 7

                                                                            Ugh, it’s really tricky. This comes up in two dialogs: Theaetetus, where knowledge gets defined as “true judgement with an account” (which IIRC is the logos part) and it’s plainly rejected in the end. The other one is Meno, where it’s discussed in the terms of the difference between true belief and knowledge, but the matter is not definitively resolved.

                                                                            I was definitely wrong to say it was Plato’s – I think I edited my comment which initially said “is effectively Plato’s” because I thought it was too wordy but I was 100% wrong to do it, as Plato doesn’t actually use this formulation anywhere (although his position, or rather a position that can be inferred from the dialogues, is frequently summarized in these terms). (Edit: FWIW this is a super frequent problem with modern people talking about ancient sources and one of the ways you can probably tell I’m an amateur :P)

                                                                            I think it’s fair to interpret Plato’s mooted definition (it’s rejected in the dialogue IIRC!) as being “the ability to give an account of why the belief is true”.

                                                                            You may know of this already but just in case your familiarity with modern philosophy is as spotty as mine, only with holes in different places, and if you’re super curious and patient, you’re going to find Gettier’s “Is Justified True Belief Knowledge?” truly fascinating. It’s a landmark paper that formalizes a whole lot of objections to this, some of them formulated as early as the 15th century or so.

                                                                            The counter-examples Gettier comes up with are better from a formal standpoint but Russell famously formulated one that’s really straightforward.

                                                                            Suppose I’m looking at a clock which shows it’s two o’clock, so I believe it’s two o’clock. It really is two o’clock – it appears that I possess a belief that is both justified (I just looked at the clock!) and true (it really is two o’clock). I can make a bunch of deductions that are going to be true, too: for example, if I were to think that thirty minutes from now it’s going to be half past two, I’d be right. But – though I haven’t realized it – that clock has in fact stopped working since yesterday at two. (Bear with me, we’re talking about clocks from Russell’s age). My belief is justified, and it’s true, but only by accident: what I have is not knowledge, but sheer luck – I could’ve looked at the clock at half past two and held the same justified belief, but it would’ve been false, suggesting that an external factor may also be involved in whether a belief is true or not, justified or not, and, thus, knowledge or not, besides the inherent truth and justification of a statement.

                                                                            1. 6

                                                                              The counter-examples Gettier comes up with are better from a formal standpoint but Russell famously formulated one that’s really straightforward.

                                                                              I love collecting “realistic” Gettier problems:

                                                                              • You’re asked a question and presented with a multiple choice answer. You can rule out 3 of the answers by metagaming (one is two orders of magnitude different from the others, etc)
                                                                              • I give you 100 reasons why I believe X. You examine the first 30 of them and they’re all completely nonsensical. In fact, every argument but #41 is garbage. Argument #41 is irrefutable.
                                                                              • I believe “Poets commit suicide more often than the general population”, because several places say they commit suicide at 30x the rate. This claim turns out to be bunk, and a later investigation finds it’s more like 1.1x.
                                                                              • I encounter a bug and know, from all my past experience dealing with such bugs, that it’s probably reason X. I have not actually looked at the code, and don’t even know what language it’s written in – and it’s one notable for not having X-type bugs. The developers were doing something extremely weird that subverted that guarantee, though, and it is in fact X.
                                                                              • I find an empirical study convincingly showing X. The data turns out to have been completely faked. This is discovered by an unrelated team of experts who then publish an empirical study convincingly showing X’, which is an even stronger claim than X.
                                                                              1. 3

                                                                                My favourite ones come from debugging, that’s actually what got me into this in the first place (along with my Microwave Systems prof stubbornly insisting that you should know these things, even if engineers frown upon it, but that’s a whole other story):

                                                                                • While debugging an Ethernet adapter’s driver, I am pinging another machine and watching the RX packet count of an interface go up, so I believe packets are being received on that interface, and the number of packets received on my machine matches the number of packets that the other machine is sending to it. Packets are indeed being received on the interface. I made a stupid copy-paste error in the code: I’m reading from the TX count register and reporting that as the RX count. It only shows the correct value because sending a ping packet generates a single response packet, so the two counts happen to match.
                                                                                • An RTOS’ task overflows its stack (this was proprietary, it’s complicated) and bumps into another task’s stack, corrupting it. I infer the system crashes because of the stack corruption. Indeed, I can see task A bumping into task B’s stack, then A yields to B, and B eventually jumps to whatever garbage is on the stack, thus indeed crashing the system. There’s actually a bug in the process manager which causes the task table to become corrupted: A does overflow its stack, but B’s stack is not located where A is overflowing. When A yields to B, the context is incorrectly restored, and B looks for its stack someplace other than where it actually is, loading the stack pointer with an incorrect value. It just so happens that, because B is usually started before A, the bug is usually triggered by B yielding to A; but A just sits in a loop and toggles a couple of flags, so it never does anything with the stack and never crashes, even though its stack does eventually get corrupted, too.
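
                                                                                The first bug above can be sketched in a few lines of C. The register names, offsets, and accessor functions here are invented for illustration; no real driver is being quoted:

```c
#include <stdint.h>

/* Hypothetical sketch of the RX/TX copy-paste bug described above.
   Register names and offsets are made up for illustration. */
#define REG_RX_PKT_COUNT 0x10
#define REG_TX_PKT_COUNT 0x14

uint32_t regs[0x20]; /* stand-in for the device's memory-mapped registers */

static uint32_t read_reg(uint32_t off) { return regs[off]; }

/* Correct: report the RX counter. */
uint32_t get_rx_count(void) { return read_reg(REG_RX_PKT_COUNT); }

/* Buggy copy-paste: reads the TX counter but reports it as RX. A ping
   generates exactly one response per request, so TX == RX and the bug
   stays hidden until traffic becomes asymmetric. */
uint32_t get_rx_count_buggy(void) { return read_reg(REG_TX_PKT_COUNT); }
```

                                                                                As long as traffic is symmetric (one reply per request), both functions return the same number; only asymmetric traffic exposes the mistake.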

                                                                                I got a few other ones but it’s really late here and I’m not sure I’m quite coherent by now :-D.

                                                                              2. 2

                                                                                I’m familiar with Gettier cases. I never dove very deep into the literature. It always struck me that a justification is not just a verbal formulation but needs some causal connection to the fact of the matter: a working clock causes my reasoning to be correct but a stopped clock has no causal power etc. I’m sure someone has already worked out something like this and brought out the objections etc etc but it seems like a prima facie fix to me.

                                                                            2. 1

                                                                              Yes, IMO the belief is “how may this text continue?” However, efficiently answering this question requires implicit background knowledge. In a similar sense, our brains may be said to only have information about “what perpetuates our existence” or “what makes us feel good.” At most we can be said to have knowledge of the electric potentials applied to our nerves, as Plato also made hay of. However, as with language models, a model of the unseen world arises as a side effect of the compression of sensory data.

                                                                              Actually, language models are fascinating to me because they’re a second-order learner. Their model is entirely based on hearsay; GPT-3 is a pure gossip. My hope for the singularity is that language models will be feasible to make safe because they’ll unavoidably pick up the human ontology by imitation.

                                                                              1. 3

                                                                                Yes, IMO the belief is “how may this text continue?”

                                                                                That’s a question, not a belief – I assume you meant “This text may continue ”. This has the same problem: that’s a belief that you are projecting onto the model, not necessarily one that the model formulates. Reasoning by analogy is an attractive shortcut but it’s an uneasy one – we got gravity wrong because of it for almost two thousand years. Lots of things “may be said” about our brains, but not all of them are true, and not all of them apply to language models.

                                                                                1. 1

                                                                                  Sure, but by that metric everything that anyone has ever said is a belief that person is projecting. I think that language models match the pattern of having a belief, as I understand it.

                                                                                  a belief that you are projecting onto the model, not necessarily one that the model formulates

                                                                                  You’re mixing up meta-levels here: I believe that the model believes things. I’m not saying that we should believe that the model believes things because the model believes that; rather, (from my perspective) we should believe it because it’s true.

                                                                                  In other words, if I model the learning process of a language model, the model in my head of the process fits the categories of “belief” and “learning”.

                                                                                  1. 5

                                                                                    I think that language models match the pattern of having a belief, as I understand it.

                                                                                    Observing that a model follows the pattern of a behaviour is not the same as observing that behaviour though. For example, Jupiter’s motion matches the pattern of orbiting a fixed Earth on an epicycle, but both are in fact orbiting the Sun.

                                                                                    FWIW, this is an even weaker assumption than I am making above – it’s not that no statements are made and that we only observe something akin to statements being made. I’m specifically arguing that the statements that the model appears to make (whether “it” makes them or not) are not particular enough to discriminate any information that the model holds about the world outside of itself and, thus, do not qualify as beliefs.

                                                                                    1. 0

                                                                                      If the world had a different state, the model would have different beliefs - because the dataset would contain different content.

                                                                                      Also, Jupiter is in fact orbiting a fixed Earth on an epicycle. There is nothing that inherently makes that view less true than the orbiting-the-sun view. But I don’t see how that relates at all.

                                                                            3. 3

                                                                              The problem is that the training objective pushes the model toward reproducing the data distribution it was trained on. It’s completely orthogonal to truth about reality, in exactly the same way as guessing the state of the weather without evidence.

                                                                              1. 3

                                                                                The data is sampled from reality… I’m not sure what you think evidence is, that training data does not satisfy.

                                                                                It’s exactly the same as guessing the weather from a photo of the outside, after having been trained on photo/weather pairs.

                                                                                1. 8

                                                                                  The data for language models in general is sampled from strings collected from websites, which includes true statements but also fiction, conspiracy theories, poetry, and just language in general. “Do you really think people would get on the Internet and tell lies” is one of the oldest jokes around for a reason.

                                                                                  You can ask GPT-3 what the weather is outside, and it’ll give you an answer that is structured like a real answer would be, but has no relation to the actual weather at your location or at whatever data centers collectively host the darned thing. It _looks_ like a valid answer, but there’s no reason to believe it is one, and it’s dangerous to infer that anything like training on photo/weather pairs is happening when nobody built that into the actual model at hand.

                                                                                  Copilot in particular is no better - it’s more focused on code specifically, but the fact that someone wrote code does not mean that code is a correct or good solution. All Copilot can say is that it’s structured in a way that resembles other structures it’s seen before. That’s not knowledge of the underlying semantics. It’s useful and it’s an impressive technical achievement - but it’s not knowledge. Any knowledge involved is something the reader brings to the table, not the machine.

                                                                                  1. 2

                                                                                    Oh I’ll readily agree that Copilot probably doesn’t generate “correct code” rather than “typical code.” Though if it’s like GPT-3, you might be able to prompt it to write correct code. That might be another interesting avenue for study.

                                                                                    “However, this code has a bug! If you look at line”…

                                                                                    1. 2

                                                                                      I’ve experimented with this a bit and found it quite pronounced - if you feed copilot code written in an awkward style (comments like “set x to 1”, badly named variables) you will get code that reflects that style.
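
                                                                                      For what it’s worth, the kind of “awkward style” prompt described here looks something like this contrived C fragment (my own invention, not actual Copilot input or output):

```c
/* Contrived example of the "awkward style" described above: literal
   comments that restate the code, plus opaque variable names. The
   observation is that, given a prompt like this, Copilot tends to
   continue in the same style. */
int compute(void)
{
    int a1 = 1;      /* set a1 to 1 */
    int a2 = a1 + 1; /* set a2 to a1 plus 1 */
    return a1 + a2;  /* return the sum of a1 and a2 */
}
```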

                                                                      2. 14

                                                                        Do you think Microsoft would be okay with someone training an AI on the leaked Windows source code and using it to develop an operating system or a Windows emulator?

                                                                        1. 5

                                                                          You don’t even have the right to read that. That said, I think it should be legal.

                                                                          1. 14

                                                                            I’m not asking whether it should be legal, but whether Microsoft would be happy about it. If not, it’s hypocritical of them to make Copilot.

                                                                            1. 7

                                                                              Oh by no means will I argue that Microsoft are not hypocritical. I think it’s morally valid though, and whether Microsoft reciprocates shouldn’t enter into it.

                                                                            2. 4

                                                                              Bit of a niggle, but it depends on the jurisdiction, really. Believe it or not, there exist jurisdictions where the Berne Convention is not recognized and as such it is perfectly legal to read it.

                                                                          2. 12

                                                                            I’d personally relicense all my code to a license that specifically prohibits it from being used as input for a machine-learning system.

                                                                            This is specifically regarding text and images, but the principle applies.


                                                                            “It would violate Freedom Zero!” I don’t care. Machines aren’t humans.

                                                                            1. 14

                                                                              Machines aren’t humans.

                                                                              Exactly this. I think anthropomorphising abstract math executed in silicon is a trap for our emotional and ethical “senses”. We cannot fall for it. Machines and algorithms aren’t humans, aren’t even alive in any sense of the word, and this must inform our attitudes.

                                                                              1. 1

                                                                                Machines aren’t humans. That’s fine, but irrelevant.

                                                                                Machines aren’t alive. Correct, but irrelevant.

                                                                                If the rule doesn’t continue to make sense when we finally have general AI or meet sapient aliens, it’s not a good rule.

                                                                                That said, we certainly don’t have any human-equivalent or gorilla-equivalent machine intelligences now. We only have fuzzy ideas about how meat brains think, and we only have fuzzy ideas about how transformers match input to output, but there’s no particular reason to consider them equivalent. Maybe in 5 or 10 or 50 years.

                                                                              2. 5

                                                                                Source distribution is like the only thing that’s not covered by Freedom Zero so you’re good there 🤷🏻‍♀️

                                                                                Arguably the GPL and the AGPL implicitly prohibits feeding it to copilot.

                                                                                (I personally don’t mind my stuff being used in copilot so don’t shoot the messenger on that.

                                                                                (I don’t mind opposition to copilot either, it sucks. Just, uh, don’t tag me.))

                                                                                1. 1

                                                                                  Do we have a lawyer’s take here, because I’d be very interested.

                                                                                  1. 4

                                                                                    It’s the position of the Software Freedom Conservancy according to their web page. 🤷🏻‍♀️ It hasn’t been tried in court.

                                                                                2. 1

                                                                                  I’m on board; however, I would, at least personally, make an exception if the machine-learned tool and its data/neural net were free, libre, and open source too. Of course, the derivative work also needs to not violate the licenses.

                                                                                3. 9

                                                                                  Learning is not covered by the GPL’s requirements.

                                                                                  For most intents and purposes, licences legally cover it as “creation of derived works” – otherwise, why would “clean room design” ever exist? Just take a peek at the decompiled sources, you’re only learning after all.

                                                                                  1. 5

                                                                                    I think this depends on the level of abstraction. There’s a difference in abstraction between learning and copying - otherwise, clean room design would itself be a derivative work.

                                                                                    1. 11

                                                                                      I don’t understand what you mean. Clean-room implementation requires not having looked at the source of the thing you’re re-implementing. If you read the source code of a piece of software to learn, then come up with an independent implementation yourself, you haven’t done a clean-room implementation.

                                                                                      1. 3

                                                                                        Clean-room design requires having read documentation of the thing you are reimplementing. So some part of the sequence read -> document -> reimplement has to break the chain of derivation. At any rate, my contention is that training a neural network to learn a concept is not fundamentally different from getting a human to document leaked source code. You’re going from literal code to abstract knowledge back down to literal code.

                                                                                        Would it really change your mind if OpenAI trained a second AI on the first AI in-between?

                                                                                        1. 3

                                                                                          At any rate, my contention is that training a neural network to learn a concept is not fundamentally different from getting a human to document a leaked source code.

                                                                                          I think it’s quite different in the sense that someone reading the code’s purpose may come up with an entirely different algorithm to do the same thing. This AI won’t be capable of that - it is only capable of producing derivations. Sure, it may mix and match from different sources, but that’s not exactly the same as coming up with a novel approach. For example, unless there’s something like it in the source you feed it, I doubt the “AI” would be able to come up with something like Quake III’s fast inverse square root (popularly attributed to Carmack).
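
                                                                                          For reference, the fast inverse square root mentioned above is short enough to quote nearly in full; this version replaces the original’s undefined-behaviour pointer cast with memcpy but keeps the magic constant and single Newton-Raphson step:

```c
#include <stdint.h>
#include <string.h>

/* The Quake III fast inverse square root: a "magic" bit-level initial
   guess refined by one Newton-Raphson iteration. Accurate to roughly
   0.2% relative error. */
float q_rsqrt(float number)
{
    float x2 = number * 0.5f;
    float y  = number;
    uint32_t i;
    memcpy(&i, &y, sizeof i);      /* reinterpret float bits as integer */
    i = 0x5f3759df - (i >> 1);     /* magic-constant initial estimate */
    memcpy(&y, &i, sizeof y);
    y = y * (1.5f - (x2 * y * y)); /* one Newton-Raphson refinement */
    return y;
}
```

                                                                                          The point stands either way: the trick is famous precisely because it is not something you would derive from looking at typical surrounding code.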

                                                                                          1. 2

                                                                                            You can in theory get Codex to generate a comment from code, and then code from the comment. So this sort of process is entirely possible with it.

                                                                                            It might be an interesting study to see how often it picks the same algorithm given the same comment.

                                                                                          2. 2

                                                                                            In copyright law, we have usually distinguished between an interface and an implementation. The difference there is always gonna be fuzzy, because law usually is. But with an AI approaches, there’s no step which distinguishes the interface and the implementation.

                                                                                      2. 3

                                                                                        One problem here is the same sort of thing that came up in the Oracle/Google case — what do you do with things that have one “obvious” way to do them? If I’m the first person to write an implementation of one of those “obvious” functions in a given language, does my choice of license on that code then restrict everyone else who ever writes in that language?

                                                                                        And a lot (though of course not all) of the verbatim-copying examples that people have pointed out from Copilot have been on standard/obvious/boilerplate type code. It’s not clear to me that licensing ought to be able to restrict those sorts of things, though the law is murky enough that I could see well-paid attorneys arguing it either way.

                                                                                    1. 12

                                                                                      Oh, we’re finally bringing back FrontPage and iWeb?

                                                                                      1. 5

                                                                                        Eh, almost. As far as I can tell, this imposes some file/directory structure constraints and has limited HTML, template and theme editing features. So I’d say we’re bringing half of WorldWideWeb back for now :-). It took us about five years to get from that to FrontPage so, adjusting for modern software boilerplate and maintenance requirements, I’d say give it another… ten years or so :-).

                                                                                        On the bright side, the HTML code that Publii produces looks considerably less atrocious than anything FrontPage ever did, so maybe it’s worth waiting those ten years or so!

                                                                                        1. 5

                                                                                          Yeah I get that it feels full circle but I think this is a bit different. I’ve never used FrontPage but I remember iWeb feeling more focused on WYSIWYG web design. Publii feels more like a CMS with all the features you’d expect for a blog: posts, authors, tags, categories, excerpts, feeds, etc. The default theme looks nice, works on mobile, supports dark mode, and provides the exact right level of configurability for my use case (change colors, heading image, date format, pagination, etc.) without having to touch code.

                                                                                        1. 35

                                                                                          Moved my stuff to Codeberg; it was really painless with Gitea’s migration tools. Gotta say I enjoy the noise-free experience. GitHub was turning into yet another social media site, IMO.

                                                                                          Also have my own Gitea instance running for other stuff.

                                                                                          1. 23

                                                                                            Weird to say, but having social interactions on GitHub is what attracted me to use it in the first place.

                                                                                            1. 17

                                                                                              Nothing wrong with that. I’m just one of those people who get really easily addicted to social media. I found myself refreshing the GitHub front page many times a day to see if anything interesting had popped up… That’s not healthy at all, which is another reason why I moved to Codeberg. It has similar features, but they don’t jump out at you, battling for your attention, the way GitHub’s do.

                                                                                            2. 15

                                                                                              another social media

                                                                                              It’s a plug, but this week I made a filter list (for uBlock Origin and the like) to hide some of the overtly social features, specifically from the feeds. As most of my employers and so many projects are on GitHub, I don’t feel like I have a choice, but this list helps keep some of the distraction and attempts to increase engagement at bay.

                                                                                              1. 3

                                                                                                Thanks for this, it helps.

                                                                                                1. 5

                                                                                                  Wow, I wasn’t actually expecting anyone to try it. It means a lot to hear that!

                                                                                              2. 14

                                                                                                Gitea really is a self hosting wonder. Huge fan, and I wish it got as much press as Gitlab does.

                                                                                                1. 7

                                                                                                  Cannot second this hard enough. If you went to go look for github alternatives and ended up unimpressed with the buggy, clunky, slow gitlab UI, please give codeberg/gitea a look; they are dramatically higher quality. I feel like every time I go to use gitlab I find a new bug, and I have yet to see a single bug in codeberg.

                                                                                                  1. 3

                                                                                                    I suspect this is because Gitlab is trying to do a LOT - just like Github (git web UI, bug tracking, wiki, discussions, locks, socks, lingerie) whereas Gitea does one thing and does it well.

                                                                                                    The team behind Gitea also put a ton of effort into things like documentation and ease of installation which matter a lot more than many people give them credit for.

                                                                                                2. 2

                                                                                                  …how is GitHub anything like social media? You can’t make posts or anything…

                                                                                                  1. 13

                                                                                                    GitHub is Facebook for programmers. Your posts are your repos, commits, issues & comments, pull requests, discussion posts, wiki pages. Many of these posts can be “liked” using stars and upvotes. There are several kinds of “feeds” where you can see a stream of other people’s posts.

                                                                                                    Although I use GitHub for pragmatic reasons, I’m not comfortable with how Facebooky it is. I just found out about Codeberg from this article, and TBH it looks good to me. They don’t have their own CI server yet (in planning since about 2020), so I’d have to think about how I want to do CI, if I switch.

                                                                                                    1. 2

                                                                                                      GitHub is Facebook for programmers.

                                                                                                      More like Facebook + LinkedIn, considering that potential employers and tech recruiters treat it as sort of a CV. Odd coincidence that LinkedIn is another MS appendage.

                                                                                                      Centralization also greases the wheels of surveillance.

                                                                                                      1. 2

                                                                                                        So people are refreshing their feed to see if anyone they know has… Opened any issues today?

                                                                                                        1. 7

                                                                                                          If I have a PR open, I do check Github for notifications regularly.

                                                                                                          1. 3

                                                                                                            doesn’t it email you?

                                                                                                            1. 2


                                                                                                          2. 2

                                                                                                            Yes, you can do that. I get my feed via email. I see new issues and PRs on my own repos, and I also see comments from issues in other people’s repos that I have commented on. I used to monitor new issues from some repos I don’t own but was active in, but I don’t do that right now.

                                                                                                            The stuff in my feed is only for participating in projects, but the facebookiness goes well beyond that. You can follow people and see their activity, you get “achievement badges” whether you want them or not, you can put a ton of information in your personal profile, etc. It’s not as creepy as Linked In or Facebook yet. Nobody has ever hassled me to follow them or star their project. But it’s owned by Microsoft, and their track record suggests that things will get creepier.

                                                                                                            Update: yes you can remove those “achievement badges” from your public profile. Just did that.

                                                                                                            1. 3

                                                                                                              a[href$="tab=achievements"]:upward(div) is the uBlock rule to hide everyone else’s achievements too

                                                                                                            2. 2

                                                                                                              To be honest I check the feed at least once a day to see what the people I followed liked. If you follow the right people you can find the right repos.

                                                                                                              1. 1

                                                                                                                What I really want to know is if checking @crazyloglad’s commits once in a while counts as stalking. Asking for a friend.

                                                                                                                1. 1

                                                                                                                  A few months ago I saw a new post about a GitHub repo I’d just discovered a few hours before. I thought “odd coincidence!”, then I read the first comment (by the person who posted it) saying they’d seen the repo in my GitHub feed when I starred it.

                                                                                                                  I’m fine with the social aspects of GitHub. It’s useful when you work in a team, or to know how much support there is for fixing a particular issue, or whatever.

                                                                                                          1. 7

                                                                                                            For example, it is possible to adjust the white balance or auto-focus on a camera app, or make an event recurring on a calendar app. These advanced controls can be tucked further away, as the majority of users will not need to see their UI cluttered with them.

                                                                                                            That designers believe we should design primarily for the type of user that never creates a recurring event in their calendar app helps me understand the sorry state of consumer software much better.

                                                                                                            1. 3

                                                                                                              Not so much “never”, it’s the 80/20 rule - in this example - “recurring event” may well be better classified as an “obvious” interaction. Do 80% of GCal’s billion users regularly make new recurring events? Maybe. Sadly I don’t think designers are asking these questions. So, you’re absolutely right, consumer software usability is a sorry mess.

                                                                                                              1. 3

                                                                                                                The 80/20 rule is, itself, a rule of thumb, something to consider in a wider context. It’s sometimes true that the importance of a function is proportional to how often it’s accessed, but that’s not always the case. As a trivial counter-example, consider the case of an “emergency shutdown” button: it’s rarely used – ideally, never – but tucking it away someplace non-obvious is a very bad idea.

                                                                                                                Maybe this makes sense for GCal (I obviously don’t have access to their telemetry data). But speaking of calendaring software and recurring events in general, while it’s likely that 80% of a program’s users don’t need that, it’s also likely that the 20% who do use such an “advanced” function are users who rely heavily on their calendar tool, and presumably value its efficiency, since nobody uses calendar tools for fun. Tucking the features these users need behind all sorts of hamburger menus and intermediary windows is unhelpful to the users who are most committed, least likely to switch to other solutions, most likely to advocate for your software, and easiest to retain as paying customers. Depending on how you do it, that may also end up reducing efficiency exactly for the target group that values efficiency the most, while offering marginal usability gains to the target group that already ignores most of what’s on the home screen anyway and is likely to be swayed by competitors through little more than a marketing campaign that cleverly uses pictures of cute but slightly angry cats.

                                                                                                                These days, a considerable chunk of the software industry cannot really monetize the applications it offers per se. Mobile apps are very cheap, and it’s hard to get people to pay for some types of applications anymore – browsers, email, RSS clients, even word processors in many professional fields, to cite a few examples. Companies that find themselves in this kerfuffle certainly need to optimise their designs “for the 80%” and attach zero weight to “the 20%”, because that’s where user conversion is more likely to happen, and if you’re primarily relying on monetising user accounts (through advertising, sponsorships or whatever) and low-tier paid subscriptions (with minimal features), user conversion and low-tier customer retention, rather than heavily-invested, loyal, long-term customers, are your primary money printing machines. But IMHO we shouldn’t mistake the principles used to design this kind of software for universal design principles, which result in good interfaces wherever you apply them.

                                                                                                                (Edit: just to be clear, I’m not discounting the 80/20 rule nor your idea of applying it, and especially not the way it’s applied by GCal, since it’s unlikely that, after thinking about it for all of two minutes, I can outmatch Google’s marketing team, which has access to data that I don’t have, and obviously has to be primarily concerned with the question of how to bring in more money, not how to make a better calendaring tool. I just wanted to add some nuance to the way the 80/20 rule is considered. I think modern UX design is getting increasingly dogmatic and I like to poke at it once in a while :-) ).

                                                                                                                1. 1

                                                                                                                  Thanks for this excellent, well considered reply. I find myself strongly agreeing with all you’ve written.

                                                                                                            1. 11

                                                                                                              I’m doing a lot of work on decentralized blogging (i have a pre-alpha protocol and implementation), but IMO, looking back to old-school blogging is the wrong direction.

                                                                                                              True decentralization has to start at the architecture and design level, and blogging just built on the normal Web 1.0 stack, so it was only decentralized in that personal websites are independent of each other. But setting up and running your own website is nontrivial. So the vast majority of bloggers used hosted systems like Blogger or LiveJournal. That’s no longer decentralized IMO. Same is true of Mastodon et al. You have to give up a lot of trust and control to whomever runs your server. And the more “social” features like comments and pings were never secure and so were very vulnerable to spam.

                                                                                                              Truly decentralized blogging has to build from a secure P2P architecture, even if it’s not strictly run that way. Servers can and will exist, but their role is to help with discovery, connectivity and availability; they should have nothing to do with trust or identity — that’s controlled by the peer and the user, using cryptography. Scuttlebutt is an example of a system like this.

                                                                                                              Your points about retro computers are interesting. The crypto might be an issue for some older CPUs. How long does it take to do a Curve25519 key exchange, or encrypt with ChaCha20, on an Apple ][ or an 8086? I can say it’s not a problem on a Raspberry Pi, though; I’m using one as a mini-server for my protocol.
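For a rough sense of why this is costly on 8-bit hardware: ChaCha20 is built entirely from 32-bit adds, XORs and rotates (the “quarter round”), all of which a 6502 or 8086 has to emulate with multi-instruction sequences per 32-bit word. A minimal Python sketch of the quarter round, following the RFC 8439 description (this is just an illustration of the operation mix, not a usable cipher implementation):

```python
MASK32 = 0xffffffff

def rotl32(x, n):
    # 32-bit left rotation: a single instruction on ARM,
    # but a multi-instruction loop on an 8-bit CPU.
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarter_round(a, b, c, d):
    # The ChaCha20 quarter round (RFC 8439, section 2.1):
    # 4 additions, 4 XORs and 4 rotations on 32-bit words.
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d

# One 64-byte ChaCha20 block runs 20 rounds of 4 quarter
# rounds each, i.e. 80 quarter-round evaluations per block.
```

The RFC 8439 test vector (a=0x11111111, b=0x01020304, c=0x9b8d6f43, d=0x01234567) should come out to (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb), which is a quick sanity check for a port to whatever retro target you care about.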

                                                                                                              1. 7

                                                                                                                Everyone says that decentralized $X needs to build on the P2P stack, but I haven’t really found many applications like this that actually work well for end users.

                                                                                                                SecureScuttlebutt has some incredible ideas, but anyone who’s ever tried to actually set it up and use it can tell you that it can be a challenge.

                                                                                                                So, what would this look like and can you suggest any current implementations that follow this model?

                                                                                                                1. 4

                                                                                                                  That’s what I believe as well. A raspberry pi at home is the ideal middleware to bridge retrocomputers into the blogosphere since they can’t really do any of the crypto stuff.

                                                                                                                  I’m quite active on Secure Scuttlebutt, my own client (Patchfox) is the only one with an RSS/Atom importer able to slurp a post from the blogosphere into SSB “blog messages” :-)

                                                                                                                  I’m quite keen to follow what you’re doing with decentralised blogging; is there anywhere I can subscribe or keep an eye on it?

                                                                                                                  1. 3

                                                                                                                    I’m keeping it pretty stealthy for the moment — I haven’t even asked anyone else to try it out yet — but I really want to start opening it up soon. When I do I will definitely post about it here.

                                                                                                                  2. 3

                                                                                                                    The crypto might be an issue for some older CPUs. How long does it take to do a Curve25519 key exchange, or encrypt with ChaCha20, on an Apple ][ or an 8086?

                                                                                                                    This is certainly a legitimate problem but I think the experience of the Gemini community has shown that a Gopher (or plain HTTP) gateway is good enough in 95% of the “what about the old computers?” cases. It’s not ideal but I think it’s also fair to just admit that some computers will have to be left out at some point. I think that, for a truly P2P, decentralised protocol, such a solution doesn’t even have to sacrifice decentralisation. Even in retro computing circles there are very few people who only run Apple ][s – most of them also have a computer from this century at least. Pointing your favourite old friend at your current computer isn’t that hard, and you can run the gateway on that one. There are various approaches to that being tried out even in web land (see e.g. ), where things are far less flexible than with a P2P protocol.

                                                                                                                    FWIW, I, too, think you’re right with regard to decentralisation. The relevant Web 1.0-era technology to find inspiration in isn’t blogs, it’s Napster and the further decentralised models that it spawned or influenced, like Kazaa and Bittorrent.

                                                                                                                    1. 1

                                                                                                                      You could also take the approach of something like Fujinet where you offload the networking bits to a cheap outboard CPU like the ESP32. Let it handle the HTTPS and then communicate with the retro-computer using a protocol/mechanism it can handle. In Fujinet’s case on the Atari, it works over the SIO bus. I know on the Apple II it works with the SmartPort interface, I think, and on the Atari Lynx it uses CommLynx.

                                                                                                                    2. 2

                                                                                                                      Decentralised doesn’t mean “no server” in the strict sense, nor in any sense that matters. It can mean simply having a choice of servers, and the option to build and/or run your own. Not only is this a decentralised system with all the benefits and liberties that entails, but it’s a helluva lot easier to build and maintain, a lot easier on the end-users, and more reliable than a global p2p mesh.

                                                                                                                      1. 3

                                                                                                                        Strictly speaking you’re correct, but I’ve been observing and/or working in this area since the 1990s and I’m really dissatisfied with federated architectures like Jabber or Mastodon. They still require you to put way too much trust on a server, and the difficulty of setting up and running servers means few people will do so, resulting in bigger and bigger agglomerations. Then the big servers start to play games about who they will or won’t connect with (either for business reasons, like the IM systems of the 00’s, or political reasons like Mastodon), which their users have to put up with because they can’t jump servers without losing their identity and reputation.

                                                                                                                        The way forward seems to be to move all trust to the client, none to the servers. At that point servers are nothing more than way-stations or search engines for content.

                                                                                                                      2. 1

                                                                                                                        I think it’s worth separating the different layers in the hosting stack. You get huge economies of scale from being able to share a physical host. A single RPi can probably handle an individual blog with 95% of the CPU unused, but that really means that you’re paying for 20 times as much compute as you need. If you can have one-click deployment in a cloud environment (ideally in multiple, different, cloud environments) then you still remove anyone else from being able to use your blog to data-mine your readers, appropriate your content (check out the IP conditions in the Facebook T&Cs sometime), and so on. That gets me most of what I want from a decentralized platform, along with the economies of scale that centralised solutions benefit from.

                                                                                                                        [Shameless plug] We’ve just launched a Confidential Containers service that lets you deploy a workload in Azure with strong technical guarantees that no one at Microsoft can see inside (data encrypted in memory, with a per-VM key that the hypervisor doesn’t have access to). Expect to see all cloud providers building more Confidential Computing services, including FaaS and storage solutions, over the next few years. If I wanted something decentralised yet easy to adopt, I’d look at these as the building blocks. They’ll eventually converge onto some standards (or at least have third-party abstraction layers that paper over the differences) and you’ll end up being able to deploy on any cloud provider’s infrastructure (or roll your own if you want).

                                                                                                                        Jon Anderson’s PhD looked at building a decentralised social network on top of cloud offerings about 10 years ago. His conclusion was that it would cost about $1-2/user/year. That price has probably gone down since then and will continue to do so. An RPi will cost at least 3-4x that just in electricity.

                                                                                                                      1. 13

                                                                                                                        Yes, the vast majority of customers use less than 10% of the features of the Microsoft Office suite, but it’s a different 10% for each customer.

                                                                                                                        There is another layer on top of this, too – I think the author is being very charitable towards these “lean” alternatives, and the lean-base-with-complex-extensions model.

                                                                                                                        I know a guy who worked on a bunch of cool microelectronics-related CAD/CAM stuff way back in the eighties. By his own recollection, that period was pretty gruesome, as the market was still fairly volatile and everyone was rushing to add new features and keep up with developments on the manufacturing end, and computational power was not exactly cheap or plentiful, either.

                                                                                                                        It’s not just that a lean program that only did “the necessary 10%” was a tough sell because not everyone used the same 10%. It’s also that the 10% at the customer end is rarely static, as it tends to expand along with their activity. So even if your software did just “the necessary 10%”, and you have customers who use like 10% of that, unless you keep up in terms of capabilities, it’s only a matter of time until they expand to just 1.1% of what $market_leading bloated software does. That 1.1% in turn will consist of the 1% that yours does, and an additional 0.1% which it doesn’t, is critical to their new endeavours, and will absolutely prompt them to switch, unless you’ve locked them in with terrible licensing, muscle memory and lots of money sunk in training and workshops.

                                                                                                                        But there is something worse that you can do besides shrugging that extra 0.1% off as an irrelevant niche case that can be easily implemented through extensions. You can, in addition to that, focus development effort on “reinventing” that 1% to make it “streamlined” and give it a “modern touch” and so on. When you do that, you throw “good” lock-in factors (users’ accumulated expertise, continuing education etc.) out the window, and all you’re left with are the bad ones (licensing, unportable data formats, poor migration workflows). That quickly turns happy customers into grumpy, hostile Grinches – for entirely legitimate reasons! – who will absolutely ditch your shit the first chance they get, because if they have to throw away most of what makes them productive, they might as well throw it away in favour of the thing that does all they need.

                                                                                                                        1. 1

                                                                                                                          I’m having some serious annoyances with their window management (related to alt+tab, full-screen, windows vs apps) too. I don’t think they are bugs, just the way it’s implemented. I should make a list some time.

                                                                                                                          1. 2

                                                                                                                            When I use a mac, I have to install a program that changes alt+tab to be more like Windows/Linux, I think it’s actually called “Alt-Tab”.

                                                                                                                            1. 1

                                                                                                                              My favourite feature of that program is that it can set the timeout of the popup window to 0. That delay (which has unfortunately been copied by KDE at some point, too) is the most annoying anti-feature of them all and so far the only thing I really had to work around on macOS because it was driving me nuts.

                                                                                                                              According to Internet wisdom (no idea if that’s the actual motivation), the idea is that if you’re hitting Alt-Tab just once, you’re likely doing it in order to switch to the most recent window because you’re alt-tabbing back and forth between two apps. So in order to minimise the amount of visual noise, the icon list window is not shown immediately, but popped up after a certain delay.

                                                                                                                              That only really works if you have no more than two applications open in the first place, though, or if you alt-tab between two of your open applications every thirty seconds or so, and nothing else. If you do it less frequently (write code in a window, compile in a terminal window, watch some output in another one maybe etc.), by the time you alt-tab again, you’ve certainly forgotten what the next window in the stack is. So in practice, almost all the time, I find myself either pressing alt-tab for too little time and switching to the wrong app (because I’ve alt-tabbed to, say, the music player, but I’ve forgotten that I did, so alt-tabbing takes me to the music player again instead of the terminal). Or pressing it longer than I need to and tabbing way past the window I meant to switch to, because it was very close to the top of the stack, and now I have to alt-tab my way through the whole bloody list again.

                                                                                                                              inb4 “but virtual workspaces”: even with animations disabled in Accessibility options, the transitions are really slow (with animations on it’s unbearable, if I move back and forth a couple of times I get dizzy). I swear to God it’s like everyone in Cupertino has PTSD from Mac OS 9’s multitasking and doesn’t run more than two apps at a time because who knows what might happen.

                                                                                                                              1. 1

                                                                                                                                I might be misunderstanding, but I usually hit “option+tab”, and then release tab, but keep option down. This keeps the most recently accessed window selected, but shows the UI with all the windows. Then, still holding the modifier, I either release it and switch directly, or keep hitting tab to get the window I want. Alternatively, I then also start holding shift down and hit tab to go backwards. At this point it’s just muscle memory - I don’t really think about it.

                                                                                                                                The model of switching between applications instead of windows still annoys me though. I’ve switched between Windows, Linux, and Mac enough that regardless of the platform I’m on I forget and accidentally start using the wrong shortcut to switch (on Windows accidentally trying “alt+`”, and on Mac forgetting that I need to use “option+`”, and trying to use “option+tab” to switch browser windows).

                                                                                                                                My general philosophy is that I don’t think any model is correct, they’re all just arbitrary designs. So I do my best to learn the platform shortcuts, and if something still annoys me enough I will try and find a hack to change it.

                                                                                                                                1. 1

                                                                                                                                  Nah, you got that right 100%, I just never managed to get myself to do what you’re doing. Having used systems with practically zero latency when switching windows since like forever, when the damn thing doesn’t show up immediately, I’m forever tempted to think it didn’t work, like, maybe I missed the Tab key, pressed it right on the edge or it didn’t go all the way through or whatever, especially since the rest of the interface is generally pretty snappy.

                                                                                                                                  I’m not a big fan of the app/window split either but I could probably get used to it. The timeout, on the other hand, feels really off to me. I use Electron applications that take less time to start up than it takes to pop up a window list, my brain is just unable to cope. Maybe I got some weird and super-specific form of OCD, hell knows :-).

                                                                                                                          1. 2

                                                                                                                            Okay so power consumption changes with transistor toggle rate. The toggle rates are all data dependent. Can this be worked around by adding blinding, like you already have to do to stop RSA leaking everything?

                                                                                                                            edit: Also I would suggest linking to the site with the paper
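Blinding in the RSA sense randomises the operands; the usual symmetric-crypto analogue is masking: split each secret intermediate into random shares so that no single value’s toggle rate correlates with the secret. A toy sketch under a Hamming-weight leakage model (all names here are illustrative, not from the paper):

```python
import random

def hamming_weight(x):
    # Crude leakage proxy: dynamic power roughly tracks how
    # many bits toggle, which is data-dependent.
    return bin(x).count("1")

def masked_shares(secret_byte):
    # Boolean masking: secret = share0 XOR share1, with share0
    # drawn fresh at random each time. The leakage of either
    # share alone is independent of the secret; an attacker
    # now needs to combine both (a higher-order attack).
    share0 = random.randrange(256)
    share1 = secret_byte ^ share0
    return share0, share1

s0, s1 = masked_shares(0xA5)
assert s0 ^ s1 == 0xA5  # shares still recombine to the secret
```

Whether this kind of countermeasure is practical against a whole-package power/frequency channel (as opposed to a probe on one gate) is exactly the open question, since the leakage here is aggregated over the entire chip.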

                                                                                                                            1. 1

                                                                                                                              Intel’s guidelines mention a range of mitigation options. I’m not sure about their efficiency – I just saw this and I’m still going through the paper (at a pace akin to that of a heavily-anaesthetised drunk turtle, because of burnout and lack of time ¯\_(ツ)_/¯) – but they seem sensible.

                                                                                                                            1. 32

                                                                                                                              Getting married (in 2 hours). Don’t know what else the week contains but pretty sure nothing can top this

                                                                                                                              1. 5

                                                                                                                                Congrats! :)

                                                                                                                                1. 3

                                                                                                                                  Hey, congratulations!

                                                                                                                                  Also I know this is a huge deal in so many ways but the relative importance of things is highly subjective so what I feel is the most important question that I need to ask at this time is WILL THERE BE CAKE and if so what kind? :-D

                                                                                                                                  (I’m trying to stop demolishing the candy box every evening and this seems to be upsetting my value scale.)

                                                                                                                                1. 2

                                                                                                                                  I am curious if some experts here might clarify something for me. We are calling the brains of the systems these chatbots run on “language models,” but is that an appropriate name here? Writing is definitively not language, but rather an abstract or approximate representation of it in another form. It is never the thing itself. Language is a physical activity: primarily aural, but also gestural.

                                                                                                                                  We know that writing is not the same as language, because if you simply drop a child into a literate society, that child will not learn writing without a lot of intentional instruction and effort. Literacy is a hard skill to learn. The opposite is true for language, which just about any child will pick up merely by existing within a given culture. To me this distinction hints at something (but what?) very important about language, semantics, cognition, and consciousness.

                                                                                                                                  What, then, does that say about ML “language models,” which are to my mind actually models of language approximation (ie writing)? The researchers are skipping learning actual language (which we know is core to human development and cognition) and jumped straight to the artificial approximation (writing). What does that tell us?

                                                                                                                                  1. 4

                                                                                                                                    if you simply drop a child into a literate society, that child will not learn writing without a lot of intentional instruction and effort

                                                                                                                                    This is actually not true. Many children in highly literate societies teach themselves to read and write regularly, just as they teach themselves to listen and speak. The number one obstacle to this happening is babysitting centres trying to force some kind of “teaching” on the subject before the child is ready.

                                                                                                                                    1. 5

                                                                                                                                      I only have anecdata to back this up but maybe it can give some additional perspective.

                                                                                                                                      For various accidental reasons I help a lot of elementary school teachers with their computers and whatnot. Until some time ago, before these weirdly competitive babysitting centres started popping up, it was not uncommon, depending on a variety of factors (which ultimately boiled down to parents’ financial resources and kids’ exposure to written text), for many, if not most, kids to enter first grade knowing some reading and writing, largely self-taught. Being able to read and write proficiently was of course super-rare, but many children could spell out and write simple words, for example. In fact, weaning some of these young geniuses off their (possibly bad) self-developed habits, like really bad pen holding or verbalizing punctuation (i.e. saying “full stop” for every “.” because they’d heard someone say something like “this is bad, full stop!”), was a low-key but constant struggle for many teachers, even though AFAIK they were never formally taught to do so.

                                                                                                                                      Physically writing is a tough thing to do because it requires some muscle coordination which has to be trained a little, and non-phonetic languages also have rules that aren’t fun for seven year-olds to follow, but if you give them Scrabble tiles, a surprising number of seven year-olds will be able to spell things.

                                                                                                                                      This is anecdata so any numbers I put forward are obviously irrelevant, but what I can tell you is that, though I know dozens of elem teachers, and I’ve known some of them literally for decades, I don’t know anyone who ever had an entire class of kids show up on their first day of school with absolutely no idea about how reading and writing works. While they did have to go through all those annoying introductory cursive writing exercises, it was not uncommon for many of them to be able to use (the equivalent of) Scrabble tiles from day one.

                                                                                                                                      1. 4

                                                                                                                                        I was one such kid. I taught myself to read and write before I started school, in my native language and in English. Yes, it looked pretty bad, and it still does (and I have really bad pen holding habits indeed.) My mom kept a journal about such things, so I’m not relying solely on my poor memory.

                                                                                                                                        I never had the impression that this was unusual either, and nobody seemed surprised by it, but I can also only offer anecdata.

                                                                                                                                  1. 2

                                                                                                                                    Hey, this is really cool! If anyone cares for a fun reading, I’d like to point you in the direction of ape.s, esp. around lines 536 - 577 and onwards. I haven’t really looked at an embedded loader since the simpler (?) days of MS-DOS. This is actually a really nice way to illustrate that technique, since the way the loader is bootstrapped is, like, really straightforward :D. Also, nice ASCII art!

                                                                                                                                      1. 1

                                                                                                                                        God I hate Markdown 🙄. Thanks!

                                                                                                                                    1. 9

                                                                                                                                      Oh my goodness this is not just any MIPS machine but an (emulated) Magnum. Guys, let me tell you about my first experience debugging hardware. Sort of.

                                                                                                                                      So the Magnums were released in 1990 and cca. 1992 or so, when I was still, literally, kindergarten material, the folks at my dad’s workplace got two of these bad boys. Now said workplace was a bloody military base so naturally I was fascinated with all the stuff going on there and I would constantly harass my dad about taking me with him to work, which he was obviously reluctant to do since that’s really no place for a kid to be. Nonetheless, he’d do it maybe once or twice a year, when circumstances allowed it, as in, he had nothing else to do and could keep me away from anything dangerous and near fascinating but harmless things, like computers and telephone switchgear and the like.

                                                                                                                                      So on that fateful summer of 1992 I got to see the Magnums, not that I had any idea what was so sensational about them since I was still at an age where counting to 100 was no small feat. Now, one of these, to the chagrin of the people who were using it, had a mysterious problem – presumably with the RTC? – and its date would sometimes reset. Sometimes, this would be caught at boot, but oftentimes it would be caught after generating some long reports which then someone had to go back and manually correct because the timestamps were all wrong. I think these were refurbished, too, so there was probably no warranty for them anyway.

                                                                                                                                      It was the middle of the summer, it was hot, there was a bottle of Coca-Cola next to one of these computers, I reached for it and promptly spilled it right onto the workstation that had probably cost like half the computer department’s budget for that year. It was not nice. Someone reached for the plug quickly enough that sparks did not fly out, then the room was very, very silent for a few seconds, then it was really not. I was really sad that I would probably never get to see the place again but I was also too terrified to cry.

                                                                                                                                      The computer was left to dry for 24 hours, then turned on again…

                                                                                                                                      …and the date was never reset afterwards, much to the satisfaction of the folks who never had to fix reports by hand.

                                                                                                                                      Sadly, I was denied much of the credit for this uncanny feat of engineering, which was given to that wonderful black, sugary liquid instead. This was all happening on the other side of the recently lifted Iron Curtain, so Coca-Cola was also new. My dad’s colleagues quickly got over the fact that I fixed their stupid computer by spilling, er, carefully pouring Coca-Cola onto it and kept joking about how, if this had been the local Coca-Cola clone from a few years ago, it would’ve probably corroded the mainboard and ruined the whole thing.

                                                                                                                                      (Edit:) The Magnum stations were really cool. These two workstations, in particular, were still in use around Y2K, the last time I saw them. They got retired afterwards but I was unfortunately unable to grab them. They’re pretty rare, they sometimes pop up on eBay but they sell for eye-watering prices. They’re a remarkable piece of computer history: MIPS Computer Systems, their manufacturer, was instrumental in the success of SGI, which based its most successful lines of workstations on MIPS CPUs and eventually bought the company. If you ever got to use one, you’re very lucky!

                                                                                                                                      1. 1

                                                                                                                                        I really like this article, because it’s a marked departure from the usual approach to Rust tutorials of showing you several versions of the program, none of which compile, then basically telling you how to solve each compiler error. Instead, it focuses on building a mental model that leverages Rust’s data access rules to safely implement things that are tough to get right.

                                                                                                                                        It’s a little unfortunate that the mechanism of the solutions isn’t explored in more depth. Interior mutability is nicely introduced as a good approach to solving the problem of initialisation but then it’s used via the once_cell crate.

                                                                                                                                        That’s obviously understandable given what the article is based on and, furthermore, I think it’s actually the right approach “in production”. This isn’t intended as criticism of the post, but as an encouragement for the readers who liked it to dig a little into once_cell and elsa because both make for very instructive readings of idiomatic Rust.
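The lazy-initialisation pattern the article reaches once_cell for can be sketched with the standard library’s OnceLock, which grew out of the once_cell design. This is a minimal illustration, not the article’s code; the “config” value here is made up for the demo.

```rust
use std::sync::OnceLock;

// A global that is initialised lazily, on first access. get_or_init runs
// the closure at most once, even when called concurrently from several
// threads; later calls just return the already-initialised reference.
static CONFIG: OnceLock<String> = OnceLock::new();

fn config() -> &'static str {
    CONFIG.get_or_init(|| {
        // Imagine an expensive read from disk or the environment here.
        "verbose=true".to_string()
    })
}

fn main() {
    // Both calls observe the same initialised value.
    println!("{}", config());
    assert_eq!(config(), "verbose=true");
}
```

Repeated calls hand back the same reference, which is exactly the “initialise once, share immutably afterwards” shape the article builds up to.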

                                                                                                                                        1. 1

                                                                                                                                          Yeah, “how once_cell works?” would make for a good post one day… There’s a surprising amount of details to get wrong there!

                                                                                                                                        1. 58

                                                                                                                                          I was increasingly getting upset about their extension marketplace, where there is an increased number of extensions starting to sell pro versions of the extensions we used for free.

                                                                                                                                          This strikes me as a bit entitled. There’s a lot of work that goes into an extension like GitLens; those developers shouldn’t be expected to work for free. No-one’s making anyone pay for anything, and it is an extension marketplace, after all.

                                                                                                                                          1. 18

                                                                                                                                            It seems to me that this is one of the worst things to have come out of the era of App Stores and generalised open source access. At one point folks sometimes put cool hacks online. Lots of people now expect that these cool hacks be productised, have nice, informative READMEs and screenshots on their homepage, prompt support and fixes to major bugs, helpful authors who take time to involve the community in major decisions. Basically the kind of stuff that commercial vendors do with commercial software. But without paying commercial software fees.

                                                                                                                                            That’s not open source ethics, that’s charity. I’m cool with people asking for charity, but shaming people who don’t offer it exclusively, or who do offer it but not in the exact form that’s expected, is a little nasty. And I’m saying this with all the empathy and love of someone who used to save up for months to buy programming books 20+ years ago.

                                                                                                                                          1. 28

                                                                                                                                            Wow. Excellent article, and such nice prose.

                                                                                                                                            A simile that just struck me: A fascination with 6502 assembly or CP/M is akin to building your own suit of armor and re-enacting medieval jousts (a la the SCA.) Designing your own “clean-slate” virtual machine and applications, or building atop someone else’s, is more like escaping into medieval fantasy worlds (a la Lord Of The Rings.)

                                                                                                                                            Neither of those are bad, of course! I love me some escapism. But neither has anything to do with the world today or the future, or has any real purpose other than fun. The future, even post-apocalyptic, is not going to be like the Middle Ages nor Middle Earth, and your homemade plate armor will not save you from a survivalist toting a rifle. Nor is Rivendell or Narnia a guideline for a better tomorrow.

                                                                                                                                            1. 14

                                                                                                                                              The author (authors?) has (have?) a really nice take on software simplicity/minimalism with which I resonate wholeheartedly. They also have a good critique of an analogous approach regarding, erm, a network protocol, whose name I won’t spell out here lest I invoke the ancient USENET daemon of flamewars.

                                                                                                                                              1. 5

                                                                                                                                                I disagree, an article built on dismissing and putting down the work of two groups of people that work really hard on their projects can not be excellent, nor nice.

                                                                                                                                                1. 16

                                                                                                                                                  A lot of the articles posted here are about criticizing (or yes, dismissing) technologies created by really hard-working people. I’m sure the people responsible for the C language, async Rust, JavaScript in browsers, Swift, proprietary operating systems, and Urbit all work(ed) hard. That doesn’t mean they get a gold star and a free pass against negative opinions. Nor does it mean negative opinions have to be written in a dry just-the-facts style.

                                                                                                                                              1. 5

                                                                                                                                                Why does nobody complain about how OpenSSL doesn’t follow the UNIX philosophy of “Do one thing well”?

                                                                                                                                                1. 33

                                                                                                                                                  Probably because there’s already so many other things to complain about with openssl that it doesn’t make the top 5 cut.

                                                                                                                                                  1. 17

                                                                                                                                                    Because the “Unix philosophy” is an incredibly vague, ex-post-facto rationalization. That, and I suspect cryptography operations would be hard to do properly like that.

                                                                                                                                                    1. 3

                                                                                                                                                      Does UNIX follow the UNIX philosophy?

                                                                                                                                                      I mean, ls has 11 options and 4 of them deal with sorting. According to the UNIX philosophy, sort should’ve been used for sorting. So “Do one thing well” doesn’t hold here. Likewise, other tenets are not followed too closely. For example, most of these sorting options were added later (“build afresh rather than complicate old programs” much?).

                                                                                                                                                      The first UNIX, actually, didn’t have sort, so it’s understandable why an option might’ve been added (only -t at the time) and why it might’ve stayed (backwards compatibility). The addition of sort kinda follows the UNIX philosophy, but adding more sorting options to ls after sort existed goes completely contrary to it.
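The ls-vs-sort point is easy to see in a terminal: the composed pipeline below does roughly what ls -S does internally. The file names and sizes are made up for the demo.

```shell
# Scratch directory with two files of known, different sizes.
cd "$(mktemp -d)"
printf 'aaaa' > big.txt
printf 'a'    > small.txt

# ls doing the sorting itself (largest first):
ls -S

# The "do one thing well" composition: ls lists, sort orders by the
# size column (field 5 of ls -l), awk keeps just the name.
# Prints big.txt, then small.txt.
ls -l | tail -n +2 | sort -k5,5 -rn | awk '{print $NF}'
```

Both produce the same order, which is arguably the point on both sides of the argument: the composition works, but ls grew the flag anyway because typing the pipeline every time is a chore.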

                                                                                                                                                      1. 3

                                                                                                                                                        Theoretically, yes: it seems that Bell Labs’ UNIX followed the UNIX philosophy, but BSD broke it.


                                                                                                                                                      2. 3

                                                                                                                                                        Everyone’s still wondering if the right way to phrase it is that “it does too many things” or “it doesn’t do any of them well” ¯\_(ツ)_/¯

                                                                                                                                                        1. 2

                                                                                                                                                          Maybe because it’s not really a tool you’re expected to use as anything beyond a crypto Swiss Army knife. I mean, it became a de facto certificate request generator, because people have it installed by default, but there are better tools for that. As a debug tool it is a “one thing well” tool. The one thing is “poke around encryption content / functions”.

                                                                                                                                                          Otherwise, what would be the point of extracting things like asn1parse, pkey, or the others if they would be backed by the same library anyway? Would it change anything if you called openssl-asn1parse as a separate tool instead of openssl asn1parse?

                                                                                                                                                          1. 1

                                                                                                                                                            For the same reason no one complains about curl either?

                                                                                                                                                            1. 1

                                                                                                                                                              related, here’s a wget gui that looks similarly complex

                                                                                                                                                          1. 14

                                                                                                                                                            What surprised me about Tainter’s analysis (and I haven’t read his entire book yet) is that he sees complexity as a method by which societies gain efficiency. This is very different from the way software developers talk about complexity (as ‘bloat’, ‘baggage’, ‘legacy’, ‘complication’), and made his perspective seem particularly fresh.

                                                                                                                                                            1. 31

                                                                                                                                                              I don’t mean to sound dismissive – Tainter’s works are very well documented, and he makes a lot of valid points – but it’s worth keeping in mind that grand models of history have made for extremely attractive pop history books, but really poor explanations of historical phenomena. Tainter’s Collapse of Complex Societies, while resting on a completely different theory (and one with far less odious consequences in the real world), is based on the same kind of scientific thinking that brought us dialectical materialism.

                                                                                                                                                              His explanation of the evolution and the eventual fall of the Roman Empire makes a number of valid points about the Empire’s economy and about some of the economic interests behind the Empire’s expansion, no doubt. However, explaining even the expansion – let alone the fall! – of the Roman Empire strictly in terms of energy requirements is about as correct as explaining it in terms of class struggle.

                                                                                                                                                              Yes, some particular military expeditions were specifically motivated by the desire to get more grain or more cows. But many weren’t – in fact, some of the greatest Roman wars, like (some of) the Roman-Parthian wars, were not driven specifically by Roman desire to get more grains or cows. Furthermore, periods of rampant, unsustainably rising military and infrastructure upkeep costs were not associated only with expansionism, but also with mounting outside pressure (ironically, sometimes because the energy per capita on the other side of the Roman border really sucked, and the Huns made it worse on everyone). The increase of cost and decrease in efficiency, too, are not a matter of half-rational historical determinism – they had economic as well as cultural and social causes that rationalising things in terms of energy not only misses, but distorts to the point of uselessness. The breakup of the Empire was itself a very complex social, cultural and military story which is really not something that can be described simply in terms of the dissolution of a central authority.

                                                                                                                                                              That’s also where this mismatch between “bloat” and “features” originates. Describing program features simply in terms of complexity is a very reductionist model, which accounts only for the difficulty of writing and maintaining it, not for its usefulness, nor for the commercial environment in which it operates and the underlying market forces. Things are a lot more nuanced than “complexity = good at first, then bad”: critical features gradually become unneeded (see Xterm’s many emulation modes, for example), markets develop in different ways and company interests align with them differently (see Microsoft’s transition from selling operating systems and office programs to renting cloud servers) and so on.

                                                                                                                                                              1. 6

                                                                                                                                                                However, explaining even the expansion – let alone the fall! – of the Roman Empire strictly in terms of energy requirements is about as correct as explaining it in terms of class struggle.

                                                                                                                                                                Of course. I’m long past the age where I expect anyone to come up with a single, snappy explanation for hundreds of years of human history.

                                                                                                                                                                But all models are wrong, only some are useful. Especially in our practice, where we often feel overwhelmed by complexity despite everyone’s best efforts, I think it’s useful to have a theory about the origins and causes of complexity, even if only for emotional comfort.

                                                                                                                                                                1. 6

                                                                                                                                                                  Especially in our practice, where we often feel overwhelmed by complexity despite everyone’s best efforts, I think it’s useful to have a theory about the origins and causes of complexity, even if only for emotional comfort.

                                                                                                                                                                  Indeed! The issue I take with “grand models” like Tainter’s and the way they are applied in grand works like Collapse of Complex Societies is that they are ambitiously applied to long, grand processes across the globe without an exploration of the limits (and assumptions) of the model.

                                                                                                                                                                  To draw an analogy with our field: IMHO the Collapse of… is a bit like taking Turing’s machine as a model and applying it to reason about modern computers, without noting the differences between modern computers and Turing machines. If you cling to it hard enough, you can hand-wave every observed performance bottleneck in terms of the inherent inefficiency of a computer reading instructions off a paper tape, even though what’s actually happening is cache misses and hard drives getting thrashed by swapping. We don’t fall into this fallacy because we understand the limits of Turing’s model – in fact, Turing himself explicitly mentioned many (most?) of them, even though he had very little prior art in terms of alternative implementations, and explicitly formulated his model to apply only to some specific aspects of computation.

                                                                                                                                                                  Like many scholars at the intersections of economics and history in his generation, Tainter doesn’t explore the limits of his model too much. He came up with a model that explains society-level processes in terms of energy output per capita and upkeep cost and, without noting where these processes are indeed determined solely (or primarily) by energy output per capita and upkeep cost, he proceeded to apply it to pretty much all of history. If you cling to this model hard enough you can obviously explain anything with it – the model is explicitly universal – even things that have nothing to do with energy output per capita or upkeep cost.

                                                                                                                                                                  In this regard (and I’m parroting Walter Benjamin’s take on historical materialism here) these models are quasi-religious and are very much like a mechanical Turk. From the outside they look like history masterfully explaining things, but if you peek inside, you’ll find our good ol’ friend theology, staunchly applying dogma (in this case, the universal laws of complexity, energy output per capita and upkeep cost) to any problem you throw its way.

                                                                                                                                                                  Without an explicit understanding of their limits, even mathematical models in exact sciences are largely useless – in fact, a big part of early design work is figuring out what models apply. Descriptive models in humanistic disciplines are no exception. If you put your mind to it, you can probably explain every Cold War decision in terms of Vedic ethics or the I Ching, but that’s largely a testament to one’s creativity, not to their usefulness.

                                                                                                                                                                2. 4

                                                                                                                                                                  Furthermore, periods of rampant, unsustainably rising military and infrastructure upkeep costs were not associated only with expansionism, but also with mounting outside pressure (ironically, sometimes because the energy per capita on the other side of the Roman border really sucked, and the Huns made it worse on everyone).

                                                                                                                                                                  Not to mention all the periods of rampant rising military costs due to civil war. Those aren’t wars about getting more energy!

                                                                                                                                                                  1. 1

                                                                                                                                                                    Tainter’s The Collapse of Complex Societies, while obviously grounded in a completely different theory (and one with far less odious consequences in the real world), is based on the same kind of scientific thinking that brought us dialectical materialism.

                                                                                                                                                                    Sure. This is all about a framing of events that happened; it’s not predictive, as much as it is thought-provoking.

                                                                                                                                                                    1. 7

                                                                                                                                                                      Thought-provoking, grand theorizing has always been part of philosophy, but it became especially popular (some argue that it was Francis Bacon who really brought forth the idea of predicting progress) during the Industrial Era, with the rise of what is known as the modernist movement. Modernist theories often differed but frequently shared a few characteristics: grand narratives of history and progress, definite ideas of the self, a strong belief in progress, a belief that order was superior to chaos, and often structuralist philosophies. Modernism held a strong belief that everything could be measured, modeled, categorized, and predicted. It was an understandable byproduct of a society rigorously analyzing its surroundings for the first time.

                                                                                                                                                                      Modernism flourished in a lot of fields in the late 19th early 20th century. This was the era that brought political philosophies like the Great Society in the US, the US New Deal, the eugenics movement, biological determinism, the League of Nations, and other grand social and political engineering ideas. It was embodied in the Newtonian physics of the day and was even used to explain social order in colonizing imperialist nation-states. Marx’s dialectical materialism and much of Hegel’s materialism was steeped in this modernist tradition.

                                                                                                                                                                      In the late 20th century, modernism fell into a crisis. Theories of progress weren’t bearing fruit. Grand visions of the future, such as Marx’s dialectical materialism, diverged significantly from actual lived history and frequently resulted in a multitude of horrors. This experience was repeated with eugenics, social determinism, and fascist movements. Planck and Einstein challenged the neat Newtonian order that had previously been conceived. Gödel’s incompleteness theorems showed that any sufficiently expressive formal system contains true statements it cannot prove. Moreover, many social sciences that bought into modernist ideas – anthropology, history, urban planning – were having trouble making progress that agreed with the grand modernist ideas that guided their work. Science was running into walls as to what was measurable and what wasn’t. It was in this crisis that postmodernism was born, when philosophers began challenging everything from whether progress and order were actually good things to whether humans could ever come to mutual understanding at all.

                                                                                                                                                                      Since then, philosophy has mostly abandoned the concept of modeling and left that to science. While grand, evocative theories are having a bit of a renaissance in the public right now, philosophers continue to be “stuck in the hole of postmodernism.” Philosophers have raised central questions about morality, truth, and knowledge that have to be answered before large, modernist philosophies can take hold again.

                                                                                                                                                                      1. 3

                                                                                                                                                                        I don’t understand this, because my training has been to consider models (simplified ways of understanding the world) as only having worth if they are predictive and testable – i.e., if they allow us to predict how the whole works and what it does based on movements of the pieces.

                                                                                                                                                                        1. 4

                                                                                                                                                                          You’re not thinking like a philosopher ;-)

                                                                                                                                                                          1. 8

                                                                                                                                                                            Models with predictive values in history (among other similar fields of study, including, say, cultural anthropology) were very fashionable at one point. I’ve only mentioned dialectical materialism because it’s now practically universally recognized to have been not just a failure, but a really atrocious one, so it makes for a good insult, and it shares the same fallacy with energy economic models, so it’s a doubly good jab. But there was a time, as recent as the first half of the twentieth century, when people really thought they could discern “laws of history” and use them to predict the future to some degree.

                                                                                                                                                                            Unfortunately, this has proven to be, at best, beyond the limits of human understanding. It is especially difficult in the study of history, where sources are imperfect and have often been lost (case in point: there are countless books we know the Romans wrote because they’re mentioned or quoted by ancient authors, but we no longer have them). Our understanding of these things can change drastically with the discovery of new sources. The history of religion provides a good example, in the form of our understanding of Gnosticism, which was forever altered by the discovery of the Nag Hammadi library, to the point where many works published prior to this discovery and the dissemination of its texts are barely of historical interest now.

                                                                                                                                                                            That’s not to say that developing theories of various historical phenomena is useless, though. Even historical materialism, misguided as it was (especially in its more politicized formulations), was not without value. It forced an entire generation of historians to think about things that they never really thought about before. It is certainly incorrect to explain everything in terms of class struggle, competition for resources and the means of production, and the steady march from primitive communism to the communist mode of production – but it is also true that competition for resources and the means of production were involved in some events and processes, and nobody gave much thought to that before the disciples of Marx and Engels.

                                                                                                                                                                            This is true here as well (although I should add that, unlike most materialist historians, Tainter is most certainly not an idiot, not a war criminal, and not high on anything – I think his works display an unhealthy attachment to historical determinism, but he most certainly doesn’t belong in the same gallery as Lenin and Mao). His model is reductionist to the point where you can readily apply much of the criticism of historical materialism to it as well (which is true of a lot of economic models, if we’re being honest…). But it forced people to think of things in a new way. Energy economics is not something you’re tempted to think about when considering pre-industrial societies, for example.

                                                                                                                                                                            These models don’t really have predictive value and they probably can’t ever gain one. But they do have an exploratory value. They may not be able to tell you what will happen tomorrow, but they can help you think about what’s happening today in more ways than one, from more angles, and considering more factors, and possibly understand it better.

                                                                                                                                                                            1. 4

                                                                                                                                                                              That’s something historians don’t do anymore. There was a period where people tried to predict the future development of history, and then the whole discipline gave up. It’s a bit like what we are witnessing in the Economics field: there are strong calls to stop attributing predictive value to macroeconomic models because after a certain scale, they are just over-fitting to existing patterns, and they fail miserably after a few years.

                                                                                                                                                                              1. 1

                                                                                                                                                                                Well, history is not math, right? It’s a way of writing a story backed by a certain amount of evidence. You can use a historical model to make predictions, sure, but the act of prediction itself causes changes.

                                                                                                                                                                          2. 13

                                                                                                                                                                            (OP here.) I totally agree, and this is something I didn’t explore in my essay. Tainter doesn’t see complexity as always a problem: at first, it brings benefits! That’s why people do it. But there are diminishing returns and maintenance costs that start to outstrip the marginal benefits.

                                                                                                                                                                            Maybe one way this could apply to software: imagine I have a simple system, just a stateless input/output. I can add a caching layer in front, which could yield a huge performance improvement. But now I have to think about cache invalidation, cache size, cache expiry, etc. Suddenly there are a lot more moving parts to understand and maintain in the future. And the next performance improvement will probably not be anywhere near as big, but it will require more work, because you have to understand the existing system first.
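                                                                                                                                                                            A minimal sketch of that trade-off in Python (all names here are hypothetical, and the eviction and expiry policies are just illustrative choices): the original system is one pure function, while the cached version drags in size limits, TTL expiry, and explicit invalidation – exactly the extra moving parts described above.

```python
import time


def transform(x):
    # The original stateless system: pure input -> output, nothing to maintain.
    return x * x


class CachedTransform:
    """Caching layer in front of a pure function.

    Faster on repeat inputs, but now we must manage cache size,
    entry expiry, and invalidation -- three concerns the plain
    function never had.
    """

    def __init__(self, fn, max_size=128, ttl_seconds=60.0):
        self.fn = fn
        self.max_size = max_size
        self.ttl = ttl_seconds
        self._cache = {}  # key -> (value, stored_at)

    def __call__(self, x):
        now = time.monotonic()
        entry = self._cache.get(x)
        if entry is not None:
            value, stored_at = entry
            if now - stored_at < self.ttl:
                return value      # cache hit
            del self._cache[x]    # expired entry: staleness is now our problem
        if len(self._cache) >= self.max_size:
            # Crude eviction policy: drop the oldest entry.
            oldest = min(self._cache, key=lambda k: self._cache[k][1])
            del self._cache[oldest]
        value = self.fn(x)
        self._cache[x] = (value, now)
        return value

    def invalidate(self, x=None):
        # Explicit invalidation: yet another API surface the pure function lacked.
        if x is None:
            self._cache.clear()
        else:
            self._cache.pop(x, None)
```

                                                                                                                                                                            Note how much of the class is bookkeeping rather than the actual computation – that bookkeeping is the “maintenance cost” of the added complexity.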

                                                                                                                                                                            1. 2

                                                                                                                                                                              I’m not sure it’s so different.

                                                                                                                                                                              A time saving or critically important feature for me may be a “bloated” waste of bits for somebody else.

                                                                                                                                                                              1. 3

                                                                                                                                                                                In Tainter’s view, a society of subsistence farmers, where everyone grows their own crops, makes their own tools, teaches their own children, etc. is not very complex. Add a blacksmith (division of labour) to that society, and you gain efficiency, but introduce complexity.