Threads for carlana


      If you like the file-based testing with glob around the 19-minute mark, see for a test helper that implements this pattern. It also does automatic updating of the golden files, as suggested at 21 minutes.


      Seems like a clone of HTMX. But why use this instead of HTMX? It needs to be clearer about the USP.


      When I was a teacher, I would bubble-sort my students’ homework into alphabetical order at first, but eventually I started using a pigeonhole sort by the first letter of the last name.
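      For the curious, the pigeonhole approach can be sketched in a few lines (a hypothetical illustration in Python; `pigeonhole_by_initial` is an invented name, not anyone’s actual grading workflow):

```python
# Pigeonhole the papers into piles keyed by the first letter of the
# last name; bucketing is O(n), and each small pile is then trivial
# to order, unlike bubble sort's O(n^2) over the whole stack.
def pigeonhole_by_initial(names):
    holes = {}
    for name in names:
        holes.setdefault(name[0].upper(), []).append(name)
    ordered = []
    for letter in sorted(holes):               # walk the piles A..Z
        ordered.extend(sorted(holes[letter]))  # each pile is tiny
    return ordered

print(pigeonhole_by_initial(["Nguyen", "Adams", "Baker", "Abbott"]))
# → ['Abbott', 'Adams', 'Baker', 'Nguyen']
```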

    4. 6

      Another new language that drops the ball on memory management. Sigh.

      There are realistically two alternatives [to manual memory management], both of which Onyx chooses not to do.

      • Garbage collection
      • Borrow checking semantics

      I guess the Onyx people haven’t looked at Objective-C, Swift, Nim and even C++, all of which use the third option, automatic ref-counting.

      1. 5

        Automatic ref-counting is a type of garbage collection, as also implemented in eg Python, no?

        (I agree there’s a bit more to the picture in some of your examples - copy-on-write (or mutable value semantics) interacting with the refcount automatically, static refcount inference or elision, etc. Some of these approaches are really interesting!)


          Technically, yes. But it’s so much simpler than other GC algorithms that it’s often considered a separate thing. You can implement it in a page of code without requiring a special heap, it can be applied to one type of object without affecting others, etc.
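          To make the “page of code” point concrete, here is a minimal sketch of manual reference counting (in Python for illustration; `RcBox`, `retain`, and `release` are invented names):

```python
# A minimal refcount scheme: each box carries a count; retain/release
# adjust it, and the destructor runs the moment the count hits zero.
# No tracing, no special heap, and it applies per-object.
class RcBox:
    def __init__(self, value, destructor):
        self.value = value
        self.count = 1          # the creator holds the first reference
        self.destructor = destructor

def retain(box):
    box.count += 1
    return box

def release(box):
    box.count -= 1
    if box.count == 0:
        box.destructor(box.value)

freed = []
box = RcBox("resource", freed.append)
retain(box)       # a second owner appears
release(box)      # one owner left; nothing freed yet
assert freed == []
release(box)      # last owner gone, destructor runs deterministically
assert freed == ["resource"]
```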


            Yeah, “technically,” memory management has two top-level taxons: automatic and manual. Garbage Collection is technically a synonym for automatic memory management. In that sense, Rust is garbage collected.

            However, the industry parlance is to use “garbage collected” to mean tracing garbage collection, which is what we think of when we talk about Java, JS, Go, and the related concepts of mark-and-sweep, stop the world, and generational gc.


              If you go back to the origin of C, stack variables were called “auto” variables because they were automatically laid out and destroyed with the stack frames, as opposed to the static variables that the programmer was responsible for managing throughout the program lifetime.


        Automatic ref counting is considered one of the simplest garbage collection mechanisms and is used at least as a component of many more complicated implementations. I fail to see how that wouldn’t fall into the first bullet there.

        1. 5

          If someone calls something “garbage collection”, I’ll assume it collects cycles but has at least some stop-the-world style runtime overhead; if they call it “reference counting”, it probably has the opposite trade-off. I’d give someone a very funny look if they told me Rust has a garbage collector, and I don’t think I’m alone in that. If a garbage-collected language uses reference counting as part of its GC implementation, that’s an implementation detail that (generally) could be replaced without changing the semantics.

          That may not be strictly correct under academic definitions of the term “garbage collection”, but that’s the way I almost always see it used in practice.

          (It would be possible for a garbage-collected language to guarantee reference-counting-style deterministic collection in the absence of cycles. I think Python might do this? But that’s neither here nor there - the point is they’re distinct enough features that it’s not necessarily correct to assume that one term encompasses the other.)
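          For what it’s worth, CPython does behave this way: acyclic objects are reclaimed deterministically the moment their refcount hits zero, and only cycles wait for the tracing cycle collector. A small demonstration (CPython-specific; other Python implementations are free to collect differently):

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.other = None

a, b = Node(), Node()
a.other, b.other = b, a       # create a reference cycle
probe = weakref.ref(a)

del a, b                      # refcounts stay at 1 each inside the cycle
assert probe() is not None    # still alive: refcounting alone can't free it

gc.collect()                  # the tracing cycle collector steps in
assert probe() is None        # now reclaimed
```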


            Fair enough


      I really worry about increasing dependence on LLMs becoming a local minimum that leaves so much on the table. Typing is slow and waiting for responses from LLMs is slow. I get using them for “refactor this code base to do X, Y, and Z…”, and if that works, great. But for small snippets of code, it’s sad.

      This is even worse in the data science space. Most operations that you want to do on an SQL table or pandas dataframe follow very specific patterns. It’s just awkward to remember those 20 operations and none of the tools in those spaces focus on the UI aspects.

      Why will 100 people today look at a column of mostly numbers with a couple of obviously errant strings and have to remember the commands to fix that? Why don’t we have tools that look at the column and suggest one or more transformations based on its general shape? Why don’t our tools let us toggle between those different transformations?
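      As a concrete example of the kind of suggestion such a tool could surface, a sketch in plain Python (`coerce_numeric` is an invented name; pandas spells the same idea `pd.to_numeric(col, errors="coerce")`):

```python
# A column of mostly numbers with a couple of obviously errant strings.
col = ["1.5", "2.0", "oops", "3.25", "n/a"]

# One transformation a tool could offer automatically: coerce anything
# non-numeric to None instead of erroring out, so the column becomes
# usable for arithmetic while flagging the bad cells.
def coerce_numeric(values):
    out = []
    for v in values:
        try:
            out.append(float(v))
        except ValueError:
            out.append(None)
    return out

print(coerce_numeric(col))
# → [1.5, 2.0, None, 3.25, None]
```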


        I wonder about this backwards in time, too. Back in the day, they wrote Unix using a computer with fewer capabilities than my headphones by writing stuff out on paper and then retyping it into a punchcard. Did we lose focus when we went to video terminals because it was suddenly “so easy” to just try stuff out and run it instead of really thinking it through beforehand?


          I’ve been thinking about this for over ten years and I still have no answer. On the one hand, we have capabilities we didn’t before, like emulating the CPU in an assembler to run unit tests and on the other, yes, it is way easier to just try stuff out instead of thinking about it beforehand.


            I am not sure emulation is a good example :-)

            Microsoft BASIC was developed in the 1970s on a PDP10 using an 8080 emulator.

            The ARM was designed in the 1980s using an emulator written in BBC BASIC running on a BBC Micro.


          Why on earth should fallible, expensive, high-calorie-per-flop meat do slowly and poorly what the cheap fast metal can do for us?


            Because the metal does its things quite a few times, and if the high-calorie-per-flop meat thinks a little bit, the metal doesn’t have to do as much. Then the metal goes much faster and it wastes less time of all the other high-calorie meat that has to consume what the metal does.


            You have the world’s most powerful neural network (for the next couple of years) on your shoulders. It makes sense to train the neural network instead of using a fixed algorithm that doesn’t benefit from training.

    6. 6

      It’s actually pronounced more like “yo” in Chinese (尤 Yóu). I’m not sure why he doesn’t correct people who say “yu”. Maybe he’s just given up and decided that’s how you say his name in English.


        Having a name that is misspelled sucks. Either you don’t give a shit and stop correcting people (and the people around you who find out they’ve been mispronouncing your name will be hurt that they weren’t important enough to be corrected), or you give a shit and you dedicate 3% of your life to correcting people, some of whom will insist you don’t know how to pronounce your own name. It’s a no-win situation.

        Don’t ask me how I know. =\


          People tend to get my first name right, but I’m astonished at how badly people manage to get my surname wrong. And I’m not talking about people who are native speakers of a language with a totally different set of base phonemes. It’s an Irish name, but is spelled in English exactly as it’s pronounced, yet a lot of people manage to get it wrong. If I had a non-British name, I’d have very low expectations of native English speakers being able to get it right (French people soften the ch because they don’t have a hard ch sound, but aside from that typically get it closer than a lot of English people, who throw random rs in the middle or change the vowel sounds).


            French people soften the ch because they don’t have a hard ch sound

            They might not think of it in the same way as Anglophones think of English “ch”, but French tch represents the same thing, as in French Tchad (English “Chad”, the country).


        Good to know! I listened to him pronounce his name and tried to mimic him, but I guess I got it wrong.


          Chinese and Vietnamese (and some other languages from the area, though not Japanese) both hinge on differences in intonation between vowels that sound the same to somebody who only heard a Western language growing up. You’re almost certainly going to get it wrong no matter how hard you try, so getting it wrong in the same way most Westerners do is the best way to actually be understood.


          Yeah, I sort of imagined you had a pre-show pronunciation check given the level of professionalism of your show. (Good episode, BTW. Can’t believe he didn’t tell his wife.) I think he must have just given up and gone to Yu to make things easier. I know my friends named “Zhang” usually just say “zang” instead of something more like “jong”.


            Can’t believe he didn’t tell his wife

            I know right! That was the craziest thing to me.

            I sort of imagined you had a pre-show pronunciation check given the level of professionalism of your show.

            What I usually do is just ask people to say their name, and go off that. But yeah, that is still subject to my abilities.


            gone to Yu to make things easier

            I note that (assuming Pīnyīn) the pronunciation of “Yu” is not like the English word “you” /ju/ but is canonically /y/, which is unknown in standard varieties of English.


        I was curious if there was a video of him saying his name, and found this, around 5 seconds in.

    7. 5

      Either starting HRT or performing a daring robbery of the local CVS, lol.


        That’s so exciting!

        I recommend picking up OTC migraine painkillers in case you get a migraine after the first few days. Hormone induced migraines are no joke.


          Good advice. I get scintillating scotoma already, so I should probably be prepared for it to get worse.

    8. 83

      Sometime in the past few years the comments on the orange website started to feel like walking into an animated discussion in a first-year college class where the instructor wasn’t even present. Things just went in circles, because the questions were answered by people who didn’t know the answers but were recalling answers to the same question they’d previously seen upvoted. This was particularly clear on the topics where I actually knew what was going on, and so I realized I was just reading people’s telephone-game confident-sounding opinions on basically every topic there. I don’t know what it is that causes websites to transition to this mode of discourse, where they kind of just get stuck watching the world parade on by. Maybe it starts with rewarding & upvoting lazy questions, so comment threads just become dominated by 101-level questions & answers.

      1. 51

        HN is made by entrepreneurs for entrepreneurs. And what is an “entrepreneur” in those circles? Someone who sounds confident enough to convince investors that he’s smart.

        Faking intelligence is a written golden rule (“fake it until you make it”). And people confident enough to convince others often forget that they are faking it. They start to believe themselves. That’s the plague of our whole society, not only HN.

        But HN is still very interesting because you get a lot of broader topics not allowed on general scientific news, more personal blog posts, etc.

        1. 6

          Just go read the comment sections from around when the supposed room-temperature superconductor was being replicated. Tons of “experts” on how this could be real, how this would change everything. I remember thinking “Okay, I know nothing about superconductors and materials science, but this looks like an echo chamber regardless of the facts”. That was just selection bias among the people who chose to post on those threads; it’s not representative of all HN commenters.

          HN has a lot of visitors, and fewer but still a lot of commenters. It’s not possible to generalize about all of them. I doubt “entrepreneurs” (scare quotes intended) are the majority there anymore: it seems like there are all sorts of people, from professional engineers to business people to enthusiasts. At this scale I don’t believe we can generalize about it anymore.

      2. 11

        Not sure if it was intentional, but you also happen to be describing how LLMs work, and why they’re terrible for verifying information.


        Sometime in the past few years the comments on the orange website started to feel like walking into an animated discussion in a first-year college class where the instructor wasn’t even present.

        I believe that is an accurate description of reality, not just a feeling. The information technology workforce is on the rise, following a logistic curve (US, EU).

        The effect on HN is more pronounced, because it has always skewed towards younger people.


        Conspiracy theory: HN is now all bots that have been trained on previous years of HN.

        I have joked about how we should train an LLM on HN to have the ultimate middlebrow know-it-all. Prompt template:

        User: USER INPUT

        Well Actually: RESPONSE


        You see this on programming reddits too. I once had somebody try to correct a term in a gfx coding article I wrote, only their correction was entirely wrong and betrayed a very junior level familiarity with the field’s history.

        Not only wrong, but confidently wrong about other people’s wrongness.

    9. 93

      Last week on HN, I saw a bunch of people complaining that a site with tools for blind people didn’t have screenshots, so you know, every community has its pros and cons.

      1. 8

        Most people considered “blind” have limited ability to perceive visual information, but have usable eyesight with some corrective assistance. “Total blindness,” or the complete lack of vision in both eyes is far less common. Most people affected by “legal blindness” would still be able to make use of a screenshot, perhaps by using zoomed-in display. I hate to praise HN users, but they are correct in this case.

        Edited to add: “And why beholdest thou the mote that is in thy brother’s eye, but considerest not the beam that is in thine own eye?” ;-)


          So the screenshot should probably feature huge, high contrast text. :)

    10. 35

      Battleships can’t be berthed in backyards, but people build wonderful wigwams

      I don’t understand this metaphor at all. A wigwam is a house. Are they thinking wigwam is another word for canoe (which is itself an Arawakan word)?

      Wigwams rarely become battleships.

      No wigwam has ever become a battleship or even a boat.

      1. 8

        Author here! I wanted to illustrate that wigwams are both different in kind and degree from battleships

        Houses rarely become boats, but there’s no reason why they can’t in principle

        1. 7

          Indeed - the 100 Rabbits folks turned their boat into a house and have built software on the wigwam list. Perhaps this is the path to the conceptual through-line?


            Maybe I am nitpicking now, but didn’t they turn their boat into a home, and not a house?

            It’s still a boat, in the water.

            On the linked page, they compare it to a house, as something different:

            Living in a boat, your living space is restricted when compared to a house.


              These distinctions are indeed important. A house is a structure like a wigwam. And as they say, home is where the heart is.

              The metaphor would probably be more clear if it just referenced particular types of living and working spaces. For example, could Common Lisp be a Mies van der Rohe structure full of incredible modularity and audible leakiness built on top of a complex substrate of steel and glass?

              Please forgive me for my own half-baked analogy.


          Perhaps a barge or a houseboat is a clearer analogy (folks I’ve known who live on houseboats definitely feel like the DIY type!) I get there are two dimensions here (size and kind) but the simplicity of just measuring it in one feels quicker to understand at first glance (without needing to dive into comments!)

          love the idea though, I may shoot you an email with some ideas!

      2. 2

        My best guess is they were going for a size and sturdiness analogy, rather than one based on seaworthiness or “boatishness.” Not to say I totally understand why they went that direction…

    11. 10

      I love retrocomputing, working with simple hardware and being in full control of the machine, but a lot of retrocomputing projects focus on specific pieces of antique hardware. It’s cool that you can write a game for the NES or Apple II in 6502 assembly, but that’s about all you can write. Those machines didn’t support much in the way of communication, so any program you write must basically be an island, that you visit occasionally and then return to the computing mainland to resume your normal life.

      The Uxn virtual machine has the spirit of retrocomputing without being a clone of any particular incarnation - it’s a stack machine like Forth, the instructions are very reminiscent of the 6502 (including zero-page addressing), there’s special I/O instructions like the Z80, and so forth. There’s some “retro-style computing” platforms like Pico8 that impose a limited UI on top of a relatively unconstrained system, but Uxn is not like that - you have a 64KB address space to work with, and the programmer gets to/must think about what goes in every byte.

      There’s also the varvara system, which specifies particular behaviours for the various I/O addresses that Uxn’s I/O instructions can interact with. This is the interface to the modern world - you get 1-bit or 2-bit-per-pixel 2D bitmap graphics, simple 4-voice audio and gamepad support… but you also get stdin, stdout, mouse and keyboard input, file I/O, and exit code support. So you can make little self-contained games, but also you can write tools that interact with the external system.

      For example, there’s an assembler for the Tal programming language written in C, but there’s also a self-hosting assembler written in Tal. And a linter written in Tal. And an entire text editor written in Tal, that can be plumbed together with all the other tools. That makes retrocomputing something you can use to get work done, something you can bring with you and make part of your everyday life.

      1. 4

        I really like the idea of a retrocomputing environment that’s actually useful. There is a place for simplicity and expressiveness; your examples are compelling.

        I hate to say it, but Emacs kind of fits this model as well. I actually have a great deal of interest in Lisp machines. But I’ve never had much interest in moving bits around. The debate between Lisp and C/Algol is long in the tooth. Either way, it’s nice to have a running system you can modify. Sadly, that’s a bit retro.

      2. 1

        The PlayDate is also a sort of modern retro computer.

    12. 3

      “biz > user > ops > dev” is used for “late capitalism”, but perhaps “open source software” would be a better fit, where it can lead to software actually optimized for users.

      maybe the two can join forces? vc-paid open source projects with no expectation of becoming a business, producing the best user-focused software.

      1. 2

        Perhaps tax-paid would be a better fit.

      2. 1

        I think you can and should put user ahead of biz because the user is screwed if the business closes due to lack of funding. The idea is to take enough money to keep the lights on in service to the user.

    13. 9

      Well, I hope we have a good, clean copy of the 2021 Internet somewhere because we’re never getting it back.

      1. 2

        Never would I have imagined that we’d one day be nostalgic for the 2021 Internet

    14. 3

      But if we have 2 * 0 = 0, the inverse of that would be 0 / 0 = 2, which is just not true.

      Is that “just not true” because 0 / 0 = 1?

      Is that “just not true” because X * 0 = Y relates an infinite number of Xs to a single Y, so when inverted that would relate a single Y to an infinite number of Xs (which doesn’t match how functions work by definition)?

      Or is it something else entirely?

      1. 6

        I think the elementary school understanding of division is enough to explain this. If you want to eat a pizza of size 2, how many pieces of size 0 do you need to put together to make that pizza of size 2? Mu. It doesn’t matter how many size 0 pizza slices you have, even an infinite amount, it still won’t add up to a pizza of size 2.

        1. 1

          This is the best answer. People go about proving it by resorting to premises that could themselves be questioned. The OP talks about “general consensus”; it’s not general consensus at all. It’s math: it’s what is correct and what makes sense. People forget maths is just establishing formal notation and naming for what is already there. It just won’t add up to a pizza of size 2. That is a fact of the universe; it’s not an analogy, it’s the case itself on display. No one decided anything, it just is.

          1. 3

            I don’t want to over-privilege elementary school reasoning. For example, with computer programming, we use discrete math all the time, which isn’t really an elementary school concept. And elementary school notions of infinity and limits are fuzzy at best. But in this case, it seems pretty clear that if division is the name for the operation that one does with pizzas, then dividing by zero has to be undefined. Dividing by an infinitesimal? Elementary school intuitions on that might be right or wrong depending on how well thought through they are. But dividing by zero? There’s a pretty simple intuition here and it is the “correct” answer based on how we define division mathematically.

      2. 2

        You can’t even divide 0 by 0 and get an answer.

        As I understand it, division by 0 can literally be anything, which is bad because you could start proving that 2 = 1, which is obviously false.
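        The classic fake proof of exactly that runs as follows; the flaw is the step that divides both sides by a − b, which is zero:

```latex
\begin{align*}
  a &= b                && \text{assume}\\
  a^2 &= ab             && \text{multiply both sides by } a\\
  a^2 - b^2 &= ab - b^2 && \text{subtract } b^2\\
  (a+b)(a-b) &= b(a-b)  && \text{factor}\\
  a + b &= b            && \text{divide by } a-b \text{ (illegal: } a-b=0)\\
  2b &= b               && \text{substitute } a = b\\
  2 &= 1                && \text{divide by } b
\end{align*}
```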

      3. 2

        I think it’s something else entirely. Saying it’s “just not true” seems a bit circular. You can easily ask “well why is it just not true?”

        The “real” reason is that given the axioms of everyday arithmetic, division by zero is just meaningless nonsense. It has no semantic meaning. It’s like the phrase “colorless green ideas sleep furiously”. When using the (common) definitions of those words, the statement doesn’t express anything coherent.

        “Well why is ‘colorless green ideas sleep furiously’ meaningless nonsense?” Because our definitions make it so. If we use a more poetic interpretation, maybe the statement means “dull new thoughts lie waiting to spring to life”. But with the face-value definitions it’s just not coherent. Other less controversial example might be the phrases “married bachelor” or “square circle”. “1 divided by 0” is nonsense in the way that “I am a married bachelor” is nonsense.

        1. 1

          Other less controversial example might be the phrases “married bachelor” or “square circle”.

          The problem with language is that deliberately breaking the language is a feature of the language. It’s commonly used in sarcasm, where you say something obviously nonsensical to emphasize its nonsensicality.

          For instance, imagine the following conversation:

          A: I’m a bachelor.

          B: And are you married?

          A: I’m a married bachelor.

          B:

          This is because language is a tool whose purpose is to convey meaning to other speakers. “colorless green ideas” is nonsensical precisely because it

          1. is of nonsensical mental image (i.e. when you parse it with your mind, the mental picture of “colorless green” contradicts itself and you fail to draw a mental image), and

          2. is being neutrally expressed, and therefore can’t be a deliberate flouting of rules.

          In the right context, the phrase could be meaningful as sarcasm. But in all the contexts you’ve seen it, it isn’t. It’s just meaningless.

    15. 3

      Go’s refusal to allow even opt-in stack traces for every error continues to make production error debugging unnecessarily difficult for 100% of Go developers. This is great, but error tracing should be built into the runtime.

        1. 2

          Great link, appreciate the heads up and I’m glad it’s getting some serious consideration.

    16. 3

      I remember in the early days of CSS everyone was like “don’t use tables for layout, tables are semantic, not presentational!” and then CSS just utterly could not lay things out in a tabular format in any reasonable way for the first two or three years.

      (My memory is I’m sure fuzzy but as I recall it was definitely one of those “this is an absolutely obvious use case that was somehow not considered” just like “I want to pick which episodes of a podcast sync to my Apple Watch” more recently.)

      1. 4

        I stopped doing layout in tables briefly when it was first a thing, then I decided I was wasting my life and customer money, and the only people who cared weren’t paying me or my customers, so I went back to using them without feeling bad about it. By the time flexbox was almost-ready for every layout, I’d switched to outsourcing the initial HTML step anyway.

      2. 2

        I certainly don’t have sources to back me up, but IIRC some people at the standards committees were very worried about squeezing HTML pages into the tiny screens and memories of the phones of the late ’90s and early 2000s.

        The lack of tabular format support in CSS was intentional; layouts had to be made reflowable, so they could fit in tiny screens. XHTML (introduced around the same time) forced HTML into an XML corset, making it far easier to parse.

          Of course, everybody kept using tables for layout anyway, and by the time the mobile web actually took off, smartphone screens and processors were getting competitive with late-’90s desktop PCs.

        1. 3

          XHTML (introduced around the same time) forced HTML into an XML corset, making it far easier to parse.

          It wasn’t just HTML, it was XML documents that included HTML. This made it easier to put HTML in other XML formats and vice versa. You could create SVG documents as inline XML in XHTML and have XHTML inside text boxes in that SVG, for example, with a single DOM, which let you send the whole thing easily in a single web request. At least in theory.

          It was mostly useful the other way around. XMPP requires well-formed XML and so could trivially embed XHTML in messages. Atom had the same thing. RSS 1.0 was very exciting to parse because it was mostly XML, but embedded HTML. The thing parsing the feed wanted to pull out the HTML bit and hand it off to a web view, but that required figuring out the end of the HTML block, and that basically required a full HTML parser.

          Of course, everybody kept using tables for layout anyway, and by the time the mobile web actually took off, smartphone screens and processors were getting competitive with late-90s desktop pcs.

          I don’t think I agree. Sites that use tables for layout are still painful on my phone and end up with a load of side scrolling. A bunch of them are even annoying on the iPad.

      3. 1

        Two or three years? More like ten iirc, before flexbox was both powerful enough and well-supported enough to make it actually easier than using tables for layout.

        1. 2

          Floats are fine! You just gotta get good.

    17. 3

      If you remember these, you are old. 💁‍♀️

      1. 2

        Had a bit of a Vietnam flashback reading through this. I’m thankful we’ve moved on.

      2. 2

        Actually, most of these are from /after/ my time doing this kind of stuff.

    18. 5

      Sorry OP, everybody is fixating on your hot take on dependency injection and the actual message of your post about rejecting distractions got lost in the clever title and the snarkiness.

      I heard you. To create novel insights and get those creative juices running it’s good to reject dependencies, travel light, and focus hard on solving a problem.

      1. 2

        It’s unfortunate, but in my experience, this is what happens to most blog posts. You write about foo with an offhand example of how bar is analogous to foo, and then all the comments are about bar. Or worse, you write about how foo is good/bad/nuanced, but no one reads the piece at all and they just go off on their own preexisting opinions about foo with no engagement with why you think foo is good/bad/nuanced. Such is the life of a programming blogger.

      2. 2

        Yea, it was a good lesson for me, though! I appreciate your comment.

    19. 18

      The need for someone to call out that “We should be unafraid to enforce existing laws” is imho the scariest development in recent times.

      Why is it considered radical to demand that “self-driving” cars are taken from the streets?

      The exploitation of commons, the externalization of costs “at scale” is outmaneuvering our legal and moral frameworks: The speed at which people and resources are being misused seems to have reached the limits of our reactivity as society.

      1. 8

        The need for someone to call out that “We should be unafraid to enforce existing laws” is imho the scariest development in recent times.

        It’s always been slightly controversial. Selective enforcement has been a popular tool for oppressive governments for at least a thousand years. You introduce enough laws that everyone is guilty of something. Then you ignore them for most people and enforce them against inconvenient people. As a result, the outcome of strong enforcement is more people imprisoned / fined / executed, not more compliance with laws. The USSR was very good at this. No need to manufacture evidence: you could always find a law that someone had violated. It worked especially well because most other people were violating the same law, so if they said ‘don’t arrest this person’ then they’d also be arrested (and if you say ‘everyone commits this crime’, suddenly all of your neighbours become very silent).

        I saw a talk by a lawyer in the US 20 years ago claiming that most people committed a dozen felonies by lunchtime. Selective enforcement is the only way that you avoid everyone being in prison.

        Politicians always promise more laws. This composes with the legal doctrine that ignorance of the law is no excuse to ensure that there’s always a law that you didn’t know about that you’re violating. I’d love to see this replaced with a doctrine that no law can be enforced if a reasonable person could be assumed to be ignorant of it (if it’s something specialised to a job, you’re required to be aware of it as part of practicing, if it’s something that affects the general public then the prosecutor needs to show evidence that you should have been aware of it). This should then be followed up with a systematic review of the legal system to eliminate laws that no longer make sense and to reframe the ones that do so that they don’t depend on deltas to a load of other laws. This used to be impossible 40 years ago because legislatures were overwhelmingly full of legal professionals who benefitted directly from this. It may be possible now.

        1. 3

          Selective enforcement via prosecutorial discretion is also essential for a functioning free society, since not every nuance of every possible infringement can be written down in words. The necessary outcome of following every law to the letter is a Kafkaesque society.

          1. 3

            Sure, but when you have a company like Uber, which was obviously running a taxi service, move into cities, show total disrespect for existing regulations on cabs, and then face no punishment, it’s hard to believe the law means anything. There can be nuance to the law, sure, but what it looks like in practice is that if you are rich and powerful, the laws are suggestions.

    20. 2

      The “German subsidiary absorbing the Australian subsidiary as its own” story does remind me about how Arm China has basically become its own entity, entirely detached from the mothership and using its own tech.

      There’s some bigger-picture idea there, which is that we extend a lot of trust when sharing stuff with other entities. But those entities can and will change. I have definitely seen that sentiment when it comes to relicensing questions. And in business dealings, there are plenty of people who suddenly find a tech-sharing arrangement backfiring when the other party decides to change which markets they care about.

      Of course one out is to just not work on things that can “do harm”. But so much stuff out there is dual use, ethically it’s hard to just pretend not to see things.

      My pet license of “BSD except if you’re a weapons manufacturer” has always felt like a good compromise, but very few seem excited about opening such a box.

      1. 2

        My pet license of “BSD except if you’re a weapons manufacturer”

        Can you elaborate on that? It seems an arbitrary industry to single out, especially in the light of history (i.e. access to weapons of war being a necessary condition for democracy existing, slavery ending in the USA, etc.)

        1. 3

          The University of Tokyo has a blanket ban on doing any weapons research. I do not know if other universities have this or not, but when being told this I felt this to be a good position[0] to have if you are a pacifist.

          This sort of license is not saying “I will not ever interact with anyone who has made a weapon”, but that I do not want my specific work to be directly contributing to the building of weapons[1]

          I live in a specific context, being a resident of countries that have had imperial projects. Within that context, I do not believe there is a need for me to contribute to weapons of war. But this is specific to my work and my beliefs.

          Perhaps an easier to digest version of this license is “BSD except if you are building tech that matches faces”. FaceID is barely a worthy positive use for this technology, and all other use cases actually brought up have absolutely terrible externalities, or are themselves just outright bad!

          Everyone likely has their own pet cause along these lines. It would be hard for a single license exclusion list to make everyone happy. But sometimes it’s easy to do what you think is ethical, even if it is not complete, and even if you go at it “alone”. And I don’t imagine that such a license would convince anyone to change their minds. It’s about my own feelings, and perhaps finding others who agree with me (even if many wouldn’t).

          [0]: there are other positions as well!

          [1]: I don’t want to argue about the semantics of being a weapons manufacturer vs actually building weapons in a specific context. The license exists only in my mind for a reason!

          1. 5

            Crockford adds (or at least used to add) the clause

            The software shall be used for good, not evil

            as an addendum to the MIT license in the software he released. He’s got a talk about JSLint where he explains that more or less every year an IBM lawyer contacts him to say that they don’t know how the software will be used, so he grants them an alternate version of the license where

            [He gives] permission to IBM, its customers, partners, and minions, to use [the software] for evil.

            1. 2

              I’ve never been a fan of that bit, nor of the WTFPL. They’re both basically jokes that waste time for people with lawyers. If you’re gonna force your end users to get lawyers involved, might as well choose something a bit more real to hang your hat on! But I’m a bit “no fun allowed” on these kinds of topics.

              1. 1

                I don’t think it’s forcing end-users to get lawyers involved. If you have to think about it, you can either think about what you’re doing, or use something else.

          2. 1

            The University of Tokyo has a blanket ban on doing any weapons research. I do not know if other universities have this or not

            This can have some interesting unintended consequences. In the ’60s, a lot of US universities found it politically difficult to take money from the DoD. They worked around it by creating private research institutions that would hire their staff part time (e.g. over the summer), on significantly higher salaries, to do military-funded research. The result is several world-leading military research institutions across the USA.

            This sort of license is not saying “I will not ever interact with anyone who has made a weapon”, but that I do not want my specific work to be directly contributing to the building of weapons

            The problem here is in defining what a weapon is. Do you consider Facebook (a system designed to build psychological profiles for the purpose of propaganda) a weapon? Is it a weapon only when used to manipulate people to commit genocide, but not when used to manipulate people to buy stuff?

            There’s also the problem that most governments will happily grant themselves exemptions from various forms of IP law. If Boeing makes a guided missile using your code, the US government will happily grant them an exemption if you sue them (assuming you ever find out - it’s not like they put copyright notices on the web for the software that goes into these things). But the potential liability means that they probably won’t put it into a component aimed at the civil aviation industry that may be dual use (they get sued if someone puts it in a weapon).

            1. 1

              While I do think there’s a real conversation to be had here, I don’t think this is the venue for it. I realize I am the one who mentioned this first, but this sort of topic can really go off the deep end (and ultimately we’re veering a bit from the golden rule of “does this make you a better programmer” for on-topic-ness).

              Perhaps the only takeaway I would like people to have is that you can put whatever you want in your license. It might harm adoption, but hey, what’s the point of an ethical stance without consequences?

              1. 1

                It might harm adoption but hey what’s the point of an ethical stance without consequences?

                I don’t think that’s the right question. Any action has consequences. The question is whether the consequences are aligned with the ethics behind the decision. This is the problem I often have with the FSF: the consequences of their actions are often in direct opposition to their stated aims. This is often the outcome when you try to use clever legal tricks, rather than well-aligned incentives, to change behaviour.

          3. 1

            Various German universities have ratified a so-called “civil clause”, which commits them not to engage in military research. For those not involved in German academia (can’t blame you! :)), these include lots of impactful universities.

            Btw, I find the in-article link to the Mertonian norms particularly relevant. This aspect of the topic was generally new to me and (imho) an interesting area to explore further.

        2. 1

          I remember using a Swiss version of Scheme in the 90s that had a license that said it must not be used for war.