Threads for GrayGnome

  1. 6

    We should shake up the UNIX monoculture a bit. I’d hope for a secure microkernel OS (seL4?) written in Rust (or Ada), with ideas from e.g. Genode, EROS, QNX, Plan 9. For starters it would just need a microkernel + a way to act as a hypervisor: you’d be able to run a Linux / Windows guest to get work done while you continue improving the host OS.

    We need more security, privacy and reliability from the ground up. But we can’t forget about usability, or nobody is ever going to use the world’s most secure paperweight.

    1. 3

      Take a look at Fuchsia, which is built on a microkernel and has parts written in Rust.

    1. 1

      I guess I’m a little confused, as this site looks like some kind of article aggregator.

      Why would you have used React for a site like this in the first place?

      1. 5

        As they say in the talk, when they were beginning they were told they had to use React for their application to be “modern”. Sadly, many people think that’s true, and don’t realize that there are hypermedia-oriented options like htmx, unpoly and hotwire that can give you more interactivity within the hypermedia model. So they end up going with React, because everyone else is, and that’s what HR hires for.

        1. 2

          Did y’all evaluate the different hypermedia oriented frameworks before choosing htmx? Just curious if there are significant differences.

          1. 7

            I’m not the speaker, I’m the creator of htmx, so not an unbiased source. :)

            David mentions unpoly and hotwire, two other excellent hypermedia-oriented options, in his talk, and he uses Stimulus for some JavaScript (rather than my own hobby horse, https://hyperscript.org), but he didn’t say why he picked htmx.

            Generally, I would say:

            • htmx is lowest level, very focused on extending HTML
            • unpoly is more batteries-included and supports progressive enhancement better
            • hotwire is the most automagical of the bunch, and very polished (+ mobile support)
            1. 2

              We chose htmx because of its simplicity (data-attribute driven, very few but very generic features; see the sketch after the list below). We evaluated:

              • Hotwire, which would have been a great option since it works particularly well with Stimulus, which we were already using on another project. It seemed just a bit too complicated because it introduces new markup tags, and it has a strong emphasis on real-time features that we did not need.
              • unpoly, which was pretty similar to htmx at first sight. But it seemed to us that the community was less active, so we didn’t push the evaluation further.
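
              To give a concrete idea of what “data-attribute driven” means, here is a minimal htmx sketch (the endpoint and element id are hypothetical, not from our application): a button fetches an HTML fragment from the server and swaps it into the page, with no hand-written JavaScript.

              ```html
              <!-- load htmx itself; everything else below is plain HTML attributes -->
              <script src="https://unpkg.com/htmx.org"></script>

              <!-- hypothetical endpoint and target id, purely for illustration -->
              <button hx-get="/amendments/42" hx-target="#amendment-panel" hx-swap="innerHTML">
                Load amendment
              </button>
              <div id="amendment-panel"><!-- the server-rendered fragment lands here --></div>
              ```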
          2. 2

            Hi there, author of the talk here, sorry for the delay.

            Our application is not an article aggregator, it’s much more complex than that: it presents future laws being discussed in the French parliament. The right part is the text that is being discussed, and on the left you have the amendments filed by parliamentarians, which aim to change the text.

            But still, you’re right: “Why would you have used React for a site like this in the first place?” is precisely the question I asked when I discovered the modern tools of the hypermedia approach. But not because our application is simple: because the hypermedia approach can achieve a lot more than what most people think. It’s not just for “article aggregators”, “student projects”, “quick prototyping”, “small personal websites” and simple CRUDs. All day long I use professional tools that would benefit from the hypermedia approach: Gmail, Sentry, the AWS console, and others…

            And this is what my talk is about: breaking the FUD spread by some people about what is doable with “old-school” web apps (web pages) and what is not doable with that approach, thus requiring the whole API+SPA stack.

          1. 2

            Yeah I’ve been thinking about this a bit. Machine learning could be such a great thing for art, illustration, and other creative industries, but the mass-scale appropriation without attribution that the current tools employ, along with them only being accessible via for-profit companies, makes it seem very exploitative. Fair use is a fundamental part of the creative process, but I do think that the context is different and that tech companies should shoulder a greater responsibility than human creatives when it comes to being open about attribution.

            As an artist or any other creative, when you are inspired by other creative works you at least have a general idea of where you are drawing inspiration from, even if it is unconscious. If people ask you where your ideas come from, you can tell them. Even if you wish to keep those details to yourself (which is entirely valid), others can glean them through other means, and the scale is usually still limited.

            The interaction when generating images via a machine learning model is much different. It’s great that these models give people the ability to try out creative ideas quickly, without needing to spend so much time developing a sense of style and taste or artistic skill, but that’s only part of the artistic process. You might have a cool picture, but you have no idea where to find more works like it, or what motivations the original artists had… that’s such an important part of art and creativity that I wish we could provide more access to. It would be great if these tools would also provide references to their source material to help address this. I can think of a bunch of reasons why they don’t, but I still think they should.

            1. 3

              I guess.

              But the feeling among the original creators who provide raw material for the generator mills is that this is a bad deal.

              See for example Simon Stålenhag - https://twitter.com/simonstalenhag/status/1559796122083811328?s=20

              Anyway, I think AI art, just like NFTs, is a technology that just amplifies all the shit I hate about being an artist in this feudal capitalist dystopia, where every promising new tool always ends up in the hands of the least imaginative and most exploitative and unscrupulous people.

              1. 3

                Yeah, my response is actually quite watered down from what I had originally written! I was trying to be more positive and find possibilities for compromise, but I’m definitely concerned about capitalists trashing the commons, and do worry about the artists and creatives who will be caught up in this. I definitely know of many who find it a violation (a bit like programmers with Copilot), and I can definitely understand where they are coming from.

                It wouldn’t be the first time that machine learning people have:

                • barged into a domain they don’t understand
                • achieved surprising initial success replicating some hollow facsimile based on generations of work
                • convinced others that the old ways are no longer needed
                • gotten enormous amounts of funding
                • put the current practitioners out of work
                • only to later find that those practitioners had a huge amount to offer once the low-hanging fruit was exhausted

                Meanwhile the original people have retired or found other work, and the next generation hasn’t been taught, and funding has been slashed, and there needs to be a bunch of work put into counteracting the misunderstandings people might have about the limitations of machine learning.

                1. 2

                  I’d hardly say this is universal. My partner is an artist and has peers who are artists and they feel differently. It’s dangerous to extrapolate general opinion from anecdata, let alone Twitter.

              1. 24

                The idea that it is unethical for models to look at copyrighted images does not just fly in the face of current copyright law; it is downright dangerous and unsustainable.

                Copyright is not absolute for a reason. We use copyrighted materials in remixed form all the time; society just wouldn’t be possible without this. Only 30 years after the advent of copyright, an exemption was carved out: the fair abridgment doctrine, all the way back in 1740. This led to the fair use doctrine that we have in the US today: a use is OK if it transforms the original into something new. It’s hard to see what can possibly be more transformative than a model that generates new images that have never existed before.

                If a court were to decide that models can’t look at copyrighted material, it would likely be the end of progress in AI for the foreseeable future. The burden of having to document every single image and every single piece of text we use to train models would make large datasets impossible to collect. We just couldn’t do ML/AI research anymore. It would also end indexes, so you couldn’t have Google search anymore.

                But it’s far worse than that. Declaring that models can’t look at copyrighted material would end many nascent AI applications, ones you wouldn’t expect would be impacted. Collecting data while models run and tuning them is critical. Copyrighted images easily sneak into such datasets. For example, the freedom of panorama is an issue in the EU right now. Some countries allow it, some don’t. There are literally images you can take in public that you aren’t allowed to use for commercial purposes; the person who made the artwork owns the copyright to them. How can you possibly create datasets and models of cities under these conditions? If such laws were enforced against models and datasets it would bar many robotics, autonomous driving, drone and other AI applications in the EU.

                If the idea that it’s unethical for models to learn from copyrighted images takes hold it will essentially halt human progress. Just like our predecessors discovered in the 1700s that copyright can’t be absolute, so we have to codify the notion that models are a transformative use.

                1. 17

                  The core controversy here, which is best shown by lots of these models outright outputting watermarks, is that the output is so clearly derived from the input. It feels akin to sampling in music, where people do end up just … getting permission from the owners. If you look at these transformations as “I layered a bunch of images that matched your search queries”, the “strict” interpretation of everything clearly lands on this being laundered authorship.

                  Remixed materials are not fair game as-is, and fair use isn’t even a concept in every copyright regime! The right to panorama is another example where different parts of the world have come to different conclusions.

                  I think there is a lot of room for moving around, and there is an interpretation of copyright that would basically only cause issues for generative art, while leaving loads of other ML applications completely fine. I’m not a copyright maximalist, but models that can seemingly spit out their input material are clearly a non-theoretical problem at this point.

                  1. 6

                    I didn’t think any of the diffusion image models had been caught spitting out exact copies of input material.

                    I would expect them to be able to produce watermarks because they’ve been trained on thousands of images that incorporate watermarks - so they have plenty of examples they can use to generate a realistic copy of one.

                    My mental model of how they work is that they start from random noise and work backwards - the models don’t incorporate any pixel patterns that could be pasted back into an image, they’re all floating point weights in a giant matrix.

                    Stable Diffusion for example managed to compress data from hundreds of millions of images into just 9GB of model - there’s nothing left of the original images. They just wouldn’t fit!
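
                    As a toy sketch of that mental model (illustrative only - this is not Stable Diffusion’s actual sampler, and `predict_noise` is a made-up placeholder for the trained network): start from pure noise and repeatedly remove the noise the model predicts.

                    ```python
                    import numpy as np

                    rng = np.random.default_rng(0)

                    def predict_noise(x, t):
                        # Placeholder for the learned denoiser: a real model is just
                        # floating-point weights, conditioned on the text prompt.
                        return x * (t / 1000.0)

                    x = rng.standard_normal((64, 64, 3))   # begin with pure Gaussian noise
                    for t in range(1000, 0, -1):           # work backwards, step by step
                        eps = predict_noise(x, t)          # model's estimate of the noise present
                        x = x - eps / 1000.0               # nudge the image toward plausible data

                    image = np.clip((x + 1) / 2, 0, 1)     # map the result to [0, 1] pixels
                    ```

                    Nothing in the loop pastes stored pixels from any training image; every output is synthesized from the weights.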

                    1. 8

                      It’s more the fact that you type “Joe Biden” into txt2img and you clearly get composites of 20 different pictures of him (down to one option I got which was a split between him and a version of his face merged with Trump’s, clearly a header picture from a news website). You can also of course just type any major Pokemon name and get those.

                      The giant matrix is what it is. But the training sessions really are “here’s some description and an image, try to commit that to memory in a way that makes sense”. This is perhaps the same as what a human does! But humans also get in trouble for copying other people’s work too closely.

                      I don’t have a policy answer, but I’m not the one trying to build out the magical art machine that happens to know what all the Pokemon look like. I barely have a problem statement, really.

                      1. 4

                        Where’d you get 9GB from? The most recent model file is only 4.27GB.

                        1. 1

                          Huh, even smaller than I thought!

                    2. 10

                      What’s ethical and what’s legal are different things though. I don’t feel qualified to make declarations about legality, but my current intuition is that the legal side of things won’t be a problem for these AI models.

                      Ethics is another thing: that’s not a black-or-white answer. That’s why I’m so keen on the analogy to veganism: plenty of people think it is unethical to consume animal products, but it remains legal to do so and plenty of people have made the ethical decision that they are OK with it.

                      1. 14

                        I think it’s more interesting that communities which have traditionally been copyright-minimalist are suddenly turning copyright-maximalist over “AI” tools. For example, look at the reactions to GitHub’s Copilot.

                        1. 12

                          I’m fairly certain that those communities have traditionally been copyright-minimalist because that stance makes it easier for individuals to have equal access. But “AI” tools are essentially impossible for individuals to make because of the immense computational costs, and as such are only accessible to large entities. As such, those communities are now looking into utilizing copyright to keep the playing field level between corporations and individuals. And in this case, level doesn’t mean that everyone can train a model, because that route benefits the corporations alone.

                          1. 11

                            This is one of the things I find so interesting about Stable Diffusion: they are giving away the whole model! They seem to be living the original values hinted at by OpenAI far more closely than OpenAI themselves.

                            All of these models support transfer learning too, which means that if you want a custom model you can often take an existing large model and train it with just a few hundred more items (and in a way that is much more affordable in terms of time and hardware costs).
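
                            For anyone curious, the shape of that transfer-learning recipe is roughly the sketch below (assuming a recent PyTorch/torchvision; the data loader is a hypothetical placeholder): freeze the pretrained backbone and train only a small new head on your few hundred items.

                            ```python
                            import torch
                            from torchvision import models

                            # Reuse a large pretrained backbone; retrain only a tiny new head.
                            model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
                            for p in model.parameters():
                                p.requires_grad = False                           # freeze pretrained weights
                            model.fc = torch.nn.Linear(model.fc.in_features, 10)  # new 10-class head

                            opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
                            loss_fn = torch.nn.CrossEntropyLoss()

                            # `loader` is a hypothetical DataLoader yielding a few hundred
                            # (images, labels) batches for the custom task.
                            # for images, labels in loader:
                            #     opt.zero_grad()
                            #     loss_fn(model(images), labels).backward()
                            #     opt.step()
                            ```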

                            1. 5

                              I’ve said similar in another comment, but having a model does not help me if I want to try and explore building a model with a different architecture on my own. In similar fields, modifications mostly cost mental labor, but with AI I am limited in what kinds of modifications I can do without paying the majority of the cost in compute.

                            2. 4

                              Free Software/open-source communities have the resources to train an “AI” on a ton of code if they want to. Community projects have trained some pretty impressive models in other fields – see Leela in the chess world for an example.

                              And honestly, if you had a time machine and could go back to frustrated-at-the-jamming-printer Richard Stallman and tell him “in the future, a magic box will exist that launders any code, under even the most restrictive license, into something you can inspect and hack on and redistribute at will, and the only ‘downside’ is everyone else can do the same to code you write”, do you really think he would have had a problem with that?

                              1. 4

                                community != individual. If I wanted to try out a modification of Leela on my own, I would need to spend a hefty amount of money to do that. Community requires a consensus. Trying to go against it in e.g. programming doesn’t have such high immediate costs as trying to do so with AI.

                                1. 1

                                  So, do you have a moral problem with Leela? Or with other community-compute-supported projects like SETI@Home?

                                  Because the position I’m taking away from your comments is that you do, or at least that as a logical consequence of your stance you should, have a problem with them. And if so that’s where we have to disagree – there have always been projects that a random individual can’t replicate in full due to lack of infrastructure or other resources. Many of them are Free Software projects. I can’t see any way to have a consistent position that takes the non-replicability as grounds to condemn them.

                                  1. 2

                                    SETI@Home is very different from Leela. SETI@Home actually does distributed computing. Leela does distributed training data collection.

                                    Actual training with Leela still happens on a centralized server, though one that is sponsored by the community. And that server has limited resources. If you can only test out 10 changes in a month, chances are that the community will decide that only the changes from “trusted community members” are worthy of testing. The expensive training process is required to test the changes out, and it is effectively gated behind a social game of getting into the community and “proving your worth”.

                                    Meanwhile, for SETI@Home, or other similar distributed computing projects, I can fairly easily make a change, compile the code, and test it out on a small number of problems. It does not require me to go through a social game for my changes to be considered for larger scale testing, because I can give some preliminary results by myself, without spending a ton of compute to produce them.

                                    Essentially, my problem with AI community projects is that it becomes a game of either having the social connections or the compute power to test out and contribute your changes. Machine learning raises the cost of compiling your software to be ready for testing to such high levels that compiles become a limited resource. And with limited resources, those who control them tend to limit their distribution to those they think will use them well, even if that seems to be the wrong approach.

                                2. 3

                                  Are you also telling young Stallman that the magic box is owned by a company that for a while was a huge enemy of opensource, that the code to the magic box is completely proprietary, that you have to pay to use it, and that the magic is derived from the free labor of millions of other hackers?

                              2. 7

                                Over the past few years I’ve learned not to be overly surprised by philosophical incongruities and internal contradictions exhibited by our colleagues. In a decade or two I may even have the wisdom to no longer be annoyed. Start your journey today if you haven’t already.

                                FWIW, I think that the reason for the about-face is, as usual, economic. “They’ll never replace engineers with AI!” rings a little hollower every year.

                                1. 3

                                  I also suspect this is why the about-face is so acute with the advent of art. I think many knowledge workers never thought AI could replace their output/jobs.

                                  1. 2

                                    Tbh I think the main reason this hasn’t happened is the lack of a corpus of project descriptions and outputs. I suspect the first application will come from mega consulting firms, which do have the corpus of inputs and outputs.

                                    Who will benefit and lose out is too hard to predict. It’s possible that these advances will increase the productivity of software engineers and therefore increase demand; or do so for some subset at the cost of demand for others.

                                    My hunch is that this will benefit people in relatively low-cost places by allowing them to really pump out the kind of thing that is now done by two underpaid early-career engineers, and similarly to get a jump on e.g. Shopify themes. It’s also going to drive revenue for the kind of SaaS offerings that can make use of that combined human + AI configuration. CRM and e-commerce seem like obvious arenas, as do fairly static websites, which can also have copy and art generated.

                                  2. 3

                                    I suspect that part of the problem there is that it’s pretty easy to get Copilot to spit out full phrases (including copyright statements and suchlike) from existing source code - because the potential options for sequences of text tokens are so much slimmer.

                                    An image generator that starts with Gaussian noise is vanishingly unlikely to come up with anything that looks like it has fragments directly copied in from an existing artwork, even while being able to imitate individual styles to an incredible degree.

                                    Also: the nature of open source requires that people engage far more deeply with the specific details of their chosen licenses - and so are more likely to call out anything that seems not to follow the letter or spirit of them.

                                    1. 3

                                      It’s not really so surprising that people who care about preserving the rights of users aren’t equally interested in preserving the rights of programs.

                                      1. 3

                                        Please look at the context before making a dismissive comment like this. Specifically, please look at my other reply to someone where I pose the hypothetical of using a time machine to tell past Richard Stallman about the future development of a magic box to launder away restrictive licensing from code, and what he would think about that.

                                        And a huge amount of objection to Copilot has been that it can “launder away” the GPL/AGPL. Which I don’t understand: if you have a magic license-laundering box, you no longer need copyleft license enforcement. If someone took your code and ran it through the magic license-laundering box to gain the freedom to incorporate it into non-Free software, then you can run their code through the magic license-laundering box and get back a copy which you have the freedom to inspect/modify/distribute/etc.

                                        1. 4

                                          I’d appreciate it if you’d elaborate on your second paragraph, because it does not align with my understanding of the reality of who has access, and in which ways, to that magic box. That “someone” that you mention administers the box and has control over its inputs, but they are in no way obligated to then submit their own code to it as an input. To be transparent, I have some real ethical hang-ups re Copilot, but I don’t think those are relevant to the fact that you’re imputing a power symmetry between GitHub/Microsoft and individual users of that platform that does not exist, as far as I can tell.

                                          1. 1

                                            If someone truly believes that Copilot is a magic license-laundering box, then the only consistent position is for them to believe they could train their own model on whatever corpus of code they feel like throwing at it and get the same result. As I’ve already pointed out in other replies, open-source communities have trained some impressive models in other fields, so I don’t see why a “community Copilot” keeps being treated as an impossibility.

                                            Nor do I see why it’s necessary for every individual to have the resources (computing power, corpus, etc.) to train their own. For example: I don’t have the resources, in multiple ways, to build a competitor to the Linux kernel from scratch, but nobody seems to be demanding that Linus stop using his superior resources to make and release Linux in order to be fair to potential competitors like me. In fact, precisely the opposite: the whole point of Linux as an open-source success story is that many people banding together had more resources and capability than any one of them alone.

                                    2. 4

                                      Except that not eating animals has positive effects on the environment. So convincing people about this ethical position is a long-term improvement.

                                      The consequences of convincing people that models looking at copyrighted material is unethical are simply catastrophic. Not just for AI/ML, but for medicine, the environment, science and society in general, and for the progress of humankind.

                                      It’s exactly the opposite of being vegetarian.

                                      1. 4

                                        Why is the idea that the producers of copyrighted material be compensated so catastrophic? If the economic benefits are so great, surely a small fraction of the profits can be put towards paying the people who made the model possible.

                                        1. 2

                                          Why is the idea that the producers of copyrighted material be compensated so catastrophic? If the economic benefits are so great, surely a small fraction of the profits can be put towards paying the people who made the model possible.

                                          If I as a scientist release a paper publishing a model, who would I pay? And where does that money come from? Do I pay based on the value of the model when I made it? Zero. Do users of my model pay? Based on what? It’s an endless nightmare. But that’s not even the beginning of the nightmare.

                                          We have no registry of copyrights. So it’s impossible to determine who we have to pay. Or what amounts.

                                          But it gets so, so much worse. Roomba would need to pay if you left a newspaper on the ground and it saw it with its camera and then included that data in its training set. How would Roomba’s software even know that this is copyrighted? What if you have an image on your wall that you don’t have full rights to? Even something as trivial and basic as your Roomba would become such a copyright nightmare as to be impossible. Never mind more advanced things like autonomous cars, etc.

                                          The simplest bread and butter ML and ML applications would be impossible under this regime.

                                          That’s why I say, without any hyperbole or exaggeration, the line is stark. Either models get to freely look at copyrighted materials and we have ML/AI/progress or they don’t, and we stop ML/AI/progress.

                                          1. 5

                                            If I as a scientist release a paper publishing a model, who would I pay?

                                            The creators of the data that was used to train it.

                                            And where does that money come from?

                                            The research funding.

                                            We have no registry of copyrights. So it’s impossible to determine who we have to pay. Or what amounts.

                                            And yet, YouTube somehow manages.

                                            That’s why I say, without any hyperbole or exaggeration, the line is stark. Either models get to freely look at copyrighted materials and we have ML/AI/progress or they don’t, and we stop ML/AI/progress.

                                            If ML/AI doesn’t produce enough benefit to support paying people for training data, perhaps ML/AI progress isn’t as valuable as claimed.

                                            1. 2

                                              The creators of the data that was used to train it.

                                              As I explained, those people are absolutely impossible to identify. Moreover, I gave you examples of how this is doubly impossible. Not only can you not do it for a fixed dataset, you can’t do it at all for images that your robot collects on the go. Even if a human looked at every example, they could not determine the copyright status of a random image on your wall at home. This just isn’t possible.

                                              What you are proposing is completely equivalent to saying that there should be no ML research anymore.

                                              And where does that money come from?

                                              The research funding.

                                              There is no money to do this.

                                              Again, this is equivalent to saying there is no more ML research.

                                              We have no registry of copyrights. So it’s impossible to determine who we have to pay. Or what amounts.

                                              And yet, YouTube somehow manages.

                                              Because YouTube has people sign up with their data and identify themselves. And even then, YouTube has copyright strikes, etc. Who can possibly manage this in the AI/ML research community? Who can accept the legal risk of mistakes and lawsuits? No one.

                                              There are many other reasons why we cannot use this kind of approach for ML. Even if we had a registry where you donated your data for use with the model, the resulting models would be hopelessly biased towards the kinds of data people like to contribute. It would make them basically useless for any applications.

                                              If ML/AI doesn’t produce enough benefit to support paying people for training data, perhaps ML/AI progress isn’t as valuable as claimed.

                                              This is a very myopic view of how research works. The vast majority of research is useless and provides no value to anyone. That research must happen, but it cannot under this regime. Out of that soup emerges some work that provides more value, which eventually may find applications. The people who use the training data make the progress happen, but they don’t capture any value; it’s the final end users, the people who start companies doing new things, who create the value. They also cannot pay until far down the road, even if they wanted to.

                                              Changing the law so that models cannot look at copyrighted material is literally the end of ML and AI research. There is no way around it. The legal liability is immense and impossible to overcome (it’s not even an issue of money, it’s just not possible). And the costs would be so high on the very people who cannot shoulder them as to make progress end.

                                              1. 2

                                                Because YouTube has people sign up with their data and identify themselves. And even then, YouTube has copyright strikes, etc. Who can possibly manage this in the AI/ML research community? Who can accept the legal risk of mistakes and lawsuits? No one.

                                                Risk is balanced against reward. Why do you think there’s no business model to be had around managing copyright for AI and ML? Is the reward so small that nobody would step up to make money off this?

                                                If AI researchers were required to follow copyright, I’d expect Shutterstocks for training data to spring up like crazy, and sell licenses to training data, taking on the copyright management and payment disbursement, both mitigating risk for researchers and allowing the people who produce training data to get paid for their work.

                                                Because YouTube has people sign up with their data and identify themselves. And even then, YouTube has copyright strikes, etc.

                                                Taking down infringing content is exactly the point of managing copyright.

                                                1. 1

                                                  Risk is balanced against reward. Why do you think there’s no business model to be had around managing copyright for AI and ML? Is the reward so small that nobody would step up to make money off this?

                                                  Because the people who take the risk under this scenario, scientists doing the research, have no money. And the rewards are basically zero.

                                                  Datasets are not static. We need to collect new datasets constantly for many different problem domains. It’s not like you collect 1 dataset and call it quits. There are tens of thousands of datasets out there for all sorts of things, and we need far more than we have today.

                                                  If AI researchers were required to follow copyright, I’d expect Shutterstocks for training data to spring up like crazy, and sell licenses to training data, taking on the copyright management and payment disbursement, both mitigating risk for researchers and allowing the people who produce training data to get paid for their work.

                                                  This would be the end of AI and ML.

                                                  Shutterstock doesn’t know what we need in training data. Not all training data is the same. Datasets aren’t designed by random people. They’re designed by scientists who work very hard for many years to understand what kinds of datasets are valuable for what kinds of problems, in which conditions, and for which models. This is not something that works as an assembly-line process. And whatever datasets Shutterstock makes will be hopelessly biased by their collection procedure, rendering them basically worthless.

                                                  And set that all aside.

                                                  Think of the Roomba scenario. Models are not “trained” and then “run in production until the end of time”. Models need to be updated on the fly from new training data gathered while they operate. We could never do that under these conditions.

                                                  1. 3

                                                    Because the people who take the risk under this scenario, scientists doing the research, have no money.

                                                    If nobody gets enough benefit from AI research to pay the people producing the data, perhaps it shouldn’t happen.

                                            2. 1

                                              If I as a scientist release a paper publishing a model, who would I pay?

                                              Well, if you’re a scientist publishing a paper you’re probably poor and losing money already, so I think this can easily fall under fair use rules?

                                              1. 1

                                                “Poor” is a bad adjective, I apologize. “Not super rich” is more accurate.

                                                1. 1

                                                  Fair use is totally unrelated to how much you can afford to pay. Even whether you lose money or not has only a limited impact on fair use.

                                                  1. 1

                                                    Well, maybe it should be. I know that in this conversation we tend to talk about copyright as it is, but it’s not written in stone.

                                            3. 1

                                              That’s a good argument - that supporting the idea that AI should not be trained on copyrighted materials is actively harmful.

                                          2. 7

                                            If the idea that it’s unethical for models to learn from copyrighted images takes hold it will essentially halt human progress.

                                            Could you please expound on how human progress will halt if models had to respect copyright? That seems like a fairly hyperbolic statement. For example, for the models and datasets needed to train autonomous driving, is there a reason that the companies behind that can’t pay for their own training data or license it?

                                            In addition, I see a difference between “you cannot train an AI on copyrighted works when the intent is to generate another image that may appear similar” and “you can train an AI on copyrighted works for [good purpose X, such as learning the paintings in the Louvre to give an audio description when a blind person walks through]”.

                                            1. 3

                                              If the idea that it’s unethical for models to learn from copyrighted images takes hold it will essentially halt human progress.

                                              Could you please expound on how human progress will halt if models had to respect copyright? That seems like a fairly hyperbolic statement.

                                              It’s not hyperbolic. I mean it very literally. And I speak as an ML researcher.

                                              There is no progress in AI and ML if we say that models cannot look at copyrighted materials. And there is no more progress on industrial applications of ML either.

                                              Virtually all current progress has come from computer vision and natural language processing research. Without large datasets this research could never have happened. These are also the most active and fruitful areas now. Every major advance would have been impossible if we had to live with the condition that models cannot look at copyrighted materials - from the earliest of the modern era, the CNNs that rely on ImageNet, to the most recent, the Transformers that rely on massive text and sometimes image corpora from the web.

                                              Neither researchers at universities nor corporations could ever afford to pay for such datasets. But it wouldn’t matter if they could.

                                              In the real world, models are not “trained” and then you’re done. There is no ready-made external dataset for anything in reality. There are datasets to jump-start you toward an application, but then you need to constantly evaluate and fine-tune the model - often a cascade of models that use the processing of earlier models. Autonomous car companies need to ingest their own data and train on it; no amount of external data will ever help.

                                              Moreover, the risk that your model is in violation without you knowing it because it uses billions of images or texts would be immense. It’s hard to see universities bearing that risk.

                                              To make any progress we would have to wind the clock back to the 90s. Throw away all of modern ML. Go back to the era of datasets with just a few hand-curated examples in them. Very little progress was made in ML at that point.

                                              1. 6

                                                Virtually all current progress has come from computer vision and natural language processing research. Without large datasets this research could never have happened.

                                                Some of those datasets explicitly grant rights to this kind of use. For example, a lot of translation models have been trained on the transcripts of the EU Parliament, which are translated into all member states’ languages by professional translators. The Spanish government also funded the creation of a fairly large dataset for training translation systems. The need for large datasets does not necessarily imply the need to build those datasets by harvesting the creative output of others without their consent.

                                                Fair use (or fair dealing) has always been a tricky and subjective concept in copyright law and is constantly reevaluated, but when a computer system can reproduce something exactly then it generally isn’t covered. There’s then a much more complex question of whether it counts as a derived work.

                                                1. 1

                                                  Thanks for the clear explanation. As someone who is not an ML researcher, I now better understand the line you draw between potentially copyrighted images and advances in fields that would not obviously be related to images.

                                                  Autonomous car companies need to ingest their own data and train on it, no amount of external data will ever help.

                                                  Isn’t this a good example of a non-copyrighted dataset or a dataset that the owner would hold the copyright to and be able to use without concern? The car manufacturer is using cameras in the car itself and, I assume, telemetry from the drive to better learn what was going on. You might take a picture of something copyrighted, like an Amazon logo, but this would never have a chance of “reproducing” that copyrighted image. The model output is a car that doesn’t drive into the side of an Amazon delivery van, not a car that turns into an Amazon delivery van.

                                                  1. 1

                                                    Isn’t this a good example of a non-copyrighted dataset or a dataset that the owner would hold the copyright to and be able to use without concern? The car manufacturer is using cameras in the car itself and, I assume, telemetry from the drive to better learn what was going on. You might take a picture of something copyrighted, like an Amazon logo, but this would never have a chance of “reproducing” that copyrighted image. The model output is a car that doesn’t drive into the side of an Amazon delivery van, not a car that turns into an Amazon delivery van.

                                                    A model that you train to take images and decide what a car should do next can easily be turned into a model that generates images. All you need to do is take off the last parts that predict what the car should do, and use the intermediate results in an image-generation scheme, like, say, diffusion models. DALL-E 2’s internals, for example, were never intended for image generation; they had a model called CLIP and wrapped it in a diffusion model.

                                                    Really any model (OK, pretty much any model - there are rare edge cases) could easily be used to produce images. Of course, quality will vary, but there is nothing special about image-producing versus image-non-producing models.

                                                    So even if you own the camera, and you own the car, and you take your own pictures on a public road, if showing a model an image with copyrighted material affects the copyright status of that model, you’ve ruled out autonomous cars, Roombas, etc.
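
                                                    To make that concrete, here is a rough sketch of my own (not anyone’s production pipeline) of coaxing a plain image classifier into “generating”: optimize the input pixels so the score for one class goes up - the classic feature-visualization / DeepDream trick. The class index is arbitrary, chosen for illustration.

                                                    ```python
                                                    import torch
                                                    from torchvision import models

                                                    # A recognizer driven in reverse: adjust the *input* until the
                                                    # model's score for one class rises.
                                                    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

                                                    x = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from noise
                                                    opt = torch.optim.Adam([x], lr=0.05)

                                                    for _ in range(200):
                                                        opt.zero_grad()
                                                        score = model(x)[0, 207]     # logit for one ImageNet class
                                                        (-score).backward()          # ascend the class score
                                                        opt.step()

                                                    image = x.detach().clamp(-1, 1)  # a crude "generated" image
                                                    ```

                                                    The output is ugly without extra regularization, but it shows why the line between image-producing and image-non-producing models is blurry.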

                                                    1. 2

                                                      OK, so on the way to developing the tech to eliminate the jobs of truck drivers and cabbies, we got tech to eliminate the jobs of illustrators and copywriters. Got it.

                                                      edit brainfart

                                                  2. 1

                                                    It’s not hyperbolic. I mean it very literally. And I speak as an ML researcher.

                                                    There is no progress in AI and ML if we say that models cannot look at copyrighted materials. And there is no more progress on industrial applications of ML either.

                                                    It is, IMO, hyperbolic at best to insist that human progress depends on progress in AI and ML. Indeed, their effect on humanity so far has been largely negative¹. Immense amounts of human progress could be made simply by making intelligent and conscious use of existing technologies, rather than leaving it up to the god of the market.

                                                2. 1

                                                  I don’t disagree with any of the broad points you made, and even the ones I kinda do, it’s a meh disagreement, not worth the finger stress.

                                                  However. I see a pretty clear distinction between generative and non-generative models. Maybe it’s because I don’t know shit about AI, but it seems to me that this is a fairly binary distinction, and that the ethical concerns are mainly with generative models.

                                                  Quick aside: I guess you could raise the point that if your model to detect whatever, trained on my copyrighted photo of a whatever, is making you millions, maybe I should have a cut. But that is not the context in which moral concerns have been raised - not here, nor anywhere else I remember seeing lately.

                                                  Back to the point I actually wanted to make: what are the actual, palpable, life-saving/improving use cases of generative models? Cool, I can make stuff for my DnD game, I can prototype games, I can do weird art stuff. Yeah, nice - none of that is curing cancer. I value art, but if the full extent of the value this kind of model can bring is artistic, maybe it’s not worth throwing copyright away for the sake of it? And I don’t even like copyright that much.

                                                  1. 1

                                                    However. I see a pretty clear distinction between generative and non-generative models. Maybe it’s because I don’t know shit about AI, but it seems to me that this is a fairly binary distinction, and that the ethical concerns are mainly with generative models.

                                                    There is no such distinction. I can use (almost) any model to generate images. Of course quality will vary.

                                                    I understand why you feel like this distinction exists, because we often talk like it does, since we tune models for specific tasks. So people will say, X is used for generation and Y for recognition. But… that’s shorthand. There is nothing special about a generative model these days (there used to be, years ago).

                                                    1. 2

                                                      Hmm, interesting, I stand corrected.

                                                1. 18

                                                  I really don’t understand these things. A few of the online conferences during the pandemic had 3D things and they were vastly less efficient to navigate than a simple menu. I really liked GatherTown, but it explicitly gave a 2D top-down (8-bit Zelda-like) experience, which let me see a lot more of the environment than an immersive environment would. The great thing about virtual environments is that they’re not limited to the constraints of real space.

                                                  Jef Raskin wrote that games are, by design, bad UIs. The simplest UI for an interactive game is a button that you press and then win. The point of a game interface is to hide that from you and make you do things that are more difficult to accomplish your task. Any time someone designs a UI that looks like a game, there’s a good chance that I’m in for a bad experience (even with GatherTown, I’ve managed to get lost in the environment and not be able to find the room I’m supposed to go to, which wouldn’t happen with a simple hyperlinked list of meeting rooms).

                                                  1. 7

                                                    I have to agree (not having used these interfaces, though!). If people go to conferences, is trying to find the next room really what they want to replicate? Same with “3D offices” where avatars sit in meetings. Why would anyone want to replicate this experience?

                                                    In a few years we will see the “metaverse” (and other 3D envs) as the culmination of the low-interest-rate twenty-teens exuberance. Along with fintech and NFTs.

                                                    1. 4

                                                      In a few years we will see the “metaverse” (and other 3D envs) as the culmination of the low-interest-rate twenty-teens exuberance. Along with fintech and NFTs.

                                                      People have been playing MMORPGs and games like Minecraft for decades. World of Warcraft has been hugely popular and folks met lifelong friends and partners there. I think the ship has sailed on the 3D-env part. NFTs and fintech are not related to the post, but if you’re trying to be a cynical tech snarker, be my guest; that’s certainly not going away on the internet.

                                                      1. 2

                                                        I agree on games, I love games myself (but I don’t play MMORPGs). That’s daved_chisnall’s point too: 3D works well in games, but games != work for the most part. 3D in games is not going away.

                                                        I think Meta would be more successful marketing 3D to Facebook - where people hang out after work (unlike our cynical set, people love Facebook! it’s where their friends are) - but instead they needed to show “growth potential” and highlighted a dystopian 3D workplace. And the press dutifully reported it as “the future of work”. Just like they reported NFTs to be “the future of finance”.

                                                        I am not cynical by nature, but it is obvious a lot of the mainstream press has been hijacked by people who are very, very good at marketing bullshit.

                                                        1. 2

                                                          I think Meta would be more successful marketing 3D to Facebook - where people hang out after work (unlike our cynical set, people love Facebook! it’s where their friends are) - but instead they needed to show “growth potential” and highlighted a dystopian 3D workplace. And the press dutifully reported it as “the future of work”. Just like they reported NFTs to be “the future of finance”.

                                                          But this has nothing to do with Meta. This is Mozilla Hubs, a 3D room project designed to run in the browser. Mozilla started on the project before Facebook rebranded to Meta. The project is FOSS and unlike Meta’s product or VRChat, is completely usable in the browser, and works well without a VR headset, even on your smartphone!

                                                          I hate to ask, but did you go to the posted link? I really don’t see how criticisms of corporate marketing are relevant here unless you’re more interested in trying to make a point than reading the link. From what I’ve seen, most uses of Hubs have been for classroom experiences or social experiences, vanishingly little for work-related ones.

                                                          1. 1

                                                            I was replying about the use of 3D in conferences and work in general, and the difference between work and games. I agree discussing marketing is not on topic!

                                                    2. 6

                                                      Have you played something like Half-Life: Alyx? During one of my playthroughs, one of those spidery headcrabs of yore came swooping by. Instantly, and as if through sheer instinct, I grabbed it mid-flight and held it hanging by one of its legs. It looked seriously annoyed by the whole affair.

                                                      Swinging it around as if imitating the rotor blades of a helicopter worked just fine (albeit not with the desired woosh-woosh sound). Putting the crab inside a bucket, and putting the bucket upside down on the ground, made the crab-bucket crawl away. Experiences like that ‘sold’ VR as HCI for me. Nowhere in the process did I think of a ‘press G to grab’ or ‘F to pay respects’ style setup - “I” was the input, the ‘living data’ the interface.

                                                      One of the many demos I held here for poor unsuspecting chums was of Valve’s ‘The Lab’. It has this one part with a little robot dog running around being adorable. You could throw objects and it would scurry after them, return, and place them at your feet. Anyhow, for a lark someone kneeled down and tried to pet it. It rolled over and got some belly scratches. The person subsequently removed the HMD and snuck away for a crying session. Former dog owner.

                                                      Another chumette took a deep-sea dive via ‘The Deep’, where the scene of a whale skeleton slumbering on the sea floor transitioned into a starry underwater sky of glowing jellyfish. The person froze and shook in horror. Trypophobia apparently, who knew.

                                                      My point is that the right mix of these things can strike at something unguarded and primal; possibly also tap into cognition that sees deeper patterns in ongoing computing, for inferences previously unheard of. What Hubs is doing here has the potential of doing none of that. Excel’s famed ‘Hall of Tortured Souls’ meets VRML.

                                                      1. 6

                                                        For conferences I agree that an accessible top-down 2D design might be the way to go. But for groups of people just hanging around, expressing themselves, the extra degrees of freedom afforded by 3D VR spaces are invaluable. There is a reason people flock to VRChat: body language.

                                                        1. 2

                                                          Yeah, it’s fun to shoot the shit with people you know in VR. The ability to see in 3D or grab virtual objects didn’t wow me, but seeing someone talk and gesture in VRChat (and being able to do the same) blew my mind.

                                                          1. 1

                                                            I think this is especially true for groups of people who have become familiar with each other’s physical presence in other venues, be it work in an office, meet-ups, or past conferences. Hard to scale any experience to large groups but not every technology has to scale to large groups to be a tool worthy of our use.

                                                          2. 3

                                                            Jef Raskin wrote that games are, by design, bad UIs. The simplest UI for an interactive game is a button that you press and then win.

                                                            I wonder what he would think of things like Cookie Clicker…

                                                            1. 1

                                                              Or Progress Quest! http://progressquest.com/

                                                            2. 1

                                                              The great thing about virtual environments is that they’re not limited to the constraints of real space.

                                                              We just have different constraints instead. When I’m in a shared space working on things, I can often walk over and start chatting with a friend. Some of my favorite experiences playing games or working on projects with friends have been the ability to just casually start a conversation. Yeah, sometimes it meant that the project went nowhere and we went for beers, but that was a valuable, enjoyable experience. When I’m in a VC, there’s no such thing. I’m either broadcasting to the entire room or I’m not talking. Breakout rooms or sub-channels or whatever you want to call them just aren’t the same; you can’t form organic connections that way. On the other hand, I have fond memories of chatting with a random person (eventual friend) at a personal hackathon about LaTeX even though most of the rest of the group had never used LaTeX for much at all.

                                                              even with GatherTown, I’ve managed to get lost in the environment and not be able to find the room I’m supposed to go to, which wouldn’t happen with a simple hyperlinked list of meeting rooms

                                                              Folks in XR/Metaverse/3D spaces talk about offering “cues” in rooms/scenes to help folks congregate, so this is a known pain point. Humans spend their whole lives in physical spaces, and we have been creating physical spaces for almost our entire history, so we know how this works very well. In the metaverse, not so much. Also, this depends on the context. If efficiency is the goal, then sure, there’s no point getting lost. And perhaps when you’re working with someone for a large employer where your only point of union is that you are paid by the same large employer, then sure, you want to get your work done and go home to your family/friends, so you just want to get into a meeting room and get done with it. But if encouraging the serendipity of community is the goal, then getting lost in the environment is probably a bit more of a feature than a bug.

                                                              Some of this I suspect is a personality thing. Some people treat digital spaces as specific places where they want to get things done; they want to make some progress on some code they’ve written, get their finances in order, watch the video they’re searching for. Others perhaps want to simply “roam” digitally. These folks are going to be the ones roaming around in MMORPGs or Minecraft worlds.

                                                              Personally, I’ve found working in the fully remote era of COVID quite alienating. In the past I met friends, even partners, through coworkers at work. Now we see each other as talking heads or sources of audio, exchange some links, and get done with it. And having had a bout of COVID, I realize there are times when I want to be with friends of mine but travel is just not feasible. Chats and VCs are just not the same.

                                                              I might be in the minority though. And yeah if you’re the “My life is rich enough with just my close friends and family” type, then virtual socializing probably will never be for you.

                                                              1. 2

                                                                Some of my favorite experiences playing games or working on projects with friends have been the ability to just casually start a conversation

                                                                GatherTown, which I mentioned above, does this very well. As your avatar approaches someone, you hear their audio. As you get closer, you see their video. You can transition from this into a full video conferencing mode, or just have their video feed above.
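
                                                                A minimal sketch of the general idea (my guess at the mechanism, not GatherTown’s actual code): map avatar distance to audio volume, and gate video on a smaller radius.

                                                                    import kotlin.math.max

                                                                    // Hypothetical peer at some distance from my avatar, in grid units.
                                                                    data class Peer(val distance: Double)

                                                                    // Audio fades in linearly as you approach; silence beyond audioRadius.
                                                                    fun audioVolume(peer: Peer, audioRadius: Double = 10.0): Double =
                                                                        max(0.0, 1.0 - peer.distance / audioRadius)

                                                                    // Video only switches on once you are much closer.
                                                                    fun showVideo(peer: Peer, videoRadius: Double = 4.0): Boolean =
                                                                        peer.distance <= videoRadius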

                                                                1. 1

                                                                  Yup I’ve used GatherTown and I’m a fan! I did find the 2D-ness of the thing a bit disorienting, but for work conference events I really enjoy it. I attended a pandemic birthday party in GatherTown and I enjoyed it quite a bit also.

                                                            1. 4

                                                              Influenced by the above post, what are some good formally-specified minimal imperative languages that folks here know? Constraining an execution model for formal reasoning seems like it would scratch both the constrained programming itch and give you the safety to reason about your program in ways that just the human brain can’t. Are there examples of these?

                                                              1. 5

                                                                How about Scheme?

                                                                1. 2

                                                                  I know that Scheme can be written idiomatically quite imperatively and there are a lot of minimal scheme implementations. This wasn’t quite what I meant by “imperative language”, but still, are there formally verified subsets of Scheme? That would be quite cool.

                                                                  1. 2

                                                                    I’m curious in which ways Scheme doesn’t fit your idea of an “imperative language”?

                                                                    1. 3

                                                                      It fits in every way that matters, just wasn’t what I “expected” when I wrote the question. The reason being that I was envisioning a bit of mechanical sympathy with the code I wrote and the processor, but of course by doing that I fell into the same trap as so many others with these minimal languages. Mechanical sympathy is not the same as minimal.

                                                                      I’d be happy with a formal, minimal Scheme to play with. Bonus points if it tends to reduce to performant instructions, but a formal, minimal Scheme is plenty.

                                                                    2. 1

                                                                      Verified in what sense?

                                                                      1. 1

                                                                        Formally verifiable, meaning either:

                                                                        1. We can translate this minimal Scheme code into something an automated theorem-prover can work with, and let it rip.

                                                                        2. If 1 isn’t possible, then a Scheme that renders minimally to theorem prover clauses that an author has to prove on their own in the prover.

                                                                        I’m guessing we’re at a state where 2 is doable but 1 is not yet, but I haven’t paid as much attention here as I should, so I’m curious.

                                                                          1. 1

                                                                            This is awesome, thanks!

                                                                  2. 2

                                                                    Depending on how “simple” you want, there is Wasm.

                                                                  1. 14

                                                                    Terrible article. The author simply mashes together concepts while apparently having only a superficial understanding of any of them. The comparison of uxn to urbit is particularly hilarious, considering that they have totally different goals. Well, yes, both are virtual machines and that is where it ends.

                                                                    Simplicity and elegance have appeals beyond performance, like ease of understanding and implementation, like straightforward development tool design. Judging the performance of a VM that (from what I see) has never been intended to be high-speed based on some arbitrary micro benchmark also doesn’t really demonstrate a particularly thorough methodology (the pervasive use of “we” does make the article sound somewhat scientific, I do grant that…)

                                                                    I suggest the author invests some serious effort into studying C. Moore’s CPU designs, the true meaning of “simplicity”, and the fact that it can be very liberating to understand a software system inside out, and that not everybody has the same goals when it comes to envisioning the ideal piece of software. The article just criticizes, which is easy, but doesn’t present anything beyond that.

                                                                    1. 13

                                                                      Terrible article. The author simply mashes together concepts while apparently having only a superficial understanding of any of them.

                                                                      I suggest the author invests some serious effort into studying C. Moore’s CPU designs, the true meaning of “simplicity”, the fact that it can be very liberating to understand a software system inside out

                                                                      I don’t exactly agree with the author’s criticism of uxn (probably because I see it purely as a fun project, and not a serious endeavor), but let’s not descend into personal attacks please.

                                                                      1. 15

                                                                        Thanks.

                                                                        Now, with that out of the way - this is not at all personal; the author is simply misrepresenting or confused, because there are numerous claims that have no basis:

                                                                        It is claimed this assembler is like Forth, but it is not interactive, nor it have the ability to define new immediate words; calling and returning are explicit instructions. The uxntal language is merely an assembler for a stack machine.

                                                                        Must Forth be interactive? What sense do immediate words make in an assembler? Returning is an explicit instruction in Forth (EXIT, ;). That sentence suggests some wild claim has been made, but I can’t see where.

                                                                        Using software design techniques to reduce power usage, and to allow continued use of old computers is a good idea, but the uxn machine has quite the opposite effect, due to inefficient implementations and a poorly designed virtual machine, which does not lend itself to writing an efficient implementation easily.

                                                                        Again, I suggest studying Moore’s works, Koopman’s book and checking out the “Mill” to see that stacks can be very fast. The encoding scheme is possibly the simplest I’ve ever seen, the instruction set is fully orthogonal. A hardware implementation of the design would be orders of magnitude simpler than any other VM/CPU. Dynamic translation (which seems to be the author’s technique of choice) would be particularly straightforward. I see no poor design here.

                                                                        The uxn platform has been ported to other non-Unix-like systems, but it is still not self-hosting, which has been routinely ignored as a part of bootstrapping.

                                                                        This makes no sense. Why self-host a VM? “Routinely ignored”? What is he trying to say?

                                                                        After that the author discusses the performance of the uxn VM implementation, somehow assuming that is the only metric important enough to warrant an assessment of the quality of uxn (the “disaster”).

                                                                        Vectorisation is also out of the question, because there are no compilers for uxn code, let alone vectorising compilers.

                                                                        What does the author expect here?

                                                                        We can only conclude that the provided instruction sizes are arbitrary, and not optimised for performance or portability, yet they are not suitable for many applications either.

                                                                        (I assume data sizes are meant here, not instruction sizes, as the latter are totally uniform.) Uxn is an 8/16-bit CPU model and supports the same data sizes as any historical CPU with a similar word size. Again, I get the impression the author is just trying very hard to find things to complain about.

                                                                        Next the author goes to great lengths to evaluate uxn assembly as a high level programming tool, naturally finding numerous flaws in the untyped nature of assembly (surprise!).

                                                                        a performant implementation of uxn requires much of the complexity of modern optimising compilers.

                                                                        The same could be said about the JVM, I guess.

                                                                        To get to the end, I can only say this article is an overly strenuous attempt to find shortcomings, of whatever nature, mixing design issues, implementation details, the author’s ideas about VM implementation, and security topics, at one moment taking uxn as a VM design, then as a language, then as a compiler target, then as a particular VM implementation, then as a general computing platform.

                                                                        I like writing compilers, and I have written compilers that target uxn; it is as good a target as any other (small) CPU (in fact, it is much easier than, say, the 6502). Claiming that “the design of uxn makes it unsuitable for personal computing, be it on new or old hardware” is simply false, as I can say from personal experience. This article is pure rambling, especially the end, where sentences like the following make me doubt whether the author is capable of the required mental detachment to discuss technical issues:

                                                                        Minimalist computing is theoretically about “more with less”, but rather than being provided with “more”, we are instead being guilt-tripped and told that any “more” is sinful: that it is going to cause the collapse of civilisation, that it is going to ruin the environment, that it increases the labour required by programmers, and so on. Yet it is precisely those minimalist devices which are committing these sins right now; the hypocritical Church of Minimalism calls us the sinners, while it harbours its most sinful priests, and gives them a promotion every so often.

                                                                        No, brother, they are not out there to get you. They just want simple systems, that’s all. Relax.

                                                                        As O’Keefe said about Prolog: “Elegance is not optional”. This also applies to CPU and VM design. You can write an uxn assembler in 20 lines of Forth. There you have a direct proof that simplicity and elegance have engineering implications in terms of maintenance, understandability and (a certain measure of) performance.

                                                                        1. 16

                                                                          I agree with you in the sense that doing something for fun is obviously allowed, but I feel like the criticism in the article is not that you shouldn’t build anything simple and minimalist for fun, but that the things we build are usually not as revolutionary as some may claim just because they’re simple. Now, if the author of uxn made no such claims then that’s fine; however, that doesn’t mean something cannot be criticized for its perceived flaws (whether you agree with the style and tone of the criticism or not).

                                                                          I also agree that the Church of Minimalism stuff is a bit over-the-top.

                                                                          1. 6

                                                                            FWIW I had exactly the same reaction to this article as you, and I haven’t even heard of any of these projects. The article seems like it is in bad faith.

                                                                            Minimalist computing is theoretically about “more with less”, but rather than being provided with “more”, we are instead being guilt-tripped and told that any “more” is sinful: that it is going to cause the collapse of civilisation, that it is going to ruin the environment, that it increases the labour required by programmers, and so on. Yet it is precisely those minimalist devices which are committing these sins right now; the hypocritical Church of Minimalism calls us the sinners, while it harbours its most sinful priests, and gives them a promotion every so often.

                                                                            This part in particular is so hyperbolic as to be absurd. Completely unnecessary. Still, I guess if your goal is to garner attention, hyperbole sells.

                                                                            1. 12

                                                                              This part in particular is so hyperbolic as to be absurd. Completely unnecessary. Still, I guess if your goal is to garner attention, hyperbole sells.

                                                                              I wouldn’t say so. I’ve had folks tell me on tech news aggregators that the only way to make computing ethical is for computing to be reimplemented on uxn stacks so that we can all understand our code, or else the code we use can be used for exploitation. Now this may not be the actual uxn project’s stance on the matter at all, but much like Rust seems to have a bit of a reputation of really pushy fans, I think it’s fair to say that uxn has attracted a fanbase that often pushes this narrative of “sinful computing”.

                                                                              1. 2

                                                                                Oh interesting, do you have any links? I’m intrigued by this insanity.

                                                                                Edit: though presumably this is a vocal minority, making this still quite a hyperbolic statement.

                                                                                1. 3

                                                                                  I did a light search and found nothing off-hand. I’ll DM you if I manage to find this since I don’t like naming and shaming in public.

                                                                                  Edit: And yeah I’m not saying this has been my experience with a majority at all. The folks I’ve heard talk about uxn have been mixed with most having fun with the architecture the same way folks seem to like writing PICO-8. It just has some… pushy folks involved also.

                                                                                  1. 7

                                                                                    I believe the only way to make computing ethical is to reinvent computing to do more with less. I also believe uxn is trying to reinvent computing (for a very specific use case) to do more with less. But those two statements still don’t add up to any claim that it’s the only way out, or even that it’s been shown to work in broader use cases.

                                                                                    Disclaimer: I’ve also tried to reinvent computing to do more with less. So I have a knife in this fight.

                                                                              2. 4

                                                                                Actually, we regularly get posts here on lobste.rs espousing exactly that sort of ideology. I think there are one or two trending right now. Perhaps you’ve hit on the right set of tag filters so you never see them?

                                                                            2. 2

                                                                              “C. Moore…” as in Chuck Moore…

                                                                              1. 1

                                                                                The paste was cut off. Fixed.

                                                                            3. 13

                                                                              The comparison of uxn to urbit is particularly hilarious, considering that they have totally different goals.

                                                                              they both market themselves as “clean-slate computing stacks”, they both begin with a basic admission that no new OS will ever exist (so you have to host your OS on something else), they both are supported by a cult of personality, they both are obsessed with ‘simplicity’ to the point of losing pragmatic use and speed. I’d say they’re pretty similar!

                                                                              1. 3

                                                                                they both are supported by a cult of personality

                                                                                Strong disagree, from someone who’s been moderately involved with uxn community previously. Who is the cult leader in this scenario?

                                                                            1. 14

                                                                              What surprised me about Tainter’s analysis (and I haven’t read his entire book yet) is that he sees complexity as a method by which societies gain efficiency. This is very different from the way software developers talk about complexity (as ‘bloat’, ‘baggage’, ‘legacy’, ‘complication’), and made his perspective seem particularly fresh.

                                                                              1. 31

                                                                                I don’t mean to sound dismissive – Tainter’s works are very well documented, and he makes a lot of valid points – but it’s worth keeping in mind that grand models of history have made for extremely attractive pop history books, but really poor explanations of historical phenomena. Tainter’s Collapse of Complex Societies, while obviously based on a completely different theory (and one with far less odious consequences in the real world) is based on the same kind of scientific thinking that brought us dialectical materialism.

                                                                                His explanation of the evolution and the eventual fall of the Roman Empire makes a number of valid points about the Empire’s economy and about some of the economic interests behind the Empire’s expansion, no doubt. However, explaining even the expansion – let alone the fall! – of the Roman Empire strictly in terms of energy requirements is about as correct as explaining it in terms of class struggle.

                                                                                Yes, some particular military expeditions were specifically motivated by the desire to get more grain or more cows. But many weren’t – in fact, some of the greatest Roman wars, like (some of) the Roman-Parthian wars, were not driven specifically by Roman desire to get more grains or cows. Furthermore, periods of rampant, unsustainably rising military and infrastructure upkeep costs were not associated only with expansionism, but also with mounting outside pressure (ironically, sometimes because the energy per capita on the other side of the Roman border really sucked, and the Huns made it worse on everyone). The increase of cost and decrease in efficiency, too, are not a matter of half-rational historical determinism – they had economic as well as cultural and social causes that rationalising things in terms of energy not only misses, but distorts to the point of uselessness. The breakup of the Empire was itself a very complex social, cultural and military story which is really not something that can be described simply in terms of the dissolution of a central authority.

                                                                                That’s also where this mismatch between “bloat” and “features” originates. Describing program features simply in terms of complexity is a very reductionist model, which accounts only for the difficulty of writing and maintaining it, not for its usefulness, nor for the commercial environment in which it operates and the underlying market forces. Things are a lot more nuanced than “complexity = good at first, then bad”: critical features gradually become unneeded (see Xterm’s many emulation modes, for example), markets develop in different ways and company interests align with them differently (see Microsoft’s transition from selling operating systems and office programs to renting cloud servers) and so on.

                                                                                1. 6

                                                                                  However, explaining even the expansion – let alone the fall! – of the Roman Empire strictly in terms of energy requirements is about as correct as explaining it in terms of class struggle.

                                                                                  Of course. I’m long past the age where I expect anyone to come up with a single, snappy explanation for hundreds of years of human history.

                                                                                  But all models are wrong, only some are useful. Especially in our practice, where we often feel overwhelmed by complexity despite everyone’s best efforts, I think it’s useful to have a theory about the origins and causes of complexity, even if only for emotional comfort.

                                                                                  1. 6

                                                                                    Especially in our practice, where we often feel overwhelmed by complexity despite everyone’s best efforts, I think it’s useful to have a theory about the origins and causes of complexity, even if only for emotional comfort.

                                                                                    Indeed! The issue I take with “grand models” like Tainter’s and the way they are applied in grand works like Collapse of Complex Societies is that they are ambitiously applied to long, grand processes across the globe without an exploration of the limits (and assumptions) of the model.

                                                                                    To draw an analogy with our field: IMHO the Collapse of… is a bit like taking Turing’s machine as a model and applying it to reason about modern computers, without noting the differences between modern computers and Turing machines. If you cling to it hard enough, you can hand-wave every observed performance bottleneck in terms of the inherent inefficiency of a computer reading instructions off a paper tape, even though what’s actually happening is cache misses and hard drives getting thrashed by swapping. We don’t fall into this fallacy because we understand the limits of Turing’s model – in fact, Turing himself explicitly mentioned many (most?) of them, even though he had very little prior art in terms of alternative implementations, and explicitly formulated his model to apply only to some specific aspects of computation.

                                                                                    Like many scholars at the intersections of economics and history in his generation, Tainter doesn’t explore the limits of his model too much. He came up with a model that explains society-level processes in terms of energy output per capita and upkeep cost and, without noting where these processes are indeed determined solely (or primarily) by energy output per capita and upkeep cost, he proceeded to apply it to pretty much all of history. If you cling to this model hard enough you can obviously explain anything with it – the model is explicitly universal – even things that have nothing to do with energy output per capita or upkeep cost.

                                                                                    In this regard (and I’m parroting Walter Benjamin’s take on historical materialism here) these models are quasi-religious and are very much like a mechanical Turk. From the outside they look like history masterfully explaining things, but if you peek inside, you’ll find our good ol’ friend theology, staunchly applying dogma (in this case, the universal laws of complexity, energy output per capita and upkeep cost) to any problem you throw its way.

                                                                                    Without an explicit understanding of their limits, even mathematical models in exact sciences are largely useless – in fact, a big part of early design work is figuring out what models apply. Descriptive models in humanistic disciplines are no exception. If you put your mind to it, you can probably explain every Cold War decision in terms of Vedic ethics or the I Ching, but that’s largely a testament to one’s creativity, not to their usefulness.

                                                                                  2. 4

                                                                                    Furthermore, periods of rampant, unsustainably rising military and infrastructure upkeep costs were not associated only with expansionism, but also with mounting outside pressure (ironically, sometimes because the energy per capita on the other side of the Roman border really sucked, and the Huns made it worse on everyone).

                                                                                    Not to mention all the periods of rampant rising military costs due to civil war. Those aren’t wars about getting more energy!

                                                                                    1. 1

                                                                                      Tainter’s Collapse of Complex Societies, while obviously based on a completely different theory (and one with far less odious consequences in the real world) is based on the same kind of scientific thinking that brought us dialectical materialism.

                                                                                      Sure. This is all about a framing of events that happened; it’s not predictive, as much as it is thought-provoking.

                                                                                      1. 7

                                                                                        Thought-provoking, grand philosophy was certainly always a part of the discipline, but it became especially popular during the Industrial Era (some argue that it was Francis Bacon who really brought forth the idea of predicting progress), with the rise of what is known as the modernist movement. Modernist theories often differed but frequently shared a few characteristics, such as grand narratives of history and progress, definite ideas of the self, a strong belief in progress, a belief that order was superior to chaos, and often structuralist philosophies. Modernism had a strong belief that everything could be measured, modeled, categorized, and predicted. It was an understandable byproduct of a society rigorously analyzing its surroundings for the first time.

                                                                                        Modernism flourished in a lot of fields in the late 19th and early 20th century. This was the era that brought political philosophies like the Great Society in the US, the US New Deal, the eugenics movement, biological determinism, the League of Nations, and other grand social and political engineering ideas. It was embodied in the Newtonian physics of the day and was even used to explain social order in colonizing imperialist nation-states. Marx’s dialectical materialism, built on Hegel’s dialectic, was steeped in this modernist tradition.

                                                                                        In the late 20th century, modernism fell into a crisis. Theories of progress weren’t bearing fruit. Grand visions of the future, such as Marx’s dialectical materialism, diverged significantly from actual lived history and frequently resulted in a multitude of horrors. This experience was repeated by eugenics, social determinism, and fascist movements. Planck and Einstein challenged the neat Newtonian order that had previously been conceived. Gödel’s Incompleteness Theorem showed us that there are statements whose validity we cannot evaluate. Moreover, many social sciences that bought into modernist ideas, like anthropology, history, and urban planning, were having trouble making progress that agreed with the grand modernist ideas that guided their work. Science was running into walls as to what was measurable and what wasn’t. It was in this crisis that postmodernism was born, when philosophers began challenging everything from whether progress and order were actually good things to whether humans could ever come to mutual understanding at all.

                                                                                        Since then, philosophy has mostly abandoned the concept of modeling and left that to science. While grand, evocative theories are having a bit of a renaissance in the public right now, philosophers continue to be “stuck in the hole of postmodernism.” Philosophers have raised central questions about morality, truth, and knowledge that have to be answered before large, modernist philosophies gain hold again.

                                                                                        1. 3

                                                                                          I don’t understand this, because my training has been to consider models (simplified ways of understanding the world) as only having any worth if they are predictive and testable, i.e. they allow us to predict how the whole works and what it does based on movements of the pieces.

                                                                                          1. 4

                                                                                            You’re not thinking like a philosopher ;-)

                                                                                            1. 8

                                                                                              Models with predictive values in history (among other similar fields of study, including, say, cultural anthropology) were very fashionable at one point. I’ve only mentioned dialectical materialism because it’s now practically universally recognized to have been not just a failure, but a really atrocious one, so it makes for a good insult, and it shares the same fallacy with energy economic models, so it’s a doubly good jab. But there was a time, as recent as the first half of the twentieth century, when people really thought they could discern “laws of history” and use them to predict the future to some degree.

                                                                                              Unfortunately, this has proven to be, at best, beyond the limits of human understanding and comprehension. This is especially difficult to do in the study of history, where sources are imperfect and have often been lost (case in point: there are countless books we know the Romans wrote because they’re mentioned or quoted by ancient authors, but we no longer have them). Our understanding of these things can change drastically with the discovery of new sources. The history of religion provides a good example, in the form of our understanding of Gnosticism, which was forever altered by the discovery of the Nag Hammadi library, to the point where many works published prior to this discovery and the dissemination of its text are barely of historical interest now.

                                                                                              That’s not to say that developing a theory of various historical phenomena is useless, though. Even historical materialism, misguided as it was (especially in its more politicized formulations), was not without value. It forced an entire generation of historians to think more about things that they never really thought about before. It is certainly incorrect to explain everything in terms of class struggle, competition for resources and the means of production, and the steady march from primitive communism to the communist mode of production – but it is also true that competition for resources and the means of production were involved in some events and processes, and nobody gave much thought to that before the disciples of Marx and Engels.

                                                                                              This is true here as well (although I should add that, unlike most materialistic historians, Tainter is most certainly not an idiot, not a war criminal, and not high on anything – I think his works display an unhealthy attachment to historical determinism, but he most certainly doesn’t belong in the same gallery as Lenin and Mao). His model is reductionist to the point where you can readily apply much of the criticism of historical materialism to it as well (which is true of a lot of economic models if we’re being honest…). But it forced people to think of things in a new way. Energy economics is not something that you’re tempted to think about when considering pre-industrial societies, for example.

                                                                                              These models don’t really have predictive value and they probably can’t ever gain one. But they do have an exploratory value. They may not be able to tell you what will happen tomorrow, but they can help you think about what’s happening today in more ways than one, from more angles, and considering more factors, and possibly understand it better.

                                                                                              1. 4

                                                                                                That’s something historians don’t do anymore. There was a period where people tried to predict the future development of history, and then the whole discipline gave up. It’s a bit like what we are witnessing in the Economics field: there are strong calls to stop attributing predictive value to macroeconomic models because after a certain scale, they are just over-fitting to existing patterns, and they fail miserably after a few years.

                                                                                                1. 1

                                                                                                  Well, history is not math, right? It’s a way of writing a story backed by a certain amount of evidence. You can use a historical model to make predictions, sure, but the act of prediction itself causes changes.

                                                                                            2. 13

                                                                                              (OP here.) I totally agree, and this is something I didn’t explore in my essay. Tainter doesn’t see complexity as always a problem: at first, it brings benefits! That’s why people do it. But there are diminishing returns and maintenance costs that start to outstrip the marginal benefits.

                                                                                              Maybe one way this could apply to software: imagine I have a simple system, just a stateless input/output. I can add a caching layer in front, which could win a huge performance improvement. But now I have to think about cache invalidation, cache size, cache expiry, etc. Suddenly there are a lot more moving parts to understand and maintain in the future. And the next performance improvement will probably not be anywhere near as big, but it will require more work because you have to understand the existing system first.
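
                                                                                              Here’s a minimal sketch of that trade-off (in Kotlin, with invented names): the cache speeds up repeat lookups, but expiry and eviction are new state that someone now has to understand and maintain.

                                                                                                  import java.time.Instant

                                                                                                  // The original system: a stateless transform, trivially easy to reason about.
                                                                                                  fun render(input: String): String = input.uppercase()

                                                                                                  // The "one more layer": a tiny TTL cache with crude FIFO eviction.
                                                                                                  class TtlCache<K, V>(
                                                                                                      private val ttlSeconds: Long,
                                                                                                      private val maxEntries: Int,
                                                                                                      private val compute: (K) -> V,
                                                                                                  ) {
                                                                                                      private data class Entry<V>(val value: V, val expiresAt: Instant)
                                                                                                      private val store = LinkedHashMap<K, Entry<V>>()

                                                                                                      fun get(key: K): V {
                                                                                                          val now = Instant.now()
                                                                                                          store[key]?.let { if (it.expiresAt.isAfter(now)) return it.value }
                                                                                                          if (store.size >= maxEntries) store.remove(store.keys.first()) // eviction: new moving part
                                                                                                          val value = compute(key)
                                                                                                          store[key] = Entry(value, now.plusSeconds(ttlSeconds))         // expiry: another one
                                                                                                          return value
                                                                                                      }
                                                                                                  }

                                                                                                  val cachedRender = TtlCache(ttlSeconds = 60L, maxEntries = 1024, compute = ::render)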

                                                                                              1. 2

                                                                                                I’m not sure it’s so different.

                                                                                                A time-saving or critically important feature for me may be a “bloated” waste of bits for somebody else.

                                                                                                1. 3

                                                                                                  In Tainter’s view, a society of subsistence farmers, where everyone grows their own crops, makes their own tools, teaches their own children, etc. is not very complex. Add a blacksmith (division of labour) to that society, and you gain efficiency, but introduce complexity.

                                                                                              1. 6

                                                                                                Related article that explains some of the concepts, etc in HTTP/3 - https://www.smashingmagazine.com/2021/08/http3-core-concepts-part1/

                                                                                                1. 1

                                                                                                  My key takeaway from this is that TCP+TLS is still faster for high throughput (without flaky connections), and HTTP/3’s optimizations are only relevant if you need to help people with very unstable connections; the actual results can vary a lot. For my basic nginx reverse-proxy setup it’s kind of irrelevant, and I’m hesitant to open UDP ports for it. If Debian ships nginx with HTTP/3 I’ll probably enable it; until then it seems to perform not that great in nginx and Apache.

                                                                                                  1. 2

                                                                                                    It’s a bit more subtle than that, though regardless, if you’re not interested in being on the bleeding edge of the space, I too would wait until nginx or Apache enable HTTP/3.

                                                                                                    A couple points in no general order:

                                                                                                    • A lot of the overhead for small web pages or AJAX requests, especially from TCP+TLS, is the 3 round trips necessary to establish the TLS stream. Assuming a conservative TCP packet size of ~1280 bytes (a conservative MTU of 1320 bytes, resulting in a TCP MSS of 1280 bytes), an HTTP request and response pair for a small blog post can easily fit in 2-3 packets (1 packet for the request and 1-2 packets for the response), and an AJAX request/response is usually 2 packets. This means the entire HTTP interaction over plain TCP for the AJAX request would cost 1.5 RTT (for TCP establishment) + 2 RTT = 3.5 RTT, while TCP+TLS costs 3 RTT (for TCP+TLS establishment) + 2 RTT = 5 RTT. This is ~42% overhead just for TLS establishment (see the worked arithmetic after this list). If page weight is high, though (or requests are being pipelined), the relative overhead of connection establishment decreases. TCP Fast Open and TLS False Start can get this down to 1 RTT connection establishment. TLS 1.3 has support for 0-RTT connection establishment, but this is tricky. Default QUIC connection establishment is 1.5 RTT, just like regular TCP, and there are 0-RTT modes available for QUIC.

                                                                                                    • “Flaky” connections can be more common than you think. The internet is mostly designed around maximizing throughput, and near after-work or after-school hours you’re going to see congestion on lots of routers as everyone starts using bandwidth-intensive multimedia services. Moreover, if you’re ever on cafe/airport Wi-Fi, building free Wi-Fi, or just far from an AP, you’ll be hit with flakiness and dropped packets. QUIC could increase “reliability” in these situations dramatically.

                                                                                                    • Multimedia is especially impacted by HoL blocking. Dropping a packet or two when streaming a video is fine for stream quality, but under TCP it can cause the stream to stutter and stop while your connection waits for the blocked packet to be retransmitted and ACKed. Moreover, if an ACK isn’t received, packets will be resent, adding delays and congesting the network further, leading to a negative spiral. This is one common answer to “Why is Netflix slow after work?”, and fixing it can improve experiences broadly.

                                                                                                    • QUIC supports using a connection ID to maintain a persistent connection even when IP endpoints change. This means that if you walk from one part of a building to another with a different WiFi SSID, you come back from elsewhere and plug into your desk’s Ethernet, or a NAT mapping changes silently for you that your existing connections will stay established instead of dropping and all reconnecting.
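
                                                                                                    To make the round-trip arithmetic in the first bullet concrete, here’s the back-of-the-envelope version (just re-deriving the numbers above, not a benchmark):

                                                                                                        fun main() {
                                                                                                            val exchange = 2.0          // request + response for a small page: 2 RTT
                                                                                                            val tcp = 1.5 + exchange    // TCP handshake, then HTTP:    3.5 RTT
                                                                                                            val tcpTls = 3.0 + exchange // TCP + TLS handshakes:        5.0 RTT
                                                                                                            val quic = 1.5 + exchange   // default QUIC establishment:  3.5 RTT
                                                                                                            val overheadPct = (tcpTls - tcp) / tcp * 100 // (5.0 - 3.5) / 3.5 ≈ 42.9%
                                                                                                            println("TCP: $tcp RTT, TCP+TLS: $tcpTls RTT, QUIC: $quic RTT")
                                                                                                            println("TLS establishment overhead: ${"%.1f".format(overheadPct)}%")
                                                                                                        }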

                                                                                                    There’s other stuff too, but the above points are some examples of the fat that can be trimmed on the net by moving to HTTP/3. Though personally I’m more excited by being able to use QUIC for non-HTTP traffic, and even using QUIC through p2p-webtransport so we can send/receive non-HTTP traffic directly from the browser. Happy to talk more about this stuff as I’m super excited for QUIC.

                                                                                                    1. 1

                                                                                                      I’ve actually read all 3 articles. Still, it seems like a lot of overhead for diminishing returns, for now. I think the biggest change is that we can replace parts and iterate on the protocol much faster now (by choosing the only other possibly non-blocked protocol, UDP). I fear for DDoS resistance when looking at the overhead all the new compression, first-packet optimization, and ID re-use add on top (while actually storing multiple IDs for changing them on interface/ISP change, so more stuff to keep in memory).

                                                                                                      1. 1

                                                                                                        I think the biggest change is that we can replace parts and iterate on the protocol much faster now.

                                                                                                        By having HTTP go over QUIC, QUIC gets to essentially play chicken with ossified middleboxes. “Support this or web traffic won’t work.” But because QUIC is so general-purpose, we can also push other traffic over it. It’s exciting to think that we can send arbitrary traffic over what looks like regular traffic (though folks do that today over TLS sockets on port 443.)

                                                                                                        I fear for the DDoS resistance when looking at some of the overhead all the new compression, first-packet optimization and ID re-use adds on top (while actually storing multiple IDs for changing them on interface / ISP change, so more stuff to store in memory)

                                                                                                        I’m hopeful that connection IDs offer a new way to throttle/block for DDoS also but yeah it’s something to keep in mind as HTTP/3 rolls out.

                                                                                                1. 2

                                                                                                  QPACK uses separate unidirectional streams to modify and track field table state, while encoded field sections refer to the state of the table without modifying it.

                                                                                                  I’m gonna need to see this before I fully understand it.

                                                                                                  1. 2

                                                                                                    QPACK is defined in RFC9204. It uses two unidirectional QUIC streams, an encoder->decoder stream and a decoder->encoder stream. The gory details are in the RFC, and it seemed relatively straightforward to me. This page has a bunch of QUIC and HTTP/3 implementations along with some pure QPACK implementations if you’re curious.

                                                                                                    1. 2

                                                                                                      Wow this is so cool, I really appreciate you giving me this level of information. You’re very kind to do so!

                                                                                                  1. 16

                                                                                                    I think the reason it’s primarily “grumpy old developers” (and I count myself amongst that crowd) complaining about software bloat is that we were there 20 years ago, so we have the benefit of perspective. We know what was possible with the limited hardware available at the time, and it doesn’t put today’s software in a very flattering light.

                                                                                                    The other day I was editing a document in Pages and it made my MacBook Pro slow down to a crawl. To be fair, my machine isn’t exactly new, but as far as I can tell Pages isn’t doing anything that MS Word 2000 wasn’t doing 20 years ago without straining my 200 MHz Pentium. Sure, Pages renders documents in HD, but does that really require 30 times the processing power?

                                                                                                    1. 14

                                                                                                      This might be selective memory of the good old days. I was in high school when Office 97 came out, and I vaguely remember one of my classmates complaining about it being sluggish.

                                                                                                      1. 7

                                                                                                        I think there’s A LOT of this going around. I used Office 97 in high school and it was dog shit slow (tick tick tick goes the hard disk)! Yes, the school could have sprung for $2,500 desktops instead of $1,500 desktops (or whatever things cost back then) but, adjusted for inflation, a high-end laptop today costs what a low-end laptop cost in 1995. So we’re also comparing prevailing hardware.

                                                                                                        1. 2

                                                                                                          Should’ve gone for the Pentium II with MMX

                                                                                                        2. 10

                                                                                                          Word processing programs were among the pioneers of the “screenshot your state and paint it on re-opening” trick to hide how slow they actually were at reaching the point where the user could interact with the app. I can’t remember a time when they were treated as examples of good computing-resource citizens, and my memory stretches back a good way — I was using various office-y tools on school computers in the late 90s, for example.

                                                                                                          Modern apps also really are generally doing more; it’s not like they stood still, feature-wise, for two decades. Lots of things have “AI” running in the background to offer suggestions and autocompletions and offer to convert to specific document templates based on what they detect you writing; they have cloud backup and live collaborative editing; they have all sorts of features that, yes, consume resources. And that some significant number of people rely on, so cutting them out and going back to something which only has the feature set of Word 97 isn’t really an option.

                                                                                                          1. 5

                                                                                                      When a friend of mine showed me YouTube, before the Google acquisition, on the high-school library computers, I told him, “Nobody will ever use this; it uses Macromedia Flash in the browser, and Flash in the browser is incredibly slow, so nobody will be able to run it. Why don’t we just let users download the videos from an FTP server?” I ate those words hard. “Grumpy old developers” complain about software bloat because they’re always looking at the inside, never the outside. When thinking about YouTube, I too was looking at the inside. But fundamentally people use software not for the sake of software but for the sake of deriving value from it.

                                                                                                            In other words, “domain expert is horrified at the state of their own domain. News at 11.”

                                                                                                          1. 10

                                                                                                            My kingdom for a garbage-collected, green-threaded rust.

                                                                                                            It feels impossible to write quality, correct software without sum types (especially Maybe and Result).

                                                                                                            1. 4

                                                                                                              Would Kotlin be close to what you want? It doesn’t have sum types natively, but people seem to implement them easily using sealed classes.
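
                                                                                                              For instance, here’s a minimal, hedged sketch of the sealed-class encoding (the Result/Ok/Err names are mine, not from any standard library):

                                                                                                                // A Result-like sum type as a sealed hierarchy; an exhaustive
                                                                                                                // `when` then plays the role of pattern matching.
                                                                                                                sealed class Result<out T, out E>
                                                                                                                data class Ok<T>(val value: T) : Result<T, Nothing>()
                                                                                                                data class Err<E>(val error: E) : Result<Nothing, E>()

                                                                                                                fun parsePort(s: String): Result<Int, String> {
                                                                                                                    val n = s.toIntOrNull() ?: return Err("not a number: $s")
                                                                                                                    return if (n in 1..65535) Ok(n) else Err("out of range: $n")
                                                                                                                }

                                                                                                                fun main() {
                                                                                                                    when (val r = parsePort("8080")) {  // compiler enforces exhaustiveness
                                                                                                                        is Ok -> println("port = ${r.value}")
                                                                                                                        is Err -> println("error: ${r.error}")
                                                                                                                    }
                                                                                                                }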

                                                                                                              1. 3

                                                                                                                I like a lot of Kotlin! Sealed classes do the job for me, and it’s the only language to have done async correctly, IMO.

                                                                                                                But it’s still full of exception pitfalls, and their Result doesn’t really protect you from how hard it is to reason about it all.

                                                                                                                That, and the toolchain is astronomically hard to use.
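
                                                                                                                For what it’s worth, here’s a minimal sketch of the structured-concurrency style I mean (assuming the kotlinx.coroutines library; the functions are illustrative):

                                                                                                                  import kotlinx.coroutines.*

                                                                                                                  // Suspend functions read like blocking code but suspend
                                                                                                                  // instead of blocking the thread.
                                                                                                                  suspend fun fetchUser(id: Int): String {
                                                                                                                      delay(100)  // stands in for real async IO
                                                                                                                      return "user-$id"
                                                                                                                  }

                                                                                                                  fun main() = runBlocking {
                                                                                                                      // Both children are scoped to this block, so cancellation
                                                                                                                      // and errors propagate automatically.
                                                                                                                      val a = async { fetchUser(1) }
                                                                                                                      val b = async { fetchUser(2) }
                                                                                                                      println("${a.await()} ${b.await()}")
                                                                                                                  }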

                                                                                                                1. 3

                                                                                                                  I find Kotlin’s attempt at null-safety in my Android programming to be useless 90% of the time. Or you at least have to double-check all the time, because something somewhere could’ve changed it between your null check and the actual usage. And sprinkling ?.run{} ?: everywhere is really annoying. Oh, and then there are these problems with deserializing and getting null values or 0s as defaults for missing ints. So you have to declare everything as ?.

                                                                                                                  1. 1

                                                                                                                    With the caveat that I’ve never touched Android, just server-side Kotlin, in my experience some of the things you’re talking about are a symptom of using mutable data structures. I’ve found Kotlin pretty good at remembering when I’ve already done a null check on a val property.

                                                                                                                    But maybe the data structures in question aren’t ones you control, in which case yeah, it’s a pain.
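
                                                                                                                    A minimal sketch of the difference, with made-up class names: the compiler will smart-cast a val property after a null check, but not a var, which could have been mutated between the check and the use:

                                                                                                                      class WithVar(var name: String?)
                                                                                                                      class WithVal(val name: String?)

                                                                                                                      fun demo(a: WithVar, b: WithVal) {
                                                                                                                          if (a.name != null) {
                                                                                                                              // println(a.name.length)  // error: smart cast to 'String' is
                                                                                                                              //                         // impossible; 'a.name' is mutable
                                                                                                                              println(a.name?.length)    // must re-check with ?. instead
                                                                                                                          }
                                                                                                                          if (b.name != null) {
                                                                                                                              println(b.name.length)     // ok: a val can't have changed
                                                                                                                          }
                                                                                                                      }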

                                                                                                                    1. 1

                                                                                                                      Yeah, I don’t actually have the choice to define them as immutable. For the deserialization that could be something to apply, but then I’d have to duplicate my data classes (I use them also for the stuff I send back, to store everything in one object which the UI then changes but uses as the source of truth).

                                                                                                              2. 3

                                                                                                                    In addition to other suggestions, what about OCaml, which Rust gets a lot of its DNA from?

                                                                                                                1. 2

                                                                                                                      I’m waiting until Multicore lands to check it out. I’ve heard very few reviews of its toolchain; is it as nice to use as cargo?

                                                                                                                  1. 3

                                                                                                                        These days it’s quite nice, but there’s been lots of churn in the past. It’s true that Multicore is probably what’s keeping a lot of people away from OCaml for now.

                                                                                                                2. 2

                                                                                                                      Nim?

                                                                                                                  1. 2

                                                                                                                    Swift has Result and Maybe (Optional) without the borrow checker. It also has structured concurrency.

                                                                                                                    1. 3

                                                                                                                      My impression is that Swift is still highly bound to the Apple ecosystem, despite being open source. Is there a vibrant ecosystem for things like backend development?

                                                                                                                      1. 2

                                                                                                                        I think it’s about as vibrant as Kotlin’s or Rust’s. The Swift server-side workgroup does a lot of work. https://github.com/swift-server/guides

                                                                                                                        I know that one of their current issues is providing something equivalent to rustup for a plug-and-play tool to start building backend Swift apps.

                                                                                                                        1. 1

                                                                                                                              Swift is in the OCaml family anyway, like Rust is, and doesn’t have a GC. OCaml or GHC Haskell may be closest to what you’re looking for.

                                                                                                                          1. 1

                                                                                                                            That is true, but Swift doesn’t really need a GC since it has ARC instead.

                                                                                                                      2. 2

                                                                                                                            You can use a GC with Rust if you want, but the whole point is that you shouldn’t have to, since the compiler is smart enough to do most of it at compile time.

                                                                                                                            As for green threads… Rust actually has async syntax right in the language now, so you don’t even need to “pick a syntax”; there is an official one.

                                                                                                                        1. 1

                                                                                                                          There is no real GC for rust. Even if there’s a crate providing one, there is no part of the ecosystem that will work ergonomically with it.

                                                                                                                              The compiler’s “smarts” do not remove the additional cognitive load. It’s nice for lower-level software; Rust is an excellent replacement for C++. But I want something higher-level.

                                                                                                                              I’m well aware of async. And this article highlights all the issues with it. Both the syntax and the semantics suffer without native green-thread support, à la golang.

                                                                                                                          1. 1

                                                                                                                                …async and Go-style green threads are literally the same thing. This does not compute for me.

                                                                                                                            1. 2

                                                                                                                              I’ve always seen “async” used as event loop programming and “threads” (whether green or kernel level) to be, well, threads. The distinction makes perfect sense in most contexts I’ve used it/seen it used.

                                                                                                                              1. 1

                                                                                                                                    I think I’m using the green-thread terminology wrong. What I really mean is having one way of doing IO, without different syntax or semantics. In Go, all blocking IO looks like async IO.

                                                                                                                                1. 1

                                                                                                                                  Oh, yes, I can see what you mean now. Haskell and Ruby 3 have this too, I agree it’s very nice.

                                                                                                                        1. 6

                                                                                                                          Moreover, my prediction is that Rust will never be as popular as Java or Python.

                                                                                                                          I’m fine with that. People get far too excited about popularity. The fact is that Rust has had a huge, permanent impact through its applications in systems software. A lot of this work is even usable in higher-level languages via C FFI or WASM.

                                                                                                                          The main benefit of massive Rust popularity would be more applications running more efficiently. While that would be great, it would come at a high complexity cost, and complexity is already spiraling out of control within software engineering. Keep Rust focused on what it does, systems software; be happy when it applies outside that scope but don’t corrupt the language for the sake of popularity.

                                                                                                                          1. 7

                                                                                                                            The honest companion to the statement you quote would be: C isn’t as popular as Java or Python either.

                                                                                                                            1. 4

                                                                                                                              People get far too excited about popularity

                                                                                                                              Fully agree. The ergonomics should continue to be worked on, but Rust was designed to be a systems language, and not every problem requires that.

                                                                                                                              Additionally, I much prefer a community of users that is consciously choosing said language for intended domains versus one where users choose it mostly because it is popular. I realize that sounds somewhat elitist, but mainstream popularity is a double-edged sword.

                                                                                                                              1. 2

                                                                                                                                The space of all programs is very large and IMO too much for most humans to handle. Having opinions in your language is a necessary evil to tame cognitive load. I think it’s fine that Rust is forging its own path closer to other systems languages. As long as there are other options in the space (and there are tons of PLs), there’s nothing wrong with focusing on a certain set of applications, IMO.

                                                                                                                              2. 2

                                                                                                                                Keep Rust focused on what it does, systems software

                                                                                                                                Then why is it called “general purpose”?

                                                                                                                                1. 11

                                                                                                                                  Because it’s “general purpose”?

                                                                                                                                  Really, it’s only systems software languages (C, C++, Ada, Rust, etc.) that are general purpose in the broadest sense – most other high-level languages have made decisions (typically, to have GC or a significant runtime) that rule them out for writing certain classes of software.

                                                                                                                                  “General purpose” doesn’t mean it’s as convenient to write a CRUD web app in Rust as it is in Ruby; it just means that you could write a CRUD web app, an OS, or anything in between in Rust, whereas Ruby has less generality. You’re still better off using Ruby for the web app and saving Rust for the systems stuff, though.

                                                                                                                                  1. 2

                                                                                                                                    You’re still better off using Ruby for the web app and saving Rust for the systems stuff, though.

                                                                                                                                    I agree, but now I have to go and hide all my rust CRUD apps, which definitely took longer to make than just throwing rails or django at them.

                                                                                                                                    1. 2

                                                                                                                                      Hey nothing wrong with choosing the less “convenient” path either. Sometimes the journey is the goal, not getting to the destination ASAP.

                                                                                                                                      1. 1

                                                                                                                                        Well, you always tell yourself that it’s more efficient, runs a lot more robustly (error handling vs. throwing in production), etc.

                                                                                                                                        But maybe that’s completely irrelevant? I guess I’m still not at the point where I know exactly what counts for me. At least I can say that the stuff I did deploy runs very smoothly. But it can be a ton of overhead to get started.

                                                                                                                                  2. 7

                                                                                                                                    Is it? I think Rust calls itself a systems programming language (although that term is also a bit fuzzy). Either way, I don’t think it’s technically wrong to call it that. It is reasonable to use it for a wide range of programs. If you don’t insist on abstractions having absolutely zero cost (like TFA does), it’s not even that difficult. But for programs that don’t need to maximize efficiency there are tons of nice languages to choose from, so while Rust may be usable, it’s not the best choice for everything.

                                                                                                                                1. 5

                                                                                                                                  gRPC is an IDL-based protocol, and like all IDL-based protocols, it relies on communicating parties sharing knowledge of a common schema a priori. That shared schema provides benefits: it reduces a category of runtime risks related to incompatibilities, and it can, sometimes, improve wire performance. That schema also carries costs, chief among them that it requires producers and consumers to share a dependency graph, and usually one that’s enforced at build-time. That represents a coupling between services. But isn’t one of the main goals of a service-oriented architecture to decouple services?

                                                                                                                                  Over many years, and across many different domains, I’ve consistently found that, for service-to-service communication, informally-specified HTTP/JSON APIs let teams work at very high velocity, carry negligible runtime risk over time, and basically never represent a performance bottleneck in the overall system. Amusingly, I’ve found many counterexamples, where gzipped HTTP+JSON APIs significantly outperformed gRPC and/or custom binary protocols.

                                                                                                                                  I’m sure there are situations where gRPC is the right tool for the job! But all my experience suggests it’s a far narrower set of use-cases than is commonly understood, almost all in closed software ecosystems. But maybe I’m missing some angle?
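
                                                                                                                                  For contrast, a hedged sketch of what the informally-specified style looks like from a Kotlin client, using the JDK’s built-in HTTP client (the endpoint is hypothetical): the only contract is the URL and the JSON shape, checked at runtime rather than through a shared build-time schema.

                                                                                                                                    import java.net.URI
                                                                                                                                    import java.net.http.HttpClient
                                                                                                                                    import java.net.http.HttpRequest
                                                                                                                                    import java.net.http.HttpResponse

                                                                                                                                    fun main() {
                                                                                                                                        val client = HttpClient.newHttpClient()
                                                                                                                                        val request = HttpRequest.newBuilder()
                                                                                                                                            .uri(URI.create("https://api.example.com/v1/users/42"))
                                                                                                                                            .header("Accept", "application/json")
                                                                                                                                            .GET()
                                                                                                                                            .build()
                                                                                                                                        // No generated stubs and no shared dependency graph: parse the
                                                                                                                                        // body with whatever JSON library the consumer happens to use.
                                                                                                                                        val body = client.send(request, HttpResponse.BodyHandlers.ofString()).body()
                                                                                                                                        println(body)
                                                                                                                                    }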

                                                                                                                                  1. 2

                                                                                                                                    I’m sure there are situations where gRPC is right tool for the job! But all my experience suggests it’s a far narrower set of use-cases than is commonly understood, almost all in closed software ecosystems. But maybe I’m missing some angle?

                                                                                                                                    I’ve found cases where gRPC works better than gzipped HTTP+JSON, but I have to agree that they’re very limited. Specifically, I’ve worked with a rate limiter which has a “frontend” service that talks to the backend which actually keeps track of counts. Here the requests were both very repetitive (increment in-flight request, decrement in-flight request) and the fields that changed very specific. The service received very high request throughput, and its repetitive requests made a gRPC implementation a lot faster (and less compute-heavy, since it avoided any de/compression) than an HTTP+JSON implementation. We ran rigorous tests and found anywhere from 3x to 20x speedups depending on the type of load we were receiving.

                                                                                                                                    I think gRPC matters more for services that see high scale, but if you’re working at a shop that has a few high-scale services, it may still make more sense to standardize on gRPC just so that the high-scale services don’t have to work completely differently from the rest of the shop. gRPC has a lot less tooling around it than HTTP+JSON (how many channels to create, how interceptors will work, etc.), so it pays to develop that expertise in-house. When we decided to use gRPC for a few services, it was painful having to learn the ecosystem of debugging and monitoring tools, especially when there were so many easily available tools and well-documented RFCs for HTTP+JSON on the general net.

                                                                                                                                    EDIT: Also, if low latency is important to your service, gRPC staves off a lot of the overhead inherent in setting up and transmitting/receiving an HTTPS stream. If latency is of the utmost importance (say you’re building an SDP/VoIP signaling layer), gRPC may be the way to go.

                                                                                                                                    1. 1

                                                                                                                                      If low latency is important to your service though, gRPC staves off a lot of the overhead inherent in setting up and transmitting/receiving an HTTPS stream.

                                                                                                                                      Is this still true with HTTP/2?

                                                                                                                                  1. 8

                                                                                                                                    My daughter is 5 - I don’t want her dialing 911.

                                                                                                                                    It’s weird to think of this as a problem since the entire world had this potential concern for a while and it was… fine?

                                                                                                                                    Super cool though. I’m jealous of the payphone.

                                                                                                                                    1. 10

                                                                                                                                      In Britain the number is 999, which lends itself very well to being dialled by any toddler who sees fit to mash a single button. Ask me how I know!

                                                                                                                                      1. 6

                                                                                                                                        Kids today will never know the joy of crank calling a random phone number.

                                                                                                                                        1. 3

                                                                                                                                          Maybe not, but the “It’s Lenny” crowd sure have fun with pranking the scammers who call them instead. https://old.reddit.com/r/itslenny/

                                                                                                                                          1. 3

                                                                                                                                            Hopefully not, but there are many folks that end up stuck doing that for a living!

                                                                                                                                            1. 1

                                                                                                                                              I wonder if there’s a YouTube video of someone playing The Jerky Boys for some kids and seeing what they make of it.

                                                                                                                                            2. 5

                                                                                                                                              I find it very weird. Why wouldn’t they want their child to be able to call for help?

                                                                                                                                              1. 4

                                                                                                                                                You know I’ve never thought about the positive case. What do people do nowadays? Instruct their kid on how to take the iPhone out of their pocket and make an emergency call from it?

                                                                                                                                                1. 4

                                                                                                                                                  We bought our children their own phones, partly for this reason. Especially as we encourage them to wander the neighbourhood and catch trains from age 7.

                                                                                                                                              2. 4

                                                                                                                                                I actually did dial 911 on a dare on a payphone as a kid, and surprise, emergency services showed up (I was being an idiot, but what can I say). According to a friend who works in emergency dispatch, this used to be quite common (though since we’re both around the same age, neither of us has stats on how 911 dialing has changed in the mobile-phone era).

                                                                                                                                                So yeah this is definitely a concern, but there’s also the positive case. My mother was in ill-health when I was a child and I did have to dial 911 for her a couple times. I was a technical kid, but it was nice that I could just take a few specific actions (namely dialing 9-1-1 on a phone) and get help for my mother. (You would think, having both dialed out to 911 as a prank and in legitimate need, that I would understand what a silly thing it was to have dialed out on a dare, but it took me a few years for that self-reflection ability to happen in my child brain.) I’m curious about what kids do these days especially since most folks keep their phones locked. Do folks keep a home phone around specifically to dial out to emergency services?

                                                                                                                                                1. 5

                                                                                                                                                  Locked phones typically still have an “emergency call” button that allows dialing the local emergency number. On Android it also allows you to dial any of the contacts stored as emergency contacts in the phone. The question is still a valid one if a child was home without someone’s cell phone, but in general a locked phone shouldn’t stop an emergency call.

                                                                                                                                                  EDIT: And, yes, I also have a landline for just such a situation.

                                                                                                                                                  1. 4

                                                                                                                                                    Even phones without a SIM are supposed to be able to call the local emergency number, so that slightly narrows the use-case for a landline.

                                                                                                                                                2. 4

                                                                                                                                                  My parents had a lot of older phone and computer equipment lying around that we liked to play with growing up. One item was an old rotary phone that belonged to one of my grandparents, and I recall my father one day explaining the differences in dialing between it and DTMF. To let us hear the differences, he plugged it into one of the phone jacks and, after booping a few buttons on the touch-tone phone, he then showed us the difference on the rotary.

                                                                                                                                                  He first dialed a “9” to let us hear all the clicks, then to show the difference he dialed a “1”. Because that was so short, he repeated the number to make sure we heard it. Two minutes later, the emergency operator called back and he had to explain himself…

                                                                                                                                                  1. 1

                                                                                                                                                    I was almost in big trouble when a random kid I was playing with at the Burger King nearest my childhood home wouldn’t quit calling 911 on the pay phone. I don’t remember details. I’m not sure if I was just assumed to be the mastermind, or if the other kid tried to pin it on me.

                                                                                                                                                  1. 1

                                                                                                                                                    Twilio’s API makes it very convenient to do fun phone hacks - plus you can wire it straight into your existing phone number as you’d like. I’d recommend it pretty strongly; the only headache I’ve found is that some places that require a phone number will reject signups using Twilio-owned numbers.

                                                                                                                                                    Back when “going to conferences” was a thing I’d buy a number and forward voice/texts into my phone. Then I could drop the number when the conference was over, before the sales droids started cold-calling it.
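
                                                                                                                                                    A rough sketch of the forwarding trick, assuming the twilio-java helper library’s TwiML builders (the number is a placeholder, and the webhook server itself is omitted):

                                                                                                                                                      import com.twilio.twiml.VoiceResponse
                                                                                                                                                      import com.twilio.twiml.voice.Dial
                                                                                                                                                      import com.twilio.twiml.voice.Number

                                                                                                                                                      // TwiML telling Twilio to forward an incoming call on the throwaway
                                                                                                                                                      // conference number to your real phone; serve this XML from the
                                                                                                                                                      // webhook URL configured on the Twilio number.
                                                                                                                                                      fun forwardingTwiml(realPhone: String): String {
                                                                                                                                                          val dial = Dial.Builder()
                                                                                                                                                              .number(Number.Builder(realPhone).build())
                                                                                                                                                              .build()
                                                                                                                                                          return VoiceResponse.Builder().dial(dial).build().toXml()
                                                                                                                                                      }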

                                                                                                                                                    1. 1

                                                                                                                                                      the only headache I’ve found is that some places that require a phone number will reject signups using Twilio-owned numbers

                                                                                                                                                      Phone numbers, at least those under NANP (North American Numbering Plan), are usually tagged as “residential” numbers or “VOIP” numbers. Twilio and most other VOIP numbers are marked as such and sometimes rejected when sending 2FA SMS or otherwise used for signup.

                                                                                                                                                    1. 22

                                                                                                                                                      I’m not a huge fan of this article. Besides having an overly-catastrophizing view of what the consequences of climate change are likely to be, I think the inferential leap between the posited disruption of industrial infrastructure (in some parts of the world) because of climate change on the scale of the next few decades and a lot of the specific programming practices mentioned is too great to take very seriously.

                                                                                                                                                      Like, it’s probably true that rates of seawater flooding in low-lying coastal areas will go up because of sea level rise on those timescales, but the realistic amount of sea level rise that’s going to happen by the end of the 21st century is on the order of inches. Jumping from that to assuming that the cost of bandwidth (as applied to developer-to-developer personal communication) will go up prohibitively, and then jumping from that to posit that “people will do more TDD” or “the big winners in 2050 will be Rust, Clojure and Go” is a prediction way too specific and contingent to take seriously. Is there no other phenomenon in the world that will affect what programming languages are in common use in 2050 besides climate change? Is it really likely that those three specific languages and no others will be “big winners” in 2050?

                                                                                                                                                      Offices in 2050 for programmers will have been a thing of the past for awhile. It isn’t going to make sense to force people to all commute to a single location, not out of a love of the employees but as a way to decentralize the businesses risk from fires, floods and famines.

                                                                                                                                                      This struck me as an interesting prediction because something like this is in the process of happening - as a 2nd-order consequence of a global pandemic, which was completely unrelated to climate change. There was already an increasing trend towards remote work for most types of programming jobs, and the sharp global emergency of the pandemic really does seem to have accelerated that trend (although we’re still in the midst of this change, and it will take years to accurately see how trends in remote work stabilize).

                                                                                                                                                      In any case, it does go too far to say that the office has become a thing of the past for programmers because of the pandemic - there are benefits as well as costs to commuting and working in person with colleagues, and plenty of programmers today do still work in offices, the pandemic notwithstanding. And the course of time will keep unfolding, and there will be a myriad of phenomena in the world besides climate change over the next few decades that will affect how people work.

                                                                                                                                                      Expect to see a lot more work that involves less minute to minute communication with other developers. Open up a PR, discuss the PR, show the test results and then merge in the branch without requiring teams to share a time zone. Full-time employees will serve mostly as reviewers, ensuring the work they are receiving is up to spec and meets standards. Corporate loyalty and perks will be gone, you’ll just swap freelance gigs whenever you need to or someone offers more money.

                                                                                                                                                      Is also something that happens now. In fact, it’s a reasonably accurate description of how I work in my current job, where I’m in a different time zone than a lot of my colleagues, and I’m hardly unique in that respect. So positing this as a future prediction of a downstream consequence of future climate change rings false to me. But also, why should climate-change-related disruptions of industrial society make it harder to send an asynchronous textual message to a colleague, but not to post a PR on something like a shared git repository? Is Slack really somehow uniquely vulnerable to climate change, when GitLab is not?

                                                                                                                                                      1. 30

                                                                                                                                                        Besides having an overly-catastrophizing view of what the consequences of climate change are likely to be,

                                                                                                                                                        Funny, I thought the opposite; it seems to me incredibly optimistic to believe there is still going to be a programming industry at all

                                                                                                                                                        1. 11

                                                                                                                                                          same here; the collapseOS view seems more likely to me than this version where it’s like the present just with a few little tweaks here and there.

                                                                                                                                                        2. 4

                                                                                                                                                          Non-climate change issues like user-experienced latency and bandwidth costs are already pushing a larger number of SaaS companies to push data as close to the “edge” as possible, colocating data into nearby data centers. A CDN is essentially just pushing data as close to the user as feasible with current logistics.

                                                                                                                                                        1. 2

                                                                                                                                                          This is tangential, but:

                                                                                                                                                          In particular, there is almost always a gap between domain experts (the people who have a need which can be met by creating a new, or adapting an existing, program) and programmers (the people who write programs).

                                                                                                                                                          Why haven’t we yet made programming approachable enough that the domain experts can be the programmers rather than having to delegate to programmers? The immediate cynical answer that comes to mind is that we programmers like our job security. But I wonder if there are other, better reasons.

                                                                                                                                                          1. 24

                                                                                                                                                            I think the more likely answer is that making programming approachable is a lot harder than we think it is.

                                                                                                                                                            1. 3

                                                                                                                                                              What do you think about this essay which argues that things like Visual Basic and HyperCard were on the right track, but then the late 90s web boom (and, according to a later essay, open source), halted progress in that area?

                                                                                                                                                              1. 8

                                                                                                                                                                I’m not hwayne, but I agree with him—it’s a lot harder than we think it is. Basically, programming requires tracking detail, enough detail that would daunt most people. Witness the number of articles about fallacies that programmers (people trained to track such details) make about human names, addresses, phone numbers or dates, just to name a few areas.

                                                                                                                                                                Here’s my question to you—how do you define “computer literacy”?

                                                                                                                                                                1. 7

                                                                                                                                                                  Poppycock. There are few imaginary products I can think of that would be more valuable to their creator than the “AI that replaces programmers”; it’s just not something we have any idea how to do.

                                                                                                                                                                  Small parts of programming do get automated over the years, with things like garbage collection and managed runtimes, but so far this has always led to an increase in the kinds of tasks we expect computers to handle, rather than doing the same basic tasks with fewer programmers. This makes sense because it gives the business an advantage over competitors in whatever their core business happens to be. They’d (the companies that survive) rather do more and charge more / get more customers, than do the same for slightly less.

                                                                                                                                                                  1. 2

                                                                                                                                                                    and, according to a later essay, open source

                                                                                                                                                                    That essay seems to confuse open source with not charging money for things…

                                                                                                                                                                    1. 1

                                                                                                                                                                      First of all, I’ll say that I agree with hwayne and think that’s the primary reason we don’t have many non-programmer friendly coding/automation tools.

                                                                                                                                                                      The first essay you linked alludes to this, but I think the point should be emphasized, there’s an incentive mismatch between programmers and end-users. Programmers often like to program because they enjoy the act of programming. Look at how many links we get on this forum about programmers waxing and waning about the joys of TUIs, architectural simplicity, or networks run for and by skilled operators. These are all things that are immaterial to, or even detrimental toward, the user experience of a non-programming SME. Even with today’s world of skilled programmers running large cloud systems, programmers still complain about how much they need to accommodate the whims of non-technical users.

                                                                                                                                                                      This isn’t unique to programming. Tradespeople in a lot of trades often talk shop about better access platforms/crawl spaces, higher-quality parts, more convenient diagnostic tools, and other stuff that non-tradespeople would consider spurious expenses or concerns, and that sometimes may even make the tradesperson’s work less aesthetic (say, in a residence). I think there are many complicated factors that make this incentive mismatch worse in programming than in the trades. As long as this incentive mismatch exists, I think you’ll only see limited progress toward non-technical programming accessibility.

                                                                                                                                                                  2. 13

                                                                                                                                                                    Having been in the position of “software engineer for SMEs” a few times… Making really good software that you would actually want to use in production is a craft, a skill of its own, and one that takes a lot of time and work to learn. Most software people are interested in software for its own sake, because the craft is fun. Most SMEs are not, and so they will learn as much as is necessary to bang together a solution to their problem, and it doesn’t really matter how nasty it is. They want to be working on their subject matter, not understanding cache lines or higher-order functions.

                                                                                                                                                                    We can rephrase the question: “Why haven’t we yet made woodworking approachable enough that the people who use furniture can be the carpenters rather than having to delegate to carpenters?” Sure, if you are actually interested in the process of building furniture then you can make lots of amazing stuff as a non-professional, and there’s more sources out there than ever before for an interested novice getting started. But for most people, even assembling IKEA furniture is more work and suffering than they really want to expend.

                                                                                                                                                                    1. 1

                                                                                                                                                                      I think the whole idea is to make the “bang together something that solves the problem” option more possible and more common.

                                                                                                                                                                      So many people spend so much of their lives using computers to manually do trivially automatable things, but the things are all too bespoke for a VC-funded startup to tackle as a “product”.

                                                                                                                                                                      1. 3

                                                                                                                                                                        This works pretty well as long as the tools those people build are only used by that person. Which is pretty important! The problem appears when someone’s bespoke little tool ends up with its tendrils throughout an organization, and now suddenly even if it isn’t a “product” it is essential infrastructure.

                                                                                                                                                                        1. 2

                                                                                                                                                                          I think that’s actually a good thing / goal, and work on “making programming accessible” should work on reducing the ways in which that is a problem.

                                                                                                                                                                          Note that “a dev with a stick up their ass seeing it will say mean things” is not by itself a valid problem for anyone but that dev ;)

                                                                                                                                                                    2. 5

                                                                                                                                                                      I would say it’s for the same reason why programmers can’t be the domain experts; expertise in any field takes time, effort and interest to develop.

                                                                                                                                                                      For example, a tax application where all the business rules were decided by developers and a tax application developed by accountants would probably both be pretty terrible in their own ways.

                                                                                                                                                                      1. 4

                                                                                                                                                                        A lot of the other responses I almost entirely agree with, but to add my own experience:

                                                                                                                                                                        I’ve been a part of some implementations of these types of tools, and have also read a lot about this subject. Most people building these tools aren’t building “programming that’s easy for non-developers” but “I find ____ easy, so I’m going to remove features so that it’s more approachable”. It also often leads either to visual programming languages, which don’t directly solve the complexity issues, or to config languages, which lack the necessary surface area to be usable for many tasks.

                                                                                                                                                                        A prior team of mine tried to go down the config route, building out 2 different config languages that “can be used by managers and PMs to configure our app so that we can focus on features.” Needless to say, that never happened. No one did any research on prior attempts to build these types of languages. No one tested with PMs and managers. It ended up being built by-devs-for-devs.


                                                                                                                                                                        There’s also this idea that floats around software that somehow simpler languages aren’t “real” languages, so they often get a lot of hate. For many years I’ve heard that Go isn’t for real devs, that it’s only for stupid Google devs who can’t be bothered to learn a real language like Java. JS is still considered by many to be a joke language because it’s for the web and “real” developers program servers, desktops, and mobile. Way back in the day, Assembly was for the weak; “real” devs wrote out their machine code by hand/punch card. Unless we can get past that fixation on what a “real” programming language is, we’ll likely continue to struggle to find and build accessible languages.


                                                                                                                                                                        One of the few people I know writing about the approachability of programming, and attempting to actually build it, is Evan C. I won’t claim that Elm is perfect, and I do think we can do better, but Evan has worked very hard to make it approachable. So much so that both its error-message approach and its Elm Architecture have permeated many other languages and frameworks without people realizing it.

                                                                                                                                                                        The Syntax Cliff

                                                                                                                                                                        When you start learning a programming language, how much time do you spend stuck on syntax errors? [..] how many people do not make it past these syntax errors?

                                                                                                                                                                        Compilers as Assistants

                                                                                                                                                                        Compilers should be assistants, not adversaries.

                                                                                                                                                                        Compiler Errors for Humans

                                                                                                                                                                        Most terminal tools came into existence well before our industry really started focusing on making apps and websites feel great for their users. We all collectively realized that a hard to use app or website is bad for business, but the same lessons have not really percolated down to tools like compilers and build tools yet.

                                                                                                                                                                        1. 2

                                                                                                                                                                          The answer to me comes down to time. I can gather requirements, speak to stakeholders, and organize projects, or I can write code, test code, and deploy code. I do not have the time (or attention span, really) for both.

                                                                                                                                                                          1. 2

                                                                                                                                                                            People have been trying this for a very long time. It results in very bad programs. The idea that programming can be magic’d away and we can fire all the programmers is held only by marketing departments and (for some reason) a few programmers.

                                                                                                                                                                          1. 22

                                                                                                                                                                            I find that, having written a lot of Haskell and done a lot of pure math, I am less and less interested in using category theory to describe what I am doing. The category theory seems like the implementation detail, and not the thing I care about. The way I have come to think of things like monads is more like model theory: the monad is a theory, and there are models of it. Those models happen to be isomorphic to categorical constructions.

This text seems to suffer from what every other category theory text does: where are the theorems? There are a lot of definitions here, but definitions are cheap. They alter over time to keep theorems true as new corner cases are found. If someone knows of a category theory text that isn’t a giant collection of definitions, I would love a pointer to it.
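To put the theory/model framing in code (a deliberately glib Haskell sketch of my own, not anything from the text under discussion): a class declaration together with its laws plays the role of the theory, and each instance is a model of it.

```haskell
-- The "theory": an interface together with laws (stated here as comments).
class Monad' m where
  return' :: a -> m a
  bind'   :: m a -> (a -> m b) -> m b
  -- Laws every model must satisfy:
  --   bind' (return' a) f  ==  f a                            (left identity)
  --   bind' m return'      ==  m                              (right identity)
  --   bind' (bind' m f) g  ==  bind' m (\x -> bind' (f x) g)  (associativity)

-- One "model" of the theory:
instance Monad' Maybe where
  return' = Just
  bind' Nothing  _ = Nothing
  bind' (Just x) f = f x
```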

                                                                                                                                                                            1. 16

I feel that I am in the same boat. I enjoy Haskell and mathematics. It brings me intense joy to use mathematics to gain new insights into other areas, so I leapt at the opportunity. Just like your experience, the first few chapters were purely defining terms, but I kept pushing through. Finally, we started getting into some relations.

Given f : A → (B, C), there exist g : A → B and h : A → C.

This was treated as some mind-blowing revelation on par with Euler’s identity. Pages and pages of exercises were dedicated to showing the applicability of this formula across domains, with an especially deep interest in real-world examples: that by asking a barber for a Shave and a Haircut, then telling them to skip the shave, I could get just a haircut.

                                                                                                                                                                              The next chapter introduced symmetric monoidal categories with the earth-shattering property that I could take a value (B, C) and get a value (C, B). There was a brief mention that there existed braided categories without this property, and that they had deep connections to quantum mechanics (and my thesis!), but that they were outside the scope of this book. What was in scope was a collection of example problems about how, given waffles and chicken, we can construct a meal of chicken and waffles.
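For what it’s worth, in Haskell both “revelations” come out to a few lines, which is rather the parent’s point (fork is my name here; Control.Arrow spells it &&&, and Data.Tuple provides swap):

```haskell
-- The product property: a map into a pair is exactly a pair of maps.
fork :: (a -> b) -> (a -> c) -> (a -> (b, c))
fork g h x = (g x, h x)

-- The symmetric monoidal structure on pairs: components can be swapped.
swap :: (b, c) -> (c, b)
swap (b, c) = (c, b)

main :: IO ()
main = print (swap (fork (+ 1) show 41))  -- prints ("41",42)
```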

                                                                                                                                                                              1. 3

Stellar comment. Category theory (at least at my depth) tends to explain the obvious with the obscure.

                                                                                                                                                                              2. 6

                                                                                                                                                                                It makes sense to me why a theory of associative maps (to put it glibly) might be useful for someone designing high-level abstractions, since it can help to identify what the important invariants of those abstractions should be. What chafes a little for me in Haskell’s aesthetic embrace of category theory is precisely that a lot of its enthusiasts have the opposite inclination from you and seem to want to refer everything about what they’re doing to category theory. This feels deeply silly to me, because the vast majority of the interesting semantic properties of any given program are carried by concrete implementors of those category-theoretic abstractions. It’s nice that they’re there to provide the interfaces and invariants that allow me to productively compose concretions, but the generality that allows them to do that is also what prevents them from having much to say about what my program actually does. At the risk of seeming uncharitable, I wonder if a lot of the fetishism is down to Haskell being the most obvious place to go for people who belatedly realized that they’d have preferred to pursue pure math than be focused on software.
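To make that concrete with a small illustration of my own: the Functor interface fixes the shape of fmap, and its laws hold for every instance, but everything a program actually does lives in the concrete instance.

```haskell
-- fmap's type is maximally general, and correspondingly says very little:
--   fmap :: Functor f => (a -> b) -> f a -> f b
main :: IO ()
main = do
  print (fmap (+ 1) (Just 1))   -- Just 2: Maybe models possible absence
  print (fmap (+ 1) [1, 2, 3])  -- [2,3,4]: the list instance maps elementwise
  -- The Functor laws constrain both instances identically,
  -- yet say nothing about absence or multiplicity.
```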

                                                                                                                                                                                If someone knows of a category theory text that isn’t a giant collection of definitions, I would love a pointer to it.

                                                                                                                                                                                I think Emily Riehl’s Category Theory in Context is one such text. It’s pretty typical of a math text targeted at mathematicians in its very terse Definition -> Theorem -> Worked Proof -> Exercises format, but the balance between those elements seems similar to anything else in the genre.

                                                                                                                                                                                1. 3

                                                                                                                                                                                  I think that there’s computer science and there’s software engineering and Haskell happens to be a good tool for either. As a result you get a lot of writings out of scope for any given person’s interest.

                                                                                                                                                                                  1. 2

                                                                                                                                                                                    Absolutely! It’s just been my experience that a lot of the prominent writing about Haskell from the computer science perspective in particular tends to defer overwhelmingly to category theory in a way that feels reductive to me. It’s certainly possible that I’m just working with a non-representative sample.

                                                                                                                                                                                    1. 2

                                                                                                                                                                                      FWIW as someone who’s done a decent amount of higher math, I agree.

                                                                                                                                                                              1. 14

                                                                                                                                                                                I’m very curious how these companies address the fact that there are countries where smartphones are not universally owned (because of cost, or lack of physical security for personal belongings).

                                                                                                                                                                                1. 8

                                                                                                                                                                                  At least Microsoft has multiple paths for 2FA - an app, or a text sent to a number. It’s hard to imagine them going all in on “just” FIDO.

                                                                                                                                                                                  Now, as to whether companies should support these people - from a purely money-making perspective, if your customers cannot afford a smartphone, maybe they’re not worth that much as customers?

                                                                                                                                                                                  A bigger issue is if public services are tied to something like this, but in that case, subsidizing smartphone use is an option.

                                                                                                                                                                                  1. 24

                                                                                                                                                                                    if your customers cannot afford a smartphone, maybe they’re not worth that much as customers?

I had a longer post typed out, and I don’t think you meant this at all, but at a certain point we need to stop thinking of people as simply customers and recognize that we’re taking over functions typically subsidized or heavily regulated by the government, like phones or mail. It was not that long ago that you could share a phone line (telcos were heavily regulated) with family members or friends when looking for a job or waiting to be contacted about something. Or pay bills using the heavily subsidized USPS. Or grab a paper to go through the classifieds to find a job.

Now you need LinkedIn/Indeed, an email address, Internet access, your own smartphone, etc. to do anything from paying bills to getting a job. So sure, if you’re making a throwaway clickbait game, you probably don’t need to care about this.

But even on this very website: do we want someone who is not doing well financially to be deprived of keeping up with news in their industry, or someone too young to have a cellphone to be shut out from participating? I don’t think it is a god-given right, but the more people are denied access to things you or I have access to, the greater the divide becomes. Someone might have a laptop and no Internet, but the ability to borrow a neighbor’s wifi. Similarly, a family of four might not have a cell phone for every family member.

I could go on, but like discrimination or accommodating people with various disabilities, it is something that’s really easy to forget.

                                                                                                                                                                                    1. 15

I should have been clearer. The statement was rhetorical, not an endorsement.

                                                                                                                                                                                      Viewing users as customers excludes a huge number of people, not just those too poor to have a computer/smartphone, but also people with disabilities who are simply too few to economically cater to. That’s why governments need to step in with laws and regulations to ensure equal access.

                                                                                                                                                                                      1. 11

I think governments often think about this kind of accessibility requirement exactly the wrong way around. Ten or so years ago, I looked at the costs being passed on to businesses and community groups to make buildings wheelchair accessible. Buying everyone with limited mobility a motorised wheelchair capable of climbing stairs would have cost significantly less, even accounting for the fact that those were barely out of prototype and priced to recoup the R&D investment. If the money spent on wheelchair ramps had instead been invested in a mix of R&D and purchasing of external prosthetics, we would have spent the same amount and the folks currently in wheelchairs would be fighting crime in their robot exoskeletons. Well, maybe not the last bit.

Similarly, the wholesale cost of a U2F-capable device is under $5, and the wholesale cost of a smartphone capable of running banking apps is around $20-30 in bulk. The cost for a government to provide one to everyone in the country is likely less than the cost of making sure that government services remain accessible to people without such a device, let alone the cost to every business wanting to operate in the country.

                                                                                                                                                                                        TL;DR: Raising people above the poverty line is often cheaper than ensuring that things are usable by people below it.

                                                                                                                                                                                        1. 12

Wheelchair ramps help more people than those in wheelchairs: people pushing prams/strollers, movers, emergency responders, people using Zimmer frames… As the population ages (in developed countries), they will only become more relevant.

                                                                                                                                                                                          That said, I fully support the development of powered exoskeletons to all who need or want them.

                                                                                                                                                                                          1. 8

The biggest and most expensive problem around wheelchairs is not ramps, it’s turning space and door widths. A wheelchair is wider (especially the battery-driven ones you are referring to) and needs more space to turn around than a standing human. Older buildings often have pathways and doors that are too narrow.

Second, all the wheelchairs and exoskeletons here would need to be custom-made, making them inappropriate for short-term disabilities or smaller issues, like walking problems that only need crutches. All that while changing the building (or building it right in the first place) is as close to a one-size-fits-all solution as it gets.

                                                                                                                                                                                            1. 5

                                                                                                                                                                                              I would love it if the government would buy me a robo-stroller, but until then, I would settle for consistent curb cuts on the sidewalks near my house. At this point, I know where the curb cuts are and are not, but it’s a pain to have to know which streets I can or can’t go down easily.

                                                                                                                                                                                            2. 7

                                                                                                                                                                                              That’s a good point, though I think there are other, non-monetary concerns that may need to be taken into account as well. Taking smartphones for example, even if given out free by the government, some people might not be real keen on being effectively forced to own a device that reports their every move to who-knows-how-many advertisers, data brokers, etc. Sure, ideally we’d solve that problem with some appropriate regulations too, but that’s of course its own whole giant can of worms…

                                                                                                                                                                                              1. 2

                                                                                                                                                                                                The US government will already buy a low cost cellphone for you. One showed up at my house due to some mistake in shipping address. I tried to send it back, but couldn’t figure out how. It was an ancient Android phone that couldn’t do modern TLS, so it was basically only usable for calls and texting.

                                                                                                                                                                                                1. 2

Jokes aside - it is basically a requirement in a certain country I am from. If you get infected with Covid, you get processed by the system, and outdoor cameras monitor you so you don’t go outside. But to be completely sure you’re staying at home during recovery, it is mandatory to install a government-issued application on your cellphone/tablet that tracks your movement. There are also official check-ups via video calls in said app, several times per day at random hours, to verify your location.

If you fail to respond in time, or geolocation shows you left your apartment, you’ll automatically get a hefty fine.

Now, you might say, it is possible to just tell them “I don’t own a smartphone” - then you’ll get a cheap but working government-issued Android tablet, or at least you’re supposed to. As with lots of other things, “the severity of the laws is compensated by their optionality”, so quite often the devices don’t get delivered at all.

By law you cannot decline the device - you’ll get fined, or they threaten to take you to a hospital as a mandatory measure.

                                                                                                                                                                                              2. 7

Thank you very much for this comment. I live in a country where “it is expected” to have a smartphone. The government is turning everything into apps, which are only available on the Apple App Store or Google Play. Since I am on social welfare, I cannot afford a new smartphone every 3-5 years, and old ones are not supported either by the app stores or by the apps themselves.

                                                                                                                                                                                                I have a feeling of being pushed out by society due to my lack of money. Thus I can relate to people in similar positions (larger families with low incomes etc.).

                                                                                                                                                                                                I would really like more people to consider that not everybody has access to new smartphones or even a computer at home.

                                                                                                                                                                                                I believe the Internet should be for everyone not just people who are doing well.

                                                                                                                                                                                            3. 6

                                                                                                                                                                                              If you don’t own a smartphone, why would you own a computer? Computers are optional supplements to phones. Phones are the essential technology. Yes, there are weirdos like us who may choose to own a computer but not a smartphone for ideological reasons, but that’s a deliberate choice, not an economic one.

                                                                                                                                                                                              1. 7

                                                                                                                                                                                                In the U.S., there are public libraries where one can use a computer. In China, cheap internet cafés are common. If computer-providing places like these are available to non-smartphone-users, that could justify services building support for computer users.

                                                                                                                                                                                                1. 1

In my experience growing up in a low-income part of the US, most people there now have only smartphones; folks use laptops mainly in office or school settings. That remains a difficulty for those going to college or getting office jobs. It was the same when I was growing up there, except there were no smartphones, so folks had flip phones. Parents often try to save up to buy their children nice smartphones.

                                                                                                                                                                                                  I can’t say this is true across the US, but for where I grew up at least it is.

                                                                                                                                                                                                  1. 1

                                                                                                                                                                                                    That’s a good point, although it’s my understanding that in China you need some kind of government ID to log into the computers. Seems like the government ID could be made to work as a FIDO key.

                                                                                                                                                                                                    Part of the reason a lot of people don’t have a computer nowadays is that if you really, really need to use one to do something, you can go to the library to do it. I wonder though if the library will need to start offering smartphone loans next.

                                                                                                                                                                                                  2. 5

                                                                                                                                                                                                    How are phones the “essential technology”? A flip phone is 100% acceptable these days if you just have a computer. There is nothing about a smartphone that’s required to exist, let alone survive.

A computer, on the other hand (which a smartphone is a poor approximation of), is borderline required to access crucial services outside of phone calls and direct visits. The “essential technology” is not a smartphone.

                                                                                                                                                                                                    1. 2

Outside work, there’s very little I can do only on a computer: IRC and image editing, basically. Also editing blog posts, because I do that in the shell.

                                                                                                                                                                                                      I am comfortable travelling to foreign lands with only a phone, and relying on it for maps, calls, hotel reservations, reading books, listening to music…

                                                                                                                                                                                                      1. 1

Flip phones were all phased out years ago. I have friends who deliberately use flip phones, and it is very difficult to do unless you are ideologically committed to it.

                                                                                                                                                                                                      2. 3

I’m curious about your region/job/living situation, and what about it makes phones “the essential technology”. I barely need a phone to begin with, not to mention a smartphone. To me it’s really only good as car navigation and an alarm clock.

                                                                                                                                                                                                        1. 1

People need other people to live. Most other people communicate via phone.

                                                                                                                                                                                                          1. 1

                                                                                                                                                                                                            It’s hardly “via phone” if it’s Signal/Telegram/FB/WhatsApp or some other flavor of the week instant messenger. You can communicate with them on your PC just as well.

                                                                                                                                                                                                            1. 4

                                                                                                                                                                                                              I mean I guess so? I’m describing how low income people in the US actually live, not judging whether it makes sense. Maybe they should all buy used Chromebooks and leech Wi-Fi from coffee shops. But they don’t. They have cheap smartphones and prepaid cards.

                                                                                                                                                                                                              1. 2

You cannot connect to WhatsApp via the web interface without a smartphone running the WhatsApp app, and while Signal does not have this limitation, it still requires a smartphone as the primary key, with the desktop app acting only as a subkey. I think Telegram also requires a smartphone app for initial provisioning.

An Android emulator might be enough, if you can manually relay the SMS code from a flip phone.

                                                                                                                                                                                                          2. 2

Your reasoning is logical if you’re presented with a budget and asked what to buy, but purchasing does not happen in a vacuum. You may inherit a laptop, borrow a laptop, or no longer be able to afford a month-to-month cell phone bill. Laptops also have a much longer life cycle than phones.

                                                                                                                                                                                                            1. 4

I’m not arguing that this is good, bad, or whatever. It’s just a fact that in the USA today, if you are a low-income person, you have a smartphone and not a personal computer.