Threads for Corbin

  1. 2

    At first, I laughed. But then I considered what it would feel like to bump a version. The axioms of ZFC only have two ways to build a new finite set, union and pairing, and the empty set is the only prebuilt set.

    To bump any version, pair it with the empty set. If your version is a Von Neumann ordinal, then this is a plus-one operation; versions could be mapped to 0, 1, 2, etc. Given two versions (perhaps from two components), we can pair them; if one version is already a pair of versions and the other is a Von Neumann ordinal, then we might write these as pairs, triples, quadruples, etc.

    Some operations don’t have semantic-versioning counterparts. We can think of the union of a set as the set of elements of elements. In the previous paragraph, every element of elements is either a Von Neumann ordinal or a tuple, so a union of one of those versions would also be a version. But each ordinal is stepped backwards and each tuple is shorter by one component. Maybe this is analogous to an octopus merge, but I think it’s different.
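
    To make the construction concrete, here is a minimal sketch in Rust (my own illustration, with made-up names): hereditarily finite sets modeled as `BTreeSet`s, with pairing, union, and the “bump” operation from above.

    ```rust
    use std::collections::BTreeSet;

    // Versions as hereditarily finite sets: the only datatype is a set
    // of sets, and the only constructors are empty set, pairing, union.
    #[derive(Clone, PartialEq, Eq, PartialOrd, Ord, Debug)]
    struct V(BTreeSet<V>);

    impl V {
        fn empty() -> V {
            V(BTreeSet::new())
        }
        // Axiom of pairing: {a, b}.
        fn pair(a: V, b: V) -> V {
            V(BTreeSet::from([a, b]))
        }
        // Axiom of union: the set of elements of elements.
        fn union(&self) -> V {
            V(self.0.iter().flat_map(|s| s.0.iter().cloned()).collect())
        }
        // "Bump": pair a version with the empty set.
        fn bump(self) -> V {
            let empty = V::empty();
            V::pair(self, empty)
        }
    }

    fn main() {
        let zero = V::empty();         // 0 = {}
        let one = zero.clone().bump(); // {0, 0} = {0} = 1
        let two = one.clone().bump();  // {1, 0} = 2
        // Union steps a Von Neumann ordinal backwards: ∪2 = 0 ∪ 1 = 1.
        assert_eq!(two.union(), one);
    }
    ```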

    1. 5

      I wasn’t satisfied with the blog’s markup and JS requirement, so I went to look for the original article, because I hoped it was written in Markdown or something similar. It is not.

      1. 3

        ngl this is wild

        1. 2

          Oh dear…

        1. 2

          I suggest removing the “distributed” tag as this is not really about distributed systems.

          1. 1

            I think it’s highly relevant to distributed-systems research; a fork is one way that a split can resolve. But I won’t suggest adding the tag again.

          1. 3

            It really is too bad that SafeCurves is a static site and not a living application like the L-functions and modular forms database.

            1. 35

              As I was reading this wonderful writeup, I had a nagging feeling that most certainly ‘someone on the internet’ would bring some half-assed moralizing to bear on the author. And sure enough, it’s the first comment on lobsters.

              I think it’s a beautiful and inspiring project, making one think about the passage of time and how natural processes are constantly acting on human works.

              @mariusor, I recommend you go troll some bonsai artists; they too are nothing but assholes who carve hearts in trees.

              1. 8

                We do have an ethical obligation to consider how our presence distorts nature. Many folks bend trees for many purposes. I reuse fallen wood. But we should at least consider the effects we have on nature, if for no other reason than that we treat nature like we treat ourselves.

                I could analogize bonsai to foot-binding, for example. And I say that as somebody who considered practicing bonsai.

                1. 11

                  Foot binding is a social act in which women are deliberately crippled in exchange for access to certain social arrangements in which they don’t need to be able to walk well. The whole practice collapsed once the social arrangement went away. It’s very different than just getting a cool gauge piercing or whatever.

                  1. 6

                    Thank you Corbin for addressing the substance of my admittedly hot-headed comment. It did give me food for thought.

                    I am definitely in agreement with you on the need to consider the impact of our actions on the environment. I have a bunch of 80-year-old apple trees in my yard which were definitely derailed, by human hands, from their natural growth trajectory. This was done in the interest of horticulture, and I still benefit from the actions of the now-deceased original gardener. All in all I think the outcome is positive, and perhaps will even benefit others in the future if my particular heritage variety of apple gets preserved and replicated in other gardens. In terms of environmental impact, I’d say it’s better for each backyard to have a “disfigured” but fruitful apple tree than to not have one, and rely on industrial agriculture for nutrition.

                    Regarding the analogy with foot-binding, which I think does hold to a large extent (i.e. it involves frustrating the built-in development pattern of another, without the other’s consent) – the key difference is of course the species of the object of the operation.

                    1. 7

                      Scale matters too, I think.

                      I’m a gardener who grows vegetables, and I grow almost everything from seed - it’s fun and cheap. That means many successive rounds of culling: I germinate seeds, discard the weakest and move the strongest to nursery pots, set out the strongest starts to acclimatize to the weather, plant the healthiest, and eventually thin the garden to only the strongest possible plants. I may start the planting season with two or three dozen seeds and end up with two plants in the ground. Then throughout the year, I harvest and save seeds for next year, often repeating the same selecting/culling process.

                      Am I distorting nature? Absolutely, hundreds of times a year - thousands, perhaps, if I consider how many plants I put in the ground. But is my distortion significant? I don’t think so; I don’t think that, even under Kant’s categorical imperative, every back-yard gardener in the universe selecting for their own best plants is a problem. It fed the world, after all!

                      1. 3

                        My friend who is a botanist told me about research he did into how naïve selection produces worse results. Assume you have a plot with many variants of wheat, and at the end of the season, you select the best in the bunch for next year. If you’re not careful, the ones you select are the biggest hoarders of nutrients. If you had a plot with all that genotype, it would do poorly, because they’re all just expertly hoarding nutrients away from each other. The ones you want are the ones that are best at growing themselves while still sharing nutrients with their fellow plants. It’s an interesting theory and he’s done some experimental work to show that it applies in the real world too.

                        1. 2

                          The ones you want are the ones that are best at growing themselves while still sharing nutrients with their fellow plants.

                          So maybe you’d also want to select some of the ones next to the biggest plant to grow in their own trials as well.

                  2. 3

                    I think it’s a beautiful and inspiring project, making one think about the passage of time and how natural processes are constantly acting on human works.

                    I mean… on the one hand, yes, but then on the other hand… what, we ran out of ways to make one think about the passage of time and how natural processes are constantly acting on human works without carving into things, so it was kind of inevitable? What’s wrong with just planting a tree in a parking lot and snapping photos of that? It captures the same thing, minus the tree damage and leaving an extra human mark on a previously undisturbed place in the middle of the forest.

                    1. 14

                      As I alluded to in my comment above, we carve up and twist apple trees so that they actually give us apples. If you just let them go wild you won’t get any apples. Where do you get your apples from? Are you going to lecture a gardener who does things like grafting, culling, etc., to every tree she owns?

                      The same applies here: the artist applied his knowledge of tree biology and his knowledge of typography to get a font made by a tree. I think that’s pretty damn cool. I am very impressed! You can download a TTF! How cool is that?

                      Also, it’s not ‘in the middle of a forest’, but on his parents’ property, and the beech trees were planted by his parents. It’s his family’s garden and he’s using it to create art. I don’t get the condemnation, I think people are really misapplying their moral instincts here.

                      1. 4

                        Are you going to lecture a gardener who does things like grafting, culling, etc., to every tree she owns?

                        No, only the gardeners who do things like grafting, culling etc. just to write a meditative blog post about the meaning of time, without otherwise producing a single apple :-). I stand corrected on the forest matter, but I still think carving up trees just for the cool factor isn’t nice. I also like, and eat, beef, and I am morally conflicted about it. But I’m not at all morally conflicted about carving up a living cow just for the cool factor, as in, I also think it’s not nice. Whether I eat fruit (or beef) has no bearing on whether stabbing trees (or cows) for fun is okay.

                        As for where I get my apples & co.: yes, I’m aware that we carve up and twist apple trees to give us apples. That being said, if we want to be pedantic about it, back when I was a kid, I had apples, a bunch of different types of plums, sour cherries, pears and quince from my grandparents’ garden, so yeah, I know where they come from. They pretty much let the trees go wild. “You won’t get any apples” is very much a stretch. They will happily make apples – probably not enough to run a fruit-selling business off of them, but certainly enough for a family of six to have apples – and, as I very painfully recall, you don’t even need to pick them if you’re lazy, they fall down on their own. The pear tree is still up, in fact, and at least in the last 35 years it’s never been touched in any way short of picking the pears on the lowest two or three branches. It still makes enough pears for me to make pear brandy out of them every summer.

                        1. 6

                          I concede your point about the various approaches as to what is necessary and unnecessary tree “care” :)

                          No, only the gardeners who do things like grafting, culling etc. just to write a meditative blog post about the meaning of time, without otherwise producing a single apple :-).

                          But my argument is that there was an apple produced, by all means. You can enjoy it here: https://bjoernkarmann.dk/occlusion_grotesque/OcclusionGrotesque.zip

                    2. 3

                      Eh. I hear what you’re saying, but you can’t ignore the fact that “carving letters into trees” has an extremely strong cultural connection to “idiot disrespectful teenagers”.

                      I can overlook that and appreciate the art. I do think it’s a neat result. But then I read this:

                      The project challenges how we humans are terraforming and controlling nature to their own desires, which has become problematic to an almost un-reversible state. Here the roles have been flipped, as nature is given agency to lead the process, and the designer is invited to let go of control and have nature take over.

                      Nature is given agency, here? Pull the other one.

                      1. 3

                        You see a beautiful and wonderful writeup; I see an asshole with an inflated sense of self. I think it’s fair that we each hold to our own opinions and be at peace with that. Disrespecting me because I voiced it is not something I like though.

                        1. 15

                          I apologize for venting my frustration at you in particular.

                          This is a public forum though, and just as you voiced your opinion in public, so did I. Our opinions differ, but repeatedly labeling others as “assholes” (you did it in your original post and in the one above) sets up a heated tone for the entire conversation. I took the flame bait, you might say.

                          Regarding ‘inflated sense of self’ – my experience with artists in general (I’ve lived with artists) is that it’s somewhat of a common psychological theme with them, and we’re better off judging the art, not the artist.

                      1. 2

                        There are many ideas from Toyota which would be nice to see more often in our line of work. I often think about andon cords.

                        1. 10

                          I hope the author gets the help they need, but I don’t really see how the blame for their psychological issues should be laid at the feet of their most-recent employer.

                          1. 50

                            In my career I’ve seen managers cry multiple times, and this is one of the places that happened. A manager should never have to ask whether they’re a coward, but that happened here.

                            I dunno, doesn’t sound like they were the only person damaged by the experience.

                            Eventually my physicians put me on forced medical leave, and they strongly encouraged me to quit…

                            Seems pretty significant when medical professionals are telling you the cure for your issues is “quit this job”?

                            1. 16

                              Seems pretty significant when medical professionals are telling you the cure for your issues is “quit this job”?

                              A number of years ago I developed some neurological problems, and stress made it worse. I was told by two different doctors to change or quit my job. I eventually did, and it helped, but the job itself was not the root cause, nor was leaving the sole cure.

                              I absolutely cannot speak for OP’s situation, but I just want to point out that a doctor advising you to rethink your career doesn’t necessarily imply that the career is at fault. Though, in this case, it seems like it is.

                              1. 4

                                It doesn’t seem like the OP’s doctors told them to change careers though, just quit that job.

                                1. 3

                                  To clarify, I’m using “career change” in a general sense. I would include quitting a job as a career change, as well as leaving one job for another in the same industry/domain. I’m not using it in the “leave software altogether” sense.

                            2. 24

                              I’m trusting the author’s causal assessment here, but employers (especially large businesses with the resources required) can be huge sources of stress and prevent employees from having the time or energy needed to seek treatment for their own needs, so they can both cause issues and worsen existing ones.

                              It’s not uncommon, for example, for businesses to encourage unpaid out-of-hours work for salaried employees by building a culture that emphasizes personal accountability for project success; this not only increases stress and reduces free time that could otherwise be used to relieve work-related stress, it teaches employees to blame themselves for what could just as easily be systemic failures. Even if an employee resists the social pressure to put in extra hours in such an environment, they’ll still be penalized with (real or imagined) blame from their peers, blame from themselves for “not trying hard enough”, and likely less job security or fewer benefits.

                              In particular, the business’s failure to support effective project management, manage workloads, or generally address problems repeatedly and clearly brought up to it is relevant here. These kinds of things typically fuel burnout. The author doesn’t go into enough detail for an outside observer to make a judgment call one way or the other, but if you trust the author’s account of reality then it seems reasonable to blame the employer for, at the least, negligently fueling these problems through gross mismanagement.

                              Arguably off-topic, but I think it might squeak by on the grounds that it briefly ties the psychological harm to the quality of a technical standard resulting from the mismanaged business process.

                              1. 3

                                a culture that emphasizes personal accountability for project success; this not only increases stress and reduces free time that could otherwise be used to relieve work-related stress, it teaches employees to blame themselves for what could just as easily be systemic failures.

                                This is such a common thing. An executive or manager punts on actually organizing the work, whether from incompetence or laziness, and then tries to make the individuals in the system responsible for the failures that occur. It’s hardly new. Deming describes basically this in ‘The New Economics’ (look up the ‘red bead game’).

                                More cynically, is WebAssembly actually in Google’s interests? It doesn’t add revenue to Google Cloud. It’s going to make their data collection harder (provide Google analytics libraries for how many languages?). It was clearly a thing that was gaining momentum, so if they were to damage it, they would need to make sure they had a seat at the table and then make sure that the seat was used as ineffectually and disruptively as possible.

                                1. 9

                                  More cynically, is WebAssembly actually in Google’s interests?

                                  I think historically the answer would have been yes. Google has at various points been somewhat hamstrung by shipping projects with slow front end JS in them and responded by trying to make browsers themselves faster. e.g. creating V8 and financially contributing to Mozilla.

                                  I couldn’t say if Google now has any incentive to not make JS go fast. I’m not aware of one. I suspect still the opposite. I think they’re also pushing mobile web apps as a way to inconvenience Apple; I think Google currently want people to write portable software using web tech instead of being tempted to write native apps for iOS only.

                                  That said, what’s good for the company is not the principal factor motivating policy decisions. What’s good for specific senior managers inside Google is. Otherwise you wouldn’t see all these damn self-combusting, promo-cycle-driven chat apps from Google. A company is not a monolith.

                                  ‘The New Economics’

                                  I have this book and will have to re-read at least this bit tomorrow. I have slightly mixed feelings about it, mostly about the writing style.

                                  1. 1

                                    Making JS fast is one thing. Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

                                    Your point about the senior managers’ interests driving what’s done is on point, though. Google and Facebook especially are weird because ads fund the company, and the rest is all some kind of loss leader floating around divorced from revenue.

                                    The only thing I’ll comment about Deming is that the chapter on intrinsic vs extrinsic motivation should be ignored, as that’s entirely an artifact despite its popularity. The rest of the book has held up pretty well.

                                    1. 10

                                      Making JS fast is one thing. Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

                                      Google doesn’t need to maintain their analytics libraries in many other languages, only to expose APIs callable from those languages. All WebAssembly languages can call / be called by JavaScript.

                                      More generally, Google has been the biggest proponent of web apps instead of web services. Tim Berners-Lee’s vision for the web was that you’d have services that provided data with rich semantic markup. These could be rendered as web pages but could equally plug into other clients. The problem with this approach is that a client that can parse the structure of the data can choose to render it in a way that simply ignores adverts. If all of your ads are in an <advert provider="google"> block then an ad blocker is a trivial browser extension (see the sketch after the list). Google’s web app push has been a massive effort to convince everyone to obfuscate the contents of their web pages. This has two key advantages for Google:

                                      • Writing an ad blocker is hard if ads and content are both generated from a Turing-complete language using the same output mechanisms.
                                      • Parsing such pages for indexing requires more resources (you can’t just parse the semantic markup, you must run the interpreter / JIT in your crawler, which requires orders of magnitude more hardware than simply parsing some semantic markup). This significantly increases the barrier to entry for new search engines, protecting Google’s core user-data-harvesting tool.
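
                                      To make the “trivial browser extension” point concrete, here is a sketch of my own, using the real scraper crate and the hypothetical <advert> element from above: once ads carry semantic markup, finding them is one CSS selector.

                                      ```rust
                                      // Sketch: if ads were semantically marked up, "blocking" them is
                                      // one CSS selector. Uses the `scraper` crate; <advert> is the
                                      // hypothetical element from the comment above, not a real tag.
                                      use scraper::{Html, Selector};

                                      fn main() {
                                          let page = r#"<html><body>
                                              <p>Actual content.</p>
                                              <advert provider="google">Buy things!</advert>
                                          </body></html>"#;

                                          let doc = Html::parse_document(page);
                                          let ads = Selector::parse(r#"advert[provider="google"]"#).unwrap();
                                          // A real blocker would strip these nodes; counting them makes
                                          // the point that finding them is trivial.
                                          println!("ads found: {}", doc.select(&ads).count()); // 1
                                      }
                                      ```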

                                      WebAssembly fits very well into Google’s vision for the web.

                                      1. 2

                                        I used to work for a price-comparison site, back when those were actual startups. We had one legacy price information page that was a Java applet (remember those?). Supposedly the founders were worried about screen scrapers so wanted the entire site rendered with applets to deter them.

                                        1. 1

                                          This makes more sense than my initial thoughts. Thanks.

                                        2. 2

                                          Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

                                          This is something I should have stated explicitly but didn’t think to: I don’t think wasm is actually going to be the future of non-JS languages in the browser. I think that, for the next couple of decades at least, wasm is going to be used for compute kernels (written in other langs like C++ and Rust) that get called from JS.

                                          I’m taking a bet here that targeting wasm from langs with substantial runtimes will remain unattractive indefinitely due to download weight and parsing time.

                                          about Deming

                                          I honestly think many of the points in that book are great but hoo boy the writing style.

                                  2. 1

                                    That is exactly what I thought while reading this. I understand that to a lot of people, WebAssembly is very important, and they have a lot of emotion invested in its success. But to the author’s employer, it might not be as important, as it might not directly generate revenue. The author forgets that to the vast, vast majority of people on this earth, having the opportunity to work on such a technology at a company like Google is an unparalleled privilege. Most people on this earth do not have the opportunity to quit their job just because a project is difficult, or because meetings run long or it is hard to find consensus. Managing projects well is incredibly hard. But I am sure that the author was not living on minimum wage, so there surely was compensation for the efforts.

                                    It is sad to hear that the author has medical issues, and I hope those get sorted out. And stressful jobs do exacerbate those kinds of issues. But that is not a good reason for finger-pointing. Maybe the position just was not right for the author; maybe there are more exciting projects waiting in the future. I certainly hope so. But it is important not to blame one’s issues on others; that is not a good attitude in life.

                                    1. 25

                                      Using the excuse that because there exist others less fortunate, it’s not worth fighting to make something better is also not a good attitude in life.

                                      Reading between the lines, it feels to me like there was a lot that the author left unsaid, and that’s fine. It takes courage to share a personal story about mental wellbeing, and an itemized list of all the wrongs that took place is not necessary to get the point the author was trying to make across.

                                      My point is that I’d be cautious about making assumptions about the author’s experiences as they didn’t exactly give a lot of detail here.

                                      1. 3

                                        Using the excuse that because there exist others less fortunate, it’s not worth fighting to make something better is also not a good attitude in life.

                                        This is true. It is worth fighting to make things better.

                                        Reading between the lines, it feels to me like there was a lot that the author left unsaid, and that’s fine. It takes courage to share a personal story about mental wellbeing, and an itemized list of all the wrongs that took place is not necessary to get the point the author was trying to make across.

                                        There are a lot of things that go into mental wellbeing. Some things you can control, some things are genetic. I don’t know what the author left out, but I have not yet seen a study showing that stressful office jobs give people brain damage. There might be things the author has not explained, but at the same time that is a very extreme claim. In fact, if that were true, I am sure that the author should receive a lot in compensation.

                                        My point is that I’d be cautious about making assumptions about the author’s experiences as they didn’t exactly give a lot of detail here.

                                        I agree with you, but I also think that if someone makes a very bold claim about an employer, especially about personal injury, then these claims should be substantiated. There is a very big difference between “working there was hard, I quit” and “the employer acted recklessly and caused me personal injury”. And I don’t really know which one the author is saying, because the description could be interpreted as this just being a difficult project to see through.

                                        1. 8

                                          In fact, if that were true, I am sure that the author should receive a lot in compensation.

                                          Think about it for a few seconds and you can see how this could easily not happen. The OP itself says that they don’t have documented evidence from the time because of all the issues they were going through. And it’s easy to see why: if your mental health is damaged and your brain is not working right, would you be mindful enough to take detailed notes of every incident and keep a trail of evidence for later use in compensation claims? Or are you saying that compensation would be given out, no questions asked?

                                          1. 3

                                            All I’m saying is, there is a very large difference between saying “this job was very stressful, I had trouble sleeping and it negatively affected my concentration and memory” and saying “this job gave me brain damage”. Brain damage is relatively well-defined:

                                            The basic definition of brain damage is an injury to the brain caused by various conditions such as head trauma, inadequate oxygen supply, infections, or intracranial hemorrhage. This damage may be associated with a behavioral or functional abnormality.

                                            Additionally, there are ways to test for this; a neurologist can make that determination. I’m not a neurologist. But it would be the first time I heard that brain damage can be caused by psychosomatic issues. I believe that the author may have used this term in error. That’s why I said what I said — if you, or anyone, has brain damage as a result of your occupation, that is definitely grounds for compensation. And not a small compensation either, as brain damage is no joke. This is a very different category from mere psychological stress from working for an apparently mismanaged project.

                                            1. 5

                                              Via https://www.webmd.com/brain/brain-damage-symptoms-causes-treatments

                                              Brain damage is an injury that causes the destruction or deterioration of brain cells.

                                              Anxiety, stress, lack of sleep, and other factors can potentially do that. So I don’t see any incorrect use of the phrase ‘brain damage’ here. And anyway, you missed the point. Saying ‘This patient has brain damage’ is different from saying ‘Working in the WebAssembly team at Google caused this patient’s brain damage’. When you talk about causation and claims of damage and compensation, people tend to demand documentary evidence.

                                              I agree brain damage is no joke, but if you look at society it’s very common for certain types of relatively-invisible mental illnesses to be downplayed and treated very lightly, almost as a joke. Especially by people and corporations who would suddenly have to answer for causing these injuries.

                                              1. 4

                                                Anxiety, stress, lack of sleep and other factors cannot, ever, possibly, cause brain damage. I think you have not completely read that article. It states – as does the definition that I linked:

                                                All traumatic brain injuries are head injuries. But head injury is not necessarily brain injury. There are two types of brain injury: traumatic brain injury and acquired brain injury. Both disrupt the brain’s normal functioning.

                                                • Traumatic Brain Injury (TBI) is caused by an external force – such as a blow to the head – that causes the brain to move inside the skull or damages the skull. This in turn damages the brain.
                                                • Acquired Brain Injury (ABI) occurs at the cellular level. It is most often associated with pressure on the brain. This could come from a tumor. Or it could result from neurological illness, as in the case of a stroke.

                                                There is no kind of brain injury that is caused by lack of sleep or stress. That is not to say that these things are not also damaging to one’s body and well-being.

                                                Mental illnesses can be very devastating and stressful on the body. But you will not get a brain injury from a mental illness, unless it makes you physically impact your brain (causing traumatic brain injury), ingest something toxic, or have a stroke. It is important to be very careful with language and not confuse terms. The term “brain damage” is colloquially often used to describe things that are most definitely not brain damage, like “reading this gave me brain damage”. Again, the author has possibly misused the term “brain damage”, or there is some physical trauma that happened that the author has not mentioned in the article.

                                                I hope you understand what I am trying to say here!

                                                1. 9

                                                  Anxiety and stress raise adrenaline levels, which in turn cause short- and long-term changes in brain chemistry. It sounds like you’ve never been burnt out; don’t judge others so harshly.

                                                  1. 3

                                                    Anxiety and stress are definitely not healthy for a brain. They accelerate aging processes, which is damaging. But brain damage in a medical context refers to large-scale cell death caused by genetics, trauma, stroke or tumors.

                                                  2. 8

                                                    There seems to be a weird definitional slide here from “brain damage” to “traumatic brain injury.” I think we are all agreed that her job did not give her traumatic brain injury, and this is not claimed. But your claim that stress and sleep deprivation cannot cause (acquired) brain injury is wrong. In fact, you will find counterexamples by just googling “sleep deprivation brain damage”.

                                                    “Mental illnesses can be … stressful on the body.” The brain is part of the body!

                                                    1. 1

                                                      I think you – and most of the other people that have responded to my comment – have not quite understood what I’m saying. The argument here is about the terms being used.

                                                      Brain Damage

                                                      Brain damage, as defined here, is damage caused to the brain by trauma, tumors, genetics or oxygen loss, such as during a stroke. This can lead to large chunks of your brain dying off. This means you can lose entire brain regions and potentially permanently lose some abilities (facial recognition, speech, etc.).

                                                      Sleep Deprivation

                                                      See Fundamental Neuroscience, page 961:

                                                      The crucial role of sleep is illustrated by studies showing that prolonged sleep deprivation results in the disruption of metabolic processes and eventually death.

                                                      When you are forcibly sleep deprived for a long time, such as when you are being tortured, your body can lose the ability to use nutrients and finally you can die. You need to not sleep at all for weeks for this to happen, generally this is not something that happens to people voluntarily, especially not in western countries.

                                                      Stress

                                                      The cells in your brain only have a finite lifespan. At some point, they die and new ones take their place (apoptosis). Chronic stress and sleep deprivation can speed up this process, accelerating aging.

                                                      Crucially, this is not the same as an entire chunk of your brain dying off because of a stroke. This is a very different process. It is not localized, and it doesn’t cause massive cell death. It is more of a slow, gradual process.

                                                      Summary

                                                      “Mental illnesses can be … stressful on the body.” The brain is part of the body!

                                                      Yes, for sure. It is just that the term “brain damage” is usually used for a very specific kind of pattern, and not for the kind of chronic, low-level damage done by stress and such. A doctor will not diagnose you with brain damage after you’ve had a stressful interaction with your coworker. You will be diagnosed with brain damage in the ICU after someone dropped a hammer on your head. Do you get what I’m trying to say?

                                                      1. 4

                                                        I get what you are trying to say, I think you are simply mistaken. If your job impairs your cognitive abilities, then it has given you brain damage. Your brain, is damaged. You have been damaged in your brain. The cells and structures in your brain have taken damage. You keep trying to construct this exhaustive list of “things that are brain damage”, and then (in another comment) saying that this is about them not feeling appreciated and valued or sort of vaguely feeling bad, when what they are saying is that working at this job impaired their ability to form thoughts. That is a brain damage thing! The brain is an organ for forming thoughts. If the brain can’t thoughts so good no more, then it has been damaged.

                                                        The big picture here is that a stressful job damaged this person’s health. Specifically, their brain’s.

                                                        1. 3

                                                          I understand what you are trying to say, but I think you are simply mistaken. We (as a society) have definitions for the terms we use. See https://en.wikipedia.org/wiki/Brain_damage:

                                                          Neurotrauma, brain damage or brain injury (BI) is the destruction or degeneration of brain cells. Brain injuries occur due to a wide range of internal and external factors. In general, brain damage refers to significant, undiscriminating trauma-induced damage.

                                                          This is not “significant, undiscriminating trauma-induced damage” (for context, trauma here refers to physical trauma, such as an impact to the head, not psychological trauma). What the author describes does not line up with any of the Causes of Brain Damage. It is simply not the right term.

                                                          Yes, the author has a brain, and there is self-reported “damage” to it. But just because someone is a man and feels like he polices the neighborhood does not make him a “policeman”. Just because I feel like my brain doesn’t work right after a traumatic job experience does not mean I have brain damage™.

                                                          1. 1

                                                            The Wikipedia header is kind of odd. The next sentence after “in general, brain damage is trauma-induced” lists non-trauma-induced categories of brain damage. So I don’t know how strong that “in general” is meant to be. At any rate, “in general” is not at odds with the use of the term for non-trauma-induced stress/sleep-deprivation damage.

                                                            At any rate, if you click through to Acquired Brain Injury, it says “These impairments result from either traumatic brain injury (e.g. …) or nontraumatic injury … (e.g. listing a bunch of things that are not traumatic.)”

                                                            Anyway, the Causes of Brain Damage list is clearly not written to be exhaustive. “any number of conditions, including” etc.

                                                    2. 2

                                                      There is some evidence that lack of sleep may kill brain cells: https://www.bbc.com/news/health-26630647

                                                      It’s also possible to suffer from mini-strokes due to the factors discussed above.

                                                      In any case, I feel like you’re missing the forest for the trees. Sure, it’s important to be correct with wording. But is that more important than the bigger picture here, that a stressful job damaged this person’s health?

                                                      1. 2

                                                        the bigger picture here, that a stressful job damaged this person’s health

                                                        Yes, that is true, and it is a shame. I really wish that the process around WASM had been less hostile, and that this person had not been impacted negatively, even if stressful and hard projects are an unfortunate reality for many people.

                                                        I feel like you’re missing the forest for the trees.

                                                        I think that you might be missing the forest for the trees – I’m not saying that this person was not negatively impacted, I am merely stating that it is (probably, unless there is evidence otherwise) wrong to characterize this impact as “brain damage”, because from a medical standpoint, that term has a narrower definition that damage due to stress does not fulfill.

                                              2. 4

                                                Hello, you might enjoy this study.

                                                https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4561403/

                                                I looked through a lot of studies to try and find a review that was both broad and to the point.

                                                Now, you are definitely mixing a lot of terms here… but I hope that if you read the research, you can be convinced, at the very least, that stress hurts brains (and I hope that reading the article and getting caught in this comment storm doesn’t hurt yours).

                                                1. 2

                                                  Sleep Deprivation and Oxidative Stress in Animal Models: A Systematic Review tells us that sleep deprivation can be shown to increase oxidative stress:

                                                  Current experimental evidence suggests that sleep deprivation promotes oxidative stress. Furthermore, most of this experimental evidence was obtained from different animal species, mainly rats and mice, using diverse sleep deprivation methods.

                                                  However, https://pubmed.ncbi.nlm.nih.gov/14998234/ disagrees with this. Furthermore, it is known that oxidative stress promotes apoptosis; see Oxidative stress and apoptosis:

                                                  Recent studies have demonstrated that reactive oxygen species (ROS) and the resulting oxidative stress play a pivotal role in apoptosis. Antioxidants and thiol reductants, such as N-acetylcysteine, and overexpression of manganese superoxide dismutase (MnSOD) can block or delay apoptosis.

                                                  The article that you linked, Stress effects on the hippocampus: a critical review, mentions that stress has an impact on the development of the brain and on its workings:

                                                  Uncontrollable stress has been recognized to influence the hippocampus at various levels of analysis. Behaviorally, human and animal studies have found that stress generally impairs various hippocampal-dependent memory tasks. Neurally, animal studies have revealed that stress alters ensuing synaptic plasticity and firing properties of hippocampal neurons. Structurally, human and animal studies have shown that stress changes neuronal morphology, suppresses neuronal proliferation, and reduces hippocampal volume

                                                  I do not disagree with this. I think that anyone would agree that stress is bad for the brain, possibly by increasing apoptosis (accelerating ageing) and decreasing the availability of nutrients. My only argument is that the term brain damage is quite narrowly defined (for example here) as (large-scale) damage to the brain caused by genetics, trauma, oxygen starvation or a tumor, and it can fall into one of two categories: traumatic brain injuries and acquired brain injuries. If you search for “brain damage” on PubMed, you will find the term being used in exactly that narrow sense.

                                                  You will not find studies or medical diagnoses of “brain damage due to stress”. I hope that you can agree that using the term brain damage in a context such as the author’s, without evidence of traumatic injury or a stroke, is wrong. This does not take away the fact that the author has allegedly experienced a lot of stress at their previous employer, one of the largest and high-paying tech companies, and that this experience has caused the author personal issues.

                                                  On an unrelated note: what is extremely fascinating to me is that some chemicals such as methamphetamine (at low concentrations) or minocycline are neuroprotective, able to lessen brain damage from, for example, stroke. But obviously, at larger concentrations the opposite is the case.

                                                  1. 1

                                                    How about this one then? https://www.sciencedirect.com/science/article/abs/pii/S0197458003000484

                                                    We can keep going, it is not difficult to find these… You’re splitting a hair which should not be split.

                                                    What’s so wrong about saying a bad work environment can cause brain damage?

                                                    1. 1

                                                      You’re splitting a hair which should not be split.

                                                      There is nothing more fun than a civil debate. I would argue that any hair deserves to be split. Worst case, you learn something new, or form a new opinion.

                                                      What’s so wrong about saying a bad work environment can cause brain damage?

                                                      Nothing is wrong with that, if the work environment involves heavy things, poisonous things, or the like. This is why OSHA compliance is so essential in protecting people’s livelihoods. I just firmly believe, and I think that the literature agrees with me on this, that “brain damage” as a medical definition refers to large-scale cell death due to trauma or stroke, and not chronic low-level damage caused by stress. The language we choose to use is extremely important, it is the only facility we have to exchange information. Language is not useful if it is imprecise or even wrong.

                                                      How about this one then?

                                                      Let’s take a look at what we’ve got here. I’m only looking at the abstract, for now.

                                                      Stress is a risk factor for a variety of illnesses, involving the same hormones that ensure survival during a period of stress. Although there is a considerable ambiguity in the definition of stress, a useful operational definition is: “anything that induces increased secretion of glucocorticoids”.

                                                      Right, stress causes elevated levels of glucocorticoids, such as cortisol.

                                                      The brain is a major target for glucocorticoids. Whereas the precise mechanism of glucocorticoid-induced brain damage is not yet understood, treatment strategies aimed at regulating abnormal levels of glucocorticoids, are worth examining.

                                                      Glucocorticoids are useful in regulating processes in the body, but they can also do damage. I had never heard of the term glucocorticoid-induced brain damage, and searching for it in the literature only yields this exact article, so I considered this a dead end. However, in doing some more research, I did find two articles that somewhat support your hypothesis:

                                                      In Effects of brain activity, morning salivary cortisol, and emotion regulation on cognitive impairment in elderly people, it is mentioned that high cortisol levels are associated with hippocampus damage, supporting your hypothesis, but it only refers to elderly patients with Mild Cognitive Impairment (MCI):

                                                      Cognitive impairment is a normal process of aging. The most common type of cognitive impairment among the elderly population is mild cognitive impairment (MCI), which is the intermediate stage between normal brain function and full dementia.[1] MCI and dementia are related to the hippocampus region of the brain and have been associated with elevated cortisol levels.[2]

                                                      Cortisol regulates metabolism, blood glucose levels, immune responses, anti-inflammatory actions, blood pressure, and emotion regulation. Cortisol is a glucocorticoid hormone that is synthesized and secreted by the cortex of adrenal glands. The hypothalamus releases a corticotrophin-releasing hormone and arginine vasopressin into hypothalamic-pituitary portal capillaries, which stimulates adrenocorticotropic hormone secretion, thus regulating the production of cortisol. Basal cortisol elevation causes damage to the hippocampus and impairs hippocampus-dependent learning and memory. Chronic high cortisol causes functional atrophy of the hypothalamic-pituitary-adrenal axis (HPA), the hippocampus, the amygdala, and the frontal lobe in the brain.

                                                      Additionally, Effects of stress hormones on the brain and cognition: Evidence from normal to pathological aging mentions that chronic stress is a contributor to memory performance decline.

                                                      We might be able to find a few mentions of brain damage outside of the typical context (as caused by traumatic injury, stroke, etc.) in the literature, but at least we can agree that the term brain damage is quite unusual in the context of stress, can we not? Out of the 188,764 articles on PubMed that mention “brain damage”, only 18,981 also mention “stress”, and of those almost all are referring to “oxidative stress” (such as that experienced by cells during a stroke). I have yet to find a single study or article that directly states brain damage as being a result of chronic stress, in the same way that there are hundreds of thousands of studies showing brain damage from traumatic injuries to the brain.

                                                      1. 2

                                                        Well, if anybody asks me I will tell them that too much stress at work causes brain damage… and now I can even point to some exact papers!

                                                        I agree that it’s a little hyperbolic, but it’s not that hyperbolic. If we were talking about drug use everyone would kind of nod and say, ‘yeah, brain damage’ even if the effects were tertiary and the drug use was infrequent.

                                                        But stress at work! Ohohoho, that’s just life my friend! Which really does not need to be the way of the world… OP was right to get out, especially once they started exhibiting symptoms suspiciously like the ones cited in that last paper (you know, the sorts of symptoms you get when your brain is suffering from some damage).

                                                        1. 2

                                                          If someone tells me that they got brain damage from stress at work, I will laugh, tell them to read the Wikipedia article, and then move on. But that is okay, we can agree to disagree. I understand that there are multiple possible definitions for the term brain damage.

                                                          If we were talking about drug use everyone would kind of nod and say, ‘yeah, brain damage’ even if the effects were tertiary and the drug use was infrequent.

                                                          In my defense, people often use terms incorrectly.

                                                          OP was right to get out

                                                          I agree. Brain damage or not, Google employee or not, if you are suffering at work you should not stay there. We all have very basic needs, and one of them is feeling valued and being happy at work.

                                                          Anyways, I hope you have a good weekend!

                                                2. 6

                                                  I have not yet seen a study showing that stressful office jobs give people brain damage.

                                                  This is a bizarre and somewhat awful thread. Please could you not post things like this in future?

                                                  1. 8

                                                    I disagree. The post seemed polite, constructive, and led to (IMO) a good conversation (including some corrections to the claims in the post).

                                                    1. 4

                                                      Parent left a clear method for you to disprove them by providing a counter-example.

                                                      If you can point to some peer-reviewed research on the topic, by all means do so.

                                                      1. 5

                                                        Yea but this is an obnoxious, disrespectful, and disingenuous way to conduct an argument. I haven’t read any studies proving anything about this subject one way or another. Because I am not a mental health researcher. So it’s easy for me to make that claim, and present the claim as something that matters, when really it’s a pointless claim that truly does not matter at all.

                                                        Arguing from an anecdotal position based on your own experience, yet demanding the opposing side provide peer-reviewed studies to contradict your anecdotal experience, places a disproportionate burden on them to conduct their argument. And whether intentional or not, it strongly implies that you have little to no respect for their experiences or judgement. That you will only care about their words if someone else says them.

                                            1. 5

                                              In general, we recommend regularly auditing your dependencies, and only depending on crates whose author you trust.

                                              Or… use something like cap-std to reduce ambient authority like access to the network.

                                              1. 8

                                                My understanding is that linguistic-level sandboxing is not really possible. Capability abstraction doesn’t improve security unless capabilities are actually enforced at runtime, by the runtime.

                                                To give two examples:

                                                • cap-std doesn’t help you ensure that deps are safe. Nothing prevents a dep from, e.g., using inline assembly to make a write syscall directly (see the sketch after this list).
                                                • deno doesn’t allow disk or network access by default. If you don’t pass --allow-net, no dependency will be able to touch the network. At the same time, there are no linguistic abstractions to express capabilities. (https://deno.land/manual/getting_started/permissions)
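
                                                To make the first point concrete, here is a sketch of my own (assuming x86-64 Linux and Rust’s stable asm! macro) of a dependency issuing a write syscall with no help from std or cap-std:

                                                ```rust
                                                use std::arch::asm;

                                                // x86-64 Linux only: invoke write(2) directly. No std I/O,
                                                // no cap-std, nothing a library-level capability can gate.
                                                fn sneaky_write(msg: &str) {
                                                    unsafe {
                                                        asm!(
                                                            "syscall",
                                                            inout("rax") 1usize => _, // syscall 1 = write; rax gets the return value
                                                            in("rdi") 1usize,         // fd 1 = stdout
                                                            in("rsi") msg.as_ptr(),
                                                            in("rdx") msg.len(),
                                                            out("rcx") _,             // clobbered by `syscall`
                                                            out("r11") _,
                                                        );
                                                    }
                                                }

                                                fn main() {
                                                    sneaky_write("hello from a raw syscall\n");
                                                }
                                                ```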

                                                Is there a canonical blog post explaining that you can’t generally add security to an “allow-all” runtime by layering abstraction on top (as folks would most likely find a hole somewhere), and that instead security should start with adding unforgeable capabilities at the runtime level? It seems to be a very common misconception; cap-std is suggested as a fix in many similar threads.

                                                1. 2

                                                  Sandboxing is certainly possible, with some caveats.

                                                  You don’t need any runtime enforcement: unforgeable capabilities (in the sense of object capabilities) can be created with, for example, a private constructor. With a (package/module) private constructor, only your own package can hand out capabilities, and no one else is allowed to create them.
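
                                                  A minimal sketch of that pattern in Rust (names and API are mine, for illustration): the capability is a struct whose only field is private, so code outside the module cannot conjure one.

                                                  ```rust
                                                  mod net {
                                                      // The private `()` field makes NetCap unforgeable: no code
                                                      // outside this module can write `NetCap(())`.
                                                      pub struct NetCap(());

                                                      // In a real design only the program's trust root would call this.
                                                      pub fn mint_at_startup() -> NetCap {
                                                          NetCap(())
                                                      }

                                                      pub fn connect(_cap: &NetCap, addr: &str) {
                                                          // Real I/O would happen here; the signature is the point.
                                                          println!("connecting to {addr}");
                                                      }
                                                  }

                                                  // A dependency can only reach the network if handed the token.
                                                  fn dependency_code(cap: &net::NetCap) {
                                                      net::connect(cap, "example.com:443");
                                                  }

                                                  fn main() {
                                                      let cap = net::mint_at_startup(); // granted once, at the top
                                                      dependency_code(&cap);
                                                      // `net::NetCap(())` here would not compile: the field is private.
                                                  }
                                                  ```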

                                                  cap-std doesn’t help you ensure that deps are safe.

That is true, in the sense that no dependency is forced to use cap-std itself. But if we assumed for a second that cap-std were the Rust standard library, then all dependencies would need to go through it to do anything useful.

                                                  Nothing prevents a dep from, eg, using inline assembly to make a write syscall directly.

This can also be prevented by making inline assembly impossible to use without possessing a capability. You can do the same for FFI: all FFI function invocations have to take an FFI capability. As for Rust-specific unsafe blocks, you can either do the same (capabilities) or add compiler-level checks: no dependency of mine can use unsafe blocks unless I grant it explicit permission (through a compiler flag, for example).

                                                  Is there a canonical blog post explaining that you can’t generally add security to “allow-all” runtime by layering abstraction on top […] and that instead security should start with adding unforgeable capabilities at the runtime level?

I would go the other way, and recommend Capability Myths Demolished, which shows that object capabilities are enough to enforce proper security, and that they can support revocation.

                                                  1. 4

                                                    With a (package/module) private constructor, only your own package can hand out capabilities, and no one else is allowed to create them.

This doesn’t generally work out in practice: linguistic abstractions of privacy are usually not sufficiently enforced by the runtime. In Java/JavaScript you can often use reflection to get at the stuff you are not supposed to get. In Rust, you can just cast a number to a function pointer and call that.
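For example, something like this compiles today, and no library-level capability discipline can see it (running it would almost certainly crash):

```rust
fn main() {
    // No capability was ever granted here; we conjure a function
    // pointer out of thin air. (Calling it is, of course, UB.)
    let addr: usize = 0xdead_beef;
    let f: fn() = unsafe { std::mem::transmute(addr) };
    f();
}
```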

I would sum it up as follows: languages protect their abstractions, and good languages make it impossible to accidentally break them. However, practical languages include escape hatches for deliberately circumventing abstractions. In the presence of such escapes, we cannot rely on linguistic abstractions for security. The Java story is a relevant case study: https://orbilu.uni.lu/bitstream/10993/38190/1/paper.txt.

Now, if you design a language with water-tight abstractions, this can work, but I’d probably call the result a “runtime” rather than a language. WASM, for example, can implement capabilities properly, and Rust could run on WASM, using cap-std as an API to that runtime. The security properties won’t be in cap-std; they’ll be in WASM.

This can also be prevented by making inline assembly impossible to use without possessing a capability

I don’t think this general approach would work for Rust. In Rust, unsafe is the defining feature of the language. Moving along these lines would make Rust more like Java in terms of expressiveness, and probably wouldn’t actually improve security (i.e., the same class of exploits from the linked paper would work).

                                                    I would go the other way, and recommend Capability Myths Demolished

                                                    Thanks, going to read that, will report back if I shift my opinions!

EDIT: it seems that the paper is entirely orthogonal to what I am trying to say. The paper argues that the capability model is better than the ACL model. I agree with that! What I am saying is that you can’t implement the model at the language level. That is, I predict that even if Java had used capability objects instead of the security manager, it would have been exploitable in more or less the same way, as the exploits breaking ACLs would also break capabilities.

                                                    1. 3

                                                      Go used to have a model where you could prohibit the use of package unsafe and syscall to try to get security. App Engine, for example, used this. But my understanding is that they ended up abandoning it as unworkable.

                                                      1. 2

                                                        Your points are sharp. Note that there was an attempt to make Java capability-safe (Joe-E), and it ended up becoming E because taming Java was too much work. Note also that there was a similar attempt for OCaml (Emily), and it was better at retaining the original language’s behavior, because OCaml is closer than Java to capability-safety.

                                                        ECMAScript is almost capability-safe. There are some useful tools, and there have been attempts to define safe subsets like Secure ECMAScript. But you’re right that, just like with memory-safety, a language that is almost capability-safe is not capability-safe.

                                                        While you’re free to think of languages like E as runtimes, I would think of E as a language and individual implementations like E-on-Java or E-on-CL as runtimes.

                                                  2. 2

Why not both?

                                                  1. 2

                                                    I enjoy reading posts about jq so thought others might too, hence this submission. It sort of also relates to the recent submission about jq and qz which also inspired a post. Thanks.

                                                    1. 3

                                                      I love seeing folks write in jq, whether it’s entire modules or short one-line programs. It’s one of the few languages which makes me feel hope for our profession, and it’s becoming a standard tool which distros include by default. Thanks for sharing your experience.

                                                      1. 2

This is a bit meta, but I was wondering: would you folks have any objection to me now sharing the main post here on Lobsters, the one for which this post was a sort of “prequel”? I’ve just finished it, and it goes into more detail and includes further explorations of jq’s functions…

                                                    1. 3

                                                      I know it’s bad form to nitpick the introduction of an otherwise-great article which is itself an explanation of another great article, but something about the opening categorization of languages bothered me. To reword:

                                                      1. Data races provoke undefined behavior
                                                      2. Data races provoke object tearing, dirty reads, or other bugs common to data races
                                                      3. Data races are not possible

                                                      I’m not sure whether this is exhaustive. There should be at least one more family: languages where data races are possible, but do not result in bugs. In E, the only possible data race is when a promise resolves multiple times within a single turn, and E defines this case to cause an exception for whichever resolver loses. In Haskell, with the STM monad, data races become transactions and losing writers are able to retry until they succeed/win. In both languages, data races turn from a bug into a feature, because multiple resolvers/writers can cooperate to incrementally advance a distributed state. (Sure, the state is only distributed between threads, but sharing state between threads is hard!)

                                                      1. 2

                                                        I’m not familiar with the E language so I’m basing the following entirely on the description that you’ve provided above. If a promise resolves multiple times and this situation is reliably detected and turns into an exception then it sounds to me like there are no data races in E. If there were data races then I would expect undesirable things to happen. One result could be “silently” lost, or the result could change out from under you once you’ve observed it, etc.

                                                        Is this a semantic problem where we’re not using the same definition of “data race”? I’m using the definition from https://docs.oracle.com/cd/E19205-01/820-0619/geojs/index.html

                                                        A data race occurs when:

                                                        • two or more threads in a single process access the same memory location concurrently, and
                                                        • at least one of the accesses is for writing, and
                                                        • the threads are not using any exclusive locks to control their accesses to that memory.

                                                        If E can reliably detect the situation that you described then I would think that under the covers it must be using a lock or atomic memory operations in the implementation. If that’s correct then there would be no data races.
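For example, a resolve-once guard is naturally a compare-and-swap. A sketch (hypothetical names, not E’s actual implementation) of how the “losing resolver gets an error” behavior could be detected reliably:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct Resolver {
    resolved: AtomicBool,
}

impl Resolver {
    // Exactly one caller can win; every later attempt is reliably detected.
    fn try_resolve(&self) -> Result<(), &'static str> {
        self.resolved
            .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
            .map(|_| ())
            .map_err(|_| "promise already resolved")
    }
}

fn main() {
    let r = Resolver { resolved: AtomicBool::new(false) };
    assert!(r.try_resolve().is_ok());  // the first resolver wins
    assert!(r.try_resolve().is_err()); // the loser sees an error, not a race
}
```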

                                                        Are you perhaps thinking of “race condition” instead of “data race”?

                                                        1. 1

                                                          E and Haskell are implemented with green threads. Perhaps this makes them unfair examples, since we would want to see them implemented with OS threads before believing my claims. Indeed, I think it is fair to say that data races aren’t possible in E. I gather that STM still works with OS threads, though it uses atomic compare-and-swap operations.

                                                          Thanks for both your perspective and the link.

                                                      1. 5

                                                        TL;DR—use the tools I use to manage YAML files.

                                                        1. 3

                                                          Yup, and also, strange that Helm isn’t mentioned, given that’s pretty much Industry Best Practice for handling this exact problem.

                                                          1. 2

                                                            If you’re using something like the Helm–k3s integration, then Helm can be relatively lightweight; otherwise, it’s a hassle. The mandatory boilerplate is heavy and confusing. Just like with Docker, a registry is required. Composition of charts is messy.

                                                            1. 1

                                                              If you’re willing to use them, git submodules can replace a registry.

                                                          2. 2

                                                            Kustomize might have a silly name, but it is built into the standard Kubernetes tools at this point, and even has documentation. I read this article more as a recommendation to not write a brand-new tool just to replace Kustomize.

                                                            1. 1

Yeah, this does look opinionated, but it’s better than a tool we built at my last place, which conflated business logic, templates, and data, all in one giant ERB file. You’d need to know Ruby, ERB, k8s YAML/theory, Terraform, AWS, and whatever language your service was written in. Plus, if you had to add a section, you’d end up with a bunch of duplication, logic, loops, and shell-outs, all in the ERB.

                                                            1. 8

I kind of feel that the author, intent on proving a point, himself misses the entire point of Hare as a language.

                                                              1. -2

                                                                Well, the author is being polite. The point of Hare is to aggrandize its inventor.

                                                                1. 26

                                                                  I have mercilessly flagged rants and sourcehut spam, as well as gotten into slapfights with Drew here. Believe me when I say, without any love or affection for the fellow, that I’m pretty sure your take here is wrong.

                                                                  1. 16

Oof. You’ve now convinced a bunch of onlookers never to share their projects here. Aren’t we all here to learn new things via sharing?

                                                                    1. 13

                                                                      The point of Hare is to aggrandize its inventor.

I’m genuinely curious how you arrived at that conclusion.

                                                                      1. 6

                                                                        Corbin doesn’t like Drew

                                                                        1. 4

                                                                          I hope ad hominem attacks don’t become the norm here.

                                                                      2. 2

Working hard on something for years is the best way to aggrandize yourself. Shitposting, on the other hand…

                                                                    1. 2

                                                                      How does Cinder differ from PyPy? Or more specifically why build Cinder when PyPy exists?

                                                                      1. 2

                                                                        Cinder:

                                                                        • is built on top of CPython, so no C-API compatibility issues
                                                                        • is method-at-a-time instead of tracing

                                                                        It also bundles Static Python, a type-checked and type-optimized subset of Python.

                                                                        There are also some vague internal reasons like “we tried PyPy and it wasn’t promising on our workload”.

                                                                        1. 2

                                                                          That’s interesting, as it implies that Cinder compares favorably to both CPython and PyPy on some representative benchmark. Would you be willing to submit a benchmark upstream to PyPy’s list of benchmarks? They generally treat slowness as a bug and would be interested in understanding.

                                                                          1. 2

                                                                            I don’t know if it’s that simple. It was a multi-person effort at least 5 years ago to port our workload to PyPy and I think all people involved have since moved on to greener pastures.

                                                                            I don’t mean to say that PyPy was necessarily slow; the problems could have been in the forking model, or memory consumption, or taking some time to warm up, or not being able to optimize across the C extension boundary… something in that realm.

                                                                            PyPy may well have been faster, even, but it’s an enormous task to try and internalize the whole runtime and see where we can make improvements :)

                                                                      1. 2

At the boundaries, applications are themselves objects, and the JSON documents you use to communicate with them are immutable messages. If we’re also talking about (micro)services, they are the closest thing to what Alan Kay envisioned as “objects”. (And similarly what exists in Erlang, if I’m not mistaken.)

                                                                        Also, as a point:

                                                                        While an object is data with behaviour, closures are behaviour with data. Such first-class values cross system boundaries with the same ease as objects do: Not very well.

Yes, state being bound up in a function would be a problem, but otherwise a pure stateless function is going to cross just fine, assuming it executes in a limited context that has its dependencies. (Alternatively, you could transform said function into a single function with all its dependencies inlined.)

And a function can include local data, as static information within itself. Of course, none of this is literally an object/closure with running state, but if we wanted to get pedantic, we could totally send an image of a running process or module and spin it up on the other end, like communicating by passing Lisp images. That’s really stretching the imagination… or maybe it’s an amusing thought exercise. At this point we’re approaching a model of the biological world, and maybe Alan Kay, with his biology background, would find this amusing.

                                                                        1. 1

                                                                          Assuming it executes in a limited context that has its dependencies.

If we’re going to assume the remote has the right dependencies, why not just assume that the remote has the stateless function as well, and send only the data that parameterizes it? That’s basically how RPC works today (and has worked for a long time). It doesn’t seem very useful to be able to marshal functions only to remotes that have the dependencies pre-installed. It seems like we shouldn’t assume dependencies are installed, and should either do our normal RPC stuff or come up with a scheme for marshaling the dependencies over the wire as well (perhaps with some caching so we’re not re-marshaling them to remotes that already have them).

Moreover, regardless of whether we’re sending dependencies or not, sending stateless functions isn’t particularly easy in a natively compiled language, because you’re assuming the remote has the same architecture and a compatible kernel (or libc, in many cases). So you either need to enforce that invariant, or you need to keep a platform-agnostic version of the function (e.g., source code or VM bytecode) and ship that, which means some kind of compiler/runtime on the remote.

                                                                          All of this seems to support the “not very well” characterization of marshaling closures and objects.

                                                                          1. 1

Of course the inventors of object-orientation (it wasn’t Alan Kay) would likely take issue with your deduction that microservices, immutable messages, and Erlang best embody their concepts. This is taking the magical “it’s all about messaging” to an absurd extreme.

                                                                            are the closest thing to what Alan Kay envisioned as “objects”

                                                                            This implication that Alan Kay somehow invented objects is incorrect, yet disturbingly becoming “the new truth”. I think such historical revisionism should not go uncommented.

                                                                            1. 1

                                                                              Would you care to set the record straight? The version of the story I know involves Scheme and actors, but also contains immutable messages, Erlang, and microservices (in that order). It’s not just about messages, but about the realization that stateless code can be packed into messages. Note that Wikipedia lists Kay in their history of object-oriented programming.

                                                                              1. 1

                                                                                Rather than repeat the historical record, perhaps a place to start the historical journey is the following quote by Alan Kay…

“I don’t think I invented ‘object-oriented’ but more or less ‘noticed’ what was really powerful about just making everything from complete computers communicating with non-command messages. This was all chronicled in the HOPL II chapter I wrote, ‘The Early History of Smalltalk’.”

1. This raises the question: who did invent it?
2. What exactly did Smalltalk add in terms of object orientation that did not already exist, other than “objects all the way down”?
3. Should object orientation therefore be viewed from Alan Kay’s perspective of “it’s about messaging”, particularly given that his context was effectively limited to the language, platform, and environment that is Smalltalk?
                                                                                1. 1

                                                                                  Wow, so let’s see if I interpreted this chain of replies correctly:

                                                                                  • “Wow, you’re wrong. You’re so wrong, it’s disturbing how wrong you are.”
                                                                                  • “Okay, well where are we all wrong?”
                                                                                  • “Oh, such impressive knowledge took me great effort to learn by trawling historic tomes by my lonesome, and you yourself should put in the same effort I did if you wish to become as enlightened as I.”

If so, I’m just going to pass on this conversation, and note that at no point did I even claim Alan Kay invented OO, so I don’t understand what prompted this pedantic anti-Kay/messaging reply.

                                                                          1. 5

OK, so Go isn’t Rust, Scala, or Haskell. Pretty much all of these criticisms could be leveled at Python or Ruby (at least without their optional type checkers).

I don’t think it’s coincidental: Go is a C-flavored language that targets much the same programmer use cases as Python and Ruby. Lots of programming involves a lot of I/O and a lot of iteration over requirements that are completely ill-defined, and needs to be easy to read. Yes, this potentially lets bugs hide and requires that you run the code to achieve reasonable certainty that it works, but there’s a reason we’re not all programming in Agda.

                                                                            1. 4

                                                                              I think your comment is an interesting intermediate phase which precedes the point in the discussion where Go is directly compared to Rust alone.

                                                                              In the first part of the article, we are linked to two case studies for Go’s unsuitability as a low-level systems language. In the first case study, a standard library structure is mutable and large. In the second case study, the standard linker and GC have large footprints. Scala and Haskell are no better than Python and Ruby at managing runtime footprints or structure size, because none of them are low-level systems languages. This leaves Rust as the lone contender in the discussion.

                                                                              1. 5

                                                                                I don’t really believe that the majority of Go programs have much overlap with the majority of Rust programs. Sure there are cases where both Go and Rust are legitimate candidates for the same software, but I think Go is much more often competing with Java and Python than Rust, and Rust is more often competing with C++/C than Go.

                                                                                1. 2

                                                                                  Go was intended to replace C++. This is a well-cited motivation; Wikipedia says, “[Go’s] designers were primarily motivated by their shared dislike of C++.” I was at Google when Go was ascendant, and it was internally pitched as a replacement for C++, which was (and still is) one of the main pillars of Google code.

                                                                                  It was a curious turn of events when many Python and Java programmers switched to Go; it had not been anticipated. Go lacked the powerful dependency-injection tooling available to Java programmers, and also the flexible metaprogramming of Python; it wasn’t intended to be a compelling alternative for them.

                                                                                  1. 2

In fairness, Go is a good replacement for the use case of “I want C++ because it’s compiled, and I’m not especially interested in safety or latency”.

                                                                            1. 12

                                                                              I think the Unicode consortium made a huge mistake giving in to adding emojis to Unicode. It’s a bottomless pit, very politically charged and definitely ambiguous (compare for example the different emoji-styles across operating systems/fonts).

                                                                              It severely complicates most of the Unicode algorithms (grapheme cluster detection, word/sentence/line-segmentation, etc.) and, compared to dead and alive languages, feels very short-lived, like a fashion.

                                                                              How will emojis be seen in 50 years? I can already feel the second-hand-embarassment.

                                                                              1. 15

                                                                                It looks like people were already using emoji, and Unicode had to add them for compatibility. https://unicode.org/emoji/principles.html

                                                                                1. 12

                                                                                  Every thread about emoji has a “Unicode shouldn’t have added them” comment (or several), and I feel like I then always step in to remind those commenters that basically every single chat/message system humans have built in the internet era has reinvented emoticons in some form or another, whether purely textual (“:-)” and “:/“ and friends) or custom graphics, or a mix of text abbreviations that get replaced by graphics.

                                                                                  This suggests that they are a non-negotiable part of how humans conduct written communications in this era. Which means Unicode must find a way to capture them, by the nature of Unicode itself.

                                                                                  1. 4

                                                                                    This suggests that they are a non-negotiable part of how humans conduct written communications in this era. Which means Unicode must find a way to capture them, by the nature of Unicode itself.

                                                                                    You might as well use the same argument to claim that Unicode should capture all words, too.

                                                                                    1. 3

                                                                                      Doesn’t it try? Morally, is there any difference between a code sequence of letters representing a word, and a code sequence of letters and combining characters that come together to create a single glyph?

                                                                                    2. 1

                                                                                      This is solved well with ligatures at the font level.

                                                                                      Solving it at the font level has the additional benefit of not blocking the addition of new emoji on a standards body, as well as allowing graceful degradation to character sequences that anyone, including those on older software, can view.

                                                                                      1. 8

                                                                                        Ligatures can’t and don’t solve all the traditional emoticons, let alone emoji.

                                                                                        Emoji are a part of written communication, no matter how much someone might personally dislike them, and as such belong in Unicode.

                                                                                        1. 1

                                                                                          Ligatures can’t and don’t solve all the traditional emoticons, let alone emoji.

Why not? This approach is more or less used for flags, where flag emoji are (for political reasons, like ‘TW’) ligatures of country codes in a special Unicode range. If you happen to put ‘Flag{T}’ beside ‘Flag{W}’, you may get the letters ‘TW’, or you may get a flag that enrages China, depending on your font.
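Concretely, the flag is just two ordinary code points (regional indicator symbols), and whether you see letters or a flag is entirely the font’s decision. A tiny sketch:

```rust
fn main() {
    let t = '\u{1F1F9}'; // REGIONAL INDICATOR SYMBOL LETTER T
    let w = '\u{1F1FC}'; // REGIONAL INDICATOR SYMBOL LETTER W
    // Renders as "TW" in a plain font, or as a single flag glyph
    // in a font that ligates the pair.
    println!("{t}{w}");
}
```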

If you want to avoid ASCII ‘bar:(foo)’ being interpreted as a smiley emoji, maybe Unicode could standardize non-rendering ‘emoji brackets’ as a way of hinting to a font system that it could render a sequence of characters as an emoji ligature.

There’s no need to restrict emoji to the slow pace of the Unicode consortium, when dropping in a new font will get you the new hotness, especially since using text sequences will render legibly for everyone not using that font.

                                                                                          This is win/win. It makes things more usable for those that dislike emoji, and it makes more emoji available to those that like emoji.

                                                                                          1. 5

Because fonts cannot change emoticons into images? They have different meanings, so font ligature processing, which is essentially replaceAll(characters/glyphs/whatever, graphic), does not work.

No one can adopt a system font that magically turns one set of characters into another. Because it can’t be adopted as the system font, no apps get emoji. A person can’t simply change the default, for the same reason the system couldn’t: you made ligatures that potentially change the meaning of bytes.

As far as a font is concerned, there is no difference between :) in “see you :)” and in “(: I’ve seen this comment format somewhere :)”, but your ligature “solution” makes the latter nonsense.

Emoji also include characters that have no equivalent emoticon, whether because of the number of characters required or the lack of color.

Now, you may not like emoji, but arguing “we didn’t need it before” is pretty weak sauce: we didn’t have it before. The goal of text is to communicate, and it is clear that a vast proportion of all people alive use emoji in their communication. So computers should facilitate that communication rather than requiring workarounds.

The use of semagrams in alphabetic languages is nothing new; even hieroglyphics used semagrams.

                                                                                            1. 2

                                                                                              Because fonts cannot change emoticons into images?

                                                                                              That’s… just untrue.

                                                                                              You ignored the entire paragraph where I pointed out that flags ALREADY work this way. Then, you ignored the second paragraph which addresses the problem you mentioned in the third paragraph, where something like an RTL marker could mark emoji. Then you invented me saying “we didn’t need it before”.

                                                                                              In fact, you seem to have ignored everything I wrote.

                                                                                              It would be nice if you responded to what I said, rather than what you imagined I said.

                                                                                            2. 3

                                                                                              The simple counterpoint to this is to imagine the Unicode Consortium declaring that all the writing systems and characters which ever will be needed have been invented already — anything new will just be a variant or a ligature of something existing!

                                                                                              That would be dangerously incorrect, and would not work at all.

                                                                                              So, look. I get that some people really really really really don’t like emoji and wish they didn’t exist. But they do exist and they are a perfectly valid form of written communication and they are not sufficiently captured by ligatures or other attempts to layer on top of ASCII emoticons, any more than an early-2000s forum would have been happy with just the ASCII forms. For decades we’ve been used to a richer set of these, and it is right and proper for Unicode to include them. Complaints about them, to me, feel like ranting that kids these days say ”lol” instead of typing out the fully-punctuated-and-capitalized sentence “That is funny!”

                                                                                              1. 1

                                                                                                and they are not sufficiently captured by ligatures or other attempts to layer on top of ASCII emoticons, any more than an early-2000s forum would have been happy with just the ASCII forms.

                                                                                                So far in this thread, I’ve seen this asserted – but I don’t see why flags are appropriately captured by ligatures, while emoticons are not. What is the technical difference that allows one to work while the other does not?

                                                                                                Again, I’m arguing that for emoji lovers, ligatures are BETTER and MORE FUNCTIONAL than encoding emoji individually into unicode. That this would be an improvement in availability and usability, not a regression.

We already have messaging programs ignoring the emoji range and adding their own custom :emoji: sequences because Unicode moves too slowly for them. We can wait years for Unicode to standardize animated party parrots, or we can add :party_parrot: as text that gets interpreted by our application. Slack, and most other programs, chose the latter. Not to mention stickers, which arguably deserve the same position in Unicode as emoji.

Unicode’s charter is to standardize existing practice. Why not let Unicode standardize the way emoji ranges are worked around in practice today, with standardized “emoji brackets” that allow clients to mark any text sequence as an emoji ligature? This matches the way things actually work, and fills the need for custom emoji (and stickers) that the Unicode consortium is not serving.

                                                                                                1. 1

                                                                                                  I offer the following counter proposal: since you seem to think it’s at least possible and perhaps even easy, I challenge you to pick, say, 20 code points at random from among the emoji and come up with distinct, memorable ASCII sequences you think would suffice to be ligature’d into those emoji. I think that this will help you to understand why I don’t think “just ligature them” is going to work.

                                                                                          2. 2

                                                                                            This is solved well with ligatures at the font level.

                                                                                            Demonstrably false by the number of systems that screw up trying to auto detect smileys from colons and parentheses. 🙂 is unambiguous semantically; “:)” is not.

                                                                                            1. 3

I actually feel the opposite. “:)” is unambiguously a smiling face, and is mostly uniform in appearance across system UI fonts. The icon “🙂” is rendered differently depending on not only the operating system but also the specific app being used. The recipient of my message may see a completely different image than I intend for them to see. Even worse, the meaning and tone of my past emoji messages can completely change whenever Apple or Google or Telegram decides to redesign their emoji.

                                                                                              Too many apps have no way to disable auto-replacement of ascii faces.

                                                                                        2. 4

                                                                                          I think the Unicode consortium made a huge mistake giving in to adding emojis to Unicode. It’s a bottomless pit, very politically charged and definitely ambiguous (compare for example the different emoji-styles across operating systems/fonts).

This applies to other planes of Unicode as well, thanks to https://en.wikipedia.org/wiki/Han_unification

Also, any kind of character system is politically charged. An interesting read here is https://www.hastingsresearch.com/net/04-unicode-limitations.shtml (I do not agree with the points made there, and history has proven the author wrong, but it’s a good specimen of political Unicode arguments pre-emoji).

                                                                                          1. 4

                                                                                            I was going to mock your post by pointing out all of the other stuff in Unicode which is “politically charged”, from Tibetan to Han unification to the Hangul do-over to that time that a single character was added just for Japan’s government. But this is a grand understatement of exactly how political and pervasive the Consortium’s work is. Peruse the list of versions of Unicode and you’ll see that we already have a “bottomless pit” of natural writing systems to catalogue.

                                                                                            I think that the most inaccurate part of your claim is that emoji are “like a fashion”. Ideograms are millennia old and have been continuously used for communicating mathematics.

                                                                                            1. 2

                                                                                              It severely complicates most of the Unicode algorithms (grapheme cluster detection, word/sentence/line-segmentation, etc.)

                                                                                              If there were no emojis in Unicode, but everything else remained, would any of these things really be simpler? The impression I get is there are corner cases across the languages Unicode covers for all of the complexity, independent of emoji; emoji just exposes them to westerners more.

                                                                                            1. 3

                                                                                              The constructed language toki pona has an emoji orthography: https://sites.google.com/view/sitelenemoji

                                                                                              1. 2

                                                                                                So does Lojban (Reddit; click the image links). In the Lojban case, the encoding is almost completely inscrutable.

                                                                                                1. 2

                                                                                                  🗣❗️(toki a)

                                                                                                  👉🧠❌🧠⏩👇 (sina sona ala sona e ni?)

                                                                                                  📄👇▶️🗣❌ (lipu ni li toki ala)

                                                                                                  📄👇▶️🗣👍🏻⏹📄 (lipu ni li toki pona pi lipu)

                                                                                                  📄▶️🗣❌ (lipu li toki ala)

                                                                                                  📄▶️🔧🗣⏩🗣 (lipu li ken toki e toki)

                                                                                                  📄▶️🗣❌ (lipu li toki ala)

                                                                                                1. 24

                                                                                                  Am I the only one being completely tired of these rants/language flamewars? Just use whatever works for you, who cares

                                                                                                  1. 11

                                                                                                    You’re welcome to use whatever language you like, but others (e.g. me) do want to see debates on programming language design, and watch the field advance.

                                                                                                    1. 6

                                                                                                      Do debates in blogs and internets comments meaningfully advance language design compared to, say, researchers and engineers exploring and experimenting and holding conferences and publishing their findings? I think @thiht was talking about the former.

                                                                                                      1. 2

                                                                                                        I’m idling in at least four IRC channels on Libera Chat right now with researchers who regularly publish. Two of those channels are dedicated to programming language theory, design, and implementation. One of these channels is regularly filled with the sort of aggressive discussion that folks are tired of reading. I don’t know whether the flamewars help advance the state of the art, but they seem to be common among some research communities.

                                                                                                        1. 5

                                                                                                          Do you find that the researchers, who publish, are partaking in the aggressive discussions? I used to hang out in a couple Plan 9/9front-related channels, and something interesting I noticed is that among the small percentage of people there who made regular contributions (by which I mean code) to 9front, they participated in aggressive, flamey discussion less often than those that didn’t make contributions, and the one who seemed to contribute the most to 9front was also one of the most level-headed people there.

                                                                                                          1. 2

                                                                                                            It’s been a while since I’ve been in academia (I was focusing on the intersection of PLT and networking), and when I was there none of the researchers bothered with this sort of quotidian language politics. Most of them were focused around the languages/concepts/papers they were working with and many of them didn’t actually use their languages/ideas in real-world situations (nor should they, the job of a researcher is to research not to engineer.) There was plenty of drama in academia but not about who was using which programming language. It had more to do with grant applications and conference politics. I remember only encountering this sort of angryposting about programming languages in online non-academic discussions on PLT.

                                                                                                            Now this may have changed. I haven’t been in academia in about a decade now. The lines between “researcher” and “practitioner” may have even become more porous. But I found academics much more focused on the task at hand than the culture around programming languages among non-academics. To some extent academics can’t be too critical because the creator of an academic language may be a reviewer for an academic’s paper submission at a conference.

                                                                                                            1. 2

                                                                                                              I’d say that about half of the aggressive folks have published programming languages or PLT/PLD research. I know what you’re saying — the empty cans rattle the most.

                                                                                                      2. 8

                                                                                                        You are definitely not the only one. The hide button is our friend.

                                                                                                        1. 2

                                                                                                          So I was initially keen on Go when it first came out. But have since switched to Rust for a number of different reasons, correctness and elegance among them.

                                                                                                          But I don’t ever say “you shouldn’t use X” (where ‘X’ is Go, Java, etc.). I think it is best to promote neat projects in my favorite language. Or spending a little time to write more introductory material to make it easier for people interested to get started in Rust.

                                                                                                          1. 2

I would go further: filtering out rant, meta, and law makes Lobsters much better.

rant is basically the community saying an article is just flamebait, while stopping short of outright removing it. You can choose to remove it for yourself.

                                                                                                          2. 5

                                                                                                            I think this debate is still meaningful because we cannot always decide what we use.

Where there are technical or institutional barriers, you can ignore $LANG: if you’re writing Android apps, you will use a JVM language (either Kotlin or Java). But if you are writing backend services, outside forces may compel you to adopt Go, despite the shortcomings detailed in this post (and others by the author).

                                                                                                            Every post of this kind helps those who find themselves facing a future where they must write Go to articulate their misgivings.

                                                                                                          1. 11

                                                                                                            As someone who is rather new to languages like C (I only recently got into it by making a game with it), I have a few newbie questions:

                                                                                                            • Why do people want to replace C? Security reasons, or just old and outdated?

• What does Hare offer over C? They say that Hare is simpler than C, but I don’t understand exactly how. Same with Zig. Do they compile to C in the end, and do these languages just make it easier for the user to write code?

                                                                                                            That being said, I find it cool to see these languages popping up.

                                                                                                            1. 33

                                                                                                              Why do people want to replace C? Security reasons, or just old and outdated?

• #include <foo.h> dumps all functions/constants into the current namespace, so you have no idea which module a function came from.
• C’s macro system is very error-prone and very easily abused, since it’s basically a glorified search-and-replace system that has no way to warn you of mistakes.
• There are no methods for structs: you create struct Foo and then have to name all of its methods foo_do_stuff (instead of writing foo_var.do_stuff() like in other languages).
• C has no generics; you have to do ugly hacks with either void* (which means no type checking) or the macro system (which is a pain in the ass).
• C’s standard library is really tiny, so you end up creating your own and carrying it around from project to project.
• C’s standard library isn’t really standard; a lot of behavior isn’t consistent across OSes. (I have fond memories of the time I tried to get a simple 3kloc project from Linux running on Windows, and of the hoops you have to jump through, tearing out Linux-only functions and replacing them with an ifdef mess that calls Windows-only functions when compiling on Windows and the Linux versions otherwise…)
• C’s error handling is practically nonexistent. “Errors” are returned as integer codes, so you need to define an enum/constants for each function (for each possible returned error), and the actual return value has to go out through a pointer argument (see the sketch after this list).
• C has no anonymous functions. (Whether this matters really depends on your coding style.)
• Manual memory management without defer is a PITA and error-prone.
• Weird integer type system: long long, int, short, etc., which have different bit widths on different arches/platforms. (Most C projects I know import stdint.h to get uint32_t and friends, or just have a typedef mess to define usize, u32, u16, etc.)
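To make the error-handling bullet concrete, here is the status-code-plus-out-parameter shape next to the tagged-union alternative, sketched in Rust purely for illustration (Hare’s tagged unions play the same role that Result plays here):

```rust
// The C shape: an integer status code, with the real result
// smuggled out through a pointer/out argument.
fn parse_port_c_style(s: &str, out: &mut u16) -> i32 {
    match s.parse::<u16>() {
        Ok(p) => {
            *out = p;
            0 // success... if the caller remembers to check
        }
        Err(_) => -1, // which error? consult the docs, if any
    }
}

// The tagged-union shape: the error is part of the return type,
// and the compiler refuses to let you silently ignore it.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.parse()
}

fn main() {
    let mut port = 0;
    if parse_port_c_style("8080", &mut port) == 0 {
        println!("C style: {port}");
    }
    println!("sum type: {:?}", parse_port("8080"));
}
```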

EDIT: As Forty-Bot noted, one of the biggest issues is null-terminated strings.

                                                                                                              I could go on and on forever.
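
                                                                                                              For illustration, a minimal sketch of the void* point (the comparator and values are made up): qsort() happily accepts a comparator that reads the wrong type, and the compiler cannot object:

                                                                                                              #include <stdio.h>
                                                                                                              #include <stdlib.h>

                                                                                                              /* An int comparator... */
                                                                                                              static int cmp_int(const void *a, const void *b)
                                                                                                              {
                                                                                                                  return *(const int *)a - *(const int *)b;
                                                                                                              }

                                                                                                              int main(void)
                                                                                                              {
                                                                                                                  double xs[] = { 3.0, 1.0, 2.0 };
                                                                                                                  /* ...applied to doubles: type-checks fine, behaves nonsensically. */
                                                                                                                  qsort(xs, 3, sizeof xs[0], cmp_int);
                                                                                                                  printf("%g %g %g\n", xs[0], xs[1], xs[2]);
                                                                                                                  return 0;
                                                                                                              }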

                                                                                                              What does Hare offer over C?

                                                                                                              It fixes a lot of the issues I mentioned earlier, as well as reducing footguns and implementation-defined behavior in general. See my blog post for a list.

                                                                                                              They say that Hare is simpler than C, but I don’t understand exactly how.

                                                                                                              It’s simpler than C because it comes without all the cruft and compromises that C has built up over the past 50 years. Additionally, it’s easier to code in Hare because, well, the language isn’t trying to screw you up every 10 lines. :^)

                                                                                                              Same with Zig. Do they compile to C in the end, and do these languages just make it easier for the user to write code?

                                                                                                              Zig and Hare both occupy the same niche as C (i.e., low-level manual memory managed systems language); they both compile to machine code. And yes, they make it a lot easier to write code.

                                                                                                              1. 15

                                                                                                                Thanks for the great reply, learned a lot! Gotta say I am way more interested in Hare and Zig now than I was before.

                                                                                                                Hopefully they gain traction. :)

                                                                                                                1. 15

                                                                                                                  #include <foo.h> includes all functions/constants into the current namespace, so you have no idea what module a function came from

                                                                                                                  This and your later point about not being able to associate methods with struct definitions are variations on the same point but it’s worth repeating: C has no mechanism for isolating namespaces. A C function is either static (confined to a single compilation unit) or completely global. Most shared library systems also give you a package-local form but anything that you’re exporting goes in a single flat namespace. This is also true of type and macro definitions. This is terrible for software engineering. Two libraries can easily define different macros with the same name and break compilation units that want to use both.

                                                                                                                  C++, at least, gives you namespaces for everything except macros.
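
                                                                                                                  A minimal, hypothetical sketch of that collision: a function-like macro from one header silently mangles an unrelated declaration from another, depending on include order:

                                                                                                                  /* vendor_a.h -- exports a function-like macro */
                                                                                                                  #define clamp(x, lo, hi) ((x) < (lo) ? (lo) : (x) > (hi) ? (hi) : (x))

                                                                                                                  /* vendor_b.h -- declares an ordinary function with the same name */
                                                                                                                  int clamp(int x, int lo, int hi);
                                                                                                                  /* If vendor_a.h is included first, the preprocessor rewrites this
                                                                                                                   * declaration into nonsense and the compilation unit breaks. */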

                                                                                                                  C has no generics, you have to do ugly hacks with either void* (which means no type checking) or with the macro system (which is a pain in the ass).

                                                                                                                  The lack of type checking is really important here. A systems programming language is used to implement the most critical bits of the system. Type checks are incredibly important here, casting everything via void* has been the source of vast numbers of security vulnerabilities in C codebases. C++ templates avoid this.

                                                                                                                  C’s standard library is really tiny, so you end up creating your own in the process, which you end up carrying around from project to project.

                                                                                                                  This is less of an issue for systems programming, where a large standard library is also a problem because it implies dependencies on large features in the environment. In an embedded system or a kernel, I don’t want a standard library with file I/O. Actually, for most cloud programming I’d like a standard library that doesn’t assume the existence of a local filesystem as well. A bigger problem is that the library is not modular and layered. Rust’s no_std is a good step in the right direction here.

                                                                                                                  C’s error handling is completely nonexistent. “Errors” are returned as integer codes, so you need to define an enum/constants for each function (for each possible returned error), but if you do that, you need to have the actual return value as a pointer argument.

                                                                                                                  From libc, most errors are not returned directly: failure is signalled via the return value and the reason is stored in a global (now thread-local) variable called errno. Yay. Option types for returns are really important for maintainable systems programming. C++ now has std::optional and std::variant in the standard library; other languages have union types as first-class citizens.
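
                                                                                                                  For reference, the pattern being described, sketched with fopen():

                                                                                                                  #include <errno.h>
                                                                                                                  #include <stdio.h>
                                                                                                                  #include <string.h>

                                                                                                                  int main(void)
                                                                                                                  {
                                                                                                                      FILE *f = fopen("/no/such/file", "r");
                                                                                                                      if (f == NULL) {
                                                                                                                          /* errno is set as a side effect; the caller must remember to read it. */
                                                                                                                          fprintf(stderr, "fopen: %s\n", strerror(errno));
                                                                                                                          return 1;
                                                                                                                      }
                                                                                                                      fclose(f);
                                                                                                                      return 0;
                                                                                                                  }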

                                                                                                                  Manual memory management without defer is a PITA and error-prone.

                                                                                                                  defer isn’t great either because it doesn’t allow ownership transfer. You really need smart pointer types and then you hit the limitations of the C type system again (see: no generics, above). C++ and Rust both have a type system that can express smart pointers.

                                                                                                                  C has no anonymous functions. (Whether this matters really depends on your coding style.)

                                                                                                                  Anonymous functions are only really useful if they can capture things from the surrounding environment. That is only really useful in a language without GC if you have a notion of owning pointers that can manage the capture. A language with smart pointers allows you to implement this, C does not.

                                                                                                                  1. 6

                                                                                                                    defer isn’t great either because it doesn’t allow ownership transfer. You really need smart pointer types and then you hit the limitations of the C type system again (see: no generics, above). C++ and Rust both have a type system that can express smart pointers.

                                                                                                                    True. I’m more saying that defer is the baseline here; without it you need cleanup: labels, gotos, and synchronized function returns. It can get ugly fast.
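
                                                                                                                    A minimal sketch of that baseline (the path and buffer size are arbitrary): every failure path has to route through the right amount of cleanup by hand:

                                                                                                                    #include <stdio.h>
                                                                                                                    #include <stdlib.h>

                                                                                                                    int process(const char *path)
                                                                                                                    {
                                                                                                                        int ret = -1;
                                                                                                                        FILE *f = NULL;
                                                                                                                        char *buf = NULL;

                                                                                                                        f = fopen(path, "r");
                                                                                                                        if (!f)
                                                                                                                            goto out;
                                                                                                                        buf = malloc(4096);
                                                                                                                        if (!buf)
                                                                                                                            goto out;

                                                                                                                        /* ... use f and buf ... */
                                                                                                                        ret = 0;
                                                                                                                    out:
                                                                                                                        free(buf);          /* free(NULL) is a no-op */
                                                                                                                        if (f)
                                                                                                                            fclose(f);
                                                                                                                        return ret;
                                                                                                                    }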

                                                                                                                    Anonymous functions are only really useful if they can capture things from the surrounding environment. That is only really useful in a language without GC if you have a notion of owning pointers that can manage the capture. A language with smart pointers allows you to implement this, C does not.

                                                                                                                    I disagree, depends on what you’re doing. I’m doing a roguelike in Zig right now, and I use anonymous functions quite extensively for item/weapon/armor/etc triggers, i.e., where each game object has some unique anonymous functions tied to the object’s fields and can be called on certain events. Having closures would be nice, but honestly in this use-case I didn’t really feel much of a need for it.

                                                                                                                  2. 3

                                                                                                                    Note that C does have “standard” answers to a lot of these.

                                                                                                                    C’s macro system is very, very error prone and very easily abused, since it’s basically a glorified search-and-replace system that has no way to warn you of mistakes.

                                                                                                                    The macro system is the #1 thing keeping C alive :)
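
                                                                                                                    For reference, the kind of silent textual rewrite being discussed, as a minimal sketch:

                                                                                                                    #define SQUARE(x) x * x

                                                                                                                    int nine = SQUARE(1 + 2);   /* expands to 1 + 2 * 1 + 2, i.e. 5, with no warning */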

                                                                                                                    There are no methods for structs, you basically create struct Foo and then have to name all the methods of that struct foo_do_stuff (instead of doing foo_var.do_stuff() like in other languages)

                                                                                                                    Aside from macro stuff, the typical way to address this is to use a struct of function pointers. So you’d create a wrapper like

                                                                                                                    struct foo {
                                                                                                                        void (*do_stuff)(struct foo *);   /* the struct carries its "methods" */
                                                                                                                    };

                                                                                                                    void
                                                                                                                    do_stuff(struct foo *foo)
                                                                                                                    {
                                                                                                                        foo->do_stuff(foo);
                                                                                                                    }

                                                                                                                    C has no generics, you have to do ugly hacks with either void* (which means no type checking) or with the macro system (which is a pain in the ass).

                                                                                                                    Note that typically there is a “base class” which all “subclasses” either include as a member (using offsetof to recover the subclass) or point to via a void * private-data pointer. This doesn’t really escape the problem; in practice, however, I’ve never run into a bug where the wrong struct/method gets combined, because the pattern above ensures that the correct method gets called.
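
                                                                                                                    Sketched concretely (the types are made up; the macro is essentially the Linux kernel’s container_of):

                                                                                                                    #include <stddef.h>

                                                                                                                    #define container_of(ptr, type, member) \
                                                                                                                        ((type *)((char *)(ptr) - offsetof(type, member)))

                                                                                                                    struct animal {
                                                                                                                        void (*speak)(struct animal *);
                                                                                                                    };

                                                                                                                    struct dog {
                                                                                                                        int tail_wags;
                                                                                                                        struct animal base;   /* embedded "base class" */
                                                                                                                    };

                                                                                                                    static void dog_speak(struct animal *a)
                                                                                                                    {
                                                                                                                        /* Recover the outer struct from a pointer to its member. */
                                                                                                                        struct dog *d = container_of(a, struct dog, base);
                                                                                                                        d->tail_wags++;
                                                                                                                    }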

                                                                                                                    C’s error handling is completely nonexistent. “Errors” are returned as integer codes, so you need to define an enum/constants for each function (for each possible returned error), but if you do that, you need to have the actual return value as a pointer argument.

                                                                                                                    Well, there’s always errno… And if you control the address space you can always use the upper few addresses for error codes. That said, better syntax for multiple return values would probably go a long way.

                                                                                                                    C has no anonymous functions. (Whether this matters really depends on your coding style.)

                                                                                                                    IIRC gcc has them, but they require executable stacks :)

                                                                                                                    Manual memory management without defer is a PITA and error-prone.

                                                                                                                    Agree. I think you can do this with GCC extensions, but some sugar here would be nice.

                                                                                                                    Weird integer type system. long long, int, short, etc which have different bit widths on different arches/platforms. (Most C projects I know import stdint.h to get uint32_t and friends, or just have a typedef mess to use usize, u32, u16, etc.)

                                                                                                                    Arguably there should be just the fixed-width types plus size_t, intptr_t, and regsize_t. Unfortunately, C lacks the last one, which is typically assumed to be long. Rust, for example, gets this even more wrong and lacks the last two (cf. the recent post on 129-bit pointers).
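
                                                                                                                    For reference, the typedef mess mentioned upthread typically amounts to something like:

                                                                                                                    #include <stddef.h>
                                                                                                                    #include <stdint.h>

                                                                                                                    typedef uint8_t  u8;
                                                                                                                    typedef uint16_t u16;
                                                                                                                    typedef uint32_t u32;
                                                                                                                    typedef uint64_t u64;
                                                                                                                    typedef size_t   usize;   /* still no standard spelling for a register-sized integer */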


                                                                                                                    IMO you missed the most important part, which is that C strings are (by-and-large) nul-terminated. Having better syntax for carrying a length around with a pointer would go a long way to making string support better.
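
                                                                                                                    A sketch of what carrying the length alongside the pointer usually looks like when done by hand (the names are illustrative):

                                                                                                                    #include <stddef.h>

                                                                                                                    /* A pointer-plus-length "slice"; nothing enforces that len is correct. */
                                                                                                                    struct str_slice {
                                                                                                                        const char *ptr;
                                                                                                                        size_t len;
                                                                                                                    };

                                                                                                                    /* Build a slice from a string literal without scanning for the NUL. */
                                                                                                                    #define SLICE_FROM_LIT(s) ((struct str_slice){ (s), sizeof(s) - 1 })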

                                                                                                                  3. 9

                                                                                                                    Even in C’s domain, where C lacks nothing and is fine for what it is, I would criticize C for maybe 5 things, which I would consider the real criticism:

                                                                                                                    1. It has undefined behaviour, of the kind that has come to mean that the compiler may disobey the source code. It turns working code into broken code just by switching compiler or inlining some code that wasn’t inlined before. You can’t necessarily point at a piece of code and say it was always broken, because UB is a runtime phenomenon. Not reassuring for a supposedly lowlevel language.
                                                                                                                    2. Its operator precedence is wrong.
                                                                                                                    3. Integer promotion. Just why.
                                                                                                                    4. Signedness propagates the wrong way: Instead of the default type being signed (int) and comparison between signed and unsigned yielding unsigned, it should be the opposite: there should be a nat type (for natural number, effectively size_t), and comparison between signed and unsigned should yield signed. (Points 3 and 4 are sketched after this list.)
                                                                                                                    5. char is signed. Nobody likes negative code points.
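
                                                                                                                    To make points 3 and 4 concrete, a small sketch of the usual arithmetic conversions at work:

                                                                                                                    #include <stdio.h>

                                                                                                                    int main(void)
                                                                                                                    {
                                                                                                                        int n = -1;
                                                                                                                        unsigned int m = 1;
                                                                                                                        /* The usual arithmetic conversions make this an unsigned comparison:
                                                                                                                         * -1 converts to UINT_MAX, so the test is false. */
                                                                                                                        if (n < m)
                                                                                                                            printf("what you would expect\n");
                                                                                                                        else
                                                                                                                            printf("surprise: -1 < 1u is false\n");   /* this branch runs */
                                                                                                                        return 0;
                                                                                                                    }
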
                                                                                                                    1. 6

                                                                                                                      the kind that has come to mean that the compiler may disobey the source code. It turns working code into broken code

                                                                                                                      I’m wary of this same tired argument cropping up again, so I’ll just state it this way: I disagree. Code that invokes undefined behavior is already broken; changing compiler can’t (except perhaps in very particular circumstances, which I don’t think you were referring to) introduce undefined behaviour; it can change the observable behaviour when UB is invoked.

                                                                                                                      A compiler can’t “disobey the source code” whilst conforming to the language standard. If the source code does something that doesn’t have defined semantics, that’s on the source code, not the compiler.

                                                                                                                      “It’s easy to accidentally invoke undefined behaviour in C” is a valid criticism, but “C compilers breaks code” is not.

                                                                                                                      You can’t necessarily point at a piece of code and say it was always broken

                                                                                                                      You certainly can in some instances. But sure, for example, if some piece of code dereferences a pointer and the value is set somewhere else, it could be undefined or not depending on whether the pointer is valid at the point it is dereferenced. So code might be “not broken” given certain constraints (eg that the pointer is valid), but not work properly if those constraints are violated, just like code in any language (although in C there’s a good chance the end result is UB, which is potentially more catastrophic).

                                                                                                                      I’m not saying C is a good language, just that I think this particular criticism is unfair. (Also I think your point 5 is wrong, char can be unsigned, it’s up to the implementation).

                                                                                                                      1. 7

                                                                                                                        Thing is, it certainly feels like the compiler is disobeying the source code. Signed integer overflow? No problem pal, this is x86, that platform will wrap around just fine! Right? Riiight? Oops, nope, and since the compiler pretends UB does not exist, it just deleted a security check that it deemed “dead code”, and now my hard drive has been encrypted by a ransomware that just exploited my vulnerability.

                                                                                                                        I agree with all the facts you laid out, and with the interpretation that UB means the program is already broken even if the generated binary didn’t propagate the error. But Chandler Carruth pretending that UB does not invoke the nasal demons is not fair. Let’s not forget that UB means the compiler is allowed to cause your entire hard drive to be formatted, as ridiculous as it may sound. And sometimes it actually happens (as it did so many times with buffer overflow exploits).

                                                                                                                        Sure, it’s not like the compiler is actually disobeying your source code. But since UB means “all bets are off”, and UB is not always easy to catch, the result is pretty close.
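
                                                                                                                        A classic sketch of that failure mode (the guard is illustrative, not taken from any particular codebase):

                                                                                                                        /* Intended as an overflow guard before using len. Because signed
                                                                                                                         * overflow is UB, the compiler may assume len + 16 never wraps,
                                                                                                                         * conclude the condition is always false, and delete the check. */
                                                                                                                        int bounds_ok(int len)
                                                                                                                        {
                                                                                                                            if (len + 16 < len)
                                                                                                                                return 0;   /* "unreachable" in the optimizer's view */
                                                                                                                            return 1;
                                                                                                                        }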

                                                                                                                        1. 3

                                                                                                                          Sure, it’s not like the compiler is actually disobeying your source code. But since UB means “all bets are off”, and UB is not always easy to catch, the result is pretty close.

                                                                                                                          I feel like “disobeying the code” and “not doing what I intended it to do due to the code being wrong” are still two sufficiently different things that it’s worth distinguishing.

                                                                                                                          1. 4

                                                                                                                            Okay, it is worth distinguishing.

                                                                                                                            But it is also worth noting that C is quite special. This UB business repeatedly violates the principle of least astonishment. Especially the modern interpretation, where compilers systematically assume UB does not exist and any code path that hits UB is considered “dead code”.

                                                                                                                            The original intent of UB was much closer to implementation-defined behaviour. Signed integer overflow was originally UB because some platforms crashed or otherwise went bananas when it occurred. But the expectation was that on platforms that behave reasonably (like x86, which wraps around), we’d get the reasonable behaviour. But then compiler writers (or should I say their lawyers) noticed that, strictly speaking, the standard didn’t make that expectation explicit, and in the name of optimisation started to invoke nasal demons even on platforms that could have done the right thing.

                                                                                                                            Sure the code is wrong. In many cases though, the standard is also wrong.

                                                                                                                            1. 4

                                                                                                                              I agree with some things but not others that you say, but these arguments have been hashed out many times before.

                                                                                                                              Sure the code is wrong

                                                                                                                              That’s the point I was making. Since we agree on that, and we agree that there are valid criticisms of C as a language (though we may differ on the specifics of those), let’s leave the rest. Peace.

                                                                                                                        2. 4

                                                                                                                          But why not have the compiler reject the code instead of silently compiling it wrong?

                                                                                                                          1. 2

                                                                                                                            It doesn’t compile it wrong. Code with no semantics can’t be compiled incorrectly. You’re making the exact same misrepresentation as in the post above that I responded to originally.

                                                                                                                            1. 3

                                                                                                                              Code with no semantics shouldn’t be able to be compiled at all.

                                                                                                                              1. 1

                                                                                                                                I’d almost agree, though I can think of some cases where such code could exist for a reason (and I’ll bet that such code exists in real code bases). In particular, hairy macro expansions etc which produce code that isn’t even executed (or won’t be executed in the case where it would be UB, at least) in order to make compile-time type-safety checks. IIRC there are a few such things used in the Linux kernel. There are probably plenty of other cases; there’s a lot of C code out there.

                                                                                                                                In practice though, a lot of code that potentially exhibits UB only does so if certain constraints are violated (eg if a pointer is invalid, or if an integer is too large and will result in overflow at some operation), and the compiler can’t always tell that the constraints necessarily will be violated, so it generates code with the assumption that if the code is executed, then the constraints do hold. So if the larger body of code is wrong - the constraints are violated, that is - the behaviour is undefined.

                                                                                                                                1. 1

                                                                                                                                  In particular, hairy macro expansions etc which produce code that isn’t even executed (or won’t be executed in the case where it would be UB

                                                                                                                                  That’s why it’s good to have a proper macro system that isn’t literally just find and replace.

                                                                                                                                  In practice though, a lot of code that potentially exhibits UB only does so if certain constraints are violated

                                                                                                                                  True, and I’m mostly talking about UB that can be detected at compile time, such as f(++x, ++x).

                                                                                                                      2. 6

                                                                                                                        Contrary to what people are saying, C is just fine for what it is.

                                                                                                                        People complain about the std library being tiny, but you basically have the operating system at your fingertips, where C is a first-class citizen.

                                                                                                                        Then people complain C is not safe. Yes, that’s true, but with a set of best practices you can keep things under control.

                                                                                                                        People complain you don’t have generics; you don’t need them most of the time.

                                                                                                                        Projects like nginx, SQLite, and Redis, not to speak of the Nix world, prove that C is perfectly fine as a language. Also, most of the popular Python libraries nowadays are written in C.

                                                                                                                        1. 25

                                                                                                                          Hi! I’d like to introduce you to Fish in a Barrel, a bot which publishes information about security vulnerabilities to Twitter, including statistics on how many of those vulnerabilities are due to memory unsafety. In general, memory unsafety is easy to avoid in languages which do not permit memory-unsafe operations, and nearly impossible to avoid in other languages. Because C is in the latter set, C is a regular and reliable source of security vulnerabilities.

                                                                                                                          I understand your position; you believe that people are morally obligated to choose “a set of best practices” which limits usage of languages like C to supposedly-safe subsets. However, there are not many interesting subsets of C; at best, avoiding pointer arithmetic and casts is good, but little can be done about the inherent dangers of malloc() and free() (and free() and free() and …) Moreover, why not consider the act of choosing a language to be a practice? Then the choice of C can itself be critiqued as contrary to best practices.
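
                                                                                                                          To make the parenthetical concrete, a minimal sketch of the free()-twice hazard:

                                                                                                                          #include <stdlib.h>

                                                                                                                          int main(void)
                                                                                                                          {
                                                                                                                              char *p = malloc(64);
                                                                                                                              free(p);
                                                                                                                              free(p);   /* double free: undefined behaviour, and a classic exploit primitive */
                                                                                                                              return 0;
                                                                                                                          }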

                                                                                                                          nginx is well-written, but Redis is not. SQLite is not written just in C, but also in several other languages combined, including SQL and TH1 (“test harness one”); this latter language is specifically for testing that SQLite behaves properly. All three have had memory-unsafety bugs. This suggests that even well-written C, or C in combination with other languages, is unsafe.

                                                                                                                          Additionally, Nix is written in C++ and package definitions are written in shell. I prefer PyPy to CPython; both are written in a combination of C and Python, with CPython using more C and PyPy using more Python. I’m not sure where you were headed here; this sounds like a popularity-contest argument, but those are not meaningful in discussions about technical issues. Nonetheless, if it’s the only thing that motivates you, then consider this quote from the Google Chrome security team:

                                                                                                                          Since “memory safety” bugs account for 70% of the exploitable security bugs, we aim to write new parts of Chrome in memory-safe languages.

                                                                                                                          1. 2

                                                                                                                            I am curious about your claim that Redis is not well-written; I’ve seen other folks online hold it up as an example of a well-written C codebase, at least in terms of readability.

                                                                                                                            I understand that readable is not the same as secure, but I would like to understand where you are coming from on this.

                                                                                                                            1. 1

                                                                                                                              It’s 100% personal opinion.

                                                                                                                          2. 9

                                                                                                                            Projects like nginx, SQLite and redis, not to speak about the Nix world prove that C is perfectly fine of a language.

                                                                                                                            Ah yes, you can see the safety of high-quality C in practice:

                                                                                                                            https://nginx.org/en/security_advisories.html
                                                                                                                            https://www.cvedetails.com/vulnerability-list/vendor_id-18560/product_id-47087/Redislabs-Redis.html

                                                                                                                            Including some fun RCEs, like CVE-2014-0133 or CVE-2016-8339.

                                                                                                                            1. 2

                                                                                                                              I also believe C will still have a place for a long time. I know I’m a newbie with it, but making a game with C (using Raylib) has been pretty fun. It’s simple and to the point… And I don’t mind making mistakes really, that’s how I learn the best.

                                                                                                                              But again it’s cool to see people creating new languages as alternatives.

                                                                                                                            2. 4

                                                                                                                              What does Hare offer over C?

                                                                                                                              Here’s a list of ways that Drew says Hare improves over C:

                                                                                                                              Hare makes a number of conservative improvements on C’s ideas, the biggest bet of which is the use of tagged unions. Here are a few other improvements:

                                                                                                                              • A context-free grammar
                                                                                                                              • Less weird type syntax
                                                                                                                              • Language tooling in the stdlib
                                                                                                                              • Built-in and semantically meaningful static and runtime assertions
                                                                                                                              • A lightweight system for dependency resolution
                                                                                                                              • defer for cleanup and error handling
                                                                                                                              • An optional build system which you can replace with make and standard tools

                                                                                                                              Even with these improvements, Hare manages to be a smaller, more conservative language than C, with our specification clocking in at less than 1/10th the size of C11, without sacrificing anything that you need to get things done in the systems programming world.

                                                                                                                              It’s worth reading the whole piece. I only pasted his summary.

                                                                                                                            1. 40

                                                                                                                              I tried out this language while it was in early development, writing some of the standard library (hash::crc* and unix::tty::*) to test the language. I wrote about this experience, in a somewhat haphazard way. (Note, that blog post is outdated and not all my opinions are the same. I’ll be trying to take a second look at Hare in the coming days.)

                                                                                                              In general, I feel like Hare just ends up being a Zig without comptime, or a Go without interfaces, generics, GC, or runtime. I really hate to say this about a project where the authors have put in such a huge amount of effort over the past year or so, but I just don’t see its niche – the lack of generics means I’d always use Zig or Rust instead of Hare or C. It really looks like Drew looked at Zig, said “too bloated”, and set out to create his own version.

                                                                                                                              Another thing I find strange: why are you choosing to not support Windows and macOS? Especially since, you know, one of C’s good points is that there’s a compiler for every platform and architecture combination on earth?

                                                                                                                              That said, this language is still in its infancy, so maybe as time goes and the language finds more users we’ll see more use-cases for Hare.

                                                                                                                              In any case: good luck, Drew! Cheers!

                                                                                                                              1. 10

                                                                                                                                why are you choosing to not support Windows and macOS?

                                                                                                                                DdV’s answer on HN:

                                                                                                                                We don’t want to help non-FOSS OSes.

                                                                                                                                (Paraphrasing a lot, obvs.)

                                                                                                                                My personal 2c:

                                                                                                                                Some of the nastier weirdnesses in Go are because Go supports Windows and Windows is profoundly un-xNix-like. Supporting Windows distorted Go severely.

                                                                                                                                1. 13

                                                                                                                                  Some of the nastier weirdnesses in Go are because Go supports Windows and Windows is profoundly un-xNix-like. Supporting Windows distorted Go severely.

                                                                                                                                  I think that’s the consequence of not planning for Windows support in the first place. Rust’s standard library was built without the assumption of an underlying Unix-like system, and it provides good abstractions as a result.

                                                                                                                                  1. 5

                                                                                                                                    Amos talks about that here: Go’s file APIs assume a Unix filesystem. Windows support was kludged in later.

                                                                                                                                  2. 5

                                                                                                                                    Windows and Mac/iOS don’t need help from new languages; it’s rather the other way around. Getting people to try a new language is pretty hard, let alone getting them to build real software in it. If the language deliberately won’t let them target three of the most widely used operating systems, I’d say it’s shooting itself in the foot, if not in the head.

                                                                                                                                    (There are other seemingly perverse decisions too. 80-character lines and 8-character indentation? Manual allocation with no cleanup beyond a general-purpose “defer” statement? I must not be the target audience for this language, is the nicest response I have.)

                                                                                                                                    1. 2

                                                                                                                                      Just for clarity, it’s not my argument. I was just trying to précis DdV’s.

                                                                                                                                      I am not sure I agree, but then again…

                                                                                                                                      I am not sure that I see the need for yet another C-replacement. Weren’t Limbo, D, Go, & Rust all attempts at this?

                                                                                                                                      But that aside: there are a lot of OSes out there that are profoundly un-Unix-like. Windows is actually quite close, compared to, say, Oberon or classic MacOS or Z/OS or OpenVMS or Netware or OS/2 or iTron or OpenGenera or [cont’d p94].

                                                                                                                                      There is a lot of diversity out there that gets ignored if it doesn’t have millions of users.

                                                                                                                                      Confining oneself to just OSes in the same immediate family seems reasonable and prudent to me.

                                                                                                                                  3. 10

                                                                                                                                    My understanding is that the lack of generics and comptime is exactly the differentiating factor here – the project aims at simplicity, and generics/compile time evaluations are enormous cost centers in terms of complexity.

                                                                                                                                    1. 20

                                                                                                                                      You could say that generics and macros are complex, relative to the functionality they offer.

                                                                                                                                      But I would put comptime in a different category – it’s reducing complexity by providing a single, more powerful mechanism. Without something like comptime, IMO static languages lose significant productivity / power compared to a dynamic language.

                                                                                                                                      You might be thinking about things from the tooling perspective, in which case both features are complex (and probably comptime even more because it’s creating impossible/undecidable problems). But in terms of the language I’d say that there is a big difference between the two.

                                                                                                                                      I think a language like Hare will end up pushing that complexity out to the tooling. I guess it’s like Go where they have go generate and relatively verbose code.

                                                                                                                                      1. 3

                                                                                                                                        Yup, agree that zig-style seamless comptime might be a great user-facing complexity reducer.

                                                                                                                                        1. 16

                                                                                                                                          I’m not being Zig-specific when I say that, by definition, comptime cannot introduce user-facing complexity. Unlike other attributes, comptime only exists during a specific phase of compiler execution; it’s not present during runtime. Like a static type declaration, comptime creates a second program executed by the compiler, and this second program does inform the first program’s runtime, but it is handled entirely by the compiler. Unlike a static type declaration, the user uses exactly the same expression language for comptime and runtime.

                                                                                                                                          If we think of metaprogramming as inherent complexity, rather than incidental complexity, then an optimizing compiler already performs compile-time execution of input programs. What comptime offers is not additional complexity, but additional control over complexity which is already present.

                                                                                                                                          To put all of this in a non-Zig context, languages like Python allow for arbitrary code execution during module loading, including compile-time metaprogramming. Some folks argue that this introduces complexity. But the complexity of the Python runtime is there regardless of whether modules get an extra code-execution phase; the extra phase provides expressive power for users, not new complexity.

                                                                                                                                          1. 8

                                                                                                                                            Yeah, but I feel like this isn’t what people usually mean when they say some feature “increases complexity.”

                                                                                                                            I think they mean something like: Now I must know more to navigate this world. There will be, on average, a wider array of common usage patterns that I will have to understand. You can say that the complexity was already there anyway, but if, in practice, it was usually hidden, and now it’s not, doesn’t that matter?

                                                                                                                                            then an optimizing compiler already performs compile-time execution of input programs.

                                                                                                                                            As a concrete example, I don’t have to know about a new keyword or what it means when the optimizing compiler does its thing.

                                                                                                                                            1. 2

                                                                                                                            A case can be made that complexity in this sense still “matters”, and that surfacing it improves code quality:

                                                                                                                            Similar arguments can be made about undefined behavior (UB), since it changes how you navigate a language’s world. For many programmers it stays hidden because code seemingly works in practice (i.e. not hitting race conditions, not hitting unreachable paths for common input, updating compilers, etc.). I’d argue that it still matters (enough to introduce tooling like UBSan, ASan, and TSan at least).

                                                                                                                            The UB is already there, both for correct and incorrect programs. Providing tools to interact with it (i.e. __builtin_unreachable -> comptime) as well as explicit ways to do what you want correctly (i.e. __builtin_add_overflow -> comptime-specific language constructs interacted with using normal code, e.g. for vs inline for) would still be described as “increases complexity” under this model, which is unfortunate.

                                                                                                                                              1. 1

                                                                                                                                                The UB is already there, both for correct and incorrect programs.

                                                                                                                                                Unless one is purposefully using a specific compiler (or set thereof), that actually defines the behaviour the standard didn’t, then the program is incorrect. That it just happens to generate correct object code with this particular version of that particular compiler on those particular platforms is just dumb luck.

                                                                                                                            Thus, I’d argue that tools like MSan, ASan, and UBSan don’t introduce any complexity at all. They just reveal the complexity of UB that was already there, and they do so reliably enough that they actually relieve me of some of the mental burden I previously had to shoulder.

                                                                                                                                            2. 5

                                                                                                                                              languages like Python allow for arbitrary code execution during module loading, including compile-time metaprogramming.

                                                                                                                                              Python doesn’t allow compile-time metaprogramming for any reasonable definition of the word. Everything happens and is introspectable at runtime, which allows you to do similar things, but it’s not compile-time metaprogramming.

                                                                                                                                              One way to see this is that sys.argv is always available when executing Python code. (Python “compiles” byte code, but that’s an implementation detail unrelated to the semantics of the language.)

                                                                                                                                              On the other hand, Zig and RPython are staged. There is one stage that does not have access to argv (compile time), and another one that does (runtime).

                                                                                                                                              Related to the comment about RPython I linked here:

                                                                                                                                              http://www.oilshell.org/blog/2021/04/build-ci-comments.html

                                                                                                                                              https://old.reddit.com/r/ProgrammingLanguages/comments/mlflqb/is_this_already_a_thing_interpreter_and_compiler/gtmbno8/

                                                                                                                                              1. 4

                                                                                                                                                Yours is a rather unconventional definition of complexity.

                                                                                                                                                1. 5

                                                                                                                                                  I am following the classic paper, “Out of the Tar Pit”, which in turn follows Brooks. In “Abstractive Power”, Shutt distinguishes complexity from expressiveness and abstractedness while relating all three.

                                                                                                                                                  We could always simply go back to computational complexity, but that doesn’t capture the usage in this thread. Edit for clarity: Computational complexity is a property of problems and algorithms, not a property of languages nor programming systems.

                                                                                                                                                  1. 3

                                                                                                                                                    Good faith question: I just skimmed the first ~10 pages of “Out of the Tar Pit” again, but was unable to find the definition that you allude to, which would exclude things like the comptime keyword from the meaning of “complexity”. Can you point me to it or otherwise clarify?

                                                                                                                                                    1. 4

                                                                                                                                                      Sure. I’m being explicit for posterity, but I’m not trying to be rude in my reply. First, the relevant parts of the paper; then, the relevance to comptime.

                                                                                                                                                      On p1, complexity is defined as the tendency of “large systems [to be] hard to understand”. Unpacking their em-dash and subjecting “large” to the heap paradox, we might imagine that complexity is the amount of information (bits) required to describe a system in full detail, with larger systems requiring more information. (I don’t really know what “understanding” is, so I’m not quite happy with “hard to understand” as a concrete definition.) Maybe we should call this “Brooks complexity”.

                                                                                                                                                      On p6, state is a cause of complexity. But comptime does not introduce state compared to an equivalent non-staged approach. On p8, control-flow is a cause of complexity. But comptime does not introduce new control-flow constructs. One could argue that comptime requires extra attention to order of evaluation, but again, an equivalent program would have the same order of evaluation at runtime.

                                                                                                                                                      On p10, “sheer code volume” is a cause of complexity, and on this point, I fully admit that I was in error; comptime is a long symbol, adding size to source code. In this particular sense, comptime adds Brooks complexity.

                                                                                                                                                      Finally, on a tangent to the top-level article, p12 explains that “power corrupts”:

                                                                                                                                                      [I]n the absence of language-enforced guarantees (…) mistakes (and abuses) will happen. This is the reason that garbage collection is good — the power of manual memory management is removed. … The bottom line is that the more powerful a language (i.e. the more that is possible within the language), the harder it is to understand systems constructed in it.

comptime and similar metaprogramming tools don’t make anything newly possible; comptime is just an annotation asking the compiler to emit specialized code for the same computational result. As such, these tools arguably don’t add Brooks complexity. I think that this argument also works for inline, but not for @compileError.
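
A minimal sketch of this argument in Zig (the function and values are mine, purely illustrative): the same function yields the same result whether it is evaluated at compile time or at run time; comptime only moves the stage of evaluation.

```zig
const std = @import("std");

// A plain recursive function; nothing about it is staged.
fn fib(n: u64) u64 {
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

pub fn main() void {
    // `comptime` asks the compiler to evaluate the call; the result is
    // baked into the binary as a constant.
    const staged = comptime fib(20);
    // The same call without the annotation happens at run time.
    const unstaged = fib(20);
    // Same state, same control flow, same result: 6765 twice.
    std.debug.print("{d} {d}\n", .{ staged, unstaged });
}
```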

                                                                                                                                          2. 18

                                                                                                                                            My understanding is that the lack of generics and comptime is exactly the differentiating factor here – the project aims at simplicity, and generics/compile time evaluations are enormous cost centers in terms of complexity.

Yeah, I can see that. But under what conditions would I care how small, big, or ice-cream-covered the compiler is? Building/bootstrapping for a new platform is a one-time thing, but writing code in the language isn’t. I want the language to make it as easy as possible on me when I’m using it, and omitting features that have been around since the 1990s isn’t helping.

                                                                                                                                            1. 8

Depends on your values! I personally see how, e.g., generics entice users to write overly complicated code, which I then have to deal with as a consumer of libraries. I am not sure that not having generics solves this problem, but I am fairly certain that the problem exists, and that some kind of solution would be helpful!

                                                                                                                                              1. 3

                                                                                                                                                In some situations, emitted code size matters a lot (and with generics, that can quickly grow out of hand without you realizing it).
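
For illustration, a hedged Zig-style sketch (the names are mine): each distinct type a generic function is instantiated with gets its own specialized machine-code copy, so the number of instantiations multiplies emitted code size.

```zig
// A generic sum; `T` is an ordinary compile-time parameter.
fn sum(comptime T: type, xs: []const T) T {
    var total: T = 0;
    for (xs) |x| total += x;
    return total;
}

// Each of these instantiations is monomorphised into its own
// specialized copy of `sum` in the emitted binary:
//   sum(u8, ...), sum(i32, ...), sum(f64, ...)
```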

                                                                                                                                                1. 13

                                                                                                                                                  In some situations

I see what you mean, but I think in those situations it’s not too hard to, you know, refrain from using generics. I see no reason to force every language user to go without the feature. Unless Hare is specifically aiming for that niche, which I don’t think it is.

                                                                                                                                                  1. 4

                                                                                                                                                    There are very few languages that let you switch between monomorphisation and dynamic dispatch as a compile-time flag, right? So if you have dependencies, you’ve already had the choice forced on you.
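
To make that concrete, a hedged Zig sketch (the types and names are mine): the dispatch strategy is baked into a library’s signatures rather than toggled by a compiler flag, so downstream users inherit whichever choice the library made.

```zig
const std = @import("std");

const Circle = struct {
    r: f64,
    fn area(self: *const Circle) f64 {
        return std.math.pi * self.r * self.r;
    }
};

// Static dispatch: `anytype` is monomorphised, so each concrete
// shape type gets its own specialized copy of this function.
fn areaStatic(shape: anytype) f64 {
    return shape.area();
}

// Dynamic dispatch: a hand-rolled fat pointer (context plus a
// function pointer); one shared copy, dispatched indirectly.
const Shape = struct {
    ctx: *const anyopaque,
    areaFn: *const fn (ctx: *const anyopaque) f64,

    fn area(self: Shape) f64 {
        return self.areaFn(self.ctx);
    }
};
```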

                                                                                                                                                    1. 6

                                                                                                                                                      If you don’t like how a library is implemented, then don’t use it.

                                                                                                                                                      1. 2

                                                                                                                                                        Ah, the illusion of choice.

                                                                                                                                              2. 10

                                                                                                                                                Where is the dividing line? What makes functions “not complex” but generics, which are literally functions evaluated at compile time, “complex”?
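
In Zig, at least, that claim is literal; a small sketch (type names are mine):

```zig
// A "generic type" is just a function that the compiler runs at
// compile time to produce a type.
fn Pair(comptime T: type) type {
    return struct { first: T, second: T };
}

const IntPair = Pair(i32); // evaluated at compile time
const p = IntPair{ .first = 1, .second = 2 };
```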

                                                                                                                                                1. 14

                                                                                                                                                  I don’t know where the line is, but I am pretty sure that this is past that :D

                                                                                                                                                  https://github.com/diesel-rs/diesel/blob/master/diesel_cli/src/infer_schema_internals/information_schema.rs#L146-L210

                                                                                                                                                  1. 17

                                                                                                                                                    Sure, that’s complicated. However:

1. that’s the inside of the inside of a library modeling a very complex domain. Complexity needs to live somewhere, and I am not convinced that complexity that is abstracted away and provides value is a bad thing, as much of the “let’s go back to simpler times” discourse seems to imply. I’d rather someone take the time to solve something once than have me solve it every time, even if with simpler code.

2. Is this just complex, or is it actually doing more than the equivalent in other languages? Rust allows for expressing constraints that are not easily (or at all) expressible in other languages, and static types allow for expressing more constraints than dynamic types in general.

                                                                                                                                                    In sum, I’d reject a pull request with this type of code in an application, but don’t mind it at all in a library.

                                                                                                                                                    1. 4

                                                                                                                                                      that’s the inside of the inside of a library modeling a very complex domain. Complexity needs to live somewhere,

                                                                                                                                                      I find that’s rarely the case. It’s often possible to tweak the approach to a problem a little bit, in a way that allows you to simply omit huge swaths of complexity.

                                                                                                                                                      1. 3

Possible, yes. Often? I’m not convinced. Practical? I am willing to bet some money that it isn’t.

                                                                                                                                                        1. 7

I’ve done it repeatedly, and I’ve seen others do it as well. Occasionally, though admittedly rarely, this has reduced the size of a codebase by an order of magnitude while increasing the number of features.

                                                                                                                                                          There’s a huge amount of code in most systems that’s dedicated to solving optional problems. Usually the unnecessary problems are imposed at the system design level, and changing the way the parts interface internally allows simple reuse of appropriate large-scale building blocks and subsystems, reduces the number of building blocks needed, and drops entire sections of translation and negotiation glue between layers.

Complexity rarely needs to be somewhere – and where it does need to be, it’s often in the ad-hoc, problem-specific data structures that simplify the domain. A good data structure can act as a Laplace transform for the entire problem space of a program, even if it takes a few thousand lines to implement. It lets you take the problem, transform it into a space where it is easy to solve, and map the solution back directly.

                                                                                                                                                    2. 7

                                                                                                                                                      You can write complex code in any language, with any language feature. The fact that someone has written complex code in Rust with its macros has no bearing on the feature itself.

                                                                                                                                                      1. 2

                                                                                                                                                        It’s the Rust culture that encourages things like this, not the fact that Rust has parametric polymorphism.

                                                                                                                                                        1. 14

I am not entirely convinced – to me, it seems there’s a high correlation between languages with parametric polymorphism and languages with a culture of hard-to-understand abstractions (Rust, C++, Scala, Haskell). Even in Java, the parts that touch generics tend to require some mind-bending (producer extends, consumer super).

I am curious how Go’s generics will turn out in practice!

                                                                                                                                                          1. 8

                                                                                                                                                            Obligatory reference for this: F# Designer Don Syme on the downsides of type-level programming

                                                                                                                                                            I don’t want F# to be the kind of language where the most empowered person in the discord chat is the category theorist.

                                                                                                                                                            It’s a good example of the culture and the language design being related.

                                                                                                                                                            https://lobste.rs/s/pkmzlu/fsharp_designer_on_downsides_type_level

                                                                                                                                                            https://old.reddit.com/r/ProgrammingLanguages/comments/placo6/don_syme_explains_the_downsides_of_type_classes/

                                                                                                                                                            which I linked here: http://www.oilshell.org/blog/2022/03/backlog-arch.html

                                                                                                                                                  2. 3

                                                                                                                                                    In general, I feel like Hare just ends up being a Zig without comptime, or a Go without interfaces, generics, GC, or runtime. … I’d always use Zig or Rust instead of Hare or C.

                                                                                                                                                    What if you were on a platform unsupported by LLVM?

                                                                                                                                                    When I was trying out Plan 9, lack of LLVM support really hurt; a lot of good CLI tools these days are being written in Rust.

                                                                                                                                                    1. 15

                                                                                                                                                      Zig has rudimentary plan9 support, including a linker and native codegen (without LLVM). We’ll need more plan9 maintainers to step up if this is to become a robust target, but the groundwork has been laid.

                                                                                                                                                      Additionally, Zig has a C backend for those targets that only ship a proprietary C compiler fork and do not publish ISA details.

Finally, Zig has ambitions to become the project that is forked and used as the proprietary compiler for esoteric systems, although of course we would prefer for businesses to make their ISAs open source and publicly documented instead. Nevertheless, Zig’s MIT license does allow this use case.

                                                                                                                                                      1. 2

                                                                                                                                                        I’ll be damned! That’s super impressive. I’ll look into Zig some more next time I’m on Plan 9.

                                                                                                                                                      2. 5

I think that implies that your platform is essentially dead (I would like to program my Amiga in Rust or Swift or Zig, too) or so off-mainstream (MVS comes to mind) that those tools wouldn’t serve any purpose anyway because they’re too alien.

                                                                                                                                                        1. 5

I would like to program my Amiga in Rust or Swift or Zig, too

Good news: LLVM does support 68k, in part thanks to communities like the Amiga community. LLVM doesn’t like to include stuff unless there’s a sufficient maintainer base, so…

                                                                                                                                                          MVS comes to mind

                                                                                                                                                          Bad news: LLVM does support S/390. No idea if it’s just Linux or includes MVS.

                                                                                                                                                          1. 1

Good news: LLVM does support 68k

Unfortunately, that doesn’t by itself mean that compilers (apart from clang) get ported, or that the platform gets added as part of a target triple. For instance, Plan 9 runs on platforms with LLVM support, yet isn’t supported by LLVM.

Bad news: LLVM does support S/390.

I should have written VMS instead.

                                                                                                                                                          2. 2

                                                                                                                                                            I won’t disagree with describing Plan 9 as off-mainstream ;) But I’d still like a console-based Signal client for that OS, and the best (only?) one I’ve found is written in Rust.