Threads for duncan_bayne

  1. 9

    I hope the author gets the help they need, but I don’t really see how the blame for their psychological issues should be laid at the feet of their most-recent employer.

    1. 50

      In my career I’ve seen managers cry multiple times, and this is one of the places that happened. A manager should never have to ask whether they’re a coward, but that happened here.

      I dunno, doesn’t sound like they were the only person damaged by the experience.

      Eventually my physicians put me on forced medical leave, and they strongly encouraged me to quit…

      Seems pretty significant when medical professionals are telling you the cure for your issues is “quit this job”?

      1. 15

        Seems pretty significant when medical professionals are telling you the cure for your issues is “quit this job”?

        A number of years ago I developed some neurological problems, and stress made it worse. I was told by two different doctors to change or quit my job. I eventually did, and it helped, but the job itself was not the root cause, nor was leaving the sole cure.

I absolutely cannot speak for OP’s situation, but I just want to point out that a doctor advising you to rethink your career doesn’t necessarily imply that the career is at fault. Though, in this case, it seems like it is.

        1. 4

          It doesn’t seem like the OP’s doctors told them to change careers though, just quit that job.

          1. 3

            To clarify, I’m using “career change” in a general sense. I would include quitting a job as a career change, as well as leaving one job for another in the same industry/domain. I’m not using it in the “leave software altogether” sense.

      2. 24

        I’m trusting the author’s causal assessment here, but employers (especially large businesses with the resources required) can be huge sources of stress and prevent employees from having the time or energy needed to seek treatment for their own needs, so they can both cause issues and worsen existing ones.

It’s not uncommon, for example, for businesses to encourage unpaid out-of-hours work for salaried employees by building a culture that emphasizes personal accountability for project success; this not only increases stress and reduces free time that could otherwise be used to relieve work-related stress, it teaches employees to blame themselves for what could just as easily be systemic failures. Even if an employee resists the social pressure to put in extra hours in such an environment, they’ll still be penalized with (real or imagined) blame from their peers, blame from themselves for “not trying hard enough”, and likely less job security or fewer benefits.

In particular, the business’s failure to support effective project management, manage workloads, or generally address problems repeatedly and clearly brought to its attention is relevant here. These kinds of things typically fuel burnout. The author doesn’t go into enough detail for an outside observer to make a judgment call one way or the other, but if you trust the author’s account of reality then it seems reasonable to blame the employer for, at the least, negligently fueling these problems through gross mismanagement.

        Arguably off-topic, but I think it might squeak by on the grounds that it briefly ties the psychological harm to the quality of a technical standard resulting from the mismanaged business process.

        1. 3

          a culture that emphasizes personal accountability for project success; this not only increases stress and reduces free time that could otherwise be used to relieve work-related stress, it teaches employees to blame themselves for what could just as easily be systemic failures.

          This is such a common thing. An executive or manager punts on actually organizing the work, whether from incompetence or laziness, and then tries to make the individuals in the system responsible for the failures that occur. It’s hardly new. Deming describes basically this in ‘The New Economics’ (look up the ‘red bead game’).

More cynically, is WebAssembly actually in Google’s interests? It doesn’t add revenue to Google Cloud. It’s going to make their data collection harder (provide Google analytics libraries for how many languages?). It was clearly a thing that was gaining momentum, so if they were to damage it, they would need to make sure they had a seat at the table and then make sure that the seat was used as ineffectually and disruptively as possible.

          1. 9

            More cynically, is WebAssembly actually in Google’s interests?

            I think historically the answer would have been yes. Google has at various points been somewhat hamstrung by shipping projects with slow front end JS in them and responded by trying to make browsers themselves faster. e.g. creating V8 and financially contributing to Mozilla.

            I couldn’t say if Google now has any incentive to not make JS go fast. I’m not aware of one. I suspect still the opposite. I think they’re also pushing mobile web apps as a way to inconvenience Apple; I think Google currently want people to write portable software using web tech instead of being tempted to write native apps for iOS only.

That said, what’s good for the company is not the principal factor motivating policy decisions. What’s good for specific senior managers inside Google is. Otherwise you wouldn’t see all these damn self-combusting promo-cycle-driven chat apps from Google. A company is not a monolith.

            ‘The New Economics’

            I have this book and will have to re-read at least this bit tomorrow. I have slightly mixed feelings about it, mostly about the writing style.

            1. 1

Making JS fast is one thing. Making it a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

              Your point about the senior managers’ interests driving what’s done is on point, though. Google and Facebook especially are weird because ads fund the company, and the rest is all some kind of loss leader floating around divorced from revenue.

              The only thing I’ll comment about Deming is that the chapter on intrinsic vs extrinsic motivation should be ignored, as that’s entirely an artifact despite its popularity. The rest of the book has held up pretty well.

              1. 10

Making JS fast is one thing. Making it a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

                Google doesn’t need to maintain their analytics libraries in many other languages, only to expose APIs callable from those languages. All WebAssembly languages can call / be called by JavaScript.
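As a minimal sketch of that interop (the file name, module shape, and “analytics.track” function are invented for illustration, not any real Google API): the host page hands one JS function to a wasm module as an import, regardless of what language the module was compiled from.

// Fetch and instantiate a wasm module, supplying one JS callback as an import.
const wasmBytes = await (await fetch('app.wasm')).arrayBuffer();
const { instance } = await WebAssembly.instantiate(wasmBytes, {
  analytics: {
    // Hypothetical: any wasm-hosted language can call this through its FFI,
    // provided the module declares a matching "analytics.track" import.
    track: (eventId: number) => navigator.sendBeacon('/collect', String(eventId)),
  },
});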

More generally, Google has been the biggest proponent of web apps instead of web services. Tim Berners-Lee’s vision for the web was that you’d have services that provided data with rich semantic markup. These could be rendered as web pages but could equally plug into other clients. The problem with this approach is that a client that can parse the structure of the data can choose to render it in a way that simply ignores adverts. If all of your ads are in an <advert provider="google"> block then an ad blocker is a trivial browser extension, as is something that displays ads but restricts them to plain text. Google’s web app push has been a massive effort to convince everyone to obfuscate the contents of their web pages. This has two key advantages for Google:

                • Writing an ad blocker is hard if ads and contents are both generated from a Turing-complete language using the same output mechanisms.
• Parsing such pages for indexing requires more resources (you can’t just parse the semantic markup, you must run the interpreter / JIT in your crawler, which requires orders of magnitude more hardware than simply parsing some semantic markup). This significantly increases the barrier to entry for new search engines, protecting Google’s core user-data-harvesting tool.

                WebAssembly fits very well into Google’s vision for the web.
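To see just how trivial the ad blocker would be under that semantic-markup model, here is a sketch of a complete content script; the <advert> element is the thought experiment above, not a real HTML tag.

// Remove every semantically-marked ad block on the page.
for (const ad of Array.from(document.querySelectorAll('advert'))) {
  ad.remove();
}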

                1. 2

I used to work for a price-comparison site, back when those were actual startups. We had one legacy price information page that was a Java applet (remember those?). Supposedly the founders were worried about screen scrapers, so they wanted the entire site rendered with applets to deter them.

                  1. 1

                    This makes more sense than my initial thoughts. Thanks.

                  2. 2

Making it a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

This is something I should have stated explicitly but didn’t think to: I don’t think wasm is actually going to be the future of non-JS languages in the browser. I think that for the next couple of decades at least, wasm is going to be used for compute kernels (written in other langs like C++ and Rust) that get called from JS.

                    I’m taking a bet here that targeting wasm from langs with substantial runtimes will remain unattractive indefinitely due to download weight and parsing time.
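For concreteness, the compute-kernel pattern I mean looks roughly like this (“kernel.wasm” and “sumSquares” are invented names): JS owns the application, and wasm only does the number crunching.

// Load a small wasm module compiled from e.g. C++ or Rust...
const { instance } = await WebAssembly.instantiateStreaming(fetch('kernel.wasm'));
// ...and call its one hot exported function from JS.
const sumSquares = instance.exports.sumSquares as (n: number) => number;
console.log(sumSquares(1_000_000));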

                    about Deming

                    I honestly think many of the points in that book are great but hoo boy the writing style.

            2. 0

              That is exactly what I thought while reading this. I understand that to a lot of people, WebAssembly is very important, and they have a lot of emotions vested into the success. But to the author’s employer, it might not be as important, as it might not directly generate revenue. The author forgets that to the vast, vast majority of people on this earth, having the opportunity to work on such a technology at a company like Google is an unparalleled privilege. Most people on this earth do not have the opportunity to quit their job just because a project is difficult, or because meetings run long or it is hard to find consensus. Managing projects well is incredibly hard. But I am sure that the author was not living on minimum wage, so there surely was compensation for the efforts.

It is sad to hear that the author has medical issues, and I hope those get sorted out. And stressful jobs do exacerbate those kinds of issues. But that is not a good reason for finger pointing. Maybe the position just was not right for the author; maybe there are more exciting projects waiting in the future. I certainly hope so. But it is important not to blame one’s issues on others; that is not a good attitude in life.

              1. 25

                Using the excuse that because there exist others less fortunate, it’s not worth fighting to make something better is also not a good attitude in life.

                Reading between the lines, it feels to me like there was a lot that the author left unsaid, and that’s fine. It takes courage to share a personal story about mental wellbeing, and an itemized list of all the wrongs that took place is not necessary to get the point the author was trying to make across.

                My point is that I’d be cautious about making assumptions about the author’s experiences as they didn’t exactly give a lot of detail here.

                1. 3

                  Using the excuse that because there exist others less fortunate, it’s not worth fighting to make something better is also not a good attitude in life.

This is true. It is worth fighting to make things better.

                  Reading between the lines, it feels to me like there was a lot that the author left unsaid, and that’s fine. It takes courage to share a personal story about mental wellbeing, and an itemized list of all the wrongs that took place is not necessary to get the point the author was trying to make across.

There are a lot of things that go into mental wellbeing. Some things you can control, some things are genetic. I don’t know what the author left out, but I have not yet seen a study showing that stressful office jobs give people brain damage. There might be things the author has not explained, but at the same time that is a very extreme claim. In fact, if that were true, I am sure that the author should receive a lot in compensation.

                  My point is that I’d be cautious about making assumptions about the author’s experiences as they didn’t exactly give a lot of detail here.

I agree with you, but I also think that if someone makes a very bold claim about an employer, especially about personal injury, these claims should be substantiated. There is a very big difference between “working there was hard, I quit” and “the employer acted recklessly and caused me personal injury”. And I don’t really know which one the author is saying, because the description could be interpreted as it just being a difficult project to see through.

                  1. 8

                    In fact, if that were true, I am sure that the author should receive a lot in compensation.

If you think about it for a few seconds, you can see how this could easily not happen. The OP themselves says that they don’t have documented evidence from the time because of all the issues they were going through. And it’s easy to see why: if your mental health is damaged and your brain is not working right, would you be mindful enough to take detailed notes of every incident and keep a trail of evidence for later use in compensation claims? Or are you saying that compensation would be given out no questions asked?

                    1. 3

All I’m saying is, there is a very large difference between saying “this job was very stressful; I had trouble sleeping and it negatively affected my concentration and memory” and saying “this job gave me brain damage”. Brain damage is relatively well-defined:

                      The basic definition of brain damage is an injury to the brain caused by various conditions such as head trauma, inadequate oxygen supply, infections, or intracranial hemorrhage. This damage may be associated with a behavioral or functional abnormality.

Additionally, there are ways to test for this; a neurologist can make that determination. I’m not a neurologist. But it would be the first time I’ve heard of brain damage being caused by psychosomatic issues. I believe that the author may have used this term in error. That’s why I said what I said: if you, or anyone, has brain damage as a result of your occupation, that is definitely grounds for compensation. And not a small compensation either, as brain damage is no joke. This is a very different category from mere psychological stress from working for an apparently mismanaged project.

                      1. 5

                        Via https://www.webmd.com/brain/brain-damage-symptoms-causes-treatments

                        Brain damage is an injury that causes the destruction or deterioration of brain cells.

                        Anxiety, stress, lack of sleep, and other factors can potentially do that. So I don’t see any incorrect use of the phrase ‘brain damage’ here. And anyway, you missed the point. Saying ‘This patient has brain damage’ is different from saying ‘Working in the WebAssembly team at Google caused this patient’s brain damage’. When you talk about causation and claims of damage and compensation, people tend to demand documentary evidence.

                        I agree brain damage is no joke, but if you look at society it’s very common for certain types of relatively-invisible mental illnesses to be downplayed and treated very lightly, almost as a joke. Especially by people and corporations who would suddenly have to answer for causing these injuries.

                        1. 3

                          Anxiety, stress, lack of sleep and other factors cannot, ever, possibly, cause brain damage. I think you have not completely read that article. It states – as does the definition that I linked:

                          All traumatic brain injuries are head injuries. But head injury is not necessarily brain injury. There are two types of brain injury: traumatic brain injury and acquired brain injury. Both disrupt the brain’s normal functioning.

• Traumatic Brain Injury (TBI) is caused by an external force – such as a blow to the head – that causes the brain to move inside the skull or damages the skull. This in turn damages the brain.
                          • Acquired Brain Injury (ABI) occurs at the cellular level. It is most often associated with pressure on the brain. This could come from a tumor. Or it could result from neurological illness, as in the case of a stroke.

                          There is no kind of brain injury that is caused by lack of sleep or stress. That is not to say that these things are not also damaging to one’s body and well-being.

                          Mental illnesses can be very devastating and stressful on the body. But you will not get a brain injury from a mental illness, unless it makes you physically impact your brain (causing traumatic brain injury), ingest something toxic, or have a stroke. It is important to be very careful with language and not confuse terms. The term “brain damage” is colloquially often used to describe things that are most definitely not brain damage, like “reading this gave me brain damage”. I hope you understand what I’m trying to state here. Again, the author has possibly misused the term “brain damage”, or there is some physical trauma that happened that the author has not mentioned in the article.

                          I hope you understand what I am trying to say here!

                          1. 9

                            Anxiety and stress raise adrenaline levels, which in turn cause short- and long-term changes in brain chemistry. It sounds like you’ve never been burnt out; don’t judge others so harshly.

                            1. 2

                              Anxiety and stress are definitely not healthy for a brain. They accelerate aging processes, which is damaging. But brain damage in a medical context refers to large-scale cell death caused by genetics, trauma, stroke or tumors.

                            2. 8

                              There seems to be a weird definitional slide here from “brain damage” to “traumatic brain injury.” I think we are all agreed that her job did not give her traumatic brain injury, and this is not claimed. But your claim that stress and sleep deprivation cannot cause (acquired) brain injury is wrong. In fact, you will find counterexamples by just googling “sleep deprivation brain damage”.

                              “Mental illnesses can be … stressful on the body.” The brain is part of the body!

                              1. 0

                                I think you – and most of the other people that have responded to my comment – have not quite understood what I’m saying. The argument here is about the terms being used.

                                Brain Damage

Brain damage, as defined here, is damage caused to the brain by trauma, tumors, genetics or oxygen loss, such as during a stroke. This can lead to potentially large chunks of your brain dying off. This means you can lose entire brain regions, and potentially permanently lose some abilities (facial recognition, speech, etc).

                                Sleep Deprivation

                                See Fundamental Neuroscience, page 961:

The crucial role of sleep is illustrated by studies showing that prolonged sleep deprivation results in the disruption of metabolic processes and eventually death.

                                When you are forcibly sleep deprived for a long time, such as when you are being tortured, your body can lose the ability to use nutrients and finally you can die. You need to not sleep at all for weeks for this to happen, generally this is not something that happens to people voluntarily, especially not in western countries.

                                Stress

                                The cells in your brain only have a finite lifespan. At some point, they die and new ones take their place (apoptosis). Chronic stress and sleep deprivation can speed up this process, accelerating aging.

Crucially, this is not the same as an entire chunk of your brain dying off because of a stroke. This is a very different process. It is not localized, and it doesn’t cause massive cell death. It is more of a slow, gradual process.

                                Summary

“Mental illnesses can be … stressful on the body.” The brain is part of the body!

Yes, for sure. It is just that the term “brain damage” is usually used for a very specific kind of pattern, and not for the kind of chronic, low-level damage done by stress and such. A doctor will not diagnose you with brain damage after you’ve had a stressful interaction with your coworker. You will be diagnosed with brain damage in the ICU after someone dropped a hammer on your head. Do you get what I’m trying to say?

                                1. 4

                                  I get what you are trying to say, I think you are simply mistaken. If your job impairs your cognitive abilities, then it has given you brain damage. Your brain, is damaged. You have been damaged in your brain. The cells and structures in your brain have taken damage. You keep trying to construct this exhaustive list of “things that are brain damage”, and then (in another comment) saying that this is about them not feeling appreciated and valued or sort of vaguely feeling bad, when what they are saying is that working at this job impaired their ability to form thoughts. That is a brain damage thing! The brain is an organ for forming thoughts. If the brain can’t thoughts so good no more, then it has been damaged.

                                  The big picture here is that a stressful job damaged this person’s health. Specifically, their brain’s.

                                  1. 2

                                    I understand what you are trying to say, but I think you are simply mistaken. We (as a society) have definitions for the terms we use. See https://en.wikipedia.org/wiki/Brain_damage:

                                    Neurotrauma, brain damage or brain injury (BI) is the destruction or degeneration of brain cells. Brain injuries occur due to a wide range of internal and external factors. In general, brain damage refers to significant, undiscriminating trauma-induced damage.

                                    This is not “significant, undiscriminating trauma-induced damage” (for context, trauma here refers to physical trauma, such as an impact to the head, not psychological trauma). What the author describes does not line up with any of the Causes of Brain Damage. It is simply not the right term.

Yes, the author has a brain, and there is self-reported “damage” to it. But just because someone is a man and feels like he polices the neighborhood does not make him a “police man”. Just because I feel like my brain doesn’t work right after a traumatic job experience does not mean I have brain damage™.

                                    1. 1

The Wikipedia header is kind of odd. The next sentence after “in general, brain damage is trauma induced” lists non-trauma-induced categories of brain damage. So I don’t know how strong that “in general” is meant to be. At any rate, “in general” is not at odds with the use of the term for non-trauma-induced stress/sleep-deprivation damage.

                                      At any rate, if you click through to Acquired Brain Injury, it says “These impairments result from either traumatic brain injury (e.g. …) or nontraumatic injury … (e.g. listing a bunch of things that are not traumatic.)”

                                      Anyway, the Causes of Brain Damage list is clearly not written to be exhaustive. “any number of conditions, including” etc.

                              2. 2

                                There is some evidence that lack of sleep may kill brain cells: https://www.bbc.com/news/health-26630647

                                It’s also possible to suffer from mini-strokes due to the factors discussed above.

                                In any case, I feel like you’re missing the forest for the trees. Sure, it’s important to be correct with wording. But is that more important than the bigger picture here, that a stressful job damaged this person’s health?

                                1. 2

                                  the bigger picture here, that a stressful job damaged this person’s health

Yes, that is true, and it is a shame. I really wish that the process around WASM were less hostile, and that this person had not been impacted negatively, even if stressful and hard projects are an unfortunate reality for many people.

                                  I feel like you’re missing the forest for the trees.

I think that you might be missing the forest for the trees – I’m not saying that this person was not negatively impacted, I am merely stating that it is (probably, unless there is evidence otherwise) wrong to characterize this impact as “brain damage”, because from a medical standpoint, that term has a narrower definition that damage due to stress does not fulfill.

                        2. 4

                          Hello, you might enjoy this study.

                          https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4561403/

                          I looked through a lot of studies to try and find a review that was both broad and to the point.

                          Now, you are definitely mixing a lot of terms here… but I hope that if you read the research, you can be convinced, at the very least, that stress hurts brains (and I hope that reading the article and getting caught in this comment storm doesn’t hurt yours).

                          1. 2

                            Sleep Deprivation and Oxidative Stress in Animal Models: A Systematic Review tells us that sleep deprivation can be shown to increase oxidative stress:

                            Current experimental evidence suggests that sleep deprivation promotes oxidative stress. Furthermore, most of this experimental evidence was obtained from different animal species, mainly rats and mice, using diverse sleep deprivation methods.

However, https://pubmed.ncbi.nlm.nih.gov/14998234/ disagrees with this. Furthermore, it is known that oxidative stress promotes apoptosis; see Oxidative stress and apoptosis:

                            Recent studies have demonstrated that reactive oxygen species (ROS) and the resulting oxidative stress play a pivotal role in apoptosis. Antioxidants and thiol reductants, such as N-acetylcysteine, and overexpression of manganese superoxide (MnSOD) can block or delay apoptosis.

The article that you linked, Stress effects on the hippocampus: a critical review, mentions that stress has an impact on the development of the brain and on its workings:

                            Uncontrollable stress has been recognized to influence the hippocampus at various levels of analysis. Behaviorally, human and animal studies have found that stress generally impairs various hippocampal-dependent memory tasks. Neurally, animal studies have revealed that stress alters ensuing synaptic plasticity and firing properties of hippocampal neurons. Structurally, human and animal studies have shown that stress changes neuronal morphology, suppresses neuronal proliferation, and reduces hippocampal volume

I do not disagree with this. I think that anyone would be able to agree that stress is bad for the brain, possibly by increasing apoptosis (accelerating ageing) or by decreasing the availability of nutrients. My only argument is that the term brain damage is quite narrowly defined (for example here) as (large-scale) damage to the brain caused by genetics, trauma, oxygen starvation or a tumor, and it can fall into one of two categories: traumatic brain injuries and acquired brain injuries. If you search for “brain damage” on PubMed, you will find the term being used in exactly these senses.

You will not find studies or medical diagnoses of “brain damage due to stress”. I hope that you can agree that using the term brain damage in a context such as the author’s, without evidence of traumatic injury or a stroke, is wrong. This does not take away the fact that the author has allegedly experienced a lot of stress at their previous employer, one of the largest and highest-paying tech companies, and that this experience has caused the author personal issues.

On an unrelated note: what is extremely fascinating to me is that some chemicals, such as methamphetamine (at low concentrations) or minocycline, are neuroprotective, being able to lessen brain damage, for example due to stroke. But obviously, at larger concentrations the opposite is the case.

                            1. 1

                              How about this one then? https://www.sciencedirect.com/science/article/abs/pii/S0197458003000484

We can keep going, it is not difficult to find these… You’re splitting a hair which should not be split.

                              What’s so wrong about saying a bad work environment can cause brain damage?

                              1. 1

You’re splitting a hair which should not be split.

There is nothing more fun than a civil debate. I would argue that any hair deserves to be split. Worst case, you learn something new, or form a new opinion.

                                What’s so wrong about saying a bad work environment can cause brain damage?

Nothing is wrong with that, if the work environment involves heavy things, poisonous things, or the like. This is why OSHA compliance is so essential in protecting people’s livelihoods. I just firmly believe, and I think that the literature agrees with me on this, that “brain damage” as a medical definition refers to large-scale cell death due to trauma or stroke, and not chronic low-level damage caused by stress. The language we choose to use is extremely important; it is the only facility we have to exchange information. Language is not useful if it is imprecise or even wrong.

                                How about this one then?

                                Let’s take a look what we got here. I’m only taking a look at the abstract, for now.

                                Stress is a risk factor for a variety of illnesses, involving the same hormones that ensure survival during a period of stress. Although there is a considerable ambiguity in the definition of stress, a useful operational definition is: “anything that induces increased secretion of glucocorticoids”.

                                Right, stress causes elevated levels of glucocorticoids, such as cortisol.

                                The brain is a major target for glucocorticoids. Whereas the precise mechanism of glucocorticoid-induced brain damage is not yet understood, treatment strategies aimed at regulating abnormal levels of glucocorticoids, are worth examining.

                                Glucocorticoids are useful in regulating processes in the body, but they can also do damage. I had never heard of the term glucocorticoid-induced brain damage, and searching for it in the literature only yields this exact article, so I considered this a dead end. However, in doing some more research, I did find two articles that somewhat support your hypothesis:

                                In Effects of brain activity, morning salivary cortisol, and emotion regulation on cognitive impairment in elderly people, it is mentioned that high cortisol levels are associated with hippocampus damage, supporting your hypothesis, but it only refers to elderly patients with Mild Cognitive Impairment (MCI):

                                Cognitive impairment is a normal process of aging. The most common type of cognitive impairment among the elderly population is mild cognitive impairment (MCI), which is the intermediate stage between normal brain function and full dementia.[1] MCI and dementia are related to the hippocampus region of the brain and have been associated with elevated cortisol levels.[2]

                                Cortisol regulates metabolism, blood glucose levels, immune responses, anti-inflammatory actions, blood pressure, and emotion regulation. Cortisol is a glucocorticoid hormone that is synthesized and secreted by the cortex of adrenal glands. The hypothalamus releases a corticotrophin-releasing hormone and arginine vasopressin into hypothalamic-pituitary portal capillaries, which stimulates adrenocorticotropic hormone secretion, thus regulating the production of cortisol. Basal cortisol elevation causes damage to the hippocampus and impairs hippocampus-dependent learning and memory. Chronic high cortisol causes functional atrophy of the hypothalamic-pituitary-adrenal axis (HPA), the hippocampus, the amygdala, and the frontal lobe in the brain.

                                Additionally, Effects of stress hormones on the brain and cognition: Evidence from normal to pathological aging mentions that chronic stress is a contributor to memory performance decline.

We might be able to find a few mentions of brain damage outside of the typical context (as caused by traumatic injury, stroke, etc) in the literature, but at least we can agree that the term brain damage is quite unusual in the context of stress, can we not? Out of the 188,764 articles known by PubMed, only 18,981 mention “stress”, and of those almost all are referring to “oxidative stress” (such as that experienced by cells during a stroke). I have yet to find a single study or article that directly states brain damage as being a result of chronic stress, in the same way that there are hundreds of thousands of studies showing brain damage from traumatic injuries to the brain.

                                1. 2

                                  Well, if anybody asks me I will tell them that too much stress at work causes brain damage… and now I can even point to some exact papers!

                                  I agree that it’s a little hyperbolic, but it’s not that hyperbolic. If we were talking about drug use everyone would kind of nod and say, ‘yeah, brain damage’ even if the effects were tertiary and the drug use was infrequent.

                                  But stress at work! Ohohoho, that’s just life my friend! Which really does not need to be the way of the world… OP was right to get out, especially once they started exhibiting symptoms suspiciously like the ones cited in that last paper (you know, the sorts of symptoms you get when your brain is suffering from some damage).

                                  1. 2

If someone tells me that they got brain damage from stress at work, I will laugh, tell them to read the Wikipedia article and then move on. But that is okay, we can agree to disagree. I understand that there are multiple possible definitions for the term brain damage.

                                    If we were talking about drug use everyone would kind of nod and say, ‘yeah, brain damage’ even if the effects were tertiary and the drug use was infrequent.

                                    In my defense, people often use terms incorrectly.

                                    OP was right to get out

I agree. Brain damage or not, Google employee or not, if you are suffering at work you should not stay there. We all have very basic needs, and one of them is being valued and being happy at work.

                                    Anyways, I hope you have a good weekend!

                          2. 6

                            I have not yet seen a study showing that stressful office jobs give people brain damage.

                            This is a bizarre and somewhat awful thread. Please could you not post things like this in future?

                            1. 7

                              I disagree. The post seemed polite, constructive, and led to (IMO) a good conversation (including some corrections to the claims in the post).

                              1. 4

                                Parent left a clear method for you to disprove them by providing a counter-example.

                                If you can point to some peer-reviewed research on the topic, by all means do so.

                                1. 5

                                  Yea but this is an obnoxious, disrespectful, and disingenuous way to conduct an argument. I haven’t read any studies proving anything about this subject one way or another. Because I am not a mental health researcher. So it’s easy for me to make that claim, and present the claim as something that matters, when really it’s a pointless claim that truly does not matter at all.

                                  Arguing from an anecdotal position based on your own experience, yet demanding the opposing side provide peer-reviewed studies to contradict your anecdotal experience, places a disproportionate burden on them to conduct their argument. And whether intentional or not, it strongly implies that you have little to no respect for their experiences or judgement. That you will only care about their words if someone else says them.

                      1. 2

                        Good thing I’ve got Numberstation for my PinePhone then (just migrated off Authy on Android).

                        1. 3

Now if only they’d open source Windows 7, haha. I wish. Nonetheless, it’s cool to see old software like this, previously a black box in essence, be open-sourced by corporates. It’d be cool if older things like 98/95 or 3.1 were open-sourced, since their architecture was superseded by the NT kernel and platform.

                          1. 3

                            I think this is a key point, and a key confession.

All versions of MS-DOS are long dead, and the last release of the last branch, IBM PC DOS 7.1 (no, not 7.01), was in 2003: https://sites.google.com/site/pcdosretro/doshist

                            It’s dead. If MS were serious about FOSS, they could release DOS without any impact on current Windows.

                            But they don’t.

                            DOS doesn’t have media playback or anything. There shouldn’t be any secrets in there.

                            It would help FreeDOS and maybe even an update to DR-DOS.

                            So why not?

                            I suspect they are ashamed of some of it.

                            Win9x is equally dead, with no new releases in over 20 years. But I bet some of the codebase is still used, especially UI stuff, and some doesn’t belong to them.

                            Win9x source code would really help ReactOS and I suspect MS is scared of that, too.

                            IBM, equally, could release OS/2. Especially Workplace OS/2 (the PowerPC version) as there should be little to no MS code in that.

                            Red Hat could help. They have relevant experience.

                            But they don’t. How mysterious, huh?

                            1. 10

                              Any big project like this might contain licensed code.

                              MSFT is a huge company with big pockets, so if they slip up, they might have to pay out a bunch of money to whatever ghouls have speculatively purchased the IP to that code. It’s rather easy to convince a jury that big bad MSFT is wantonly denying a rights-holder their fair share by open-sourcing.

                              What’s the upside to open-sourcing? A bunch of nerds are happy, another bunch are vocally unhappy just because the license is not the best one du jour, and snarky Twitterers are highlighting shitty C code.

                              It’s much easier to just say “we can’t risk it”.

                              1. 3

                                whatever ghouls have speculatively purchased the IP to that code

                                I’m going to shamelessly steal that term for speculators in IP.

                                1. 2

                                  Well, yes, but… here we are, discussing a fairly substantial app which they just did precisely this to.

                                  It can be done and it does happen.

                                  So I submit that it’s more important to ask if there’s anything that people can do in order to help it happen more often, rather than discuss why it doesn’t happen.

So, for example, we can usefully talk about things like MS-DOS, which is relatively tiny and which contains very little licensed code – for instance, the antivirus and backup tools in some later versions – code which could easily be excised with zero functional impact.

                                  The question becomes not “why doesn’t this happen?” but the far more productive “well what else could this happen to?”

                                  For instance, I have MS Word for DOS 5.5 here. It works. It’s still useful. It’s a 100% freeware download now, made so as a Y2K fix.

                                  But the last ever DOS version was MS Word for DOS 6, which is a much nicer app all round. How about asking for that just as freeware? Or even their other DOS apps, such as Multiplan and Chart? How about VB for DOS, or the Quick* compilers?

                                  1. 2

                                    So I submit that it’s more important to ask if there’s anything that people can do in order to help it happen more often, rather than discuss why it doesn’t happen.

                                    Sure, I am on board with this!

I keep being reminded of this: the fact that MSFT does anything with open source, much less literally owns a huge chunk of its infrastructure (GitHub), is mind-blowing to me.

                                    I’m pretty sure there are people within MSFT who want to open-source DOS. 20 years ago this would have been unthinkable.

                                  2. 1

                                    That, and who knows if they even still have the source code?

                                    1. 2

                                      :-D Yes, that is very true.

                                      I think WordPad happened because the source to Windows Write was lost.

                                      Whereas Novell Netware 5 and 6 introduced new filesystems – which lacked the blazing, industry-beating performance of the filesystem of Netware 2/3/4 – not because the source was lost, but because nobody could maintain it any more.

The original Netware Filesystem was apparently a single source file of approximately half a megabyte of x86 assembly. Indeed, it is possible that it was machine-generated assembly: Novell ported Netware from its original 68000 architecture to 80286 by writing a compiler that compiled 68000 assembler into 80286 object code.

                                      When they revealed this and were told that it was impossible, they responded that they did not know that it was impossible so they just did it anyway.

There were only a handful of senior engineers who could, or even dared, make any changes to this huge lump of critical code, which was both performance-critical and vital to data integrity.

                                      So in the end, when all those people were near retirement, they just replaced the entire thing, leaving the old one there untouched and writing a completely new one.

This lost a key advantage of Netware. The old design was also why Netware servers were extremely vulnerable to power loss: at boot, the OS read the disks’ file allocation tables into RAM and then kept them there. The data structures were in memory only, and so if the server was downed without flushing them to disk, data loss was inevitable.

                                      This had 2 key results:

                                      1. Netware disk benchmarks were 2-3 orders of magnitude faster than any other OS in its era, the time of slow, expensive spinning disks.
                                      2. Netware had a linear relationship between amount of disk space and amount of RAM needed. The more disk you added, the more RAM you had to have. In the 1980s that could mean that a server with big disks needed hundreds of megabytes of memory, at a price equivalent to a very nice car or perhaps a small house.
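To put illustrative numbers on result 2 (these are invented round figures, not Netware’s actual on-disk format): with 4 KB allocation units and 8 bytes of in-RAM table entry per unit, a 500 MB volume is ~128,000 units, i.e. about 1 MB of memory pinned permanently just for the tables, and that figure grew linearly with every disk you added.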
                                      1. 2

                                        I think WordPad happened because the source to Windows Write was lost.

                                        It’s not. (I’m looking at it.)

                                        Write was just really, really early. It was written while Windows 1.0 was being written. The version included in NT 3.x is still 16 bit; while those systems did include some 16 bit code, this is the only executable I’m seeing that was 16 bit for lack of porting.

                                        Important code, in particular its formatting code, was written in 16 bit assembly.

                                        WordPad was a showcase for the then-new RichEdit control, which was part of rationalizing the huge number of rich text renderers that had proliferated up to that point.

                                  3. 3

                                    Not the latest version, but here you go: https://github.com/microsoft/MS-DOS

                                    1. 3

True. But those were obsolete even 25 years ago.

                                      From vague memory:

                                      MS-DOS 1.x was largely intended to be floppy-only.

                                      MS-DOS 2.x added subdirectories.

                                      MS-DOS 3 added support for hard disks (meaning 1 partition on 1 hard disk).

                                      3.1 added networking

                                      3.2 added multiple hard disks.

                                      3.3 added multiple partitions per hard disk (1 primary + 1 extended containing multiple logical partitions).

                                      That was it for years. Compaq DOS 3.31 added FAT16 partitions above 32MB in size.

                                      IBM wrote DOS 4 which standardised Compaq’s disk format.

                                      Digital Research integrated 286/386 memory management into DR-DOS 5.

                                      MS-DOS 5 copied that. That is the basis of the DOS support that is still in NT (32-bit Windows 10) today.

                                      MS-DOS 6 added disk compression (stolen from Stacker), antivirus and some other utilities.

                                      6.2 fixed a serious bug and added SCANDISK.

                                      6.21 removed DoubleSpace

6.22 replaced DoubleSpace with DriveSpace (the stolen code rewritten).

                                      That’s the last MS version.

                                      Later releases were only part of Win9x.

                                      IBM continued with PC-DOS 6.3, then 7.0, 7.01 and finally 7.1. That adds FAT32 support and some other nice-to-have stuff, derived from the DOS in Win95 OSR2 and later.

                                      So, basically, anything before DOS 5 is hopelessly obsolete and useless today. DOS 6.22 would be vaguely worth having. Anything after that would be down to IBM, not Microsoft.

                                    2. 2

They likely don’t own the rights to everything inside MS-DOS, or the rights are unclear. The product wasn’t made to be open source to begin with, so licensing considerations were likely never taken into account. It would be a rather large undertaking to go through the code and evaluate whether they could release each piece, as they would likely have to dig up decades-old agreements with third parties, many of which are likely vague compared to today’s legal standards for software, and interpret them with regards to the code.

                                      All of this for a handful of brownie points for people who think retro computing is fun? Eh. Not worth it.

                                      1. 1

                                        I think you may over-estimate the complexity of MS-DOS. :-D

                                        Windows, yes, any 32-bit version.

But DOS? No. DOS doesn’t even have a CD device driver. It can’t play back any format of anything; it can display plain text, and nothing else. It doesn’t have a file manager as such (although the later DOSShell did). It has no graphics support at all. No multitasking. It doesn’t even have a mouse driver as standard. No sound drivers, no printer drivers. Very little at all, really.

                                        The only external code was code stolen from STAC’s Stacker in DoubleSpace in MS-DOS 6, and that was removed again in DOS 6.2.

                                        1. 2

                                          DOS doesn’t even have a CD device driver

                                          MSCDEX.EXE was in one of the MS-DOS 6.x versions, IIRC, but I suppose you mean that each CD-ROM drive vendor provided its own .SYS file to actually drive the unit?
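For anyone who never saw it: the vendor’s .SYS driver was loaded in CONFIG.SYS, and MSCDEX then glued it into the filesystem from AUTOEXEC.BAT. The driver file, device name and drive letter below are illustrative (OAKCDROM.SYS being the generic Oak driver people usually grabbed from a Win9x boot disk):

REM CONFIG.SYS: load the vendor's hardware driver and name the device
DEVICE=C:\CDROM\OAKCDROM.SYS /D:MSCD001
REM AUTOEXEC.BAT: MSCDEX attaches to that device and exposes it as drive D:
MSCDEX.EXE /D:MSCD001 /L:D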

                                          1. 2

That’s right. The first time I ever saw a generic CD-drive hardware device driver – as opposed to the filesystem layer which acted like a network redirector – was on the emergency boot disk image included in Win9x.

                                            Never part of DOS itself. The SmartDrive disk cache only came in later, too, and there were tons of 3rd party replacements for that.

(I had some fascinating comments from its developer a while back about the IBMCACHE.SYS driver bundled with DOS 3.3 on the IBM PS/2 kit. I could post, if that’s of interest…?)

                                            1. 1

                                              You should definitely post those comments somewhere - the DOS era is fading into obscurity (deservedly or sadly, depending on who you ask).

                                              1. 2

                                                Interesting. OK, maybe I will make that into a new blog post, too. :-) Cheers!

                                      2. 1

                                        Indeed, really makes you think about the goals and morals behind projects and things within big corporations. As much as I’d like to see it happen, I doubt it ever will.

                                        1. 2

                                          My suspicion overall is this:

                                          A lot of history in computing is being lost. Stuff that was mainstream, common knowledge early in my career is largely forgotten now.

                                          This includes simple knowledge about how to operate computers… which is I think why Linux desktops (e.g. GNOME and Pantheon) just throw stuff out: because their developers don’t know how this stuff works, or why it is that way, so they think it’s unimportant.

                                          Some of these big companies have stuff they’ve forgotten about. They don’t know it’s historically important. They don’t know that it’s not related to any modern product. The version numbering of Windows was intentionally obscure.

                                          Example: NT. First release of NT was, logically, 1.0. But it wasn’t called that. It was called 3.1. Why?

                                          Casual apparent reason: well because mainstream Windows was version 3.1 so it was in parallel.

                                          This is marketing. It’s not actually true.

                                          Real reason: MS had a deal in place with Novell to include some handling of Novell Netware client drive mappings. Novell gave MS a little bit of Novell’s client source code, so that Novell shares looked like other network shares, meaning peer-to-peer file shares in Windows for Workgroups.

                                          (Sound weird? It wasn’t. Parallel example: 16-bit Windows (i.e. 3.x) did not include TCP/IP or any form of dial-up networking stack. Just a terminal emulator for BBS use, no networking over modems. People used a 3rd party tool for this.

But Internet Explorer was supported on Windows 3.1x. So MS had to write its own all-new dialup PPP stack and bundle it with 16-bit IE. Otherwise you could download the MS browser for the MS OS and it couldn’t connect, and that would look very foolish.

                                          The dialup stack only did dialup and could not work over a LAN connection. The LAN connection could not do PPP or SLIP over a serial connection. Totally separate stacks.

                                          Well, the dominant server OS was Netware and again the stack was totally separate, with different drivers, different protocols, everything. So Windows couldn’t make or break Novell drive mappings, and the Novell tools couldn’t make or break MS network connections.

                                          Thus the need for some sharing of intellectual property and code.)

                                          Novell was, very reasonably, super wary of Microsoft. MS has a history of stealing code: DoubleSpace contained stolen STAC code; Video for Windows contained stolen Apple QuickTime code; etc. etc.

                                          The agreement with Novell only covered “Windows 3.1”. That is why the second, finished, working edition of Windows for Workgroups, a big version with massive changes, was called… Windows for Workgroups 3.11.

                                          And that’s why NT was also called 3.1. Because that way it fell under the Novell agreement.

                                          1. 3

                                            16-bit Windows (i.e. 3.x) did not include TCP/IP or any form of dial-up networking stack… The dialup stack only did dialup and could not work over a LAN connection. The LAN connection could not do PPP or SLIP over a serial connection. Totally separate stacks.

                                            There’s some nuance here.

                                            • 1992: Windows 3.1 ships with no (relevant) network stack. In unrelated news, the Winsock 1.0 interface specification happened, but it wasn’t complete/usable.
                                            • 1993: Windows for Workgroups 3.11 ships with a LAN stack. The Winsock 1.1 specification happened, which is what everybody (eventually) used.
                                            • 1994: A TCP/IP module is made available for download for Windows for Workgroups 3.11. That included a Winsock implementation.
                                            • 1995: Internet Explorer for Windows 3.x is released.

                                            Of course, IE could have just bundled the TCP/IP stack that already existed, but that wouldn’t have provided PPP. It could have provided a PPP dialer that used the WfWg networking stack, but that wouldn’t have done anything for Windows 3.1 users.

                                            As far as I can tell, the reason for two stacks is Windows 3.1 support - that version previously had zero stacks, so something needed to be added. There would also have been many WfWg users who hadn’t installed networking components.

                                            There’s an alternate universe out there where the WfWg stack was backported to 3.1, with its TCP/IP add-on, and a new PPP dialer…but that’s a huge amount of code to ask people to install. Besides, the WfWg upgrade was selling for $69 at the time, mainly to businesses.

                                            The real point is a 1992 release didn’t perfectly prepare for a 1995 world. Windows 95 (and NT 3.5) had unified stacks.

                                            MS has a history of stealing code: DoubleSpace contained stolen STAC code; Video for Windows contained stolen Apple QuickTime code; etc.

                                            The STAC issue was about patents, not copying code. The QuickTime copying allegation was against San Francisco Canyon Co, who licensed it to Intel, who licensed it to Microsoft.

                                            1. 2

                                              You are conflating a whole bunch of different stuff from different releases here. I don’t think that the result is an accurate summary.

                                              Windows 3.1: no networking. Windows 3.11: minor bugfix release; no networking.

                                              Windows for Workgroups 3.1: major rewrite; 16-bit peer-to-peer LanMan-protocol networking, over NetBEUI. No TCP/IP support IIRC.

                                              Windows for Workgroups 3.11: a major new version, disguised with a 0.01 version number, with a whole new 32-bit filesystem (called VFAT and pulled from the WIP Windows Chicago, AKA Windows 95), a 32-bit network stack and more. Has 16-bit TCP/IP included, over NIC only. No dialup TCP/IP, no PPP or SLIP support.

                                              32-bit TCP/IP available as an optional extra for WfWg 3.11 only. Still no dialup support.

                                              IE 1.x was 32-bit only.

                                              IE 2.0 was the first 16-bit release. https://en.wikipedia.org/wiki/Internet_Explorer_2

                                              The dialup TCP/IP stack was provided by a 3rd party, FTP Software. https://en.wikipedia.org/wiki/FTP_Software

                                              That dialup stack was dialup only and could not run over a NIC.

                                              So, if you installed 16-bit IE on WfWg 3.11, which I did, in production, you ended up with effectively 2 separate IP stacks: a dialup one that could only talk to a modem on a serial port, and one in the NIC protocol stack.

                                              The IE PPP stack was totally separate and independent from the WfWg TCP/IP stacks, and it did not interoperate with WfWg at all. You could not map network drives over PPP for example.

                                              The real reason that there were 2 stacks is not so much separate OSes – it’s that MS licensed it in.

As far as the STAC thing – I may as well copy my own reply from the Orange Site, as it took a while to write.

                                              This is as I understand it. (It’s my blog post, BTW.)

                                              https://web.archive.org/web/20070509205650/http://www.vaxxine.com/lawyers/articles/stac.html

                                              https://www.latimes.com/archives/la-xpm-1994-02-24-fi-26671-story.html

                                              https://tedium.co/2018/09/04/disk-compression-stacker-doublespace-history/

                                              https://en.wikipedia.org/wiki/Stac_Electronics#Microsoft_lawsuit

                                              MS bullied Central Point Software into providing the backup and antivirus tools, on the basis of CPS being able to sell upgrades and updates.

                                              CPS went out of business.

                                              https://en.wikipedia.org/wiki/Central_Point_Software

                                              MS attempted to bully STAC into providing Stacker for free or cheaply. STAC fought back.

                                              Geoff Chappell was involved:

                                              https://www.geoffchappell.com/

                                              He’s the guy that found and published the AARD code MS used to fake Windows 3.1 failing on DR-DOS.

                                              https://en.wikipedia.org/wiki/AARD_code

                                              As described here: https://www.zapread.com/Post/Detail/7735/aard-code-or-how-bill-gates-finished-off-the-competition/

                                              Discussed on HN here: https://news.ycombinator.com/item?id=26526086

                                              Especially see this nice little summary: https://news.ycombinator.com/item?id=26529937

                                              It would be hard to patent this stuff that narrowly. Various companies sold disk compression; note the whole list here:

                                              https://en.wikipedia.org/wiki/Disk_compression#Standalone_software

                                              MS saw the code, MS copied it, STAC proved it, MS removed it (MS-DOS 6.21) and then added the functionality back (MS-DOS 6.22) after re-implementing the offending code.

                                    1. 4

                                      I reject the intro already. There is nothing scientific about preferences. I like my MX Black and Brown and I can’t say why - and that’s not important. I tried Red and I prefer Black. I tried Blue and I’d rather use most non-mechanical ones over Blues. It’s like the fabric of your couch or your brand of soft drink…

                                      1. 22

                                        Yes, but an important part of a review is to try and convey objectively what the product is like so that you can decide if it fits your subjective preferences without actually having tried it.

                                        1. 4

The problem is, you might not be able to say why - neither can most people. But lots of them do like to say why anyway, unaware that they bundle all of their experiences and biases into these potentially misleading reviews, which are generally meaningless outside of the experiences, preferences, biology, etc., of some given person.

                                          1. 2

                                            And it is not about the sound, but maybe it is about the sound ;)
                                            Try all, change when bored! https://github.com/deejayy/cliqsound

                                            1. 1

                                              Yeah, preference is subjective. They need to provide a group of reviewers, who each give their subjective take.

                                              1. 1

                                                There is nothing scientific about preferences.

                                                Except when there is.

                                                Some examples: your favourite brand of soft drink is likely to be your favourite in part because of intense research on the part of Coca Cola Amatil or one of their competitors. Likewise your couch fabric is from a carefully researched, selected, and marketed subset of profitable fabrics available to that manufacturer.

                                                Some disclaimers: sometimes they aren’t. I prefer the smell of two stroke engines because that’s how fast exciting bikes smelled when I was a teenager and getting up to much naughtiness on them. And don’t assume that manufacturers research your preferences because they want to satisfy them; it’s as often done to understand them as a precursor to manipulating them.

                                                But please don’t take this as anti-capitalist polemic :) I’d actually prefer (heh) laissez-faire capitalism to our current mixed market. But I’d prefer it warts and all, and if you don’t think scientific treatment of preferences is one of those warts (or sometimes beauty spots), you’re probably being played more easily than those who do.

                                                Edited to add: and sometimes it works the other way around, when aggregate preferences reveal truth. I recall - but annoyingly can’t now find - a study that demonstrated cross cultural preferences for certain types of landscape beauty that implied their suitability for habitation by primitive humans. That is, our preferences for beautiful landscapes may be a consequence of our recent evolution.

                                                1. 1

OK, please point me to any source why I would prefer Coke over Pepsi? Apparently both companies did an excessive amount of research on how to win me over?

Tongue in cheek aside, maybe I was overly terse and others have said it better. But sometimes I’d just prefer my new car to be blue and not red, even if there was a measurable advantage - I just don’t like the color.

                                                  Same with MX Blues. You might be able to persuade me that I could type 20% faster. Unless I was entering in a speed-typing contest I’d still stick to the other ones for work and leisure.

I think the main problem is that people who want to scientifically quantify things sometimes just overlook people’s priorities. For decades I simply chose my CPU vendor by 2 criteria: speed vs price, and “are there known problems with the platform/chipset” - nothing else mattered. Coming to mainboards it was already some sort of “does it have the correct features?”, and just like that, all the scientific criteria went out the window.

Good point about the market only providing a certain subset of things to have preferences for. But I am not sure this is the correct discussion; it just limits certain criteria to the most mainstream methods. I.e. the “perfect” version of something is either too expensive or has some other drawback, so it won’t even be on the normal market, and we consumers have to adjust our preferences to what is available. But I think that brings it out of scope. I honestly do not care about a scientifically better keyboard switch if I can’t buy it. And then I’d still need to like it ;)

                                                  1. 1

                                                    OK, please point me to any source why I would prefer Coke over Pepsi?

                                                    Sure:

                                                    https://www.psychologytoday.com/au/blog/subliminal/201205/why-people-choose-coke-over-pepsi

                                                    As expected, both the normal and the brain-damaged volunteers preferred Pepsi to Coke when they did not know what they were drinking. And, as expected, those with healthy brains switched their preference when they knew what they were drinking. But those who had damage to their VMPC – their brain’s “brand appreciation” module – did not change preferences. They liked Pepsi better whether or not they knew what they were drinking. Without the ability to unconsciously experience a warm and fuzzy feeling toward a brand name, there is no Pepsi paradox.

                                                    I think the main problem is that people who want to scientifically quantify things sometimes just overlook people’s priorities.

Yes, agreed. There are a couple of drivers for this; it’s easier to deal with people in the aggregate when looking at stats, and also, people buy into the idea of intrinsic value (which is itself a philosophical error). It’s never valid to ask “is X valuable”, only “for whom is X valuable, and for what purpose?”

                                              1. 6

                                                “Mcrib” is the best code name I’ve heard in a while.

                                                In a way it’s comforting that Slack, with these brilliant engineers, has these issues too. I mean, I know everyone does, but when a product I designed or worked on has an issue, I always beat myself up about it. Realizing I can make mistakes too and not be an imposter is a lesson I still haven’t fully learned after all these years.

                                                1. 21

It’s appropriate for a service called “mcrib” to be only intermittently available…

                                                  1. 2

                                                    Bravo.

                                                    1. 2

                                                      This is the greatest comment I have ever seen here.

                                                    2. 3

                                                      EVERYONE does!

Let me tell you about some internal-only post-mortems (we call them COEs - Correction Of Error) that made my hair stand up :)

                                                      And I guarantee you that every other BigCorp in existence has them too. Solving problems at crazy pants scale means that sometimes despite everyone’s best efforts you end up with disasters at said scale :)

                                                      1. 2

                                                        In a way it’s comforting that Slack, with these brilliant engineers, has these issues too.

                                                        Haha! Not just Slack … literally every place I’ve worked at has had something like this.

                                                      1. 17

                                                        Or, don’t use Go.

There are plenty of other programming languages out there. And I see Google’s mark here as a net negative, not a positive. Perhaps in the past, Google was cool. Not anymore.

                                                        1. 11

                                                          Note that using Go does not require a Google account. Only contributing code to the Go project does.

                                                          1. 11

                                                            Pretty sure that’s not how anyone hoped free software would pan out.

                                                            1. 4

                                                              I really don’t understand this comment. Requiring you to sign away or otherwise give up legal rights in order to contribute is a long-standing tradition of Free Software projects. I’m sure the FSF would officially condemn this, of course, but in a “well, it’s OK when we do it because we have the best interests of Free Software at heart” way rather than anything constructive.

                                                              1. 7

                                                                Leaving aside the issue of CLAs, what I meant was: I don’t think that anyone had hoped that contributors would have to sign up for an account with an advertising / surveillance company in order to contribute.

                                                                1. 4

                                                                  The problem is that the hard uncompromising stance on never having a Google account, or any other major federated-identity-provider account, is no less and no more valid than a hard uncompromising stance on never signing away one’s rights to one’s own work. So it’s very hard to frame one as OK and the other as not OK in a way that stands up to scrutiny.

                                                                  1. 2

                                                                    I wasn’t attempting to frame it that way at all; I was disregarding CLAs as off-topic.

                                                                    But to bring them into scope - why do you consider them a package deal? I’d have said they were entirely separate, though valid, concerns.

                                                                    1. 4

                                                                      The thing is that the FSF historically didn’t just require a CLA giving them permission to use your contribution; they required a full assignment of copyright. They claimed it was done for good reasons, but that’s a line some people weren’t willing to cross just to contribute to a project.

                                                                      That is the source of the analogy here: I’m sure Google has an argument that requiring a Google account is done for good reasons, but that’s a line some people aren’t willing to cross just to contribute.

                                                                      And my point is thus that this phenomenon — of being required to give up something one considers too precious to give up, “just to contribute” — is not new, and in fact has previously been done by the literal Free Software Foundation. So lamenting “that’s not how anyone hoped free software would pan out” does not make sense to me.

                                                                      1. 1

                                                                        No, but the phenomenon of having to create an account with an advertising / surveillance company that then explicitly prevents logging in using the majority of free software web browsers, is new.

                                                                        1. 2

                                                                          I’m not trying to be mean here, but what you’re really saying is: “People who didn’t want to assign copyright have trivial concerns that don’t matter, and excluding them from all projects of the Free Software Foundation was acceptable; people who don’t want to have a Google account, however, have valid concerns that do matter, and excluding them from the Go language’s project is not acceptable”.

                                                                          If it does sound mean, that’s because it’s surfacing some things that maybe were never looked at critically before.

                                                                          1. 1

Ah I see - no, I think the FSF’s requirement around copyright assignment is wrong too. Neither is really a good model for collaboration IMO.

                                                                  2. 1

                                                                    Funny, I read it as being upset about the user versus leech versus developer split.

                                                                  3. 1

Certainly the FSF has a long history of demanding they get ownership of uncompensated contributors’ IP, but the FSF is not the only community.

Outside of the FSF, the majority of major open-source projects don’t steal contributors’ IP. One of the big things such IP theft allows is changing the license in the future: if there’s only one owner for all the code, that owner gets free rein over the license terms.

                                                                    1. 2

I was replying to someone who seemed to lament that the ideals of Free Software were not being lived up to. Pointing out that the supposed steward and paragon of Free Software required people to sign away their rights seemed appropriate.

                                                                      1. 1

                                                                        I was lamenting that - but also think that the FSF also got it wrong, just in a different way. Perhaps it was a misunderstanding; my desire to disregard CLAs wasn’t because I support them, but because I thought it was off topic.

                                                                        To be super clear, when I talk about free software I don’t mean Free Software ;-P

                                                                      2. 1

                                                                        Allowing one party to unilaterally change the license of a project can also prevent an impasse. For example, I want to develop an add-on for a program that’s licensed under the GPL 2. I may also eventually contribute to the core of that program. As part of this work, I want to use a library that’s licensed under the Apache 2.0 license, which is incompatible with GPL 2. Neither the GPL-covered program nor the Apache-licensed library require copyright assignment for contributors, so changing the license of either is likely to be impractical. So it’s likely that I’ll have to make a suboptimal technical choice to avoid a license incompatibility. That just feels wrong. Of course, I know that’s a minor problem in the grand scheme of things, but it still feels wrong.

                                                              1. 40

I tried out this language while it was in early development, writing some of the standard library (hash::crc* and unix::tty::*) to test the language. I wrote about this experience, in a somewhat haphazard way. (Note: that blog post is outdated, and not all my opinions are the same. I’ll be trying to take a second look at Hare in the coming days.)

In general, I feel like Hare just ends up being a Zig without comptime, or a Go without interfaces, generics, GC, or runtime. I really hate to say this about a project whose authors have put in such a huge amount of effort over the past year or so, but I just don’t see its niche – the lack of generics means I’d always use Zig or Rust instead of Hare or C. It really looks like Drew looked at Zig, said “too bloated”, and set out to create his own version.

                                                                Another thing I find strange: why are you choosing to not support Windows and macOS? Especially since, you know, one of C’s good points is that there’s a compiler for every platform and architecture combination on earth?

                                                                That said, this language is still in its infancy, so maybe as time goes and the language finds more users we’ll see more use-cases for Hare.

                                                                In any case: good luck, Drew! Cheers!

                                                                1. 10

                                                                  why are you choosing to not support Windows and macOS?

                                                                  DdV’s answer on HN:

                                                                  We don’t want to help non-FOSS OSes.

                                                                  (Paraphrasing a lot, obvs.)

                                                                  My personal 2c:

                                                                  Some of the nastier weirdnesses in Go are because Go supports Windows and Windows is profoundly un-xNix-like. Supporting Windows distorted Go severely.

                                                                  1. 13

                                                                    Some of the nastier weirdnesses in Go are because Go supports Windows and Windows is profoundly un-xNix-like. Supporting Windows distorted Go severely.

                                                                    I think that’s the consequence of not planning for Windows support in the first place. Rust’s standard library was built without the assumption of an underlying Unix-like system, and it provides good abstractions as a result.
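A minimal sketch of what that abstraction looks like in practice (an illustrative example of mine, not from any particular project): std::path treats separators as a platform detail, so the same code is correct on both Unix and Windows.

    // Illustrative only: PathBuf composes paths without assuming
    // Unix-style '/' separators.
    use std::path::PathBuf;

    fn main() {
        let mut log = PathBuf::from("logs");
        log.push("2024");
        log.push("app.log");
        // Prints "logs/2024/app.log" on Unix, "logs\2024\app.log" on Windows.
        println!("{}", log.display());
    }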

                                                                    1. 5

                                                                      Amos talks about that here: Go’s file APIs assume a Unix filesystem. Windows support was kludged in later.

                                                                    2. 5

                                                                      Windows and Mac/iOS don’t need help from new languages; it’s rather the other way around. Getting people to try a new language is pretty hard, let alone getting them to build real software in it. If the language deliberately won’t let them target three of the most widely used operating systems, I’d say it’s shooting itself in the foot, if not in the head.

                                                                      (There are other seemingly perverse decisions too. 80-character lines and 8-character indentation? Manual allocation with no cleanup beyond a general-purpose “defer” statement? I must not be the target audience for this language, is the nicest response I have.)

                                                                      1. 2

                                                                        Just for clarity, it’s not my argument. I was just trying to précis DdV’s.

                                                                        I am not sure I agree, but then again…

                                                                        I am not sure that I see the need for yet another C-replacement. Weren’t Limbo, D, Go, & Rust all attempts at this?

                                                                        But that aside: there are a lot of OSes out there that are profoundly un-Unix-like. Windows is actually quite close, compared to, say, Oberon or classic MacOS or Z/OS or OpenVMS or Netware or OS/2 or iTron or OpenGenera or [cont’d p94].

                                                                        There is a lot of diversity out there that gets ignored if it doesn’t have millions of users.

                                                                        Confining oneself to just OSes in the same immediate family seems reasonable and prudent to me.

                                                                    3. 10

                                                                      My understanding is that the lack of generics and comptime is exactly the differentiating factor here – the project aims at simplicity, and generics/compile time evaluations are enormous cost centers in terms of complexity.

                                                                      1. 20

                                                                        You could say that generics and macros are complex, relative to the functionality they offer.

                                                                        But I would put comptime in a different category – it’s reducing complexity by providing a single, more powerful mechanism. Without something like comptime, IMO static languages lose significant productivity / power compared to a dynamic language.

                                                                        You might be thinking about things from the tooling perspective, in which case both features are complex (and probably comptime even more because it’s creating impossible/undecidable problems). But in terms of the language I’d say that there is a big difference between the two.

                                                                        I think a language like Hare will end up pushing that complexity out to the tooling. I guess it’s like Go where they have go generate and relatively verbose code.

                                                                        1. 3

                                                                          Yup, agree that zig-style seamless comptime might be a great user-facing complexity reducer.

                                                                          1. 16

                                                                            I’m not being Zig-specific when I say that, by definition, comptime cannot introduce user-facing complexity. Unlike other attributes, comptime only exists during a specific phase of compiler execution; it’s not present during runtime. Like a static type declaration, comptime creates a second program executed by the compiler, and this second program does inform the first program’s runtime, but it is handled entirely by the compiler. Unlike a static type declaration, the user uses exactly the same expression language for comptime and runtime.
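To make the “second program” idea concrete, here is a minimal sketch in Rust, using const fn as a rough analogue of comptime (the names are mine, purely illustrative): the function below executes inside the compiler, and only its result exists at runtime.

    // table_size() runs during compilation; the finished array size is
    // baked into the binary before the program ever starts.
    const fn table_size(bits: u32) -> usize {
        1usize << bits
    }

    static LOOKUP: [u8; table_size(8)] = [0u8; table_size(8)];

    fn main() {
        // At runtime the compile-time "second program" has already
        // finished; we only observe its output.
        println!("table has {} entries", LOOKUP.len());
    }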

                                                                            If we think of metaprogramming as inherent complexity, rather than incidental complexity, then an optimizing compiler already performs compile-time execution of input programs. What comptime offers is not additional complexity, but additional control over complexity which is already present.

                                                                            To put all of this in a non-Zig context, languages like Python allow for arbitrary code execution during module loading, including compile-time metaprogramming. Some folks argue that this introduces complexity. But the complexity of the Python runtime is there regardless of whether modules get an extra code-execution phase; the extra phase provides expressive power for users, not new complexity.

                                                                            1. 8

                                                                              Yeah, but I feel like this isn’t what people usually mean when they say some feature “increases complexity.”

I think they mean something like: now I must know more to navigate this world. There will be, on average, a wider array of common usage patterns that I will have to understand. You can say that the complexity was already there anyway, but if, in practice, it was usually hidden, and now it’s not, doesn’t that matter?

                                                                              then an optimizing compiler already performs compile-time execution of input programs.

                                                                              As a concrete example, I don’t have to know about a new keyword or what it means when the optimizing compiler does its thing.

                                                                              1. 2

A case can be made that complexity in this sense is still a “good thing” to surface, because it improves code quality / “matters”:

Similar arguments can be used for undefined behavior (UB), as it changes how you navigate a language’s world. But for many programmers, it can usually be hidden by code seemingly working in practice (i.e. not hitting race conditions, not hitting unreachable paths for common input, updating compilers, etc.). I’d argue that this still matters (enough to introduce tooling like UBSan, ASan, and TSan at least).

The UB is already there, both for correct and incorrect programs. Providing tools to interact with it (i.e. __builtin_unreachable -> comptime) as well as explicit ways to do what you want correctly (i.e. __builtin_add_overflow -> comptime-specific language constructs interacted with using normal code, e.g. for vs inline for) would still be described as “increases complexity” under this model, which is unfortunate.
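To render that idea outside of C (a hedged sketch in Rust; parity is a made-up example, not from any real codebase): the language offers an explicit escape hatch whose misuse is UB, next to explicit, well-defined ways to handle the cases you actually care about.

    use std::hint::unreachable_unchecked;

    fn parity(n: u8) -> &'static str {
        match n % 2 {
            0 => "even",
            1 => "odd",
            // n % 2 is only ever 0 or 1; asserting that to the compiler
            // is the moral equivalent of __builtin_unreachable.
            _ => unsafe { unreachable_unchecked() },
        }
    }

    fn main() {
        println!("{}", parity(7));
        // Overflow handled explicitly rather than as silent UB:
        match i32::MAX.checked_add(1) {
            Some(v) => println!("sum = {v}"),
            None => println!("overflow detected"),
        }
    }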

                                                                                1. 1

                                                                                  The UB is already there, both for correct and incorrect programs.

Unless one is purposefully using a specific compiler (or set thereof) that actually defines the behaviour the standard didn’t, the program is incorrect. That it just happens to generate correct object code with this particular version of that particular compiler on those particular platforms is just dumb luck.

Thus, I’d argue that tools like MSan, ASan, and UBSan don’t introduce any complexity at all. They just reveal the complexity of UB that was already there, and they do so reliably enough that they actually relieve me of some of the mental burden I previously had to shoulder.

                                                                              2. 5

                                                                                languages like Python allow for arbitrary code execution during module loading, including compile-time metaprogramming.

                                                                                Python doesn’t allow compile-time metaprogramming for any reasonable definition of the word. Everything happens and is introspectable at runtime, which allows you to do similar things, but it’s not compile-time metaprogramming.

                                                                                One way to see this is that sys.argv is always available when executing Python code. (Python “compiles” byte code, but that’s an implementation detail unrelated to the semantics of the language.)

                                                                                On the other hand, Zig and RPython are staged. There is one stage that does not have access to argv (compile time), and another one that does (runtime).
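One way to see the staged split in code (a hedged sketch in Rust rather than RPython or Zig; the names are illustrative): a const item is evaluated by the compiler, before any argv exists, while std::env::args only makes sense at runtime.

    fn main() {
        // Compile-time stage: evaluated by the compiler, long before
        // there is a process or an argv to inspect.
        const ANSWER: u32 = 6 * 7;

        // Runtime stage: argv exists only here.
        let argc = std::env::args().count();

        println!("argc = {argc}, answer = {ANSWER}");
    }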

                                                                                Related to the comment about RPython I linked here:

                                                                                http://www.oilshell.org/blog/2021/04/build-ci-comments.html

                                                                                https://old.reddit.com/r/ProgrammingLanguages/comments/mlflqb/is_this_already_a_thing_interpreter_and_compiler/gtmbno8/

                                                                                1. 4

                                                                                  Yours is a rather unconventional definition of complexity.

                                                                                  1. 5

                                                                                    I am following the classic paper, “Out of the Tar Pit”, which in turn follows Brooks. In “Abstractive Power”, Shutt distinguishes complexity from expressiveness and abstractedness while relating all three.

                                                                                    We could always simply go back to computational complexity, but that doesn’t capture the usage in this thread. Edit for clarity: Computational complexity is a property of problems and algorithms, not a property of languages nor programming systems.

                                                                                    1. 3

                                                                                      Good faith question: I just skimmed the first ~10 pages of “Out of the Tar Pit” again, but was unable to find the definition that you allude to, which would exclude things like the comptime keyword from the meaning of “complexity”. Can you point me to it or otherwise clarify?

                                                                                      1. 4

                                                                                        Sure. I’m being explicit for posterity, but I’m not trying to be rude in my reply. First, the relevant parts of the paper; then, the relevance to comptime.

                                                                                        On p1, complexity is defined as the tendency of “large systems [to be] hard to understand”. Unpacking their em-dash and subjecting “large” to the heap paradox, we might imagine that complexity is the amount of information (bits) required to describe a system in full detail, with larger systems requiring more information. (I don’t really know what “understanding” is, so I’m not quite happy with “hard to understand” as a concrete definition.) Maybe we should call this “Brooks complexity”.

                                                                                        On p6, state is a cause of complexity. But comptime does not introduce state compared to an equivalent non-staged approach. On p8, control-flow is a cause of complexity. But comptime does not introduce new control-flow constructs. One could argue that comptime requires extra attention to order of evaluation, but again, an equivalent program would have the same order of evaluation at runtime.

                                                                                        On p10, “sheer code volume” is a cause of complexity, and on this point, I fully admit that I was in error; comptime is a long symbol, adding size to source code. In this particular sense, comptime adds Brooks complexity.

                                                                                        Finally, on a tangent to the top-level article, p12 explains that “power corrupts”:

                                                                                        [I]n the absence of language-enforced guarantees (…) mistakes (and abuses) will happen. This is the reason that garbage collection is good — the power of manual memory management is removed. … The bottom line is that the more powerful a language (i.e. the more that is possible within the language), the harder it is to understand systems constructed in it.

                                                                                        comptime and similar metaprogramming tools don’t make anything newly possible. It’s an annotation to the compiler to emit specialized code for the same computational result. As such, they arguably don’t add Brooks complexity. I think that this argument also works for inline, but not for @compileError.

                                                                            2. 18

                                                                              My understanding is that the lack of generics and comptime is exactly the differentiating factor here – the project aims at simplicity, and generics/compile time evaluations are enormous cost centers in terms of complexity.

Yeah, I can see that. But under what conditions would I care how small, big, or icecream-covered the compiler is? Building/bootstrapping for a new platform is a one-time thing, but writing code in the language isn’t. I want the language to make it as easy as possible on me when I’m using it, and omitting features that have been around since the 1990s isn’t helping.

                                                                              1. 8

                                                                                Depends on your values! I personally see how, eg, generics entice users to write overly complicated code which I then have to deal with as a consumer of libraries. I am not sure that not having generics solves this problem, but I am fairly certain that the problem exists, and that some kind of solution would be helpful!

                                                                                1. 3

                                                                                  In some situations, emitted code size matters a lot (and with generics, that can quickly grow out of hand without you realizing it).

                                                                                  1. 13

                                                                                    In some situations

I see what you mean, but I think in those situations it’s not too hard to, you know, refrain from using generics. I see no reason to force all language users to not use that feature. Unless Hare is specifically aiming for that niche, which I don’t think it is.

                                                                                    1. 4

                                                                                      There are very few languages that let you switch between monomorphisation and dynamic dispatch as a compile-time flag, right? So if you have dependencies, you’ve already had the choice forced on you.
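For what it’s worth, Rust at least lets you pick per call site, though not via a global compile-time flag; a minimal sketch of the two strategies (illustrative names, not from any real library):

    use std::fmt::Display;

    // Monomorphised: the compiler emits one specialised copy per
    // concrete type, giving fast calls at the cost of binary size.
    fn print_generic<T: Display>(value: T) {
        println!("{value}");
    }

    // Dynamically dispatched: a single compiled copy, invoked through
    // a vtable, giving a smaller binary at the cost of an indirection.
    fn print_dyn(value: &dyn Display) {
        println!("{value}");
    }

    fn main() {
        print_generic(42);    // instantiates print_generic::<i32>
        print_generic("hi");  // instantiates print_generic::<&str>
        print_dyn(&42);       // both calls share one function body
        print_dyn(&"hi");
    }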

                                                                                      1. 6

                                                                                        If you don’t like how a library is implemented, then don’t use it.

                                                                                        1. 2

                                                                                          Ah, the illusion of choice.

                                                                                2. 10

                                                                                  Where is the dividing line? What makes functions “not complex” but generics, which are literally functions evaluated at compile time, “complex”?

                                                                                  1. 14

                                                                                    I don’t know where the line is, but I am pretty sure that this is past that :D

                                                                                    https://github.com/diesel-rs/diesel/blob/master/diesel_cli/src/infer_schema_internals/information_schema.rs#L146-L210

                                                                                    1. 17

                                                                                      Sure, that’s complicated. However:

1. that’s the inside of the inside of a library modeling a very complex domain. Complexity needs to live somewhere, and I am not convinced that complexity that is abstracted away and provides value is a bad thing, as much of the “let’s go back to simpler times” discourse seems to imply. I’d rather someone take the time to solve something once, than me having to solve it every time, even if with simpler code.

2. Is this just complex, or is it actually doing more than the equivalent in other languages? Rust allows for expressing constraints that are not easily (or at all) expressible in other languages, and static types allow for expressing more constraints than dynamic types in general.

                                                                                      In sum, I’d reject a pull request with this type of code in an application, but don’t mind it at all in a library.

                                                                                      1. 4

                                                                                        that’s the inside of the inside of a library modeling a very complex domain. Complexity needs to live somewhere,

                                                                                        I find that’s rarely the case. It’s often possible to tweak the approach to a problem a little bit, in a way that allows you to simply omit huge swaths of complexity.

                                                                                        1. 3

Possible, yes. Often? Not convinced. Practical? I am willing to bet some money that it isn’t.

                                                                                          1. 7

                                                                                            I’ve done it repeatedly, as well as seeing others do it. Occasionally, though admittedly rarely, reducing the size of the codebase by an order of magnitude while increasing the number of features.

                                                                                            There’s a huge amount of code in most systems that’s dedicated to solving optional problems. Usually the unnecessary problems are imposed at the system design level, and changing the way the parts interface internally allows simple reuse of appropriate large-scale building blocks and subsystems, reduces the number of building blocks needed, and drops entire sections of translation and negotiation glue between layers.

Complexity rarely needs to be somewhere – and where it does need to be, it’s often in the ad-hoc, problem-specific data structures that simplify the domain. A good data structure can act as a Laplace transform for the entire problem space of a program, even if it takes a few thousand lines to implement. It lets you take the problem, transform it to a space where the problem is easy to solve, and put it back directly.

                                                                                      2. 7

                                                                                        You can write complex code in any language, with any language feature. The fact that someone has written complex code in Rust with its macros has no bearing on the feature itself.

                                                                                        1. 2

                                                                                          It’s the Rust culture that encourages things like this, not the fact that Rust has parametric polymorphism.

                                                                                          1. 14

I am not entirely convinced – to me, it seems there’s a high correlation between languages with parametric polymorphism and languages with a culture of hard-to-understand abstractions (Rust, C++, Scala, Haskell). Even in Java, parts that touch generics tend to require some mind-bending (producer extends consumer super).

I am curious how Go’s generics will turn out in practice!

                                                                                            1. 8

                                                                                              Obligatory reference for this: F# Designer Don Syme on the downsides of type-level programming

                                                                                              I don’t want F# to be the kind of language where the most empowered person in the discord chat is the category theorist.

                                                                                              It’s a good example of the culture and the language design being related.

                                                                                              https://lobste.rs/s/pkmzlu/fsharp_designer_on_downsides_type_level

                                                                                              https://old.reddit.com/r/ProgrammingLanguages/comments/placo6/don_syme_explains_the_downsides_of_type_classes/

                                                                                              which I linked here: http://www.oilshell.org/blog/2022/03/backlog-arch.html

                                                                                    2. 3

                                                                                      In general, I feel like Hare just ends up being a Zig without comptime, or a Go without interfaces, generics, GC, or runtime. … I’d always use Zig or Rust instead of Hare or C.

                                                                                      What if you were on a platform unsupported by LLVM?

                                                                                      When I was trying out Plan 9, lack of LLVM support really hurt; a lot of good CLI tools these days are being written in Rust.

                                                                                      1. 15

                                                                                        Zig has rudimentary plan9 support, including a linker and native codegen (without LLVM). We’ll need more plan9 maintainers to step up if this is to become a robust target, but the groundwork has been laid.

                                                                                        Additionally, Zig has a C backend for those targets that only ship a proprietary C compiler fork and do not publish ISA details.

Finally, Zig has ambitions to become the project that is forked and used as the proprietary compiler for esoteric systems. Although of course we would prefer for businesses to make their ISAs open source and publicly documented instead. Nevertheless, Zig’s MIT license does allow this use case.

                                                                                        1. 2

                                                                                          I’ll be damned! That’s super impressive. I’ll look into Zig some more next time I’m on Plan 9.

                                                                                        2. 5

I think that implies that your platform is essentially dead (I would like to program my Amiga in Rust or Swift or Zig, too) or so off-mainstream (MVS comes to mind) that those tools wouldn’t serve any purpose anyway because they’re too alien.

                                                                                          1. 5

Amiga in Rust or Swift or Zig, too

Good news: LLVM does support 68k, in part thanks to communities like the Amiga community. LLVM doesn’t like to include stuff unless there’s a sufficient maintainer base, so…

                                                                                            MVS comes to mind

                                                                                            Bad news: LLVM does support S/390. No idea if it’s just Linux or includes MVS.

                                                                                            1. 1

Good news: LLVM does support 68k

Unfortunately, that doesn’t by itself mean that compilers (apart from clang) get ported, or that the platform gets added as part of a target triple. For instance, Plan 9 runs on platforms with LLVM support, yet isn’t supported by LLVM.

Bad news: LLVM does support S/390.

I should have written VMS instead.

                                                                                              1. 1
                                                                                            2. 2

                                                                                              I won’t disagree with describing Plan 9 as off-mainstream ;) But I’d still like a console-based Signal client for that OS, and the best (only?) one I’ve found is written in Rust.

                                                                                        1. 3

                                                                                          Huzzah, crimes!

                                                                                          Sometimes computations can be treated as values, but this is very rare. It’s even more rare to take a partially completed computation and use it as a value.

                                                                                          Pshaw. :-P

                                                                                          1. 2

                                                                                            Some years ago I worked on a fuel wetstock management system that had a lot of partially completed computations passed around.

                                                                                          1. 28

                                                                                            There’s a simple solution to this:

                                                                                            1. Create a Google account
                                                                                            2. Send a PR
                                                                                            3. Agree to the CLA
                                                                                            4. Have the PR merged
                                                                                            5. Send a GDPR notice to Google requiring that they delete all PII associated with the Google account and close it.

                                                                                            Repeat this process for every single patch that you submit. Eventually, Google’s compliance team will either bankrupt the company or come up with a better process.

                                                                                            There’s also a plan B solution that works well for me:

                                                                                            1. Don’t contribute to Google-run open source projects until they learn how to work with the community.
                                                                                            1. 8

                                                                                              You say “the community” as though there is just one, or that it is a well-defined term.

                                                                                              We have a large community of Go contributors from outside Google that we do work well with. It so happens that these people all have created Google accounts to log in to Google-run web sites - including our code review site go-review.googlesource.com - much the same way I have to create a GitHub account to post on Go’s issue tracker. We may be losing out on contributions from a few people, perhaps yourself included, who for one reason or another cannot create such an account. That’s unfortunate but hardly the common case.

                                                                                              1. 5

                                                                                                How can you measure the number of folks who would contribute if there wasn’t a silly requirement to make a Google account, versus the number of folks who did in order to contribute? Sounds an awful lot like survivorship bias.

                                                                                                1. 1

                                                                                                  Even Apple is able to interact with the open source community better than this

                                                                                                2. 6

                                                                                                  Becoming a contributor […] Step 0: Decide on a single Google Account you will be using to contribute to Go. Use that account for all the following steps and make sure that git is configured to create commits with that account’s e-mail address.

                                                                                                  https://go.dev/doc/contribute

                                                                                                  I guess you’re not supposed to create multiple accounts. But I do think your suggestion is clever.

                                                                                                  1. 3

                                                                                                    I guess you’re not supposed to create multiple accounts. But I do think your suggestion is clever.

                                                                                                    The solution does not require multiple accounts, assuming each is deleted after use.

                                                                                                  2. 6

                                                                                                    I don’t think this really works. It also requires you to have a phone number to create a Google account in the first place. So people without phone numbers are effectively banned from contributing.

                                                                                                    1. 7

                                                                                                      Indeed, new Google accounts requiring a phone number is the worst aspect. Virtual phone numbers may not work.

                                                                                                      1. 3

                                                                                                        Wait until they ask you for a scan of your ID when you send them a GDPR request.

                                                                                                        1. 1

                                                                                                          Which means contributing to Go (and presumably other google projects?) requires giving Google your phone number as well? In addition to the various “you give up your right to ever sue us if we break the law” contracts?

                                                                                                          1. 1

                                                                                                            If I remember correctly, it only needs the number to register, and you may use an anonymous burner SIM, if you can buy one in your country (more and more countries are banning this). It is a nuisance in any case.

                                                                                                            1. 1

                                                                                                              “It only needs a number to register”: I don’t think it’s reasonable that I should have to give Google - the advertising and surveillance company - my phone number when I already have an email address, a couple of alias addresses, and now things like iCloud that provide automatic alias addresses whenever you need them, etc.

                                                                                                              There is no justification for requiring your email be from a specific provider, unless you want to do more than simply email the account. I feel a little like a conspiracy nutter saying stuff like that, but we’re talking about a company that has been caught, and sued, and lost, for intentionally circumventing privacy measures. A company that has repeatedly attempted to tie a browser’s state to their platform’s identity mechanisms, and then automatically share that information with sites, periodically “forgetting” that a user had opted out and relinking the browser’s identity.

                                                                                                              I respect many of the engineers working at Google. But I do not, and will not ever trust them. They’ve demonstrated that they are not trustworthy on far too many occasions for it to not be a systemic problem.

                                                                                                      2. 3

                                                                                                        They’ve almost certainly streamlined the account deletion process to the point where the handful of developers doing this would add almost no appreciable burden to Google.

                                                                                                        1. 1

                                                                                                          Let’s automate this process~

                                                                                                        1. 3

                                                                                                          Tenacity is such an important trait to cultivate for successful engineering, and especially debugging. I find Rachel’s debugging stories deeply inspirational.

                                                                                                          1. 14

                                                                                                            I work for AWS, my views are my own and do not reflect my employer’s views.

                                                                                                            Thanks for posting your frustrations with using AWS Lambda, AWS API Gateway, and AWS EventBridge. I agree, using new technologies and handing more responsibility over to a managed service comes with the risk that your organization is unable to adopt and enforce best standards.

                                                                                                            I also agree that working in a cult-like atmosphere is deeply frustrating. This can happen in any organization, even AWS. I suggest focusing on solving problems and your business needs, not on technologies or frameworks. There are always multiple ways to solve problems. Enumerate at least three, put down pros and cons, then prototype on two that are non-trivially different. With this advice you will start breaking down your organization’s cult-like atmosphere.

                                                                                                            Specifically addressing a few points in the article:

                                                                                                            Since engineers typically don’t have a high confidence in their code locally they depend on testing their functions by deploying. This means possibly breaking their own code. As you can imagine, this breaks everyone else deploying and testing any code which relies on the now broken function. While there are a few solutions to this scenario, all are usually quite complex (i.e. using an AWS account per developer) and still cannot be tested locally with much confidence.

                                                                                                            This is a difficult problem. I have worked in organizations that have solved it using individual developer AWS accounts, each deploying a full working version of the “entire service” (e.g. the whole of AWS Lambda), with all its little microservices as e.g. different CloudFormation stacks that take ~hours to set up. It works. I have also worked in organizations that have not solved this problem, and resort to maintaining brittle shared test clusters that break once a week and need 1-2 days of a developer’s time to set up. Be the organization that invests in its developers’ productivity and can set up the “entire service” accurately and quickly in a distinct AWS account.
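
                                                                                                            A minimal sketch of that per-developer setup, assuming the service is defined as CloudFormation stacks; the stack and template names here are invented, but create_stack and the waiter are real boto3 APIs:

                                                                                                            ```python
                                                                                                            # Hypothetical sketch: deploy a per-developer copy of the service's
                                                                                                            # CloudFormation stacks into that developer's own AWS account.
                                                                                                            import boto3

                                                                                                            cfn = boto3.client("cloudformation")  # credentials for the dev's account

                                                                                                            for name in ["network", "datastore", "api"]:  # the service's stacks (invented)
                                                                                                                cfn.create_stack(
                                                                                                                    StackName=f"dev-alice-{name}",
                                                                                                                    TemplateURL=f"https://s3.amazonaws.com/example-templates/{name}.yaml",
                                                                                                                    Capabilities=["CAPABILITY_IAM"],
                                                                                                                    Parameters=[{"ParameterKey": "Stage", "ParameterValue": "dev-alice"}],
                                                                                                                )
                                                                                                                # Wait for each stack before starting the next, since later stacks
                                                                                                                # typically import outputs from earlier ones.
                                                                                                                cfn.get_waiter("stack_create_complete").wait(StackName=f"dev-alice-{name}")
                                                                                                            ```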

                                                                                                            Many engineers simply put a dynamodb:* for all resources in the account for a lambda function. (BTW this is not good). It becomes hard to manage all of these because developers can usually quite easily deploy and manage their own IAM roles and policies.

                                                                                                            If you trust and train your developers, use AWS Config [2] and your own custom-written scanners to automatically enforce best practices. If you do not trust and do not train your developers, do not give them authorization to create IAM roles and policies, and instead bottleneck this authorization to a dedicated security team.
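
                                                                                                            A minimal sketch of such a custom scanner, flagging customer-managed policies that allow dynamodb:* (or *) on all resources; the boto3 calls are real, everything else is an assumption about your policy layout:

                                                                                                            ```python
                                                                                                            import json
                                                                                                            import urllib.parse

                                                                                                            import boto3

                                                                                                            iam = boto3.client("iam")

                                                                                                            def overly_broad(statement):
                                                                                                                """True if a statement allows dynamodb:* (or *) on all resources."""
                                                                                                                if statement.get("Effect") != "Allow":
                                                                                                                    return False
                                                                                                                actions = statement.get("Action", [])
                                                                                                                actions = [actions] if isinstance(actions, str) else actions
                                                                                                                resources = statement.get("Resource", [])
                                                                                                                resources = [resources] if isinstance(resources, str) else resources
                                                                                                                return any(a in ("*", "dynamodb:*") for a in actions) and "*" in resources

                                                                                                            # Scope="Local" limits the scan to customer-managed policies.
                                                                                                            for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
                                                                                                                for policy in page["Policies"]:
                                                                                                                    doc = iam.get_policy_version(
                                                                                                                        PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
                                                                                                                    )["PolicyVersion"]["Document"]
                                                                                                                    if isinstance(doc, str):  # defensively handle URL-encoded JSON
                                                                                                                        doc = json.loads(urllib.parse.unquote(doc))
                                                                                                                    statements = doc.get("Statement", [])
                                                                                                                    statements = [statements] if isinstance(statements, dict) else statements
                                                                                                                    if any(overly_broad(s) for s in statements):
                                                                                                                        print(f"overly broad policy: {policy['PolicyName']}")
                                                                                                            ```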

                                                                                                            Without help from frameworks, DRY (Don’t Repeat Yourself), KISS (Keep It Simple Stupid) and other essential programming paradigms are simply ignored

                                                                                                            I don’t see how frameworks are connected with DRY and KISS. Inexperienced junior devs using e.g. Django or Ruby on Rails will still write bad, duplicated code. Experienced trained devs without a framework naturally gravitate towards helping their teams and other teams re-use libraries and create best practices. I think expecting frameworks to solve your problem is an equally cult-like thought pattern.

                                                                                                            Developers take the generic API Gateway generated DNS name (abcd1234.amazonaws.com) and litter their code with it.

                                                                                                            Don’t do this; attach a Route 53 domain name to API Gateway endpoints instead.
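
                                                                                                            A rough sketch of the wiring, assuming an existing ACM certificate, REST API, and hosted zone (all identifiers below are placeholders):

                                                                                                            ```python
                                                                                                            import boto3

                                                                                                            # All names here are hypothetical placeholders.
                                                                                                            DOMAIN = "api.example.com"
                                                                                                            CERT_ARN = "arn:aws:acm:us-east-1:123456789012:certificate/abc"  # existing ACM cert
                                                                                                            REST_API_ID = "abcd1234"                                         # existing API
                                                                                                            ZONE_ID = "Z0000000000000"                                       # existing hosted zone

                                                                                                            apigw = boto3.client("apigateway")

                                                                                                            # Create the custom domain and map it to a deployed stage, so code can
                                                                                                            # reference https://api.example.com rather than the generated hostname.
                                                                                                            domain = apigw.create_domain_name(
                                                                                                                domainName=DOMAIN,
                                                                                                                regionalCertificateArn=CERT_ARN,
                                                                                                                endpointConfiguration={"types": ["REGIONAL"]},
                                                                                                            )
                                                                                                            apigw.create_base_path_mapping(domainName=DOMAIN, restApiId=REST_API_ID, stage="prod")

                                                                                                            # Point an alias A record at the regional endpoint API Gateway just created.
                                                                                                            boto3.client("route53").change_resource_record_sets(
                                                                                                                HostedZoneId=ZONE_ID,
                                                                                                                ChangeBatch={"Changes": [{
                                                                                                                    "Action": "UPSERT",
                                                                                                                    "ResourceRecordSet": {
                                                                                                                        "Name": DOMAIN,
                                                                                                                        "Type": "A",
                                                                                                                        "AliasTarget": {
                                                                                                                            "HostedZoneId": domain["regionalHostedZoneId"],
                                                                                                                            "DNSName": domain["regionalDomainName"],
                                                                                                                            "EvaluateTargetHealth": False,
                                                                                                                        },
                                                                                                                    },
                                                                                                                }]},
                                                                                                            )
                                                                                                            ```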

                                                                                                            The serverless cult has been active long enough now that many newer engineers entering the field don’t seem to even know about the basics of HTTP responses.

                                                                                                            Teach them.

                                                                                                            Cold starts - many engineers don’t care too much about this.

                                                                                                            I care about this deeply. Use Go or Rust first and see how much of a problem cold starts still are; in my experience, p99.99 latency is < 20 ms for trivial (empty) functions (still an outrageously high number for some applications). If cold starts with Go or Rust are still a problem, then yes, you need to investigate provisioned concurrency. But this is a known limitation of AWS Lambda.
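
                                                                                                            Turning provisioned concurrency on is a single call; a sketch with a hypothetical function name and alias:

                                                                                                            ```python
                                                                                                            import boto3

                                                                                                            # put_provisioned_concurrency_config is the real Lambda API; the function
                                                                                                            # name and alias below are hypothetical.
                                                                                                            boto3.client("lambda").put_provisioned_concurrency_config(
                                                                                                                FunctionName="checkout-handler",
                                                                                                                Qualifier="live",  # must be a published version or alias, not $LATEST
                                                                                                                ProvisionedConcurrentExecutions=5,
                                                                                                            )
                                                                                                            ```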

                                                                                                            As teams chase the latest features released by AWS (or your cloud provider of choice)

                                                                                                            Don’t do this; give new features / libraries a hype-cool-down period that is calibrated to your risk profile. My risk profile is ~6 months, and I avoid all libraries that tell me they are not production ready.

                                                                                                            When it’s not okay to talk about the advantages and disadvantages of serverless with other engineers without fear of reprisal, it might be a cult. Many of these engineers say Lambda is the only way to deploy anymore.

                                                                                                            These engineers have stopped solving problems, they are now just lego constructors (I have nothing against lego). Find people who want to solve problems. Train existing people to want to solve problems.

                                                                                                            I am keeping track of people’s AWS frustrations, e.g. [1]. I am working on the outline of a book I’d like to write on designing, deploying, and operating cloud-based services focused on AWS. Please send me your stories. I want to share and teach ideas for solving problems.

                                                                                                            [1] https://blog.verygoodsoftwarenotvirus.ru/posts/babys-first-aws/

                                                                                                            [2] https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html

                                                                                                            1. 4

                                                                                                              The serverless cult has been active long enough now that many newer engineers entering the field don’t seem to even know about the basics of HTTP responses.

                                                                                                              Teach them.

                                                                                                              I’m happy to teach anyone who wants to learn. Unfortunately this usually comes up in the form of their manager arguing that it’s too much overhead to spend time getting their employee(s) up to speed on web tech, and insisting on using serverless as a way to paper over what is happening throughout the stack. This goes to the heart of why people characterize it as a cult. The issues it brings into orgs aren’t about the tech as much as they are about the sales pitches coming from serverless vendors.

                                                                                                              1. 9

                                                                                                                Interesting. At $WORK, we’re required to create documents containing alternatives that were considered and rejected, often in the form of a matrix with multiple dimensions like cost, time to learn, time to implement, etc. Of course there’s a bit of a push-pull going on with the managers, but we usually timebox it (1 person 1 week if it’s a smaller decision, longer if it’s a bigger one.) Sometimes when launching a new service we’ll get feedback from other senior engineers asking why we rejected an alternative, maybe even urging us to reconsider it.

                                                                                                                Emotional aspects of the cult aside (which sucks, not saying it doesn’t just bringing up a different point), I don’t think I’d ever let a new system be made at work if at least a token attempt weren’t made at evaluating different technologies. I fundamentally think comparing alternatives makes for better implementations, especially when you have different engineers with different amounts of experience with different technologies.

                                                                                                                1. 1

                                                                                                                  So you write an RFP with metrics/criteria chosen to perfectly meet the solution already settled on?

                                                                                                                  1. 2

                                                                                                                    I mean if that’s what you want to do, sure. Humans will be human after all. But having this kind of a process offers an escape hatch from dogma around a single idea. Our managers also try to apply pressure to just get started and ignore comparative analyses, but with a dictum from the top, you can always push back, citing the need for a true comparative analysis. When big outages happen, questions are asked in the postmortem whether an alternate architecture would have prevented any issues. In practice we often get vocal, occasionally bikeshed-level comments on different approaches.

                                                                                                                    I’m thankful for our approach. Reading about other company cultures reminds me of why I stay at $WORK.

                                                                                                                2. 2

                                                                                                                  Try giving them alternatives. “Want to train your developers, or sign off on technical debt and your responsibility to fix it?”, when presented well, can point out the issue. This happens with all tech vendors, and all managers can suck at this. But that’s not the fault of serverless.

                                                                                                                  Note that I’m not arguing that serverless is actually good. As with any tech, the answer is usually “it depends”. But just like serverless, you need experience with other things as well to be able to see this pattern.

                                                                                                                  In fact, I agree with several commenters saying that the majority of issues in the article can be applied to any tech. The only real insurmountable technical issue is testing / the local stack. The rest is mostly about the processes of the company, or maybe of a team in the company.

                                                                                                                3. 4

                                                                                                                  Specifically addressing a few points in the article

                                                                                                                  … while carefully avoiding the biggest one:

                                                                                                                  “All these solutions are proprietary to AWS”

                                                                                                                  That right there is the real problem. An entirely new generation of devs is learning, the hard way, why it sucks to build on proprietary systems.

                                                                                                                  Or to put it in economic terms, ensure that your infrastructure is a commodity. As we learned in the 90s, the winning strategy is x86 boxen running Linux, not Sun boxen running Solaris ;) And you build for the Internet, not AOL …

                                                                                                                  1. 2

                                                                                                                    I think there are three problems with a lot of the serverless systems, which are closely related:

                                                                                                                    • They are proprietary, single-vendor solutions. If you use an abstraction layer over the top then you lose performance and you will still end up optimising to do things that are cheap with one vendor but expensive for others.
                                                                                                                    • They are very immature. We’ve been building minicomputer operating systems (and running them on microcomputers) for 40+ years and know what abstractions make sense. We don’t really know what abstractions make sense for a cloud datacenter (which looks a bit like a mainframe, a bit like a supercomputer, and a bit like a pile of servers).
                                                                                                                    • They have a lot of vertical integration and close dependencies between them, so it’s hard to use some bits without fully buying into the entire stack.

                                                                                                                    If you think back to the late ’70s / early ’80s, a lot of things that we take for granted now were still very much in flux. For example, we now have a shared understanding that a file is a variable-sized contiguous blob of bytes. A load of operating systems provided record-oriented filesystems, where each file was an array of strongly typed records. If you do networking, then you now use the Berkeley Sockets API (or a minor tweak like WinSock), but that wasn’t really standardised until 1989.

                                                                                                                    Existing FaaS offerings are quite thin shims over these abstractions. They’re basically ‘upload a Linux program and we’ll run it with access to some cloud things that look a bit like the abstractions you’re used to, if you use a managed language then we’ll give you some extra frameworks that build some domain-specific abstractions over the top’. The domain-specific abstractions are often overly specialised and so evolve quite quickly. The minicomputer abstractions are not helpful (for example, every Azure Function must be associated with an Azure Files Store to provide a filesystem, but you really don’t want to use that filesystem for communication).

                                                                                                                    Figuring out what the right abstractions are for things like persistent storage, communication, fault tolerance, and so on is a very active research area. This means that each cloud vendor gains a competitive advantage by deploying the latest research, which in turn means that proprietary systems remain the norm and that the offerings remain immature. I expect that it will settle down over the next decade, but there are so many changes coming on the hardware roadmap (think about the things that CXL enables, for one) that anything built today will look horribly dated in a few years.

                                                                                                                    1. 1

                                                                                                                      Many serverless frameworks are built upon Kubernetes, which is explicitly vendor-neutral. However, this does make your third point stronger: full buy-in to Kubernetes is required.

                                                                                                                      1. 2

                                                                                                                        Anything building on Kubernetes is also implicitly buying into the idea that the thing that you’ll be running is a Linux binary (well, or Windows, but that’s far less common) with all of the minicomputer abstractions that this entails. I understand why this is being done (expediency) but it’s also almost certainly not what serverless computing will end up looking like. In Azure, the paid FaaS things use separate VMs for each customer (not sure about the free ones), so using something like Kubernetes (it’s actually ACS for Azure Functions, but the core ideas are similar) means a full Linux VM per function instance. That’s an insane amount of overhead for running a few thousand lines of code.

                                                                                                                        A lot of the focus at the moment is on how these things scale up (you can write your function and deploy a million instances of it in parallel!) but I think the critical thing for the vast majority of users is how well they scale down. If you’re deploying a service that gets an average of 100 requests per day, how cheap can it be? Typically, FaaS things spin up a VM, run the function, then leave the VM running for a while and then shut it down if it’s not in use. If your function is triggered, on average, at an interval slightly longer than the interval that the provider shuts down the VM then the amount that you’re paying (FaaS typically charges only for CPU / memory while the function is running) is far less than the cost of the infrastructure that’s running it.
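
                                                                                                                        A toy model of that worst case, with invented numbers, just to show the asymmetry between what you’re billed and what the provider runs:

                                                                                                                        ```python
                                                                                                                        # Toy model (all numbers invented): each request arrives just after the
                                                                                                                        # warm VM has been reclaimed, so every invocation is a cold start that
                                                                                                                        # then keeps a VM alive for the full idle window.
                                                                                                                        invocations_per_day = 100
                                                                                                                        billed_s = invocations_per_day * 0.1     # you pay for ~100 ms per invocation
                                                                                                                        vm_s = invocations_per_day * 10 * 60     # provider runs a VM for a
                                                                                                                                                                 # 10-minute idle window each time
                                                                                                                        print(f"billed: {billed_s:.0f} s; provider VM time: {vm_s / 3600:.1f} h")
                                                                                                                        # -> billed: 10 s; provider VM time: 16.7 h
                                                                                                                        ```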

                                                                                                                    2. 2

                                                                                                                      S3 was a proprietary protocol that has become a de facto industry standard. I don’t see why the same couldn’t happen for Lambda.

                                                                                                                  1. 6

                                                                                                                    Feh! Where’s the tape punch? This is just for mindless content consumption. People will be pulling tapes of MP3s and videos, instead of punching their own tapes of games they wrote in BASIC the way we used to. I predict total civilization collapse in 3, 2, 1…

                                                                                                                    1. 2

                                                                                                                      That got me to wondering …

                                                                                                                      According to Wikipedia [1], paper tape has a bit density of ~ 50 bits / inch. Leaving aside error correction / whatevs, and assuming 5MiB for an MP3, you’d be looking at … 21.307km o_O
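
                                                                                                                      A quick sanity check of that figure:

                                                                                                                      ```python
                                                                                                                      # Back-of-envelope check, using the ~50 bits/inch density from the
                                                                                                                      # Wikipedia article below.
                                                                                                                      bits = 5 * 1024 * 1024 * 8                 # 5 MiB MP3, in bits
                                                                                                                      inches = bits / 50                         # tape length in inches
                                                                                                                      print(f"{inches * 0.0254 / 1000:.3f} km")  # -> 21.307 km
                                                                                                                      ```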

                                                                                                                      For all I wax lyrical about my first hard drive (a second hand 20MiB MFM HDD I fitted to my IBM 5150 with 3.5” -> 5.25” rails I made in my high school workshop), that’s next-level.

                                                                                                                      [1] https://en.wikipedia.org/wiki/Paper_data_storage

                                                                                                                      1. 1

                                                                                                                        I could see someone making an enormously long paper-tape of a prosaic file, as an art project. 21km is probably impractical, but take something smaller like a JPEG of some stale meme, then string the tape around and around a gallery…

                                                                                                                    1. 16

                                                                                                                      MacBook Pro M1 Max for work, regular M1 for personal. I kid you not: I can go for a weekend with these laptops, use them extensively, and still have battery left come Monday. I have been missing my OpenBSD machine, but alas. The hardware is super solid, and the battery life and performance have been amazing.

                                                                                                                      1. 4

                                                                                                                        Mostly the same here, 16” with M1 Pro for work and 14” with M1 Pro for personal use. The hardware is incredible honestly; they don’t get hot, they don’t make noise. Just this Monday I noticed a faint sound of wind coming from my laptop, then remembered I’d had a Chromium compile using 100% of all cores for the past couple of hours. Any previous laptop I’ve owned (mostly Dells), or any of the Intel Macs in the office, would’ve been painfully hot and loud as a jet for the whole duration – while compiling Chromium less than half as quickly.

                                                                                                                        The Intel monopoly in the laptop space desperately needs to come to an end, and PC manufacturers need to drastically step up their game. I’m not a huge fan of macOS, but I can’t defend getting vastly inferior PC hardware (in terms of performance, battery life, build quality and the screen/speakers/webcam/etc) at the same price just to be able to run an OS I prefer.

                                                                                                                        I hope Asahi gets really good, because dual booting macOS and Linux on these things would be amazing.

                                                                                                                        1. 3

                                                                                                                          vastly inferior PC hardware (in terms of performance, battery life, build quality and the screen/speakers/webcam/etc) at the same price

                                                                                                                          Are you buying new hardware, or used?

                                                                                                                          I paid AUD325 for my refurbished ThinkPad W520 w/ 16GiB RAM back in 2018, and transplanted the (fast) SSD from my old X220 into it.

                                                                                                                          Yeah, the M1 that my employer issued me is superior in most respects to the W520. But it’s literally an order of magnitude more expensive new.

                                                                                                                          1. 3

                                                                                                                            I’m buying hardware new for the most part.

                                                                                                                            I don’t tend to go for cheap hardware; I know PC manufacturers are way more competitive in the lower market segments than where the MacBook Pro operates. Apple also has insane pricing on things like storage space, so if you need a couple terabytes it very quickly becomes a much worse value proposition, and the lack of upgradability absolutely sucks. But if you’re looking for something around the MacBook Pro price range, and don’t need more than around 1TB of storage and 16GB of RAM, it’s really, really hard to find a better laptop than the MacBook Pro in my experience; at least when factoring in qualities like the screen and speakers and trackpad.

                                                                                                                            1. 3

                                                                                                                              Agreed. That M1 that I’ve started using is indisputably the next generation of laptop; nothing I’ve used that is Intel based comes close. It also has better sound than the Bluetooth speaker currently adorning my desk.

                                                                                                                              But a while ago I switched to refurb and I haven’t looked back.

                                                                                                                              Leaving aside my wife who runs new XPSs (also on Ubuntu) I bought the rest of our family fleet - 1 x W520 for me and 3 x X250s for the kids (so we can share docking stations, etc.) - for around AUD1,200 in total.

                                                                                                                              1. 1

                                                                                                                                Indisputably? According to who?

                                                                                                                                1. 2

                                                                                                                                  Have you read the articles around the M1 vs Intel CPUs in laptops (here’s just one)? These M1 CPUs are incredibly powerful, about as powerful as the Intel CPUs if not more so. But they use half the energy in all comparisons. On top of Apple moving over to ARM, there’s also Windows 11 built for ARM, and Chromebooks too. I’m happy that ARM is rising in popularity, so maybe we won’t just have to go with only Intel or AMD in the future.

                                                                                                                                  1. 2

                                                                                                                                    That’s from a year ago, and a disingenuous time frame as well, being pre-Tiger Lake. Here’s one from last month, from PCMag, that shows Apple winning at efficiency but losing out to AMD and Intel in several benchmarks. The differences aren’t great, but “indisputable” is the wrong word, as here it is, being disputed.

                                                                                                                                    1. 4

                                                                                                                                      I think my use of that word literally made your head explode.

                                                                                                                                      ;)

                                                                                                                                      1. 2

                                                                                                                                        I only see power being disputed there, and not by a huge margin either. Efficiency is still going to be a huge part of “the next generation of laptops.” That’s why you don’t have phones running x86 processors. Also with microsoft developing windows for arm, Apple with its m1, and google’s chromebooks, I still firmly believe that arm is indisputably the future for laptops, and hopefully desktops while we’re at it.

                                                                                                                                        1. 1

                                                                                                                                          If not RISC-V coming around the corner before then. It will be a while before the industry adjusts to a non-x86 architecture. Microsoft has had ARM support for ages and it was garbage. I used Ubuntu on an ARM Chromebook in 2012, and still to this day a lot of binaries aren’t available. The Steam Deck had to be x86 or there’d be no games.

                                                                                                                                    2. 1

                                                                                                                                      Okay, fair cop - that was rhetorical flourish, but only slightly, and I didn’t mean to preclude the idea that other laptops are similarly good.

                                                                                                                                      To explain a bit further: the M1 represents a step change from previous laptops I’ve used in terms of battery life, convenience, and general usability in some ways. For example: no fan, no touchbar, usable keyboard, magsafe power adaptor, actual ports in addition to USB-C, etc.

                                                                                                                                      There may certainly be Intel laptops at a similar level, but in which case, they also would be the “next generation” of laptops compared with the XPSs and X and W series I’m used to running.

                                                                                                                                      Also, I’m still going to DIY my next laptop, because I’m quite sick of almost everything about “consumer” (how I hate that term) laptops and operating systems. But the M1 will make that a harder trade-off.

                                                                                                                                    3. 1

                                                                                                                                      520

                                                                                                                                      W540 :/

                                                                                                                                  2. 2

                                                                                                                                    I mean, I looked at the prices of new X-series ThinkPads and all except IIRC the X13 were more expensive than a base MacBook Air, and usually worse specced, not to mention things hard to put on a spec sheet like mouthfeel/build quality.

                                                                                                                                    You might not be buying used, but someone else has to buy new in the first place for used to actually happen.

                                                                                                                                    1. 2

                                                                                                                                      Yup, and just as with cars, I will continue to benefit from the second hand market while not really understanding why people buy new in the first place.

                                                                                                                                    2. 1

                                                                                                                                      I had strongly considered getting a used W520 years back. The ability to just buy replacement and extended capacity batteries, the DVD drive bay which can be repurposed, no numeric keypad, sufficient RAM… that was all great. I think the prices back in 2015 / 2016 were still a bit high for a used system, so I didn’t get one back then. The other thing giving me pause would have been the weight.

                                                                                                                                      1. 1

                                                                                                                                        Actually I’d recommend the 521, as the 520 has an awful trackpad. I still have a 521 trackpad in my workshop to fit to the 520 at some point.

                                                                                                                                        1. 1

                                                                                                                                          Argh, I meant 540/541 here.

                                                                                                                                      2. 1

                                                                                                                                        520

                                                                                                                                        Argh! Following this up like some of my other posts … I meant a W540, not a W520.

                                                                                                                                    3. 4

                                                                                                                                      I kid you not I can go for a weekend with these laptops and use them extensively and I will have battery left come Monday.

                                                                                                                                      Yeah, this part is impressive. With the Air I can get 8 hours of continuous use at max brightness, and with light usage at around half brightness it lasts a week, maybe two. It’s a fully fledged laptop you can treat like an iPad, battery-life-wise.

                                                                                                                                      1. 4

                                                                                                                                        My boss, a long-time Apple hater, ended up getting an M1 MBP which he took on a two-week vacation. He told me that he realized a week in that he forgot to pack its charger, then realized “woah, it’s been a week… and I only now thought about charging it?!” Turns out it had just over 50% charge. He turned the screen brightness down and brought it home with over 10% to go.

                                                                                                                                        He now begrudgingly respects Apple.

                                                                                                                                      2. 2

                                                                                                                                        I was really wanting an arm laptop for the battery life after hearing about how well the M1 cpus performed. But I couldn’t justify spending so much money on an Apple product, I just don’t like them that much personally. I bought a Galaxy Book Go instead for under $300. Pretty dang cheap for a laptop, and obviously the specs show for it. But I like it so far. I’m gonna wait until Ubuntu 22.04 is officially released and install it probably.

                                                                                                                                        For work I use a regular Galaxy Book with Ubuntu on it. It works very well for what I do, no problem running my dev environment.

                                                                                                                                        1. 1

                                                                                                                                          I can compile things on it without fans making noise, at a speed competitive with a big Ryzen box. M1 is so good that I can forgive Apple the dark days of TouchBar and janky keyboards.

                                                                                                                                          1. 1

                                                                                                                                            I had just built a Ryzen 3600 hackintosh when the M1 Air came out. I had spent a lot on cooling to try and get it to run silently. It was still annoyingly audible.

                                                                                                                                            I bought the M1 Air and it was the same speed for everything I tried it on - and was a laptop. With no fan.

                                                                                                                                            I sold the Ryzen tower straight away.

                                                                                                                                        1. 1

                                                                                                                                          14” MacBook Pro M1 for work (issued by my employer); ThinkPad W520 for everything else.

                                                                                                                                          I’d like to run Asahi on the M1 but that will be a matter for discussion with the IT team there once Asahi is ready as a daily driver :) I’m running Ubuntu on the W520 now, but that’s a stopgap.

                                                                                                                                          I still harbour plans for a DIY laptop based on a Udoo Bolt V8, hoping to bring those to fruition this year.

                                                                                                                                          1. 1

                                                                                                                                            520

                                                                                                                                            I meant W540 here.

                                                                                                                                          1. 7

                                                                                                                                            How will you ensure that you can still build zig from sources in the future?

                                                                                                                                            1. 27

                                                                                                                                              By forever maintaining two implementations of the compiler - one in C, one in Zig. This way you will always be able to bootstrap from source in three steps:

                                                                                                                                              1. Use system C compiler to build C implementation from source. We call this stage1. stage1 is only capable of outputting C code.
                                                                                                                                              2. Use stage1 to build the Zig implementation to .c code. Use system C compiler to build from this .c code. We call this stage2.
                                                                                                                                              3. Use stage2 to build the Zig implementation again. The output is our final zig binary to ship to the user. At this point, if you build the Zig implementation again, you get back the same binary.

                                                                                                                                              https://github.com/ziglang/zig-bootstrap
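
                                                                                                                                              A rough sketch of those three steps (file names and flags here are approximations of mine, not the project’s; the real, maintained scripts live in the repo above):

                                                                                                                                              ```python
                                                                                                                                              # Illustrative only: invented file names, approximate flags.
                                                                                                                                              import subprocess

                                                                                                                                              def run(*cmd):
                                                                                                                                                  subprocess.run(cmd, check=True)

                                                                                                                                              # 1. System C compiler builds the C implementation (stage1).
                                                                                                                                              run("cc", "-o", "stage1", "stage1.c")

                                                                                                                                              # 2. stage1 can only emit C: translate the Zig implementation to C,
                                                                                                                                              #    then build that C with the system compiler (stage2).
                                                                                                                                              run("./stage1", "build-exe", "compiler.zig", "-ofmt=c")  # -> compiler.c
                                                                                                                                              run("cc", "-o", "stage2", "compiler.c")

                                                                                                                                              # 3. stage2 builds the Zig implementation natively; the result is the
                                                                                                                                              #    shipped zig binary, and rebuilding with it reproduces the same
                                                                                                                                              #    binary (a fixed point).
                                                                                                                                              run("./stage2", "build-exe", "compiler.zig")
                                                                                                                                              ```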

                                                                                                                                              1. 7

                                                                                                                                                I’m curious, is there some reason you don’t instead write a backend for the Zig implementation of the compiler to output C code? That seems like it would be easier than maintaining an entirely separate compiler. What am I missing?

                                                                                                                                                1. 2

                                                                                                                                                  That is the current plan as far as I’m aware

                                                                                                                                                  1. 1

                                                                                                                                                    The above post says they wanted two separate compilers, one written in C and one in Zig. I’m wondering why they don’t just have one compiler, written in Zig, that can also output C code as a target. Have it compile itself to C, zip up the C code, and now you have a bootstrap compiler that can build on any system with a C compiler.

                                                                                                                                                    1. 2

                                                                                                                                                      In the above linked Zig Roadmap video, Andrew explains that their current plan is halfway between what you are saying and what was said above. They plan to have the Zig compiler output ‘ugly’ C, then they will manually clean up those C files and version control them, and as they add new features to the Zig source, they will port those features to the C codebase.

                                                                                                                                                      1. 2

                                                                                                                                                        I just watched this talk and learned a bit more. It does seem like the plan is to use the C backend to compile the Zig compiler to C. What interests me though is there will be a manual cleanup process and then two separate codebases will be maintained. I’m curious why an auto-generated C compiler wouldn’t be good enough for bootstrapping without manual cleanup.

                                                                                                                                                        1. 7

                                                                                                                                                          Generated source code usually isn’t considered to be acceptable from an auditing/chain of trust point of view. Don’t expect the C code generated by the Zig compiler’s C backend to be normal readable C, expect something closer to minified js in style but without the minification aspect. Downloading a tarball of such generated C source should be considered equivalent to downloading an opaque binary to start the bootstrapping process.

                                                                                                                                                          Being able to trust a compiler toolchain is extremely important from a security perspective, and the Zig project believes that this extra work is worth it.

                                                                                                                                                          1. 2

                                                                                                                                                            That makes a lot of sense! Thank you for the clear and detailed response :)

                                                                                                                                                          2. 2

                                                                                                                                                            It would work fine, but it wouldn’t be legitimate as a bootstrappable build because the build would rely on a big auto-generated artifact. An auto-generated artifact isn’t source code. The question is: what do you need to build Zig, other than source code?

                                                                                                                                                            It could be reasonable to write and maintain a relatively simple Zig interpreter that’s just good enough to run the Zig compiler, if the interpreter is written in a language that builds cleanly from C… like Lua, or JavaScript using Fabrice Bellard’s QuickJS.

                                                                                                                                                            1. 1

                                                                                                                                                              Except that you can’t bootstrap C, so you’re back where you started?

                                                                                                                                                              1. 2

                                                                                                                                                                The issue is not to be completely free of all bootstrap seeds. The issue is to avoid making new ones. C is the most widely accepted and practical bootstrap target. What do you think is a better alternative?

                                                                                                                                                                1. 1

                                                                                                                                                                  C isn’t necessarily a bad choice today, but I think it needs to be explicitly acknowledged in this kind of discussion. C isn’t better at being bootstrapped than Zig; many projects just happen to have chosen it as their seed.

                                                                                                                                                                  A C compiler written in Zig or Rust to allow bootstrapping old code without encouraging new C code to be written could be a great project, for example.

                                                                                                                                                                  1. 5

                                                                                                                                                                    This is in fact being worked on: https://github.com/Vexu/arocc

                                                                                                                                                      2. 1

                                                                                                                                                        Or do what Go does. To bootstrap you need to:

                                                                                                                                                        1. Build Go 1.4 (the last version implemented in C)
                                                                                                                                                        2. Build the latest Go using the compiler from step 1
                                                                                                                                                        3. Build the latest Go using the compiler from step 2
                                                                                                                                                      3. 3

                                                                                                                                                        Compile the Zig compiler to Wasm, then run it to cross-compile the new compiler. Wasm is forever.

                                                                                                                                                        1. 11

                                                                                                                                                          I certainly hope that’s true, but in reality Wasm has existed for 5 years and C has existed for 50.

                                                                                                                                                          1. 2

                                                                                                                                                            The issue is building from maintained source code with a widely accepted bootstrapping base, like a C compiler.

                                                                                                                                                            The Zig plan is to compile the compiler to C using its own C backend, once, and then refactor that output into something that can be maintained as source code. This bootstrap compiler would only need the C backend.

                                                                                                                                                            1. 1

                                                                                                                                                              I mean, if it is, then it should have time to grow some much-needed features.

                                                                                                                                                              https://dl.acm.org/doi/10.1145/3426422.3426978

                                                                                                                                                            2. 1

                                                                                                                                                              It’s okay if you don’t know because it’s not your language, but is this how Go works? I know there’s some kind of C bootstrap involved.

                                                                                                                                                              1. 4

                                                                                                                                                                The Go compiler used to be written in C. With Go 1.5 they switched to a Go compiler written in Go, which makes Go 1.4 the last C-based release. If you were setting up an entirely new platform (and not using cross-compilation), I believe the recommended steps are still: get a C compiler working, build Go 1.4, then update from 1.4 to the latest release.

                                                                                                                                                            3. 2

                                                                                                                                                              How do we build C compilers from source?

                                                                                                                                                              1. 3

                                                                                                                                                                Bootstrapping a C compiler is usually much easier than bootstrapping a chain of some-other-language compilers.

                                                                                                                                                                1. 4

                                                                                                                                                                  Only if you accept a C compiler in your bootstrap seed and don’t accept a some-other-language compiler in your seed.

                                                                                                                                                                  1. 3

                                                                                                                                                                    Theoretically. But from a practical point of view? Yes, there are systems like Redox (Rust), but in most cases a C compiler is an inevitable piece of the puzzle (the bootstrapping chain) when building an operating system. In such cases, when focused on simplicity, I would rather have a language that depends just on C (which I already have) than one that depends on a sequence of previous versions of its own compiler. (And I say that as someone who does most of his work in Java, which is terrible from a bootstrapping point of view.)

                                                                                                                                                                    However, I don’t object much to depending on previous versions of your own compiler. It is often the way to go: you want to write your compiler in a higher-level language instead of old-school C, and since you created the language and believe in its qualities, you use it for the compiler as well. What I don’t understand is why someone presents “self-hosted” as an advantage (not in this particular case, but I have seen this pattern many times before)…

                                                                                                                                                                    1. 2

                                                                                                                                                                      The self-hosted Zig compiler provides much faster compile times and is easier to hack on, allowing language development to move forward. In theory the same gains could be achieved in a different language, but some of the optimizations used are exactly the kind of thing Zig is good at. See this talk for some examples: https://media.handmade-seattle.com/practical-data-oriented-design/.
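
                                                                                                                                                                      To give a flavour of the technique the talk describes, here is a minimal struct-of-arrays sketch in C++; the node kinds and field names are illustrative, not the Zig compiler’s actual data structures:

                                                                                                                                                                          #include <cstdint>
                                                                                                                                                                          #include <vector>

                                                                                                                                                                          enum class NodeTag : uint8_t { Add, Mul, IntLiteral };

                                                                                                                                                                          // Instead of an array of node structs full of pointers and padding,
                                                                                                                                                                          // keep dense parallel arrays and refer to nodes by 32-bit index.
                                                                                                                                                                          // Traversals then stream through contiguous memory, which is the
                                                                                                                                                                          // cache-friendly layout the talk advocates.
                                                                                                                                                                          struct Ast {
                                                                                                                                                                              std::vector<NodeTag> tags;       // one byte per node
                                                                                                                                                                              std::vector<uint32_t> lhs, rhs;  // operand indices or literal payloads

                                                                                                                                                                              uint32_t add(NodeTag tag, uint32_t l, uint32_t r) {
                                                                                                                                                                                  tags.push_back(tag);
                                                                                                                                                                                  lhs.push_back(l);
                                                                                                                                                                                  rhs.push_back(r);
                                                                                                                                                                                  return static_cast<uint32_t>(tags.size() - 1);
                                                                                                                                                                              }
                                                                                                                                                                          };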

                                                                                                                                                                    2. 1

                                                                                                                                                                      But you could make a C compiler (or a C interpreter) from scratch relatively easily.
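
                                                                                                                                                                      As a taste of why: here’s a toy recursive-descent evaluator for integer expressions in C++ (hypothetical, no error handling, just to show the shape). A bootstrap-grade C interpreter is a much bigger job, but it has the same basic structure — a lexer, a recursive parser, and an evaluator — with more of everything:

                                                                                                                                                                          #include <cctype>
                                                                                                                                                                          #include <cstdio>

                                                                                                                                                                          static const char *p; // cursor into the input

                                                                                                                                                                          static long expr(); // forward declaration for parenthesised sub-expressions

                                                                                                                                                                          static long factor() {
                                                                                                                                                                              if (*p == '(') { ++p; long v = expr(); ++p; return v; } // skip '(' and ')'
                                                                                                                                                                              long v = 0;
                                                                                                                                                                              while (isdigit((unsigned char)*p)) v = v * 10 + (*p++ - '0');
                                                                                                                                                                              return v;
                                                                                                                                                                          }

                                                                                                                                                                          static long term() {
                                                                                                                                                                              long v = factor();
                                                                                                                                                                              while (*p == '*' || *p == '/') {
                                                                                                                                                                                  char op = *p++;
                                                                                                                                                                                  long r = factor();
                                                                                                                                                                                  v = (op == '*') ? v * r : v / r;
                                                                                                                                                                              }
                                                                                                                                                                              return v;
                                                                                                                                                                          }

                                                                                                                                                                          static long expr() {
                                                                                                                                                                              long v = term();
                                                                                                                                                                              while (*p == '+' || *p == '-') {
                                                                                                                                                                                  char op = *p++;
                                                                                                                                                                                  long r = term();
                                                                                                                                                                                  v = (op == '+') ? v + r : v - r;
                                                                                                                                                                              }
                                                                                                                                                                              return v;
                                                                                                                                                                          }

                                                                                                                                                                          int main() {
                                                                                                                                                                              p = "1+2*(3+4)";
                                                                                                                                                                              printf("%ld\n", expr()); // prints 15
                                                                                                                                                                          }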

                                                                                                                                                              1. 8

                                                                                                                                                                an immediate halt to the production of new computing devices

                                                                                                                                                                I’ll reproduce here an HN comment that I posted a few months ago:

                                                                                                                                                                On the one hand, if this happened, it could be good for putting an end to the routine obsolescence of electronics. Then maybe we could get to the future that bunnie predicted roughly 10 years ago, complete with heirloom laptops.

                                                                                                                                                                On the other hand, ongoing advances in semiconductor technology let us solve real problems – not just in automation that makes the rich richer, enables mass surveillance, and possibly takes away jobs, but in areas that actually make people’s lives better, such as accessibility. If Moore’s Law had stopped before the SoC in the iPhone 3GS had been introduced, would we have smartphones with built-in screen readers for blind people? If it had stopped before the Apple A12 in 2018, would the iOS VoiceOver screen reader be able to use on-device machine learning to provide access to apps that weren’t designed to be accessible? (Edit: A9-based devices could run iOS 14, but not with this feature.) What new problems will be solved by further advances in semiconductors? I don’t know, but I know I shouldn’t wish for an end to those advances. I just wish we could have them without ever-increasing software bloat that leads to obsolete hardware.

                                                                                                                                                                1. 3

                                                                                                                                                                  automation that makes the rich richer

                                                                                                                                                                  The rich have money to invest, to bring new ideas and technologies to market.

                                                                                                                                                                  What sort of society could invent without further enriching the rich? Automation is just a special case of a general property of our society.

                                                                                                                                                                  1. 1

                                                                                                                                                                    Tools that make a person more productive can make the person using those tools richer. Tools that eliminate the need for a person entirely make whoever can afford to buy the tools and fire the person richer.

                                                                                                                                                                    This was the big shift in the original industrial revolution. Prior to that, improvements in looms or spinning wheels made the self-employed cottage workers more productive and meant that they could make more money from selling more cloth. Large-scale industrial automation meant that you could replace a hundred cottage workers with ten factory workers and some machines but only people who were already wealthy could afford to build the factories.

                                                                                                                                                                    1. 1

                                                                                                                                                                      And yet, centuries after that revolution, we’re collectively and individually far richer.

                                                                                                                                                                      I think the buying and firing example is an argument for a (voluntary!) social safety net, not an argument against automation or wealth disparity.

                                                                                                                                                                      1. 1

                                                                                                                                                                        The difficulty with any social science research is that we don’t have an alternate universe for A/B testing. You can imagine an alternate history where the government made business loans available to weavers that allowed groups of 100 of them to jointly finance a factory, work shorter hours, and increase both their income and their net productivity, rather than a single rich person putting 100 of them out of work and hiring 10 of them in abusive conditions. Would we now enjoy the same standard of living, something better, or something worse? I don’t have any data to support an argument either way, but I am reasonably confident that it would have at least been better in the short term for the 100 workers.

                                                                                                                                                                        1. 1

                                                                                                                                                                          Taxpayer-funded “enterprises” aren’t a good idea [1], regardless of whether the money goes to the workers or shareholders [2].

                                                                                                                                                                          Having worked for a company where 50% of the surplus was distributed among staff, and where we ran open financials, open salaries, and so forth, I can confidently say that there are definitely better models than the norm, though :)

                                                                                                                                                                          [1] Coming from an Austrian perspective, though. I’m aware that’s disputed by other schools of economics.

                                                                                                                                                                          [2] A particularly egregious example is the Australian automotive industry - propped up by countless taxpayers over the years, until the owning companies finally closed up shop. I can’t off-hand think of a more blatant example of robbing Peter to pay Paul.

                                                                                                                                                                1. 7

                                                                                                                                                                  Or, when debugging a shell script issue requires reading the Linux source code.

                                                                                                                                                                  1. 3

                                                                                                                                                                    Many fun problems require debugging down through multiple layers of technology.

                                                                                                                                                                    Just be thankful you can debug the code in this fashion. What if it had been Windows or macOS?

                                                                                                                                                                    1. 5

                                                                                                                                                                      Yeah but some technologies have more hidden gotchas than others. In my experience, shell scripts are really good at hiding problems.

                                                                                                                                                                      1. 3

                                                                                                                                                                        On Windows, I’d have the symbol server.

                                                                                                                                                                        1. 1

                                                                                                                                                                          On Windows, I don’t have a symbol server.

                                                                                                                                                                          Sad day.

                                                                                                                                                                    1. 9

                                                                                                                                                                      It might look like prolonged human misery throughout the world.

                                                                                                                                                                      https://ourworldindata.org/uploads/2019/11/Extreme-Poverty-projection-by-the-World-Bank-to-2030-786x550.png

                                                                                                                                                                      Bluntly: capitalism, growth, wealth, technology - these are the drivers of human progress and flourishing.

                                                                                                                                                                      1. 1

                                                                                                                                                                        I agree except that the poverty reduction shown in East Asia is from socialist China.

                                                                                                                                                                        1. 6

                                                                                                                                                                          China’s “socialism” revolves around state-owned enterprises in a market economy. They’re pretty capitalist; even Chinese schools teach the Chinese system of government as a Hegelian synthesis of socialism and capitalism.

                                                                                                                                                                          1. 3

                                                                                                                                                                            What distinguishes capitalism as a system is that profit is the decisive and ultimate factor around which economic activity is organized. China’s system makes use of markets and private enterprise, but it is ultimately planned and organized around social ends (see: the aforementioned poverty alleviation).

                                                                                                                                                                            In China they describe their current system as the lower stage of socialism, but yes they’ve developed it in part based on insights into the contradictions of earlier socialist projects.

                                                                                                                                                                            1. 2

                                                                                                                                                                              Another, less charitable, way of looking at it: the Chinese Government is unwilling to relinquish power, but discovered through the starvation and murder of 45 million of their own people that mixed economies are less bad than planned economies.

                                                                                                                                                                              1. 2

                                                                                                                                                                                Yeah, I used to believe all that too. But eventually I got curious about what people on the other side of the argument could possibly have to say, and much to my surprise I found they had stronger arguments and a more serious commitment to truth. Then I realized that the people pushing those lines I believed were aligned with the people pushing all sorts of anti-human ideologies like degrowth.

                                                                                                                                                                                1. 2

                                                                                                                                                                                  “A government willing to relinquish power” is such an unstable state of being, with such a short half-life, that the average number in existence at any given time is zero. What information does referencing it add?

                                                                                                                                                                                  1. 1

                                                                                                                                                                                    I disagree. Any government participating in open, free elections is clearly willing to relinquish power.

                                                                                                                                                                                    1. 1

                                                                                                                                                                                      In Australia, 78 senate seats and 151 house seats change occupants at an election; the remaining 140,000 government employees largely remain the same.

                                                                                                                                                                                      Is replacing 0.1% of the people really ‘replacing the government’?

                                                                                                                                                                                      1. 1

                                                                                                                                                                                        Ah, fair - I was referring to the politicians (theoretically) in charge of the civil service. I’m intrigued by where you’re going with this, though … are you concerned about the efficacy of changing the 0.1% even in the case of democratically elected Governments?

                                                                                                                                                                                        1. 1

                                                                                                                                                                                          To my mind, long-term stability is the key practical advantage of constitutional democracies as a form of government.

                                                                                                                                                                                          Dictatorships change less frequently, and churn far more of the government when they do. Single-party rule is subject to sudden, massive policy reversals.

                                                                                                                                                                                          Stability (knowing how the rules can change over time, and how they can’t) is what makes them desirable places for the wealthy to live and invest, which makes larger capital works possible.

                                                                                                                                                                                          1. 1

                                                                                                                                                                                            Right, so to paraphrase: you don’t see the replacement of politicians by democratic means as likely to effect significant change, but you also see that as a feature, not a bug?

                                                                                                                                                                                            1. 1

                                                                                                                                                                                              Essentially, yes. Significant changes would imply that the voters have drastically changed their minds in a short time, which essentially never happens. The set of changes is also restricted (eg no retrospective crimes, restrictions on asset seizure).

                                                                                                                                                                          2. 1

                                                                                                                                                                            Some starting criticism: https://issforum.org/essays/PDF/CR1.pdf

                                                                                                                                                                            I’d encourage taking “Our World in Data” charts with a grain of salt when considering fossil-fuel dependence (and our future), losses against the planetary-boundaries framework (notably biodiversity), etc.

                                                                                                                                                                            1. 4

                                                                                                                                                                              Hunter-gatherer societies also ran up against the limitations of their mode of relating to the environment. A paradigm shift in this relationship opened up new horizons for growth and development.

                                                                                                                                                                              If we’ve reached similar environmental limits then the solution is a similar advancement to a higher mode, not “degrowth” (an ideology whose most severe ramifications will inevitably fall upon the people who are struggling the most already).

                                                                                                                                                                              1. 1

                                                                                                                                                                                This is a book review. What does it do to suggest that the data showing a decrease in the number of people in extreme poverty are false?

                                                                                                                                                                                1. 1

                                                                                                                                                                                  Okay, go back to the linked chart, which references Poverty and Shared Prosperity (World Bank, 2018). Here’s some reflection on the metrics, 12 Things We Can Agree On about Global Poverty (Hickel & Kenny, 2018), notably its words of caution about such data:

                                                                                                                                                                                  “10. Income and consumption does not tell us the whole story about poverty. Poverty is multi-dimensional, and some aspects of human well-being can be obscured by consumption figures.”

                                                                                                                                                                                  “11. The present rate of poverty reduction is too slow for us to end $1.90/day poverty by 2030, or $7.40/day poverty in our lifetimes. To achieve this goal, we would need to change economic policy to make it fairer for the world’s majority. We will also need to respond to the growing crisis of climate change and ecological breakdown, which threatens the gains we have made.”

                                                                                                                                                                                  “12. Ultimately, the more morally relevant metric is not proportions or absolute numbers, but rather the extent of poverty vis-a-vis our capacity to end it. By this metric, the world has much to do—perhaps more than ever before.”

                                                                                                                                                                            1. 6

                                                                                                                                                                              ‘C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off’ - Bjarne Stroustrup

                                                                                                                                                                              1. 1

                                                                                                                                                                                How does C++ make it harder?

                                                                                                                                                                                1. 1

                                                                                                                                                                                  RAII for the most part.

                                                                                                                                                                                  1. 1

                                                                                                                                                                                    RAII, smart pointers to manage memory, safer strings, robust collection classes, type-safe templates instead of ad-hoc casting, references vs. pointers, optionals and variants instead of unsafe raw unions…
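
                                                                                                                                                                                    A minimal sketch of a few of those in plain C++17 (nothing project-specific assumed):

                                                                                                                                                                                        #include <memory>
                                                                                                                                                                                        #include <optional>
                                                                                                                                                                                        #include <string>
                                                                                                                                                                                        #include <variant>
                                                                                                                                                                                        #include <vector>

                                                                                                                                                                                        int main() {
                                                                                                                                                                                            // RAII + smart pointers: the string is freed when `widget` goes
                                                                                                                                                                                            // out of scope, with no manual delete to forget on any return path.
                                                                                                                                                                                            auto widget = std::make_unique<std::string>("freed automatically");

                                                                                                                                                                                            // Robust collections and safer strings instead of raw arrays.
                                                                                                                                                                                            std::vector<std::string> names{"ada", "grace"};

                                                                                                                                                                                            // std::variant instead of an unsafe raw union: requesting the wrong
                                                                                                                                                                                            // alternative throws rather than silently reinterpreting bytes.
                                                                                                                                                                                            std::variant<int, std::string> v = 42;
                                                                                                                                                                                            v = std::string("now a string");

                                                                                                                                                                                            // std::optional instead of sentinel values or nullable pointers.
                                                                                                                                                                                            std::optional<int> maybe;
                                                                                                                                                                                            return maybe.value_or(0);
                                                                                                                                                                                        }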

                                                                                                                                                                                    1. 2

                                                                                                                                                                                      unsafe raw unions

                                                                                                                                                                                      I completely misread that when scanning the comments, did a double-take, then re-read it :)

                                                                                                                                                                                      1. 2

                                                                                                                                                                                        Raw unions have made me cry, that’s for sure.