Threads for FeepingCreature

  1. 5

    As an extension developer who works on very stateful extensions (can’t do otherwise, for performance reasons), having to make my extensions’ background process “restartable” is going to be a nightmare.
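
    Concretely, even a trivially stateful background script ends up with bookkeeping roughly like this. This is only a rough sketch, assuming a WebExtension with the “storage” permission; the helper names are made up, not from any real extension:

      // "browser" is provided by the WebExtension environment (promise-based in Firefox)
      declare const browser: any;

      // in-memory state that used to simply live in a long-running background page
      let cache: Record<string, unknown> = {};

      // every (re)start has to begin by rebuilding the in-memory state from storage...
      async function restoreCache(): Promise<void> {
        const stored = await browser.storage.local.get("cache");
        cache = stored.cache ?? {};
      }

      // ...and every mutation has to be flushed back out before the script is torn down again
      async function persistCache(): Promise<void> {
        await browser.storage.local.set({ cache });
      }

      restoreCache();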

    I also dread the new permission model; I already audit the code of the extensions I install, and I don’t want to have to manually allow them to run on websites every time I install them.

    I can see how these changes are great from an end-user point of view though. I’m just sad that this continues to follow the trend of improving things for “normal” users while “technical users” are left behind. I guess it makes sense though: the pool of normal users is much larger than the pool of technical users, and Mozilla wants to grow.

    1. 2

      And yet, it’s continuing to shrink.

      Firefox should realize that there’s no market in being the second best product for the biggest user pool.

      1. 2

        It’s shrinking in relative numbers, but is it shrinking in absolute numbers too?

    1. 10

      I hope the author gets the help they need, but I don’t really see how the blame for their psychological issues should be laid at the feet of their most-recent employer.

      1. 50

        In my career I’ve seen managers cry multiple times, and this is one of the places that happened. A manager should never have to ask whether they’re a coward, but that happened here.

        I dunno, doesn’t sound like they were the only person damaged by the experience.

        Eventually my physicians put me on forced medical leave, and they strongly encouraged me to quit…

        Seems pretty significant when medical professionals are telling you the cure for your issues is “quit this job”?

        1. 16

          Seems pretty significant when medical professionals are telling you the cure for your issues is “quit this job”?

          A number of years ago I developed some neurological problems, and stress made it worse. I was told by two different doctors to change or quit my job. I eventually did, and it helped, but the job itself was not the root cause, nor was leaving the sole cure.

          I absolutely cannot speak for OP’s situation, but I just want to point out that a doctor advising you to rethink your career doesn’t necessarily imply that the career is at fault. Though, in this case, it seems like it is.

          1. 4

            It doesn’t seem like the OP’s doctors told them to change careers though, just quit that job.

            1. 3

              To clarify, I’m using “career change” in a general sense. I would include quitting a job as a career change, as well as leaving one job for another in the same industry/domain. I’m not using it in the “leave software altogether” sense.

        2. 24

          I’m trusting the author’s causal assessment here, but employers (especially large businesses with the resources required) can be huge sources of stress and prevent employees from having the time or energy needed to seek treatment for their own needs, so they can both cause issues and worsen existing ones.

          It’s not uncommon, for example, for businesses to encourage unpaid out-of-hours work for salaried employees by building a culture that emphasizes personal accountability for project success; this not only increases stress and reduces free time that could otherwise be used to relieve work-related stress, it teaches employees to blame themselves for what could just as easily be systemic failures. Even if an employee resists the social pressure to put in extra hours in such an environment, they’ll still be penalized with (real or imagined) blame from their peers, blame from themselves for “not trying hard enough”, and likely less job security or fewer benefits.

          In particular, the business’s failure to support effective project management, manage workloads, or generally address problems repeatedly and clearly brought up to it is especially relevant. These kinds of things typically fuel burnout. The author doesn’t go into enough detail for an outside observer to make a judgment call one way or the other, but if you trust the author’s account of reality then it seems reasonable to blame the employer for, at the least, negligently fueling these problems through gross mismanagement.

          Arguably off-topic, but I think it might squeak by on the grounds that it briefly ties the psychological harm to the quality of a technical standard resulting from the mismanaged business process.

          1. 3

            a culture that emphasizes personal accountability for project success; this not only increases stress and reduces free time that could otherwise be used to relieve work-related stress, it teaches employees to blame themselves for what could just as easily be systemic failures.

            This is such a common thing. An executive or manager punts on actually organizing the work, whether from incompetence or laziness, and then tries to make the individuals in the system responsible for the failures that occur. It’s hardly new. Deming describes basically this in ‘The New Economics’ (look up the ‘red bead game’).

            More cynically, is WebAssembly actually in Google’s interests? It doesn’t add revenue to Google Cloud. It’s going to make their data collection harder (provide Google Analytics libraries for how many languages?). It was clearly a thing that was gaining momentum, so if they were to damage it, they would need to make sure they had a seat at the table and then make sure that the seat was used as ineffectually and disruptively as possible.

            1. 9

              More cynically, is WebAssembly actually in Google’s interests?

              I think historically the answer would have been yes. Google has at various points been somewhat hamstrung by shipping projects with slow front-end JS in them and responded by trying to make browsers themselves faster, e.g. by creating V8 and financially contributing to Mozilla.

              I couldn’t say if Google now has any incentive to not make JS go fast. I’m not aware of one. I suspect still the opposite. I think they’re also pushing mobile web apps as a way to inconvenience Apple; I think Google currently want people to write portable software using web tech instead of being tempted to write native apps for iOS only.

              That said, what’s good for the company is not the principal factor motivating policy decisions. What’s good for specific senior managers inside Google is. Otherwise you wouldn’t see all these damn self-combusting, promo-cycle-driven chat apps from Google. A company is not a monolith.

              ‘The New Economics’

              I have this book and will have to re-read at least this bit tomorrow. I have slightly mixed feelings about it, mostly about the writing style.

              1. 1

                Making JS fast is one thing. Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

                Your point about the senior managers’ interests driving what’s done is on point, though. Google and Facebook especially are weird because ads fund the company, and the rest is all some kind of loss leader floating around divorced from revenue.

                The only thing I’ll comment about Deming is that the chapter on intrinsic vs extrinsic motivation should be ignored, as that’s entirely an artifact despite its popularity. The rest of the book has held up pretty well.

                1. 10

                  Making JS fast is one thing. Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

                  Google doesn’t need to maintain their analytics libraries in many other languages, only to expose APIs callable from those languages. All WebAssembly languages can call / be called by JavaScript.

                  More generally, Google has been the biggest proponent of web apps instead of web services. Tim Berners-Lee’s vision for the web was that you’d have services that provided data with rich semantic markup. These could be rendered as web pages but could equally plug into other clients. The problem with this approach is that a client that can parse the structure of the data can choose to render it in a way that simply ignores adverts. If all of your ads are in an <advert provider="google"> block then an ad blocker is a trivial browser extension (see the sketch after the list below), as is something that displays ads but restricts them to plain text. Google’s web app push has been a massive effort to convince everyone to obfuscate the contents of their web pages. This has two key advantages for Google:

                  • Writing an ad blocker is hard if ads and contents are both generated from a Turing-complete language using the same output mechanisms.
                  • Parsing such pages for indexing requires more resources (you can’t just parse the semantic markup, you must run the interpreter / JIT in your crawler, which requires orders of magnitude more hardware than simply parsing some semantic markup). This significantly increases the barrier to entry for new search engines, protecting Google’s core user-data-harvesting tool.
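
                  To make the “trivial browser extension” point concrete: in that hypothetical semantically-marked-up world, the whole ad blocker would be a few lines of content script (hypothetical, of course, since no site actually ships an <advert> element):

                    // if ads were wrapped in a semantic element, this would be the entire ad blocker
                    for (const ad of document.querySelectorAll('advert[provider="google"]')) {
                      ad.remove();
                    }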

                  WebAssembly fits very well into Google’s vision for the web.

                  1. 2

                    I used to work for a price-comparison site, back when those were actual startups. We had one legacy price information page that was a Java applet (remember those?). Supposedly the founders were worried about screen scrapers, so they wanted the entire site rendered with applets to deter them.

                    1. 1

                      This makes more sense than my initial thoughts. Thanks.

                    2. 2

                      Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

                      This is something I should have stated explicitly but didn’t think to: I don’t think wasm is actually going to be the future of non-JS languages in the browser. I think that, for the next couple of decades at least, wasm is going to be used for compute kernels (written in other langs like C++ and Rust) that get called from JS.

                      I’m taking a bet here that targeting wasm from langs with substantial runtimes will remain unattractive indefinitely due to download weight and parsing time.

                      about Deming

                      I honestly think many of the points in that book are great but hoo boy the writing style.

              2. 1

                That is exactly what I thought while reading this. I understand that to a lot of people, WebAssembly is very important, and they have a lot of emotion invested in its success. But to the author’s employer, it might not be as important, as it might not directly generate revenue. The author forgets that to the vast, vast majority of people on this earth, having the opportunity to work on such a technology at a company like Google is an unparalleled privilege. Most people on this earth do not have the opportunity to quit their job just because a project is difficult, or because meetings run long or it is hard to find consensus. Managing projects well is incredibly hard. But I am sure that the author was not living on minimum wage, so there surely was compensation for the efforts.

                It is sad to hear that the author has medical issues, and I hope those get sorted out. And stressful jobs do exacerbate those kinds of issues. But that is not a good reason for finger-pointing. Maybe the position just was not right for the author; maybe there are more exciting projects waiting in the future. I certainly hope so. But it is important not to blame one’s issues on others; that is not a good attitude in life.

                1. 25

                  Using the excuse that because there exist others less fortunate, it’s not worth fighting to make something better is also not a good attitude in life.

                  Reading between the lines, it feels to me like there was a lot that the author left unsaid, and that’s fine. It takes courage to share a personal story about mental wellbeing, and an itemized list of all the wrongs that took place is not necessary to get the point the author was trying to make across.

                  My point is that I’d be cautious about making assumptions about the author’s experiences as they didn’t exactly give a lot of detail here.

                  1. 3

                    Using the excuse that because there exist others less fortunate, it’s not worth fighting to make something better is also not a good attitude in life.

                    This is true. It is worth fighting to make things better.

                    Reading between the lines, it feels to me like there was a lot that the author left unsaid, and that’s fine. It takes courage to share a personal story about mental wellbeing, and an itemized list of all the wrongs that took place is not necessary to get the point the author was trying to make across.

                    There are a lot of things that go into mental wellbeing. Some things you can control, some things are genetic. I don’t know what the author left out, but I have not yet seen a study showing that stressful office jobs give people brain damage. There might be things the author has not explained, but at the same time that is a very extreme claim. In fact, if that were true, I am sure that the author should receive a lot in compensation.

                    My point is that I’d be cautious about making assumptions about the author’s experiences as they didn’t exactly give a lot of detail here.

                    I agree with you, but I also think that if someone makes a very bold claim about an employer, especially about personal injury, then these claims should be substantiated. There is a very big difference between “working there was hard, I quit” and “the employer acted recklessly and caused me personal injury”. And I don’t really know which one the author is saying, because the description could be interpreted as it just being a difficult project to see through.

                    1. 8

                      In fact, if that were true, I am sure that the author should receive a lot in compensation.

                      If you think about it for a few seconds, you can see that this can easily not happen. The OP says that they don’t have documented evidence from the time because of all the issues they were going through. And it’s easy to see why: if your mental health is damaged and your brain is not working right, would you be mindful enough to take detailed notes of every incident and keep a trail of evidence for later use in compensation claims? Or are you saying that compensation would be given out no questions asked?

                      1. 3

                        All I’m saying is, there is a very large difference between saying “this job was very stressful; I had trouble sleeping and it negatively affected my concentration and memory” and saying “this job gave me brain damage”. Brain damage is relatively well-defined:

                        The basic definition of brain damage is an injury to the brain caused by various conditions such as head trauma, inadequate oxygen supply, infections, or intracranial hemorrhage. This damage may be associated with a behavioral or functional abnormality.

                        Additionally, there are ways to test for this; a neurologist can make that determination. I’m not a neurologist. But it would be the first time I’ve heard of brain damage being caused by psychosomatic issues. I believe that the author may have used this term in error. That’s why I said what I said — if you, or anyone, has brain damage as a result of your occupation, that is definitely grounds for compensation. And not a small compensation either, as brain damage is no joke. This is a very different category from mere psychological stress from working for an apparently mismanaged project.

                        1. 5

                          Via https://www.webmd.com/brain/brain-damage-symptoms-causes-treatments

                          Brain damage is an injury that causes the destruction or deterioration of brain cells.

                          Anxiety, stress, lack of sleep, and other factors can potentially do that. So I don’t see any incorrect use of the phrase ‘brain damage’ here. And anyway, you missed the point. Saying ‘This patient has brain damage’ is different from saying ‘Working in the WebAssembly team at Google caused this patient’s brain damage’. When you talk about causation and claims of damage and compensation, people tend to demand documentary evidence.

                          I agree brain damage is no joke, but if you look at society it’s very common for certain types of relatively-invisible mental illnesses to be downplayed and treated very lightly, almost as a joke. Especially by people and corporations who would suddenly have to answer for causing these injuries.

                          1. 4

                            Anxiety, stress, lack of sleep and other factors cannot, ever, possibly, cause brain damage. I think you have not completely read that article. It states – as does the definition that I linked:

                            All traumatic brain injuries are head injuries. But head injury is not necessarily brain injury. There are two types of brain injury: traumatic brain injury and acquired brain injury. Both disrupt the brain’s normal functioning.

                            • Traumatic Brain Injury (TBI) is caused by an external force – such as a blow to the head – that causes the brain to move inside the skull or damages the skull. This in turn damages the brain.
                            • Acquired Brain Injury (ABI) occurs at the cellular level. It is most often associated with pressure on the brain. This could come from a tumor. Or it could result from neurological illness, as in the case of a stroke.

                            There is no kind of brain injury that is caused by lack of sleep or stress. That is not to say that these things are not also damaging to one’s body and well-being.

                            Mental illnesses can be very devastating and stressful on the body. But you will not get a brain injury from a mental illness, unless it makes you physically impact your brain (causing traumatic brain injury), ingest something toxic, or have a stroke. It is important to be very careful with language and not confuse terms. The term “brain damage” is colloquially often used to describe things that are most definitely not brain damage, like “reading this gave me brain damage”. I hope you understand what I’m trying to state here. Again, the author has possibly misused the term “brain damage”, or there is some physical trauma that happened that the author has not mentioned in the article.

                            I hope you understand what I am trying to say here!

                            1. 9

                              Anxiety and stress raise adrenaline levels, which in turn cause short- and long-term changes in brain chemistry. It sounds like you’ve never been burnt out; don’t judge others so harshly.

                              1. 3

                                Anxiety and stress are definitely not healthy for a brain. They accelerate aging processes, which is damaging. But brain damage in a medical context refers to large-scale cell death caused by genetics, trauma, stroke or tumors.

                              2. 8

                                There seems to be a weird definitional slide here from “brain damage” to “traumatic brain injury.” I think we are all agreed that her job did not give her traumatic brain injury, and this is not claimed. But your claim that stress and sleep deprivation cannot cause (acquired) brain injury is wrong. In fact, you will find counterexamples by just googling “sleep deprivation brain damage”.

                                “Mental illnesses can be … stressful on the body.” The brain is part of the body!

                                1. 1

                                  I think you – and most of the other people that have responded to my comment – have not quite understood what I’m saying. The argument here is about the terms being used.

                                  Brain Damage

                                  Brain damage, as defined here, is damage caused to the brain by trauma, tumors, genetics or oxygen loss, such as during a stroke. This can lead to potentially large chunks of your brain dying off. This means you can lose entire brain regions and potentially permanently lose some abilities (facial recognition, speech, etc.).

                                  Sleep Deprivation

                                  See Fundamental Neuroscience, page 961:

                                  The crucial role of sleep is illustrated by studies showing that prolonged sleep deprivation results in the disruption of metabolic processes and eventually death.

                                  When you are forcibly sleep deprived for a long time, such as when you are being tortured, your body can lose the ability to use nutrients and eventually you can die. You need to not sleep at all for weeks for this to happen; generally this is not something that happens to people voluntarily, especially not in Western countries.

                                  Stress

                                  The cells in your brain only have a finite lifespan. At some point, they die and new ones take their place (apoptosis). Chronic stress and sleep deprivation can speed up this process, accelerating aging.

                                  Crucially, this is not the same as an entire chunk of your brain dying off because of a stroke. This is a very different process. It is not localized, and it doesn’t cause massive cell death. It is more of a slow, gradual process.

                                  Summary

                                  “Mental illnesses can be … stressful on the body.” The brain is part of the body!

                                  Yes, for sure. It is just that the term “brain damage” is usually used for a very specific kind of pattern, and not for the kind of chronic, low-level damage done by stress and such. A doctor will not diagnose you with brain damage after you’ve had a stressful interaction with your coworker. You will be diagnosed with brain damage in the ICU after someone dropped a hammer on your head. Do you get what I’m trying to say?

                                  1. 4

                                    I get what you are trying to say; I think you are simply mistaken. If your job impairs your cognitive abilities, then it has given you brain damage. Your brain is damaged. You have been damaged in your brain. The cells and structures in your brain have taken damage. You keep trying to construct this exhaustive list of “things that are brain damage”, and then (in another comment) saying that this is about them not feeling appreciated and valued or sort of vaguely feeling bad, when what they are saying is that working at this job impaired their ability to form thoughts. That is a brain damage thing! The brain is an organ for forming thoughts. If the brain can’t thoughts so good no more, then it has been damaged.

                                    The big picture here is that a stressful job damaged this person’s health. Specifically, their brain’s.

                                    1. 3

                                      I understand what you are trying to say, but I think you are simply mistaken. We (as a society) have definitions for the terms we use. See https://en.wikipedia.org/wiki/Brain_damage:

                                      Neurotrauma, brain damage or brain injury (BI) is the destruction or degeneration of brain cells. Brain injuries occur due to a wide range of internal and external factors. In general, brain damage refers to significant, undiscriminating trauma-induced damage.

                                      This is not “significant, undiscriminating trauma-induced damage” (for context, trauma here refers to physical trauma, such as an impact to the head, not psychological trauma). What the author describes does not line up with any of the Causes of Brain Damage. It is simply not the right term.

                                      Yes, the author has a brain, and there is self-reported “damage” to it. But just because I am a man and feel like I police the neighborhood does not make me a “policeman”. Just because I feel like my brain doesn’t work right after a traumatic job experience does not mean I have brain damage™.

                                      1. 1

                                        The Wikipedia header is kind of odd. The next sentence after “in general, brain damage is trauma induced” lists non-trauma-induced categories of brain damage. So I don’t know how strong that “in general” is meant to be. At any rate, “in general” is not at odds with the use of the term for non-trauma induced stress/sleep depriv damage.

                                        At any rate, if you click through to Acquired Brain Injury, it says “These impairments result from either traumatic brain injury (e.g. …) or nontraumatic injury … (e.g. listing a bunch of things that are not traumatic.)”

                                        Anyway, the Causes of Brain Damage list is clearly not written to be exhaustive. “any number of conditions, including” etc.

                                2. 2

                                  There is some evidence that lack of sleep may kill brain cells: https://www.bbc.com/news/health-26630647

                                  It’s also possible to suffer from mini-strokes due to the factors discussed above.

                                  In any case, I feel like you’re missing the forest for the trees. Sure, it’s important to be correct with wording. But is that more important than the bigger picture here, that a stressful job damaged this person’s health?

                                  1. 2

                                    the bigger picture here, that a stressful job damaged this person’s health

                                    Yes, that is true, and it is a shame. I really wish that the process around WASM had been less hostile, and that this person had not been impacted negatively, even if stressful and hard projects are an unfortunate reality for many people.

                                    I feel like you’re missing the forest for the trees.

                                    I think that you might be missing the forest for the trees – I’m not saying that this person was not negatively impacted, I am merely stating that it is (probably, unless there is evidence otherwise) incorrect to characterize this impact as “brain damage”, because from a medical standpoint, that term has a narrower definition that damage due to stress does not fulfill.

                          2. 4

                            Hello, you might enjoy this study.

                            https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4561403/

                            I looked through a lot of studies to try and find a review that was both broad and to the point.

                            Now, you are definitely mixing a lot of terms here… but I hope that if you read the research, you can be convinced, at the very least, that stress hurts brains (and I hope that reading the article and getting caught in this comment storm doesn’t hurt yours).

                            1. 2

                              Sleep Deprivation and Oxidative Stress in Animal Models: A Systematic Review tells us that sleep deprivation can be shown to increase oxidative stress:

                              Current experimental evidence suggests that sleep deprivation promotes oxidative stress. Furthermore, most of this experimental evidence was obtained from different animal species, mainly rats and mice, using diverse sleep deprivation methods.

                              However, https://pubmed.ncbi.nlm.nih.gov/14998234/ disagrees with this. Furthermore, it is known that oxidative stress promotes apoptosis; see Oxidative stress and apoptosis:

                              Recent studies have demonstrated that reactive oxygen species (ROS) and the resulting oxidative stress play a pivotal role in apoptosis. Antioxidants and thiol reductants, such as N-acetylcysteine, and overexpression of manganese superoxide (MnSOD) can block or delay apoptosis.

                              The article that you linked, Stress effects on the hippocampus: a critical review, mentions that stress has an impact on the development of the brain and on its workings:

                              Uncontrollable stress has been recognized to influence the hippocampus at various levels of analysis. Behaviorally, human and animal studies have found that stress generally impairs various hippocampal-dependent memory tasks. Neurally, animal studies have revealed that stress alters ensuing synaptic plasticity and firing properties of hippocampal neurons. Structurally, human and animal studies have shown that stress changes neuronal morphology, suppresses neuronal proliferation, and reduces hippocampal volume

                              I do not disagree with this. I think that anyone would be able to agree that stress is bad for the brain, possibly by increasing apoptosis (accelerating ageing) and decreasing the availability of nutrients. My only argument is that the term brain damage is quite narrowly defined (for example here) as (large-scale) damage to the brain caused by genetics, trauma, oxygen starvation or a tumor, and it can fall into one of two categories: traumatic brain injuries and acquired brain injuries. If you search for “brain damage” on PubMed, you will find the term being used in exactly that narrow sense.

                              You will not find studies or medical diagnoses of “brain damage due to stress”. I hope that you can agree that using the term brain damage in a context such as the author’s, without evidence of traumatic injury or a stroke, is wrong. This does not take away the fact that the author has allegedly experienced a lot of stress at their previous employer, one of the largest and high-paying tech companies, and that this experience has caused the author personal issues.

                              On an unrelated note: what is extremely fascinating to me is that some chemicals such as methamphetamine (at low concentrations) or minocycline are neuroprotective, being able to lessen brain damage, for example due to stroke. But obviously, at larger concentrations the opposite is the case.

                              1. 1

                                How about this one then? https://www.sciencedirect.com/science/article/abs/pii/S0197458003000484

                                We can keep going; it is not difficult to find these… You’re splitting a hair which should not be split.

                                What’s so wrong about saying a bad work environment can cause brain damage?

                                1. 1

                                  You’re splitting a hair which should not be split.

                                  There is nothing more fun than a civil debate. I would argue that any hair deserves being split. Worst case, you learn something new, or form a new opinion.

                                  What’s so wrong about saying a bad work environment can cause brain damage?

                                  Nothing is wrong with that, if the work environment involves heavy things, poisonous things, or the like. This is why OSHA compliance is so essential in protecting people’s livelihoods. I just firmly believe, and I think that the literature agrees with me on this, that “brain damage” as a medical definition refers to large-scale cell death due to trauma or stroke, and not chronic low-level damage caused by stress. The language we choose to use is extremely important; it is the only facility we have to exchange information. Language is not useful if it is imprecise or even wrong.

                                  How about this one then?

                                  Let’s take a look at what we’ve got here. I’m only taking a look at the abstract for now.

                                  Stress is a risk factor for a variety of illnesses, involving the same hormones that ensure survival during a period of stress. Although there is a considerable ambiguity in the definition of stress, a useful operational definition is: “anything that induces increased secretion of glucocorticoids”.

                                  Right, stress causes elevated levels of glucocorticoids, such as cortisol.

                                  The brain is a major target for glucocorticoids. Whereas the precise mechanism of glucocorticoid-induced brain damage is not yet understood, treatment strategies aimed at regulating abnormal levels of glucocorticoids, are worth examining.

                                  Glucocorticoids are useful in regulating processes in the body, but they can also do damage. I had never heard of the term glucocorticoid-induced brain damage, and searching for it in the literature only yields this exact article, so I considered this a dead end. However, in doing some more research, I did find two articles that somewhat support your hypothesis:

                                  In Effects of brain activity, morning salivary cortisol, and emotion regulation on cognitive impairment in elderly people, it is mentioned that high cortisol levels are associated with hippocampus damage, supporting your hypothesis, but it only refers to elderly patients with Mild Cognitive Impairment (MCI):

                                  Cognitive impairment is a normal process of aging. The most common type of cognitive impairment among the elderly population is mild cognitive impairment (MCI), which is the intermediate stage between normal brain function and full dementia.[1] MCI and dementia are related to the hippocampus region of the brain and have been associated with elevated cortisol levels.[2]

                                  Cortisol regulates metabolism, blood glucose levels, immune responses, anti-inflammatory actions, blood pressure, and emotion regulation. Cortisol is a glucocorticoid hormone that is synthesized and secreted by the cortex of adrenal glands. The hypothalamus releases a corticotrophin-releasing hormone and arginine vasopressin into hypothalamic-pituitary portal capillaries, which stimulates adrenocorticotropic hormone secretion, thus regulating the production of cortisol. Basal cortisol elevation causes damage to the hippocampus and impairs hippocampus-dependent learning and memory. Chronic high cortisol causes functional atrophy of the hypothalamic-pituitary-adrenal axis (HPA), the hippocampus, the amygdala, and the frontal lobe in the brain.

                                  Additionally, Effects of stress hormones on the brain and cognition: Evidence from normal to pathological aging mentions that chronic stress is a contributor to memory performance decline.

                                  We might be able to find a few mentions of brain damage outside of the typical context (as caused by traumatic injury, stroke, etc.) in the literature, but at least we can agree that the term brain damage is quite unusual in the context of stress, can we not? Out of the 188,764 articles known by PubMed, only 18,981 mention “stress”, and of those almost all are referring to “oxidative stress” (such as that experienced by cells during a stroke). I have yet to find a single study or article that directly states brain damage as being a result of chronic stress, in the same way that there are hundreds of thousands of studies showing brain damage from traumatic injuries to the brain.

                                  1. 2

                                    Well, if anybody asks me I will tell them that too much stress at work causes brain damage… and now I can even point to some exact papers!

                                    I agree that it’s a little hyperbolic, but it’s not that hyperbolic. If we were talking about drug use everyone would kind of nod and say, ‘yeah, brain damage’ even if the effects were tertiary and the drug use was infrequent.

                                    But stress at work! Ohohoho, that’s just life my friend! Which really does not need to be the way of the world… OP was right to get out, especially once they started exhibiting symptoms suspiciously like the ones cited in that last paper (you know, the sorts of symptoms you get when your brain is suffering from some damage).

                                    1. 2

                                      If someone tells me that they got brain damage from stress at work, I will laugh, tell them to read the Wikipedia article and then move on. But that is okay; we can agree to disagree. I understand that there are multiple possible definitions for the term brain damage.

                                      If we were talking about drug use everyone would kind of nod and say, ‘yeah, brain damage’ even if the effects were tertiary and the drug use was infrequent.

                                      In my defense, people often use terms incorrectly.

                                      OP was right to get out

                                      I agree. Brain damage or not, Google employee or not, if you are suffering at work you should not stay there. We all have very basic needs, and one of them is being valued and being happy at work.

                                      Anyways, I hope you have a good weekend!

                            2. 6

                              I have not yet seen a study showing that stressful office jobs give people brain damage.

                              This is a bizarre and somewhat awful thread. Please could you not post things like this in future?

                              1. 8

                                I disagree. The post seemed polite, constructive, and led to (IMO) a good conversation (including some corrections to the claims in the post).

                                1. 4

                                  Parent left a clear method for you to disprove them by providing a counter-example.

                                  If you can point to some peer-reviewed research on the topic, by all means do so.

                                  1. 5

                                    Yea but this is an obnoxious, disrespectful, and disingenuous way to conduct an argument. I haven’t read any studies proving anything about this subject one way or another. Because I am not a mental health researcher. So it’s easy for me to make that claim, and present the claim as something that matters, when really it’s a pointless claim that truly does not matter at all.

                                    Arguing from an anecdotal position based on your own experience, yet demanding the opposing side provide peer-reviewed studies to contradict your anecdotal experience, places a disproportionate burden on them to conduct their argument. And whether intentional or not, it strongly implies that you have little to no respect for their experiences or judgement. That you will only care about their words if someone else says them.

                        1. 7

                          https://i.imgur.com/CWlLYnQ.png (minimally edited)

                          The ad placement on that site is just a little north of obnoxious.

                          I especially like the modal side ad that suddenly appears after about a minute.

                          1. 3

                            Oof, that sounds annoying indeed. I did not see it - hurrah for JS blocking ;)

                            1. 3

                              I’ve used DNS-based adblocking (not Pi-hole, but the same principle) and these are the first ads I’ve seen in a while. Like…months.

                              Maybe it’s time to also add addons to my browsers again…

                            2. 2

                              It looks like they’re bespoke ads too; the author is self-hosting these on his WordPress!

                              https://haydenjames.io/wp-content/uploads/2022/02/868x300.gif
                              
                              1. 2

                                Sherlock! :-)

                                That explains why these were not blocked. Ugh!!!

                                1. 1

                                  With Adblock you can block them manually - but it’s a chore. I wonder if things like this (not the first site I’ve seen ads on lately) mean the return of old-school “affiliate” things, banners manually set up and all.

                            1. 4

                              What sort of system does not have desktop redraws for an entire minute? It seems to me the solution here is just to put the computer in sleep mode, at which point it won’t have to draw the taskbar anyways. If the computer is in active use, it will certainly have many tasks to do every second, let alone minute, and the clock redraw will be an irrelevant footnote.

                              1. 3

                                A lot of this thread has me feeling like I stepped into some sort of alternate universe where calculating and displaying the time with seconds is somehow the most resource-intensive operation imaginable. At least one of Microsoft’s major competitors has managed to do this in both their desktop OS and their mobile OS, for crying out loud. And last I checked, Edge doesn’t forbid you just opening a browser and writing a setTimeout JS loop to display the time with seconds, nor does it cause the machine to catch fire or consume all electricity and RAM within a 1000-kilometer radius. So the issue is not “it can’t be done” nor is it “can’t be done without unacceptable performance”. It’s most likely “can’t be done (or can’t be done with acceptable performance) with the way Windows is architected”, and everything else is an attempt to distract from that.
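
                                For concreteness, the kind of loop I mean is roughly this (just a sketch; nothing Windows- or Edge-specific about it):

                                  // a bare-bones "clock with seconds", runnable in any browser tab
                                  function tick(): void {
                                    document.title = new Date().toLocaleTimeString(); // includes seconds
                                    setTimeout(tick, 1000 - (Date.now() % 1000));     // re-align to the next second
                                  }
                                  tick();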

                                1. 3

                                  I can’t speak to Edge, but I can say that in WebKit, timers (by which I mean everything periodic: setTimeout, setInterval, GIFs, SVG animations, CSS animations, presumably myriad other things) are aggressively coalesced. Timers for background tabs are throttled - you can ask for a 1 ms timeout, but you are not going to get anything close, if anything at all.

                                  In the past, timers/callbacks asking for ridiculously short timeouts would get them for a few ticks before being throttled back to 15+ ms. For some sites this was needed because of bad code - e.g. sites that would just be burning 10-20 ms of CPU per tick, so battery life was hosed no matter what - but others would just be updating a tiny piece of UI and running for fractions of a ms. Even for those sites, though, the battery drain from not throttling is insane.
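
                                  You can see the coalescing for yourself with a quick probe like this (just a sketch; the exact numbers depend on the browser and on whether the tab is focused):

                                    // ask for 1 ms repeatedly and log what we actually get;
                                    // try it in a foreground tab, then switch the tab to the background
                                    let last = performance.now();
                                    function probe(): void {
                                      const now = performance.now();
                                      console.log(`asked for 1 ms, got ${(now - last).toFixed(1)} ms`);
                                      last = now;
                                      setTimeout(probe, 1);
                                    }
                                    setTimeout(probe, 1);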

                                  1. 1

                                    It sounds to me like this is mostly just an issue of not being able to designate tasks as “low-prio background” at the framework level? Like, if you could update the taskbar clock without reflexively kicking the CPU into high power for half a second, would that fix things?

                              1. 8

                                TIL that systemd upstream refuses to make their software compatible with musl libc. Damn that’s shitty.

                                1. 8

                                  I think the systemd team’s position is more defensible if you look at it top-down. Will spending resources on musl support do anything for the customers of the commercial distro vendors, or the end-users of those distros? Could it be justified to a non-technical manager or executive? Resources are always limited, and time spent on libc portability is time that can’t be spent on something else that might be more beneficial. I learned to take this view while I was on the Windows accessibility team at Microsoft. At the peak of our headcount, I naively thought we should be able to do everything we wanted to. But, like I said, resources are always limited, and tradeoffs have to be made.

                                  1. 13

                                    This is a defense of systemd that exactly matches its criticisms.

                                    1. 4

                                      Sure, my logic also applies to distros that choose not to use systemd, like Alpine, and I have no problem with that. Or did you mean something else?

                                      1. 8

                                        I mean, a common criticism is that SystemD is a project driven by business concerns that does not play well with the existing ecosystem. And it sounds to me like you are saying “Yes, systemd does not play well with the existing ecosystem because the people working on it have to make decisions on the basis of business concerns.”

                                        A primary reason that Linux is so open to experimentation is that the people working on it, at least in theory, are driven by the love of the craft and some level of principled thought about system design. The point of the UNIX philosophy (and yes, I know Linux does not hew very closely to the UNIX philosophy, but there are degrees) is that a system assembled from independent components speaking an accessible interop language encourages experimentation through composability, readability and extensibility.

                                        If Linux was designed the way that SystemD is designed, there would be a list of supported hardware and a paid certification program. Heck, it might even have more users that way! But I dare claim it would have fewer developers; at least, fewer unpaid developers. Its ecosystem would be vastly different, biased towards business rather than amateurs. It would be less welcoming to developers, even though it may be more welcoming to users. And I know that even now, most people working on Linux are paid to do so, but the ethos is amateur, and that’s a large part of what I like about it.

                                        1. 6

                                          It’s systemd, not SystemD.

                                          Also, yes, systemd is developed primarily by people who work for a business. So what? That’s the best-case scenario for FLOSS: run a business while developing it. Or would you prefer it was developed by hackers part-time outside of work, school, and other jobs?

                                          systemd does not play well with the existing ecosystem

                                          systemd plays fine with the existing ecosystem. It even understands traditional config formats like crontab and fstab. It’s a reliable way to run a Linux box for users, developers, and sysadmins. That’s really all we need to know to see why it’s successful. No conspiracy theories needed.

                                          1. 4

                                            That’s the best-case scenario of FLOSS–run a business while developing it. Or do you prefer it was developed by hackers part-time outside of work, school, and other jobs?

                                            Strong disagree. The best-case scenario for users is a non-profit organization such as Python Software Foundation or Zig Software Foundation.

                                            1. 2

                                              A non-profit org doesn’t guarantee anything in particular other than that the interests of its biggest funders will come first. If a powerful funder wants to control how a project is developed, they can just ‘acqui-hire’ its core developers. Look at Rust.

                                              1. 1

                                                Note that Rust is not a 501(c)(3) non profit organization. It is a 501(c)(6) business league and does not pay its core team members.

                                                I agree with you that non-profits are vulnerable to a high donation percentage from individual funders. However, non-profit organizations have a crucial guarantee that for-profit entities do not: the flow of income must legally be pointed at the org’s mission statement, with none skimmed off the top. The org exists for the users, not for the financial stakeholders. Furthermore, they are governed by a board, so when the founder retires, the org passes on to other board members rather than being sold and dying a painful corporate death at the expense of the users.

                                                1. 2

                                                  Nothing about this guarantees that the mission statement will remain unchanged, or that the software delivered will be fit for purpose, or that the rest of the board will be capable of maintaining it if the founder goes away. A for-profit business has one weird trick that really helps with all this: it’s in the business of selling that software, and if it’s no good, they go out of business i.e. lose their income source.

                                          2. 2

                                            A primary reason that Linux is so open to experimentation

                                            Given this thread, it doesn’t seem like this is true. Someone wants to experiment with this, and the entire community rejects it, even as an experiment.

                                            1. 1

                                              Not every experiment should be merged to the official project repos.

                                      2. 8

                                        I get what you’re saying but…

                                        …first of all, it’s not like musl is some shabby hobby project written by some hippie with way too much free time on their hands due to their questionable personal hygiene routine – it’s a pretty solid project, with a good maintenance history. Accepting patches that add support for an alternative piece of supporting infrastructure is not something that you’re ever going to be able to justify to non-technical managers, especially when some of them are quite heavily invested in the other piece of infrastructure, the one that systemd already supports ;). That’s the road to technical stagnation, not pragmatic choices. At this rate, something better than glibc will have zero chance of ever showing up on Linux – and ten years down the line we’ll all be wondering why them young kids don’t even care about this crap anymore.

                                        Also, more generally, copycatting large corporate practices when developing software isn’t really conducive to quality software unless you have large corporate money to throw at it. If you try to write system software with Microsoft’s attitude but a small fraction of Microsoft’s money, all you get is a cheap clone of something Microsoft might make. (Edit: to clarify – this always has to be said when Linux is involved, for some reason… – I don’t mean “something Microsoft might make” in a derogatory manner. A lot of Microsoft’s NT software is frickin’ amazing, and light years ahead of anything comparable we have in FOSS land).

                                        Second, this kind of “corporate gatekeeping” is one of the things that put a big nail in many Unix coffins back in the day. The idea that you can’t implement something that your users want because you’re unable to explain it to the people in charge is one of the things that made a lot of people move to systems that could do what they want – like Linux.

                                        It’s also kind of undermining the whole point of using an open source program in the first place. If I wanted my choice of hardware and systems software to depend on the goodwill of some executives, I’d much rather bet on Microsoft’s.

                                    1. 9

                                      This is one of the most civil political debates that I have ever read.

                                      In the entire thread, I saw only three comments that could be called offtopic or emotionally charged, and they were told to knock it off.

                                      If this is problematic to you, you need to broaden your perspective. With an eye on the rest of the internet, that PR thread is a font of civility. If one day I have a popular software project, I would be eternally grateful if all its pull requests were debated in this manner!

                                      If this discourages a newcomer, a word of advice to forestall disaster: echo 0.0.0.0 twitter.com | sudo tee -a /etc/hosts.

                                      1. 4

                                        If this is problematic to you, you need to broaden your perspective. With an eye on the rest of the internet, that PR thread is a [front] of civility.

                                        You may be misunderstanding. From what I can tell, the article is being marked as part of a flamewar. Whether it’s civil by Internet standards is secondary to Lobsters’s own guidelines.

                                        This is one of the most civil political debates that I have ever read.

                                        That’s great to hear, and I’m glad you’ve enjoyed your time reading the article. Nevertheless, I firmly hold that Lobsters should not serve as a site to link to or discuss such debates. Like the guidelines say, there are other sites for that.

                                      1. 3

                                        I’m not sure I follow the logic that there is short-termism caused by publish-or-perish and that the solution should be, effectively, for the existing members of the field to voluntarily perish to make space for new entrants. Even given that people would do this, I see no reason to believe that it would work - wouldn’t the newcomers just reproduce the exact same structure, given the same pressures?

                                        1. 3

                                          There is one study I am aware of which indicated that research areas tend to expand when “star” researchers die. The claimed reasoning is that those researchers, because of their influence both direct and indirect, limit the ability of research in areas they personally undervalue to get funded and published.

                                          That said, you’re right that this doesn’t resolve the problem of anointing “star” researchers who then bottle up opportunities in the field; it simply changes which subareas are valued.

                                        1. 1

                                          Magic on top of Fuzzy Finder:

                                          # bind ctrl-E to "edit fuzzy"
                                          function edit_fuzzy() {
                                            TARGET=$(fzf)
                                            if [ $? -eq 0 ]
                                            then
                                              echo -e "${PS1@P}edit $TARGET"
                                              history -s "edit $TARGET"
                                              edit "$TARGET"
                                              READLINE_LINE=""
                                            fi
                                          }
                                          
                                          bind -x '"\C-e":edit_fuzzy'
                                          

                                          Which binds ctrl-E to “select a file, then open it in ‘edit’” (an alias for kwrite). Adjust editor name for taste. The magic is that this also pretends that you typed in “edit filename” manually and pressed return, going so far as to inject that command into your history.

                                          1. 1

                                            This may be an odd question, but what is the point of this document? Why did somebody make this?

I don’t even know if it’s on-topic to mention Heaptrack! (https://github.com/KDE/heaptrack)

                                            1. 3

                                              ACM Computing Surveys provides surveys of academic research on various topics. It’s a great way to get a summary of current (academic, sometimes industry too) thinking on a particular area of computer science.

In this particular case, the benefit to me as someone working on two memory profilers is an overview of what people are trying to solve, a survey of the various approaches (it had never even occurred to me to think about source code diffs), and even just the fact that someone else has done the literature search for me.

                                            1. 2

                                              The iffy thing to me here is that ontologically, we claim that we’re going to believe X until infinity, which we later correct. If we accepted the probability that we would change our mind later, shouldn’t we have refused to claim that we’ll believe this idea in perpetuity? It seems to me like it’d be cleaner to have a table design that only records facts: we started believing Y on date so-and-so, and this belief invalidated X when it came into effect. In other words, the fact that one belief’s believed_from must equal the other’s believed_to implies to me that the database schema has redundancies.

                                              Though if we explicitly store the table ID of the belief that we invalidate, queries may get expensive, because most of the belief table will be “dead” beliefs from the past. From that perspective, believed_to is just an efficiency optimization/denormalization substituting for what we actually want to express, which is invalidates_belief_id - or even an entire belief_invalidates table to allow multi-invalidations. (A separate belief_no_longer_held table may also help.)

                                              However, invalidates_belief_id also has the advantage of allowing us to use an append-only immutable database. This has the nice property that our future knowledge is always a superset of our past knowledge.
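
To make the append-only variant concrete, here is a minimal sketch (D syntax; the field names are mine, not from the article):

struct Belief {
    long   id;                  // rows are only ever appended, never updated
    string statement;           // the fact we believe
    long   believedFrom;        // when we started believing it
    long   invalidatesBeliefId; // 0 if this belief supersedes nothing,
                                // otherwise the id of the belief it replaces
}

A belief is currently held exactly when no later row names it in invalidatesBeliefId, so believed_to never needs to be stored.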

                                              1. 4

                                                This is an important topic but this article was a struggle to understand and does not clearly and accurately propose a solution. Maybe I never understood lock files?

                                                NPM can already record those versions, in a lock file. But npm install of a new package does not use the information in that package’s lock file to decide the versions of dependencies: lock files are not transitive.

                                                Why in the name of everything that is holy are NPM lock files not transitive? What does a non-transitive lock file even mean? Am I misunderstanding the article or NPM - doesn’t a lock file permanently freeze the full dependency graph to fixed versions, or do I only get that with npm shrinkwrap?

                                                Anyone running modern production systems knows about testing followed by gradual or staged rollouts, in which changes to a running system are deployed gradually over a long period of time, to reduce the possibility of accidentally taking down everything at once.

Why even mention this? I was teased into maybe seeing a proposal for gradually rolling out package updates to consumers, or for automatically running tests, but neither was proposed. I think the Go versioning proposal is weak because it still leaves you prey to arbitrary version upgrades.

                                                NPM’s design choice is the exact opposite. The latest version of colors was promoted to use in all its dependents before any of them had a chance to test it and without any kind of gradual rollout. Users can disable this behavior today, by pinning the exact versions of all their dependencies. For example here is the fix to aws-cdk. That’s not a good answer, but at least it’s possible.

Why is this not a good answer? Later in the article the author argues people should use npm shrinkwrap… which is this answer.

Is this the crux of my misunderstanding? Isn’t a lock file equivalent to expressing specific version dependencies for all dependencies in the full dependency graph?

                                                Other language package managers should take note too. Marak has done all of us a huge favor by highlighting the problems most package managers create with their policy of automatic adoption of new dependencies without the opportunity for gradual rollout or any kind of testing whatsoever.

                                                Come on, let’s call a spade a spade here. You need to freeze your dependency graph with a lock file. If npm plays weird semantic games with “oh lock files don’t really lock dependencies” then just Docker the whole thing and wrap it in a lead canister. If you depend on an ecosystem that doesn’t honor freezing dependencies then freeze them yourself.

                                                1. 9

                                                  Am I misunderstanding the article or NPM - doesn’t a lock file permanently freeze the full dependency graph to fixed versions, or do I only get that with npm shrinkwrap?

                                                  The article. You are correct that your project’s package-lock.json freezes the entire dependency tree for your project. The paragraphs at the end of the article starting “NPM also has an npm shrinkwrap…” are somewhat confusingly written.

                                                  Say I set up a new project and I want to use library A, which has a dependency on library B.

When I run npm install --save A for the first time, npm will fetch the latest allowed versions (according to the version bounds in the dependencies field of package.json) of A and B. It’ll record in package-lock.json whatever versions of A and B it fetched.

                                                  Next time I run npm install or npm ci in my project, it’ll install exactly the same versions of A and B again that it installed last time. (assuming I haven’t changed dependencies in package.json)

What the article is complaining about is this: package A may contain its own package-lock.json. npm did not consult A’s package-lock.json to decide which version of B to pick when I ran npm install --save A. Only the package-lock.json at the root of the project (if there is one) was consulted. On initial project setup, I don’t have a package-lock.json yet at the top level, so I’m getting all fresh new versions of everything and I’m going to get the sabotaged version of colors since it’s the newest one.

                                                  The article proposes that, when you run npm install --save A, npm should check for a package-lock.json inside the package ‘A’ and, if one is present, use the same version of B that A’s package-lock.json has, because A was probably QA’d with that version of B.
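
In other words, roughly this (a sketch in D syntax, with made-up names and data shapes, not npm’s actual code):

// How a version of sub-dependency `dep` would be chosen when installing package A.
string pickVersion(string dep, string[string] rootLock,
                   string[string] aLock, string newestAllowed)
{
    if (auto v = dep in rootLock) return *v; // the project's own package-lock.json always wins
    if (auto v = dep in aLock)    return *v; // proposed: fall back to A's package-lock.json
    return newestAllowed;                    // today's behavior: newest version package.json allows
}

Today only the first and last branches exist; the article wants the middle one.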

                                                  1. 4

                                                    Isn’t this what package.json is for? Or is the problem that everyone uses overpermissive dependencies?

                                                    1. 3

                                                      No. package.json only specifies dependencies one level deep and only specifies names and version numbers. It’s pretty loose. The lock file has content hashes too and specifies the entire tree.

                                                      1. 3

                                                        But hang on, if you check the entire tree of every dependency, aren’t you pretty much always going to get conflicting information? Isn’t that the whole point of package.json that we specify looser dependencies so that we can, SemVer permitting, actually get two packages to agree on which version of a dependency they can both live with? It seems to me that when you give up on the SemVer notion of non-breaking differences, except for very trivial cases, you pretty much give up on versioning.

                                                        1. 2

                                                          Yes, there is a tradeoff here and I suspect that people who wrote npm did actually think about this very issue.

                                                          FWIW you can install mutually conflicting libraries in npm as sub dependencies. If A depends on C==1.0 and B depends on C==2.0, and you install both A and B, you get a node_modules tree like:

                                                          • node_modules/A
                                                          • node_modules/A/node_modules/C
                                                          • node_modules/B
                                                          • node_modules/B/node_modules/C

So over-constraining dependency versions doesn’t necessarily break everything, even though it’s often not what you want.

                                                    2. 2

                                                      Thank you!

                                                  1. 2

                                                    D language: Thread.sleep(100.seconds);

                                                    1. 2

Thread.sleep(var);

                                                      1. 2

                                                        Sure, but units attach to values (via types), not to arguments. For instance, we don’t printf("Hello World": char*) either.

I.e. you gather the unit at the point where you convert from a number to var: Duration var = 10.seconds;

                                                        C# sort of does this though: Print("Hello World", out buffer);
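
A small D sketch of the first point, showing the unit living in the value’s type rather than at the call site:

import core.thread : Thread;
import core.time : Duration, seconds;

void main()
{
    Duration timeout = 10.seconds; // the unit is captured here, in the type
    Thread.sleep(timeout);         // the call site no longer has to name it
}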

                                                    1. 1

                                                      Yeah DMD is crazy fast. It generates less performant code though, which is why nowadays the official DMD releases are built with LDC, so you get the best of both worlds.

                                                      Where I work, we use DMD for daily driving and LDC for release generation. Actually we go one step further and PGO tune DMD for our codebase. … Of course, then we eat up all that speed with copious use of templates, but that’s how it goes.

                                                      1. 4

                                                        Speculating: The thing about testing in production is that it is maximally resistant to excuses. You can, as a group, decide to do tests like this, and you can do the tests, and the tests cannot fail to happen - however, assuming you have buy-in from the top, any failures will still be the fault of the people or group who failed to implement the mitigation/error handling setup. As such, chaos testing is a way to bypass internal resistance to fault tolerance overhead/effort. (The same concept applies to pentesting.)

                                                        Specifically, this sentence:

                                                        But, presumably we should be testing this in development, when we’re writing the code to contact that service.

                                                        Note that the “we” doing the chaos test and the “we” who should be testing this in development may not be the same “we”!

                                                        1. 3

Yeah, and I feel that chaos engineering is in some ways symmetric to the same social friction-bypassing aspect of writing services at all. It’s a messy technique for a messy world. It’s not a particularly fast way to find bugs in distributed systems, and it can incur heavy reproduction costs (bisecting git commit logs for a big batch of commits under test takes a lot longer when you have to run the highly non-deterministic fault injection for a long enough time on each commit to gain confidence about whether the bug is present or not at that point). But it lets whoever is writing the bugs decouple themselves more from whoever is fixing them :P (often allowing social credit to accumulate with the bug producers rather than the bug fixers).

                                                        1. 4

                                                          Oh hey, it’s the folk who make Hotspot! Nice!

                                                          1. 2

                                                            @test, to appeal to the broad userbase of Bourne Shell users looking to learn CSS.

                                                            1. 1

                                                              I propose a hard to remember series of one character flags, -n for “not-not” which tests if the variable is not, not set; -p for “probably” to test if something is more than 50%; -z for “does z-index work?” (answer is always no); and so on.

                                                            1. 1

                                                              KDE’s runner applet also has a calculator built-in, though it’s pretty feature-sparse. (Alt-F2 -> 5 * 5. You can also write it in the start menu search field.)

                                                              1. 1

                                                                I do use it a lot. The windows calculator on the other hand annoys me. You’re always searching for that window.

                                                              1. 15

                                                                Damn, impressive sleuthing. I like that rather than “and then we did like 50 steps each of which is more inscrutable than the last”, in this one you can almost follow the process of discovery.

                                                                1. 13

I have never understood why KDE isn’t the default DE for any serious Linux distribution. It feels so much more professional than anything else.

                                                                  Every time I see it, it makes me want to run Linux on the desktop again.

                                                                  1. 11

                                                                    I suspect because:

                                                                    1. IIRC Gnome has a lot more funding/momentum
                                                                    2. Plasma suffers from a lot of papercuts

                                                                    Regarding the second reason: Plasma overall looks pretty nice, at least at first glance. Once you start using it, you’ll notice a lot of UI inconsistencies (misaligned UI elements, having to go through 15 layers of settings, unclear icons, applications using radically different styles, etc) and rather lackluster KDE first-party applications. Gnome takes a radically different approach, and having used both (and using Gnome currently), I prefer Gnome precisely because of its consistency.

                                                                    1. 14

There’s also a lot of politics involved. Most of the Linux desktop ecosystem is still driven by Red Hat and they employ a lot of FSF evangelists. GNOME had GNU in its name and was originally created because of the FSF’s objections to Qt (prior to its license change), and that led to Red Hat preferring it.

                                                                      1. 6

                                                                        Plus GNOME and all its core components are truly community FLOSS projects, whereas Qt is a corporate, for-profit project which the Qt company happens to also provide as open source (but where you’re seriously railroaded into buying their ridiculously expensive licenses if you try to do anything serious with it or need stable releases).

                                                                        1. 7

No one ever talks about Cinnamon on Mint but I really like it. It looks exactly like all the screenshots in the article. Some of the customisation is maybe a little less convenient, but I have always managed to get things looking exactly how I want them to, and I am hardly a Linux power user (recent Windows refugee). Given that it seems the majority of arguments for Plasma are that it is more user friendly and easier to customise, I would be interested to hear people’s opinions on Cinnamon vs Plasma. I had mobile Plasma on my PinePhone for a day or two but it was too glitchy and I ended up switching to Mobian. This is not a criticism of Plasma, rather an admission that I have not really used it and have no first-hand knowledge.

                                                                          1. 7

                                                                            I have not used either in anger but there’s also a C/C++ split with GTK vs Qt-based things. C is a truly horrible language for application development. Modern C++ is a mediocre language for application development. Both have some support for higher-level languages (GTK is used by Mono, for example, and GNOME also has Vala) but both are losing out to things like Electron that give you JavaScript / TypeScript environments and neither has anything like the developer base of iOS (Objective-C/Swift) or Android (Java/Kotlin).

                                                                            1. 4

                                                                              As an unrelated sidenote, C is also a decent binding language, which matters when you are trying to use one of those frameworks from a language that is not C/C++. I wish Qt had a well-maintained C interface.

                                                                              1. 8

                                                                                I don’t really agree there. C is an adequate binding language if you are writing something like an image decoder, where your interface is expressed as functions that take buffers. It’s pretty terrible for something with a rich interface that needs to pass complex types across the boundary, which is the case for GUI toolkits.

                                                                                For example, consider something like ICU’s UText interface, for exposing character storage representations for things like regex matching. It is a C interface that defines a structure that you must create with a bunch of callback functions defined as function pointers. One of the functions is required to set up a pointer in the struct to contain the next set of characters, either by copying from your internal representation into a static buffer in the structure or providing a pointer and setting the length to allow direct access to a contiguous run of characters in your internal representation. Automatically bridging this from a higher-level language is incredibly hard.

                                                                                Or consider any of the delegate interfaces in OpenStep, which in C would be a void* and a struct containing a load of function pointers. Bridging this with a type-safe language is probably possible to do automatically but it loses type safety at the interfaces.

                                                                                C interfaces don’t contain anything at the source level to describe memory ownership. If a function takes a char*, is that a pointer to a C string, or a pointer to a buffer whose length is specified elsewhere? Is the callee responsible for freeing it or the caller? With C++, smart pointers can convey this information and so binding generators can use it. Something like SWIG or Sol3 can get the ownership semantics right with no additional information.

                                                                                Objective-C is a much better language for transparent bridging. Python, Ruby, and even Rust can transparently consume Objective-C APIs because it provides a single memory ownership model (everything is reference counted) and rich introspection functionality.

                                                                                1. 2

Fair enough. I haven’t really been looking at Objective-C headers as a binding source. I agree that C’s interface is anemic. I was thinking more from an ABI perspective, i.e. C++ interfaces tend to be more reliant on inlining, or have weird things like exceptions, as well as being totally compiler-dependent. Note how, for instance, SWIG still generates a C interface with autogenerated glue. Also, the full ABI is defined in like 15 pages. So while it’s hard to make a high-level to high-level interface in C, you can manually compensate from the target language; with C++ you need a large amount of compiler support to even get started. Maybe Obj-C strikes a balance there, I haven’t really looked into it much. Can you call Obj-C from C? If not, it’s gonna be a hard sell to a project as a “secondary API” like llvm-c, because you don’t even get the larger group of C users.

                                                                                  1. 6

Also, the full ABI is defined in like 15 pages.

                                                                                    That’s a blessing and a curse. It’s also an exaggeration, the SysV x86-64 psABI is 68 pages. On x86-32 there are subtle differences in calling convention between Linux, FreeBSD, and macOS, for example, and Windows is completely different. Bitfields are implementation dependent and so you need to either avoid them or understand what the target compiler does. All of this adds up to embedding a lot of a C compiler in your other language, or just generating C and delegating to the C compiler.

                                                                                    Even ignoring all of that, the fact that the ABI is so small is a problem because it means that the ABI doesn’t fully specify everything. Yes, I can look at a C function definition and know from reading a 68-page doc how to lower the arguments for x86-64 but I don’t know anything about who owns the pointers. Subtyping relationships are not exposed.

To give a trivial example from POSIX, the connect function takes three arguments: int, const struct sockaddr *, and socklen_t. Nothing in this tells me:

                                                                                    • That the second argument is never actually a pointer to a sockaddr structure, it is a pointer to some other structure that starts with the same fields as the sockaddr.
                                                                                    • That the third argument must be the size of the real structure that I point to with the second argument.
                                                                                    • That the second parameter is not captured and I remain responsible for freeing it (you could assume this from const and you’d be right most of the time).
                                                                                    • That the first parameter is not an arbitrary integer, it must be a file descriptor (and for it to actually work, that file descriptor must be a socket).

                                                                                    I need to know all of these things to be able to bridge from another language. The C header tells me none of these.

                                                                                    Apple worked around a lot of these problems with CoreFoundation by adding annotations that basically expose the Objective-C object and ownership model into C. Both Microsoft and Apple worked around it for their core libraries by providing IDL files (in completely different formats) that describe their interfaces.

                                                                                    So while it’s hard to make a high-level to high-level interface in C, you can manually compensate from the target language; with C++ you need a large amount of compiler support to even get started

                                                                                    You do for C as well. Parsing C header files and extracting enough information to be able to reliably expose everything with anything less than a full C compiler is not going to work and every tool that I’ve seen that tries fails in exciting ways. But that isn’t enough.

                                                                                    In contrast, embedding something like clang’s libraries is sufficient for bridging a modern C++ or Objective-C codebase because all of the information that you need is present in the header files.

                                                                                    Can you call Obj-C from C?

                                                                                    Yes. Objective-C methods are invoked by calling objc_msgSend with the receiver as the first parameter and the selector as the second. The Objective-C runtime provides an API for looking up selectors from their name. Many years ago, I wrote a trivial libClang tool that took an Objective-C header and emitted a C header that exposed all of the methods as static inline functions. I can’t remember what I did with it but it was on the order of 100 lines of code, so rewriting it would be pretty trivial.

                                                                                    If not, it’s gonna be a hard sell to a project as a “secondary api” like llvm-c, because you don’t even get the larger group of C users.

                                                                                    There are fewer C programmers than C++ programmers these days. This is one of the problems that projects like Linux and FreeBSD have attracting new talent: the intersection between good programmers and people who choose C over C++ is rapidly shrinking and includes very few people under the age of 35.

                                                                                    LLVM has llvm-c for two reasons. The most important one is that it’s a stable ABI. LLVM does not have a policy of providing a stable ABI for any of the C++ classes. This is a design decision that is completely orthogonal to the language. There’s been discussion about making llvm-c a thin (machine-generated) wrapper around a stable C++ interface to core LLVM functionality. That’s probably the direction that the project will go eventually, once someone bothers to do the work.

                                                                                    1. 1

                                                                                      I’ve been discounting memory management because it can be foisted off onto the user. On the other hand something like register or memory passing or how x86-64 uses SSE regs for doubles cannot be done by the user unless you want to manually generate calling code in memory.

                                                                                      You do for C as well. Parsing C header files and extracting enough information to be able to reliably expose everything with anything less than a full C compiler is not going to work and every tool that I’ve seen that tries fails in exciting ways. But that isn’t enough.

                                                                                      Sure but there again you can foist things off onto the user. For instance, D only recently gained a proper C header frontend; until now it got along fine enough by just manually declaring extern(C) functions. I believe JNI and CFFI do the same. It’s annoying but it’s possible, which is more than can be said for many C++ bindings.
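
For a concrete (if trivial) sketch of what such a hand-written declaration looks like, using libc’s puts as a stand-in for a real library function:

// Hand-written binding: no header importer involved, just a signature
// transcribed from the C documentation.
extern(C) int puts(const(char)* s);

void main()
{
    puts("bound by hand");
}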

                                                                                      There are fewer C programmers than C++ programmers these days.

                                                                                      I meant C as a secondary API, ie. C++ as primary then C as auxiliary, as opposed to Objective-C as auxiliary.

                                                                                      Yes. Objective-C methods are invoked by calling objc_msgSend with the receiver as the first parameter and the selector as the second. The Objective-C runtime provides an API for looking up selectors from their name.

                                                                                      I don’t know the requirements for deploying with the ObjC runtime. Still, nice!

                                                                                      1. 2

                                                                                        I’ve been discounting memory management because it can be foisted off onto the user.

That’s true only if you’re bridging two languages with manual memory management, which is not the common case for interop. If you are exposing a library to a language with a GC, automatic reference counting, or ownership-based memory management, then you need to handle this. Or you end up with an interop layer that everyone hates (e.g. JNI).

                                                                                        Sure but there again you can foist things off onto the user. For instance, D only recently gained a proper C header frontend; until now it got along fine enough by just manually declaring extern(C) functions. I believe JNI and CFFI do the same. It’s annoying but it’s possible, which is more than can be said for many C++ bindings.

Which works for simple cases. For some counterexamples, C has _Complex types, which typically follow different rules for argument passing and returning to structures of the same layout (though they sometimes don’t, depending on the ABI). Most languages don’t adopt this stupidity and so you need to make sure that your custom C parser can express some C complex type. The same applies if you want to define bitfields in C structures in another language, or if the C structure that you’re exposing uses packed pragmas or attributes, uses _Alignas, and so on. There’s a phenomenal amount of complexity that you can punt on if you want to handle only trivial cases, but then you’re using a very restricted subset of C.

JNI doesn’t allow calling arbitrary C functions; it requires that you write C functions that implement native methods on a Java object. This scopes the problem such that the JVM needs to be able to handle calling only C functions that use Java types (8 to 64-bit signed integers or pointers) as arguments and return values. These can then call back into the JVM to access fields, call methods, allocate objects, and so on. If you want to return a C structure into Java then you must create a buffer to store it and an object that owns the buffer and exposes native methods for accessing the fields. It’s pretty easy to use JNI to expose Java classes into other languages that don’t run in the JVM, it’s much harder to use it to expose C libraries into Java (and that’s why everyone who uses it hates it).

                                                                                        I meant C as a secondary API, ie. C++ as primary then C as auxiliary, as opposed to Objective-C as auxiliary.

                                                                                        If you have a stable C++ API, then bridging C++ provides you more semantic information for your compat layer than a C wrapper around the stable C++ API would. Take a look at Sol3 for an example: it can expose C++ objects directly into Lua, with correct memory management, without any C wrappers. C++ libraries often conflate a C API with an ABI-stable API but this is not necessary.

                                                                                        I don’t know the requirements for deploying with the ObjC runtime. Still, nice!

                                                                                        The requirements for the runtime are pretty small but for it to be useful you want a decent implementation of at least the Foundation framework, which provides types like arrays, dictionaries, and strings. That’s a bit harder.

                                                                                        1. 2

                                                                                          I don’t know. I feel like you massively overvalue the importance of memory management and undervalue the importance of binding generation and calling convention compatibility. For instance, as far as I can tell sol3 requires manual binding of function pointers to create method calls that can be called from Lua. From where I’m standing, I don’t actually save anything effort-wise over a C binding here!

                                                                                          Fair enough, I didn’t know that about JNI. But that’s actually a good example of the notion that a binding language needs to have a good semantic match with its target. C has an adequate to poor semantic match on memory management and any sort of higher-kinded functions, but it’s decent on data structure expressiveness and very terse, and it’s very easy to get basic support working quick. C++ has mangling, a not just platform-dependent but compiler-dependent ABI with lots of details, headers that often use advanced C++ features (I’ve literally never seen a C API that uses _Complex - or bitfields) and still probably requires memory management glue.

                                                                                          Remember that the context here was Qt vs GTK! Getting GTK bound to any vaguely C-like language (let’s say any language with a libc binding) to the point where you can make calls is very easy - no matter what your memory management is. At most it makes it a bit awkward. Getting Qt bound is an epic odyssey.

                                                                                          1. 4

                                                                                            I feel like you massively overvalue the importance of memory management and undervalue the importance of binding generation and calling convention compatibility

                                                                                            I’m coming from the perspective of having written interop layers for a few languages at this point. Calling conventions are by far the easiest thing to do. In increasing levels of difficulty, the problems are:

                                                                                            • Exposing functions.
                                                                                            • Exposing plain data types.
                                                                                            • Bridging string and array / dictionary types.
                                                                                            • Correctly managing memory between two languages.
                                                                                            • Exposing general-purpose rich types (things with methods that you can call).
                                                                                            • Exposing rich types in both directions.

                                                                                            C only seems easy because C<->C interop requires a huge amount of boilerplate and so C programmers have a very low bar for what ‘transparent interoperability’ means.

                                                                                            For instance, as far as I can tell sol3 requires manual binding of function pointers to create method calls that can be called from Lua. From where I’m standing, I don’t actually save anything effort-wise over a C binding here!

                                                                                            It does, because it’s an EDSL in C++, but that code could be mechanically generated (and if reflection makes it into C++23 then it can be generated from within C++). If you pass a C++ shared_ptr<T> to Sol3, then it will correctly deallocate the underlying object once neither Lua nor C++ reference it any longer. This is incredibly important for any non-trivial binding.

                                                                                            Remember that the context here was Qt vs GTK! Getting GTK bound to any vaguely C-like language (let’s say any language with a libc binding) to the point where you can make calls is very easy - no matter what your memory management is.

                                                                                            Most languages are not ‘vaguely C-like’. If you want to use GTK from Python, or C#, how do you manage memory? Someone has had to write bindings that do the right thing for you. From my vague memory, it uses GObject, which uses C macros to define objects and to manage reference counts. This means that whoever manages the binding layer has had to interop with C macros (which are far harder to get to work than C++ templates - we have templates working for the Verona C++ interop layer but we’re punting on C macros for now and will support a limited subset of them later). This typically requires hand writing code at the boundary, which is something that you really want to avoid.

                                                                                            Last time I looked at Qt, they were in the process of moving from their own smart pointer types to C++11 ones but in both cases as long as your binding layers knows how to handle smart pointers (which really just means knowing how to instantiate C++ templates and call methods on them) then it’s trivial. If you’re a tool like SWIG, then you just spit out C++ code and make the C++ compiler handle all of this for you. If you’re something more like the Verona interop layer then you embed a C++ parser / AST generator / codegen path and make it do it for you.

                                                                                            1. 1

                                                                                              I’m coming from the perspective of having written interop layers for a few languages at this point.

                                                                                              Yeah … same? I think it’s just that I tend to be obsessed with variations on C-like languages, which colors my perception. You sound like you’re a lot more broad in your interests.

                                                                                              C only seems easy because C<->C interop requires a huge amount of boilerplate and so C programmers have a very low bar for what ‘transparent interoperability’ means.

                                                                                              I don’t agree. Memory management is annoying, sure, and having to look up string ownership for every call gets old quick, but for a stateful UI like GTK you can usually even just let it leak. I mean, how many widgets does a typical app need? Grab heaptrack, identify a few sites of concern and jam frees in there, and move on with your life. It’s possible to do it shittily easily, and I value that a lot.

                                                                                              If you’re a tool like SWIG, then you just spit out C++ code and make the C++ compiler handle all of this for you.

                                                                                              Hey, no shade on SWIG. SWIG is great, I love it.

                                                                                              From my vague memory, it uses GObject, which uses C macros to define objects and to manage reference counts. This means that whoever manages the binding layer has had to interop with C macros

                                                                                              Nah, it’s really only a few macros, and they do fairly straightforward things. Last time I did GTK, I just wrote those by hand. I tend to make binders that do 90% of the work - the easy parts - and not worry about the rest, because that conserves total effort. With C that works out because functions usually take structs by pointer, so if there’s a weird struct that doesn’t generate I can just define a close-enough facsimile and cast it, and if there’s a weird function I define it. With C++ everything is much more interdependent - if you have a bug in the vtable layout, there’s nothing you can do except fix it.

                                                                                              When I’ll eventually want Qt in my current language, I’ll probably turn to SWIG. It’s what I used in Jerboa. But it’s an extra step to kludge in, that I don’t particularly look forward to. If I just want a quick UI with minimal effort, GTK is the only game in town.

                                                                                              edit: For instance, I just kludged this together in half an hour: https://gist.github.com/FeepingCreature/6fa2d3b47c6eb30a55846e18f7e0e84c This is the first time I’ve tried touching the GTK headers on this language. It’s exposed issues in the compiler, it’s full of hacks, and until the last second I didn’t really expect it to work. But stupid as it is, it does work. I’m not gonna do Qt for comparison, because I want to go to bed soon, but I feel it’s not gonna be half an hour. Now to be fair, I already had a C header importer around, and that’s a lot of time sunk into that that C++ doesn’t get. But also, I would not have attempted to write even a kludgy C++ header parser, because I know that I would have given up halfway through. And most importantly - that kludgy C header importer was already practically useful after a good day of work.

                                                                                              edit: If there’s a spectrum of “if it’s worth doing, it’s worth doing properly” to “minimal distance of zero to cool thing”, I’m heavily on the right side. I think that might be the personality difference at play here? For me, a binding generator is purely a tool to get at a juicy library that I want to use. There’s no love of the craft lost there.

                                                                              2. 1

                                                                                So does plasma support Electron/Swift/Java/Kotlin? I know electron applications run on my desktop so I assume you mean directly as part of the desktop. If so that is pretty cool. Please forgive my ignorance, desktop UI frameworks are way outside my usual area of expertise.

                                                                              3. 2

I only minimally use KDE on the computers at my university’s CS department, but I’ve been using Cinnamon for almost four years now. I think that Plasma wins on customizability: there are just so many things that can be adjusted.

                                                                                Cinnamon on the other hand feels far more polished, with fewer options for customization. I personally use cinnamon with Arch, but when I occasionally use Mint, the full desktop with all of mint’s applications is very cohesive and well thought out, though not without flaws.

I sometimes think that Cinnamon isn’t evangelized as frequently because it’s well enough designed that it sort of fades into the background while you’re using it.

                                                                          2. 3

                                                                            I’ve used Cinnamon for years, but it inevitably breaks (or I break it). I recently looked into the alternatives again, and settled on KDE because it looked nice, it and Gnome are the two major players so things are more likely to Just Work, and it even had some functionality I wanted that Gnome didn’t. I hopped back to Cinnamon within the week, because yeah, the papercuts. Plasma looks beautiful in screenshots, and has a lot of nice-sounding features, but the moment you actually use it, you bang your face into something that shouldn’t be there. It reminded me of first trying KDE in the mid-2000s, and it was rather disappointing to feel they’ve been spinning in circles in a lot of ways. I guess that isn’t exactly uncommon for the Linux desktop though…

                                                                            1. 3

                                                                              I agree with your assessment of Plasma and GNOME (Shell). Plasma mostly looks fine, but every single time I use it–without fail–I find some buggy behavior almost immediately, and it’s always worse than just having misaligned labels on some UI elements, too. It’s more like I’ll check a setting checkbox and then go back and it’s unchecked, or I’ll try to put a panel on one or another edge of the screen and it’ll cause the main menu to open on the opposite edge like it looped around, or any other number of things that just don’t actually work right. Even after they caved on allowing a single-key desktop shortcut (i.e., using the Super key to open the main menu), it didn’t work right when I would plug/unplug my laptop from my desk monitors because of some weirdness around the lifecycle of the panels and the main menu button; I’d only be able to have the Super key work as a shortcut if it was plugged in or if it was not, but not both. That one was a little while ago, so maybe it’s better now.

                                                                              Ironically, Plasma seems to be all about “configuration” and having 10,000 knobs to tweak, but the only way it actually works reasonably well for me is if you don’t touch anything and use it exactly how the devs are dog-fooding it.

                                                                              The GNOME guys had the right idea when it came to stripping options, IMO. It’s an unpopular opinion in some corners, but I think it’s just smart to admit when you don’t have the resources to maintain a high bar of quality AND configurability. You have to pick one, and I think GNOME picked the right one.

                                                                            2. 5

I have never understood why KDE isn’t the default DE for any serious Linux distribution.

                                                                              Me neither, but I’m glad to hear it is the default desktop experience on the recently released Steam Deck.

                                                                              1. 3

                                                                                Do SUSE/OpenSUSE not count as serious Linux distributions anymore?

                                                                                It’s also the default for Manjaro as shipped by Pine64. (I think Manjaro overall has several variants… the one Pine64 ships is KDE-based.)

                                                                                Garuda is also a serious Linux distribution, and KDE is their flagship.

                                                                                1. 1

I tried to use Plasma multiple times on Arch Linux, but every time I tried, it turned out to be too unstable. The most annoying bug I remember was that KRunner often crashed after entering some letters, taking down the whole desktop session with it. In the end I stuck with Gnome because it was stable and looked consistent. I do like the concept of Plasma, but I will avoid it on any machine I do serious work with.