Threads for ethoh

  1. 1

    YMODEM (with 1K blocks, CRC and filename/size in packet 0) is a huge improvement.
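
    For reference, a minimal sketch of what packet 0 looks like on the wire, assuming the classic Forsberg conventions (SOH header for a 128-byte data area, packet number plus its complement, NUL-terminated filename followed by the decimal size, CRC-16/XMODEM at the end); the function names are mine, for illustration only:

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* CRC-16/XMODEM: poly 0x1021, init 0, MSB-first, no final XOR. */
      static uint16_t crc16_xmodem(const uint8_t *p, size_t n)
      {
          uint16_t crc = 0;
          while (n--) {
              crc ^= (uint16_t)(*p++) << 8;
              for (int i = 0; i < 8; i++)
                  crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                       : (uint16_t)(crc << 1);
          }
          return crc;
      }

      /* Build packet 0: "filename\0size" in the 128-byte data area, NUL-padded. */
      static size_t ymodem_packet0(uint8_t out[133], const char *name, long size)
      {
          uint8_t data[128] = {0};
          size_t n = strlen(name);
          if (n > 100)
              return 0;                      /* keep the sketch simple */
          memcpy(data, name, n);
          snprintf((char *)data + n + 1, sizeof data - n - 1, "%ld", size);

          out[0] = 0x01;                     /* SOH: 128-byte packet (STX 0x02 = 1K) */
          out[1] = 0x00;                     /* packet number 0 */
          out[2] = 0xFF;                     /* its one's complement */
          memcpy(out + 3, data, 128);
          uint16_t crc = crc16_xmodem(out + 3, 128);
          out[131] = (uint8_t)(crc >> 8);
          out[132] = (uint8_t)(crc & 0xFF);
          return 133;
      }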

    1. 1

      I take it they never actually released the source code?

      Sigh.

      1. 3

        Doesn’t look like it. I have this in a CuttleCart in my real GTE Sylvania Inty and it actually works. Would have been fun if they supported the Intellivoice, too.

      1. 3

        The RPi 400 was still in stock recently — it’s a slightly-overclocked RPi 4 built into a keyboard. I bought one from SparkFun in April.

        1. 4

          The 400 is worth it IMHO, relative to the plain RPi 4. The whole keyboard acts as a passive heatsink, so it outperforms an RPi 4 with fancy cooling solutions, and it is cheaper than an RPi 4 once you add the cost of those cooling solutions.

          1. 3

            I hadn’t heard this before and had been looking into cooling cases for an rpi4. So I had a quick search and found this article; really interesting that, yes, the passively cooled RPi 400 stays cooler than the RPi 4 in an active Argon case. Awesome!

            https://tutorial.cytron.io/2020/11/02/raspberry-pi-400-thermal-performance/

            1. 1

              apart from the looks / space required, that is actually a good argument for using it as a “home server”

            2. 3

              Plus, where else are you going to find a keyboard with a raspberry key?

              1. 1

                Here’s one that doesn’t have a pi built-in: https://www.raspberrypi.com/products/raspberry-pi-keyboard-and-hub/

            3. 1

              Microcenter has those in stock locally here right now, too.

            1. 10

              I hope the author gets the help they need, but I don’t really see how the blame for their psychological issues should be laid at the feet of their most recent employer.

              1. 50

                In my career I’ve seen managers cry multiple times, and this is one of the places that happened. A manager should never have to ask whether they’re a coward, but that happened here.

                I dunno, doesn’t sound like they were the only person damaged by the experience.

                Eventually my physicians put me on forced medical leave, and they strongly encouraged me to quit…

                Seems pretty significant when medical professionals are telling you the cure for your issues is “quit this job”?

                1. 16

                  Seems pretty significant when medical professionals are telling you the cure for your issues is “quit this job”?

                  A number of years ago I developed some neurological problems, and stress made it worse. I was told by two different doctors to change or quit my job. I eventually did, and it helped, but the job itself was not the root cause, nor was leaving the sole cure.

                  I absolutely cannot speak for OP’s situation, but I just want to point out that a doctor advising you to rethink your career doesn’t necessarily imply that the career is at fault. Though, in this case, it seems like it is.

                  1. 4

                    It doesn’t seem like the OP’s doctors told them to change careers though, just quit that job.

                    1. 3

                      To clarify, I’m using “career change” in a general sense. I would include quitting a job as a career change, as well as leaving one job for another in the same industry/domain. I’m not using it in the “leave software altogether” sense.

                2. 24

                  I’m trusting the author’s causal assessment here, but employers (especially large businesses with the resources required) can be huge sources of stress and prevent employees from having the time or energy needed to seek treatment for their own needs, so they can both cause issues and worsen existing ones.

                  It’s not uncommon, for example, for businesses to encourage unpaid out-of-hours work for salaried employees by building a culture that emphasizes personal accountability for project success; this not only increases stress and reduces free time that could otherwise be used to relieve work-related stress, it teaches employees to blame themselves for what could just as easily be systemic failures. Even if an employee resists the social pressure to put in extra hours in such an environment, they’ll still be penalized with (real or imagined) blame from their peers, blame from themselves for “not trying hard enough”, and likely less job security or fewer benefits.

                  In particular, the business’s failure to support effective project management, manage workloads, or address problems repeatedly and clearly brought to its attention is relevant here. These kinds of failures typically fuel burnout. The author doesn’t go into enough detail for an outside observer to make a judgment call one way or the other, but if you trust the author’s account of reality, then it seems reasonable to blame the employer for, at the least, negligently fueling these problems through gross mismanagement.

                  Arguably off-topic, but I think it might squeak by on the grounds that it briefly ties the psychological harm to the quality of a technical standard resulting from the mismanaged business process.

                  1. 3

                    a culture that emphasizes personal accountability for project success; this not only increases stress and reduces free time that could otherwise be used to relieve work-related stress, it teaches employees to blame themselves for what could just as easily be systemic failures.

                    This is such a common thing. An executive or manager punts on actually organizing the work, whether from incompetence or laziness, and then tries to make the individuals in the system responsible for the failures that occur. It’s hardly new. Deming describes basically this in ‘The New Economics’ (look up the ‘red bead game’).

                    More cynically, is WebAssembly actually in Google’s interests? It doesn’t add revenue to Google Cloud. It’s going to make their data collection harder (provide Google analytics libraries for how many languages?). It was clearly a thing that was gaining momentum, so if they were to damage it, they would need to make sure they had a seat at the table and then make sure that the seat was used as ineffectually and disruptively as possible.

                    1. 9

                      More cynically, is WebAssembly actually in Google’s interests?

                      I think historically the answer would have been yes. Google has at various points been somewhat hamstrung by shipping projects with slow front-end JS in them, and responded by trying to make browsers themselves faster, e.g. creating V8 and financially contributing to Mozilla.

                      I couldn’t say if Google now has any incentive to not make JS go fast. I’m not aware of one. I suspect still the opposite. I think they’re also pushing mobile web apps as a way to inconvenience Apple; I think Google currently want people to write portable software using web tech instead of being tempted to write native apps for iOS only.

                      That said, what’s good for the company is not the principal factor motivating policy decisions. What’s good for specific senior managers inside Google is. Otherwise you wouldn’t see all these damn self-combusting, promo-cycle-driven chat apps from Google. A company is not a monolith.

                      ‘The New Economics’

                      I have this book and will have to re-read at least this bit tomorrow. I have slightly mixed feelings about it, mostly about the writing style.

                      1. 1

                        Making JS fast is one thing. Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

                        Your point about the senior managers’ interests driving what’s done is on point, though. Google and Facebook especially are weird because ads fund the company, and the rest is all some kind of loss leader floating around divorced from revenue.

                        The only thing I’ll comment about Deming is that the chapter on intrinsic vs extrinsic motivation should be ignored, as that’s entirely an artifact despite its popularity. The rest of the book has held up pretty well.

                        1. 10

                          Making JS fast is one thing. Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

                          Google doesn’t need to maintain their analytics libraries in many other languages, only to expose APIs callable from those languages. All WebAssembly languages can call / be called by JavaScript.

                          More generally, Google has been the biggest proponent of web apps instead of web services. Tim Berners-Lee’s vision for the web was that you’d have services that provided data with rich semantic markup. These could be rendered as web pages but could equally plug into other clients. The problem with this approach is that a client that can parse the structure of the data can choose to render it in a way that simply ignores adverts. If all of your ads are in an <advert provider="google"> block then an ad blocker is a trivial browser extension, as is something that displays ads but restricts them to plain text. Google’s web app push has been a massive effort to convince everyone to obfuscate the contents of their web pages. This has two key advantages for Google:

                          • Writing an ad blocker is hard if ads and contents are both generated from a Turing-complete language using the same output mechanisms.
                          • Parsing such pages for indexing requires more resources (you can’t just parse the semantic markup, you must run the interpreter / JIT in your crawler, which requires orders of magnitude more hardware than simply parsing some semantic markup). This significantly increases the barrier to entry for new search engines, protecting Google’s core user-data-harvesting tool.

                          WebAssembly fits very well into Google’s vision for the web.

                          1. 2

                            I used to work for a price-comparison site, back when those were actual startups. We had one legacy price information page that was a Java applet (remember those?). Supposedly the founders were worried about screen scrapers, so they wanted the entire site rendered with applets to deter them.

                            1. 1

                              This makes more sense than my initial thoughts. Thanks.

                            2. 2

                              Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?

                              This is something I should have stated explicitly but didn’t think to: I don’t think wasm is actually going to be the future of non-JS languages in the browser. I think that for the next couple decades at least, wasm is going to be used for compute kernels (written in other langs like C++ and Rust) that get called from JS.

                              I’m taking a bet here that targeting wasm from langs with substantial runtimes will remain unattractive indefinitely due to download weight and parsing time.

                              about Deming

                              I honestly think many of the points in that book are great but hoo boy the writing style.

                      2. 1

                        That is exactly what I thought while reading this. I understand that to a lot of people, WebAssembly is very important, and they have a lot of emotion invested in its success. But to the author’s employer, it might not be as important, as it might not directly generate revenue. The author forgets that to the vast, vast majority of people on this earth, having the opportunity to work on such a technology at a company like Google is an unparalleled privilege. Most people on this earth do not have the opportunity to quit their job just because a project is difficult, or because meetings run long or it is hard to find consensus. Managing projects well is incredibly hard. But I am sure that the author was not living on minimum wage, so there surely was compensation for the efforts.

                        It is sad to hear that the author has medical issues, and I hope those get sorted out. And stressful jobs do exacerbate those kinds of issues. But that is not a good reason for finger pointing. Maybe the position just was not right for the author; maybe there are more exciting projects waiting in the future. I certainly hope so. But it is important not to blame one’s issues on others; that is not a good attitude in life.

                        1. 25

                          Using the excuse that, because others are less fortunate, it’s not worth fighting to make something better is also not a good attitude in life.

                          Reading between the lines, it feels to me like there was a lot that the author left unsaid, and that’s fine. It takes courage to share a personal story about mental wellbeing, and an itemized list of all the wrongs that took place is not necessary to get the point the author was trying to make across.

                          My point is that I’d be cautious about making assumptions about the author’s experiences as they didn’t exactly give a lot of detail here.

                          1. 3

                            Using the excuse that, because others are less fortunate, it’s not worth fighting to make something better is also not a good attitude in life.

                            This is true. It is worth fighting to make things better.

                            Reading between the lines, it feels to me like there was a lot that the author left unsaid, and that’s fine. It takes courage to share a personal story about mental wellbeing, and an itemized list of all the wrongs that took place is not necessary to get the point the author was trying to make across.

                            There are a lot of things that go into mental wellbeing. Some things you can control, some things are genetic. I don’t know what the author left out, but I have not yet seen a study showing that stressful office jobs give people brain damage. There might be things the author has not explained, but at the same time that is a very extreme claim. In fact, if that were true, I am sure that the author should receive a lot in compensation.

                            My point is that I’d be cautious about making assumptions about the author’s experiences as they didn’t exactly give a lot of detail here.

                            I agree with you, but I also think that if someone makes a very bold claim about an employer, especially about personal injury, then these claims should be substantiated. There is a very big difference between “working there was hard, I quit” and “the employer acted recklessly and caused me personal injury”. And I don’t really know which one the author is saying, because the description could be interpreted as it just being a difficult project to see through.

                            1. 8

                              In fact, if that were true, I am sure that the author should receive a lot in compensation.

                              If you think about it for a few seconds, you can see how this could easily not happen. The OP itself says that they don’t have documented evidence from the time because of all the issues they were going through. And it’s easy to see why: if your mental health is damaged and your brain is not working right, would you be mindful enough to take detailed notes of every incident and keep a trail of evidence for later use in compensation claims? Or are you saying that compensation would be given out no questions asked?

                              1. 3

                                All I’m saying is, there is a very large difference between saying “this job was very stressful, I had trouble sleeping and it negatively affected my concentration and memory” and saying “this job gave me brain damage”. Brain damage is relatively well-defined:

                                The basic definition of brain damage is an injury to the brain caused by various conditions such as head trauma, inadequate oxygen supply, infections, or intracranial hemorrhage. This damage may be associated with a behavioral or functional abnormality.

                                Additionally, there are ways to test for this; a neurologist can make that determination. I’m not a neurologist. But it would be the first time I’ve heard of brain damage being caused by psychosomatic issues. I believe that the author may have used this term in error. That’s why I said what I said: if you, or anyone, has brain damage as a result of your occupation, that is definitely grounds for compensation. And not a small compensation either, as brain damage is no joke. This is a very different category from mere psychological stress from working on an apparently mismanaged project.

                                1. 5

                                  Via https://www.webmd.com/brain/brain-damage-symptoms-causes-treatments

                                  Brain damage is an injury that causes the destruction or deterioration of brain cells.

                                  Anxiety, stress, lack of sleep, and other factors can potentially do that. So I don’t see any incorrect use of the phrase ‘brain damage’ here. And anyway, you missed the point. Saying ‘This patient has brain damage’ is different from saying ‘Working in the WebAssembly team at Google caused this patient’s brain damage’. When you talk about causation and claims of damage and compensation, people tend to demand documentary evidence.

                                  I agree brain damage is no joke, but if you look at society it’s very common for certain types of relatively-invisible mental illnesses to be downplayed and treated very lightly, almost as a joke. Especially by people and corporations who would suddenly have to answer for causing these injuries.

                                  1. 4

                                    Anxiety, stress, lack of sleep and other factors cannot, ever, possibly, cause brain damage. I think you have not completely read that article. It states – as does the definition that I linked:

                                    All traumatic brain injuries are head injuries. But head injury is not necessarily brain injury. There are two types of brain injury: traumatic brain injury and acquired brain injury. Both disrupt the brain’s normal functioning.

                                    • Traumatic Brain Injury (TBI) is caused by an external force – such as a blow to the head – that causes the brain to move inside the skull or damages the skull. This in turn damages the brain.
                                    • Acquired Brain Injury (ABI) occurs at the cellular level. It is most often associated with pressure on the brain. This could come from a tumor. Or it could result from neurological illness, as in the case of a stroke.

                                    There is no kind of brain injury that is caused by lack of sleep or stress. That is not to say that these things are not also damaging to one’s body and well-being.

                                    Mental illnesses can be very devastating and stressful on the body. But you will not get a brain injury from a mental illness, unless it makes you physically impact your brain (causing traumatic brain injury), ingest something toxic, or have a stroke. It is important to be very careful with language and not confuse terms. The term “brain damage” is colloquially often used to describe things that are most definitely not brain damage, like “reading this gave me brain damage”. I hope you understand what I’m trying to state here. Again, the author has possibly misused the term “brain damage”, or there is some physical trauma that happened that the author has not mentioned in the article.

                                    I hope you understand what I am trying to say here!

                                    1. 9

                                      Anxiety and stress raise adrenaline levels, which in turn cause short- and long-term changes in brain chemistry. It sounds like you’ve never been burnt out; don’t judge others so harshly.

                                      1. 3

                                        Anxiety and stress are definitely not healthy for a brain. They accelerate aging processes, which is damaging. But brain damage in a medical context refers to large-scale cell death caused by genetics, trauma, stroke or tumors.

                                      2. 8

                                        There seems to be a weird definitional slide here from “brain damage” to “traumatic brain injury.” I think we are all agreed that her job did not give her traumatic brain injury, and this is not claimed. But your claim that stress and sleep deprivation cannot cause (acquired) brain injury is wrong. In fact, you will find counterexamples by just googling “sleep deprivation brain damage”.

                                        “Mental illnesses can be … stressful on the body.” The brain is part of the body!

                                        1. 1

                                          I think you – and most of the other people that have responded to my comment – have not quite understood what I’m saying. The argument here is about the terms being used.

                                          Brain Damage

                                          Brain damage, as defined here, is damage caused to the brain by trauma, tumors, genetics or oxygen loss, such as during a stroke. This can lead to large chunks of your brain dying off, meaning you can lose entire brain regions and potentially permanently lose some abilities (facial recognition, speech, etc.).

                                          Sleep Deprivation

                                          See Fundamental Neuroscience, page 961:

                                          The crucial role of sleep is illustrated by studies showing that prolonged sleep deprivation results in the disruption of metabolic processes and eventually death.

                                          When you are forcibly sleep deprived for a long time, such as when you are being tortured, your body can lose the ability to use nutrients, and finally you can die. You need to not sleep at all for weeks for this to happen; generally this is not something that happens to people voluntarily, especially not in western countries.

                                          Stress

                                          The cells in your brain only have a finite lifespan. At some point, they die and new ones take their place (apoptosis). Chronic stress and sleep deprivation can speed up this process, accelerating aging.

                                          Crucially, this is not the same as an entire chunk of your brain dying off because of a stroke. This is a very different process: it is not localized, it doesn’t cause massive cell death, and it is more of a slow, gradual process.

                                          Summary

                                          “Mental illnesses can be … stressful on the body.” The brain is part of the body!

                                          Yes, for sure. It is just that the term “brain damage” is usually used for a very specific kind of pattern, and not for the kind of chronic, low-level damage done by stress and such. A doctor will not diagnose you with brain damage after you’ve had a stressful interaction with your coworker. You will be diagnosed with brain damage in the ICU after someone dropped a hammer on your head. Do you get what I’m trying to say?

                                          1. 4

                                            I get what you are trying to say, I think you are simply mistaken. If your job impairs your cognitive abilities, then it has given you brain damage. Your brain, is damaged. You have been damaged in your brain. The cells and structures in your brain have taken damage. You keep trying to construct this exhaustive list of “things that are brain damage”, and then (in another comment) saying that this is about them not feeling appreciated and valued or sort of vaguely feeling bad, when what they are saying is that working at this job impaired their ability to form thoughts. That is a brain damage thing! The brain is an organ for forming thoughts. If the brain can’t thoughts so good no more, then it has been damaged.

                                            The big picture here is that a stressful job damaged this person’s health. Specifically, their brain’s.

                                            1. 3

                                              I understand what you are trying to say, but I think you are simply mistaken. We (as a society) have definitions for the terms we use. See https://en.wikipedia.org/wiki/Brain_damage:

                                              Neurotrauma, brain damage or brain injury (BI) is the destruction or degeneration of brain cells. Brain injuries occur due to a wide range of internal and external factors. In general, brain damage refers to significant, undiscriminating trauma-induced damage.

                                              This is not “significant, undiscriminating trauma-induced damage” (for context, trauma here refers to physical trauma, such as an impact to the head, not psychological trauma). What the author describes does not line up with any of the Causes of Brain Damage. It is simply not the right term.

                                              Yes, the author has a brain, and there is self-reported “damage” to it. But just because someone is a man and feels like he polices the neighborhood does not make him a “policeman”. Just because I feel like my brain doesn’t work right after a traumatic job experience does not mean I have brain damage™.

                                              1. 1

                                                The Wikipedia header is kind of odd. The next sentence after “in general, brain damage is trauma induced” lists non-trauma-induced categories of brain damage. So I don’t know how strong that “in general” is meant to be. At any rate, “in general” is not at odds with the use of the term for non-trauma-induced stress/sleep-deprivation damage.

                                                At any rate, if you click through to Acquired Brain Injury, it says “These impairments result from either traumatic brain injury (e.g. …) or nontraumatic injury … (e.g. listing a bunch of things that are not traumatic.)”

                                                Anyway, the Causes of Brain Damage list is clearly not written to be exhaustive. “any number of conditions, including” etc.

                                        2. 2

                                          There is some evidence that lack of sleep may kill brain cells: https://www.bbc.com/news/health-26630647

                                          It’s also possible to suffer from mini-strokes due to the factors discussed above.

                                          In any case, I feel like you’re missing the forest for the trees. Sure, it’s important to be correct with wording. But is that more important than the bigger picture here, that a stressful job damaged this person’s health?

                                          1. 2

                                            the bigger picture here, that a stressful job damaged this person’s health

                                            Yes, that is true, and it is a shame. I really wish that the process around WASM were less hostile, and that this person had not been negatively impacted, even if stressful and hard projects are an unfortunate reality for many people.

                                            I feel like you’re missing the forest for the trees.

                                            I think that you might be missing the forest for the trees: I’m not saying that this person was not negatively impacted, I am merely stating that it is (probably, unless there is evidence otherwise) wrong to characterize this impact as “brain damage”, because from a medical standpoint, that term has a narrower definition that damage due to stress does not fulfill.

                                  2. 4

                                    Hello, you might enjoy this study.

                                    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4561403/

                                    I looked through a lot of studies to try and find a review that was both broad and to the point.

                                    Now, you are definitely mixing a lot of terms here… but I hope that if you read the research, you can be convinced, at the very least, that stress hurts brains (and I hope that reading the article and getting caught in this comment storm doesn’t hurt yours).

                                    1. 2

                                      Sleep Deprivation and Oxidative Stress in Animal Models: A Systematic Review tells us that sleep deprivation can be shown to increase oxidative stress:

                                      Current experimental evidence suggests that sleep deprivation promotes oxidative stress. Furthermore, most of this experimental evidence was obtained from different animal species, mainly rats and mice, using diverse sleep deprivation methods.

                                      Although https://pubmed.ncbi.nlm.nih.gov/14998234/ disagrees with this. Furthermore, it is known that oxidative stress promotes apoptosis; see Oxidative stress and apoptosis:

                                      Recent studies have demonstrated that reactive oxygen species (ROS) and the resulting oxidative stress play a pivotal role in apoptosis. Antioxidants and thiol reductants, such as N-acetylcysteine, and overexpression of manganese superoxide (MnSOD) can block or delay apoptosis.

                                      The article that you linked, Stress effects on the hippocampus: a critical review, mentions that stress has an impact on the development of the brain and on its workings:

                                      Uncontrollable stress has been recognized to influence the hippocampus at various levels of analysis. Behaviorally, human and animal studies have found that stress generally impairs various hippocampal-dependent memory tasks. Neurally, animal studies have revealed that stress alters ensuing synaptic plasticity and firing properties of hippocampal neurons. Structurally, human and animal studies have shown that stress changes neuronal morphology, suppresses neuronal proliferation, and reduces hippocampal volume

                                      I do not disagree with this. I think that anyone would be able to agree that stress is bad for the brain, possibly by increasing apoptosis (accelerating ageing) or decreasing the availability of nutrients. My only argument is that the term brain damage is quite narrowly defined (for example here) as (large-scale) damage to the brain caused by genetics, trauma, oxygen starvation or a tumor, and it falls into one of two categories: traumatic brain injuries and acquired brain injuries. If you search for “brain damage” on PubMed, you will find the term being used in that sense.

                                      You will not find studies or medical diagnoses of “brain damage due to stress”. I hope that you can agree that using the term brain damage in a context such as the author’s, without evidence of traumatic injury or a stroke, is wrong. This does not take away the fact that the author has allegedly experienced a lot of stress at their previous employer, one of the largest and highest-paying tech companies, and that this experience has caused the author personal issues.

                                      On an unrelated note: what is extremely fascinating to me is that some chemicals, such as methamphetamine (at low concentrations) or minocycline, are neuroprotective, being able to lessen brain damage, for example due to stroke. But obviously, at larger concentrations the opposite is the case.

                                      1. 1

                                        How about this one then? https://www.sciencedirect.com/science/article/abs/pii/S0197458003000484

                                        We can keep going; it is not difficult to find these… You’re splitting a hair which should not be split.

                                        What’s so wrong about saying a bad work environment can cause brain damage?

                                        1. 1

                                          You’re splitting a hair which should not be split.

                                          There is nothing more fun than a civil debate. I would argue that any hair deserves being split. Worst case, you learn something new, or form a new opinion.

                                          What’s so wrong about saying a bad work environment can cause brain damage?

                                          Nothing is wrong with that, if the work environment involves heavy things, poisonous things, or the like. This is why OSHA compliance is so essential in protecting people’s livelihoods. I just firmly believe, and I think that the literature agrees with me on this, that “brain damage” as a medical term refers to large-scale cell death due to trauma or stroke, and not chronic low-level damage caused by stress. The language we choose to use is extremely important; it is the only facility we have to exchange information. Language is not useful if it is imprecise or even wrong.

                                          How about this one then?

                                          Let’s take a look at what we’ve got here. I’m only taking a look at the abstract, for now.

                                          Stress is a risk factor for a variety of illnesses, involving the same hormones that ensure survival during a period of stress. Although there is a considerable ambiguity in the definition of stress, a useful operational definition is: “anything that induces increased secretion of glucocorticoids”.

                                          Right, stress causes elevated levels of glucocorticoids, such as cortisol.

                                          The brain is a major target for glucocorticoids. Whereas the precise mechanism of glucocorticoid-induced brain damage is not yet understood, treatment strategies aimed at regulating abnormal levels of glucocorticoids, are worth examining.

                                          Glucocorticoids are useful in regulating processes in the body, but they can also do damage. I had never heard of the term glucocorticoid-induced brain damage, and searching for it in the literature only yields this exact article, so I considered this a dead end. However, in doing some more research, I did find two articles that somewhat support your hypothesis:

                                          In Effects of brain activity, morning salivary cortisol, and emotion regulation on cognitive impairment in elderly people, it is mentioned that high cortisol levels are associated with hippocampus damage, supporting your hypothesis, but it only refers to elderly patients with Mild Cognitive Impairment (MCI):

                                          Cognitive impairment is a normal process of aging. The most common type of cognitive impairment among the elderly population is mild cognitive impairment (MCI), which is the intermediate stage between normal brain function and full dementia.[1] MCI and dementia are related to the hippocampus region of the brain and have been associated with elevated cortisol levels.[2]

                                          Cortisol regulates metabolism, blood glucose levels, immune responses, anti-inflammatory actions, blood pressure, and emotion regulation. Cortisol is a glucocorticoid hormone that is synthesized and secreted by the cortex of adrenal glands. The hypothalamus releases a corticotrophin-releasing hormone and arginine vasopressin into hypothalamic-pituitary portal capillaries, which stimulates adrenocorticotropic hormone secretion, thus regulating the production of cortisol. Basal cortisol elevation causes damage to the hippocampus and impairs hippocampus-dependent learning and memory. Chronic high cortisol causes functional atrophy of the hypothalamic-pituitary-adrenal axis (HPA), the hippocampus, the amygdala, and the frontal lobe in the brain.

                                          Additionally, Effects of stress hormones on the brain and cognition: Evidence from normal to pathological aging mentions that chronic stress is a contributor to memory performance decline.

                                          We might be able to find a few mentions of brain damage outside of the typical context (as caused by traumatic injury, stroke, etc.) in the literature, but at least we can agree that the term brain damage is quite unusual in the context of stress, can we not? Out of the 188,764 articles on “brain damage” known to PubMed, only 18,981 mention “stress”, and of those, almost all refer to “oxidative stress” (such as that experienced by cells during a stroke). I have yet to find a single study or article that directly states brain damage is a result of chronic stress, in the way that there are hundreds of thousands of studies showing brain damage from traumatic injuries to the brain.

                                          1. 2

                                            Well, if anybody asks me I will tell them that too much stress at work causes brain damage… and now I can even point to some exact papers!

                                            I agree that it’s a little hyperbolic, but it’s not that hyperbolic. If we were talking about drug use everyone would kind of nod and say, ‘yeah, brain damage’ even if the effects were tertiary and the drug use was infrequent.

                                            But stress at work! Ohohoho, that’s just life my friend! Which really does not need to be the way of the world… OP was right to get out, especially once they started exhibiting symptoms suspiciously like the ones cited in that last paper (you know, the sorts of symptoms you get when your brain is suffering from some damage).

                                            1. 2

                                              If someone tells me that they got brain damage from stress at work, I will laugh, tell them to read the Wikipedia article and then move on. But that is okay, we can agree to disagree. I understand that there are multiple possible definitions for the term brain damage.

                                              If we were talking about drug use everyone would kind of nod and say, ‘yeah, brain damage’ even if the effects were tertiary and the drug use was infrequent.

                                              In my defense, people often use terms incorrectly.

                                              OP was right to get out

                                              I agree. Brain damage or not, Google employee or not, if you are suffering at work you should not stay there. We all have very basic needs, and one of them is being valued and being happy at work.

                                              Anyways, I hope you have a good weekend!

                                    2. 6

                                      I have not yet seen a study showing that stressful office jobs give people brain damage.

                                      This is a bizarre and somewhat awful thread. Please could you not post things like this in future?

                                      1. 8

                                        I disagree. The post seemed polite, constructive, and led to (IMO) a good conversation (including some corrections to the claims in the post).

                                        1. 4

                                          Parent left a clear method for you to disprove them by providing a counter-example.

                                          If you can point to some peer-reviewed research on the topic, by all means do so.

                                          1. 5

                                            Yea but this is an obnoxious, disrespectful, and disingenuous way to conduct an argument. I haven’t read any studies proving anything about this subject one way or another. Because I am not a mental health researcher. So it’s easy for me to make that claim, and present the claim as something that matters, when really it’s a pointless claim that truly does not matter at all.

                                            Arguing from an anecdotal position based on your own experience, yet demanding the opposing side provide peer-reviewed studies to contradict your anecdotal experience, places a disproportionate burden on them to conduct their argument. And whether intentional or not, it strongly implies that you have little to no respect for their experiences or judgement. That you will only care about their words if someone else says them.

                                1. 6

                                    is there any real reason to adopt new or custom compression schemes in 2022? of course there are many formats used in existing applications and protocols (e.g. zlib/gzip, bzip2, xz, snappy, …) and they are here to stay.

                                    but nowadays the near-omnipresent zstd (https://en.wikipedia.org/wiki/Zstd) and brotli (https://en.wikipedia.org/wiki/Brotli), both of which are extremely widely and successfully used in many different scenarios, seem to be the right choice for most new applications?

                                  1. 6

                                    zstd and brotli are both optimized for speed over compressed size. IMO there is a valid niche for algorithms that are slower but make smaller archives, especially if they still decompress fast.

                                    1. 2

                                      Which is why decompression speed and memory usage would be nice to have in the benchmarks.

                                    2. 3

                                        I feel like I’ve seen lz4 far more widely used than Brotli, and I’m surprised you didn’t mention it when talking about zstd.

                                      1. 1

                                        brotli is rather big (and gaining traction) on the web: in http, web fonts, etc.

                                        anyway, yes, lz4 is also widely used, and belongs to the same family (lz77) as brotli. the lz4 author is also the original author of zstd, btw.

                                    1. 8

                                      Cute benchmark pic, but:

                                      • No decompression runtime / memory.
                                            • No lz4 or zstd comparison points.
                                      1. 12

                                              I agree, the benchmark is weird. Besides the fact that some important competitors are missing, I understand that compression implementations make trade-offs between time/memory and compression efficiency, so it’s hard to compare them fairly by just testing one of their configurations. Instead you should plot the graph, over varying configurations, of time/memory consumption and compression ratio. (If the line drawn by one algorithm is always “above” another, then one can say that it is always better, but often they will cross in some places, telling us interesting facts about the strengths and weaknesses of both.)
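
                                              As a rough illustration of what producing one such curve involves (this is my sketch against zstd’s one-shot API, not the linked benchmark’s methodology; timing with clock() is crude), a sweep like this prints one level/seconds/ratio row per configuration, ready for plotting:

                                                #include <stdio.h>
                                                #include <stdlib.h>
                                                #include <time.h>
                                                #include <zstd.h>

                                                /* Print (level, seconds, compression ratio) for every standard zstd level. */
                                                static void sweep(const void *src, size_t srcSize)
                                                {
                                                    size_t cap = ZSTD_compressBound(srcSize);   /* worst-case output size */
                                                    void *dst = malloc(cap);
                                                    if (!dst)
                                                        return;
                                                    for (int lvl = 1; lvl <= ZSTD_maxCLevel(); lvl++) {
                                                        clock_t t0 = clock();
                                                        size_t n = ZSTD_compress(dst, cap, src, srcSize, lvl);
                                                        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
                                                        if (!ZSTD_isError(n))
                                                            printf("%d\t%.3f\t%.3f\n", lvl, secs, (double)srcSize / (double)n);
                                                    }
                                                    free(dst);
                                                }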

                                        1. 2

                                          Good point. And there should be a tool for running benchmarks and plotting the charts automatically.

                                          1. 2

                                            First commit on May 1st this year. Hopefully they’ll get to it!

                                        2. 2

                                                They do compare to lz4 and zstd for some of the test workloads, not sure why not for everything. They’re not the comparison I’d like though; I don’t see an xz comparison anywhere. For me, xz has completely replaced bzip2 for the things where compression ratio is the most important factor. lz4 and zstd are replacing gzip for things where I need more of a balance between compression ratio and throughput / memory consumption.

                                          1. 2

                                            They do compare to lz4 and zstd for some of the test workloads, not sure why not for everything.

                                            They added that after my comment… and yeah, it’s odd they didn’t do it on all workloads.

                                            xz is based on lzma, but not exactly the same. Maybe they thought including lzma was enough.

                                            1. 1

                                                    Tangentially, I tried using brotli for something at work recently, and at compression level 4 it beats deflate hands down, at about the same compression speed. I was impressed.
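
                                                    For anyone curious, that test needs little more than brotli’s one-shot C API with the quality parameter set to 4; a minimal sketch (the wrapper name is mine, error handling mostly elided):

                                                      #include <brotli/encode.h>
                                                      #include <stdint.h>
                                                      #include <stdlib.h>

                                                      /* Compress in[0..in_len) at quality 4; caller frees the result. */
                                                      static uint8_t *compress_q4(const uint8_t *in, size_t in_len, size_t *out_len)
                                                      {
                                                          *out_len = BrotliEncoderMaxCompressedSize(in_len);
                                                          uint8_t *out = malloc(*out_len);
                                                          if (!out)
                                                              return NULL;
                                                          if (!BrotliEncoderCompress(4 /* quality */, BROTLI_DEFAULT_WINDOW,
                                                                                     BROTLI_MODE_GENERIC,
                                                                                     in_len, in, out_len, out)) {
                                                              free(out);
                                                              return NULL;
                                                          }
                                                          return out;   /* *out_len now holds the compressed size */
                                                      }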

                                          1. 4

                                            Gopherus is a good multi-platform client. It notably works on many UNIX-likes and MS-DOS/clones.

                                            1. 2

                                              I’ve been using gnucash for nearly two decades, but the more alternatives the better.

                                              1. 1

                                                A crude test, but often sufficient, is ioping.

                                                1. 2

                                                  Cute as a curiosity.

                                                  But unfortunately, Intel seems to confine AVX-512 to only a subset of CPUs, even on their newest generation.

                                                  1. 35

                                                    Public offices using open standards should be the norm.

                                                    It is sad that this isn’t the case, and thus still makes the news.

                                                    1. 8

                                                      Public offices using open standards should be the norm.

                                                          Exactly. Ever since I can remember, I couldn’t comprehend why governments, even the army, so willingly use social media, plaster its logos on their websites, and so on. It’s such an obviously bad idea. I mean, why would one willingly make oneself dependent on massive for-profit corporations with a history of scandals every fortnight?

                                                      1. 27

                                                        …so willingly use social media…

                                                        Because that’s where the people are. If your goal is to reach, or be available to, as many people as possible, then using social media sites is necessary (though, I would also argue, insufficient). That being said, I agree that governments shouldn’t allow their data to become trapped in walled gardens and the like, hence the “insufficient” bit.

                                                        Edit: As an example, the county I live in posts notices and such on Instagram. They also post the information on their web site, but honestly, I only see them on Instagram. I don’t want them to stop doing that just because Instagram is problematic in various ways. It still exposes me to interesting info that I wouldn’t otherwise go out of my way to find.

                                                    1. 2

                                                            They’re likely running on modern x86 CPUs, which do have instructions to accelerate crc32/crc32c. I have no idea why they’re not just using those.

                                                            It is also very complicated-looking high-level code that smells of premature optimization, potentially doing way more damage than good on a modern compiler. An asm implementation would be like 30 lines.

                                                      1. 3

                                                              It appears that they are using those instructions. If you have at least SSE 4.2 support, then they use _mm_crc32_u64 at https://github.com/facebook/rocksdb/blob/main/util/crc32c.cc#L365. And if you additionally have pclmulqdq support, then they compute 3 CRCs in parallel and combine them: https://github.com/facebook/rocksdb/blob/main/util/crc32c.cc#L686.

                                                        There’s also support for extensions on non-x86 CPUs (ARM and PowerPC) as well as a portable fallback path for when none of the above are available.
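
                                                              For a sense of scale, the SSE4.2 fast path itself is tiny; a minimal sketch (mine, not RocksDB’s), assuming x86-64 and compiling with -msse4.2:

                                                                #include <nmmintrin.h>   /* SSE4.2 intrinsics */
                                                                #include <stdint.h>
                                                                #include <string.h>

                                                                /* CRC-32C (Castagnoli), consuming 8 bytes per instruction in the main loop. */
                                                                static uint32_t crc32c_sse42(uint32_t crc, const void *buf, size_t len)
                                                                {
                                                                    const uint8_t *p = buf;
                                                                    uint64_t c = crc ^ 0xFFFFFFFFu;
                                                                    while (len >= 8) {
                                                                        uint64_t v;
                                                                        memcpy(&v, p, 8);              /* avoid unaligned-load UB */
                                                                        c = _mm_crc32_u64(c, v);
                                                                        p += 8;
                                                                        len -= 8;
                                                                    }
                                                                    while (len--)
                                                                        c = _mm_crc32_u8((uint32_t)c, *p++);
                                                                    return (uint32_t)c ^ 0xFFFFFFFFu;
                                                                }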

                                                        1. 1

                                                          Oh, they are indeed. It really looks obtuse.

                                                          I wonder what a clean way to do this would be. Maybe some assembly implementations in their own source files, for different CPUs, with some runtime detection, with a C or C++ implementation as fallback.
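
                                                                The dispatch half of that can be as small as a function pointer resolved on first use; a hypothetical sketch using GCC/Clang’s __builtin_cpu_supports, with the per-CPU implementations (names invented here) living in their own source files:

                                                                  #include <stddef.h>
                                                                  #include <stdint.h>

                                                                  /* Implemented in separate per-CPU source files (hypothetical names). */
                                                                  uint32_t crc32c_sse42(uint32_t crc, const void *buf, size_t len);
                                                                  uint32_t crc32c_portable(uint32_t crc, const void *buf, size_t len);

                                                                  typedef uint32_t (*crc32c_fn)(uint32_t, const void *, size_t);

                                                                  static crc32c_fn pick_impl(void)
                                                                  {
                                                                  #if defined(__x86_64__)
                                                                      if (__builtin_cpu_supports("sse4.2"))   /* runtime CPUID check */
                                                                          return crc32c_sse42;
                                                                  #endif
                                                                      return crc32c_portable;                 /* plain C fallback */
                                                                  }

                                                                  uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
                                                                  {
                                                                      static crc32c_fn impl;   /* resolved on first call; fine for a sketch */
                                                                      if (!impl)
                                                                          impl = pick_impl();
                                                                      return impl(crc, buf, len);
                                                                  }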

                                                      1. 2

                                                        I remember my first multitasking experience.

                                                        On a basic A500 (68000, 512KB “Chip RAM”), I opened tens of clocks (Workbench 1.3’s :utilities/clock), and they were all running without issue.

                                                              I was impressed with the level of bloat Windows 95 must have had, to not be able to update a single clock in the taskbar once per second on an actual 386 with its 32-bit ALU and higher IPC.

                                                        These days, with a multicore 64bit GHz+ machine, I enjoy i3status and its clock updating every 5 seconds.

                                                        1. 9

                                                          Impressed with the level of bloat Windows 95 must have had

                                                                You should try to understand things instead of insulting them. It easily could do it, but (as the linked article describes) it came with a cost, and that cost meant that in certain circumstances it’d harm performance on something the user actually cared about, in the name of something that was expendable.

                                                          Could you open tens of clocks on your old system while running another benchmark task that actually needed all the available memory without affecting anything?

                                                                With the one-minute update, the taskbar needed to be paged in only once a minute, and could then be swapped back out to give the running application the memory back. A few kilobytes can make a difference when you’re thrashing to the hard drive and back every second.

                                                          1. 4

                                                            it’d harm performance on something the user actually cared about in the name of something that was expendable.

                                                            AmigaOS isn’t just preemptive, it also has hard priorities. If a higher priority task becomes runnable, the current task will be instantly preempted.

                                                            Could you open tens of clocks on your old system while running another benchmark task that actually needed all the available memory without affecting anything?

                                                            “All the available memory” means there’s no memory for clocks or multitasking to begin with.

                                                            Taking over the system was as easy as calling exec.library’s Disable(), which disables interrupts, then doing whatever you wanted with the system. This is how e.g. Minix 1.5 would take over the A500.

                                                            Alternatively, it is possible to disable preemption while still allowing interrupts to be serviced, with Forbid().
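
                                                                  For illustration, the classic pattern is just a pair of calls around the critical section; a minimal sketch, assuming the usual classic-AmigaOS includes (the function here is my own example, not from any Amiga source):

                                                                    #include <exec/lists.h>
                                                                    #include <proto/exec.h>

                                                                    /* Forbid() suspends task switching (interrupts are still serviced);
                                                                     * Permit() restores normal scheduling. */
                                                                    void append_shared(struct List *list, struct Node *node)
                                                                    {
                                                                        Forbid();
                                                                        AddTail(list, node);   /* no other task can see a half-done insert */
                                                                        Permit();
                                                                    }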

                                                            With the one minute update, the taskbar needed only be paged in once a minute

                                                            Why does the taskbar use so much ram that this would even matter, in the first place?

                                                            1. 2

                                                              “All the available memory” means there’s no memory for clocks or multitasking to begin with.

                                                              Windows 95 supported virtual memory and page file swapping. There’s a significant performance drop off when you cross the boundary into that being required, and the more times you cross it, the worse it gets.

                                                              Why does the taskbar use so much ram that this would even matter, in the first place?

                                                                    They were squeezing benchmarks. Even a small number affects it. Maybe it was more marketing than anything else, but still, the benefits of showing seconds are dubious, so they decided it wasn’t worth it anyway.

                                                          2. 3

                                                            I’d imagine context switching is much faster on Amiga OS, since there’s only a single address space and no memory protection.

                                                            1. 2

                                                                    The 68000 has very low and quite consistent interrupt latency, and AmigaOS did indeed not support/use an MMU, but I don’t see how this is relevant considering how much faster and higher-clocked the 80386s that Win95 requires are.

                                                              1. 3

                                                                I think maybe you give the 80386 too much credit. I don’t think the x86 processors of the day were really that much faster than their m68k equivalents, and the ones with higher clock speeds were generally saddled with a system bus that ran at half or less than the speed of the chip. Add on the cost of maintaining separate sets of page tables per process, and the invalidation of the wee little bit of cache such a chip might have when switching between them, and doing all of this on a register-starved and generally awkward architecture.

                                                          1. 3

                                                            I uploaded firmware to a single board computer with Kermit last week. How much do you want to bet it doesn’t support any of these protocol extensions?

                                                            1. 2

                                                              As long as it works. Kermit always manages to somehow work.

                                                            1. 2

                                                              I am hopeful that riscv64 will be next.

                                                              1. 7

                                                                      It will be a while. 32-bit Arm and MIPS (any variant) never made it to tier 1 for roughly the same reason: the inability to define a platform. Arm used to be a complete mess: different boot processes, different UARTs, different interrupt controllers, and so on. Basically everything that you need in early boot was different between boards, so it was very hard to target ARMv6; you needed to target specific SoCs. This improved a lot, and by ARMv7 it was more or less possible, but most of the interest moved to AArch64 platforms by then. Arm now has a couple of specs for complete systems, which make it far easier to produce a portable image.

                                                                RISC-V is trying to define platform standards but vested interests are not helping: existing SoC vendors all want their (usually not-very-well-designed) core components to be part of the spec and are reluctant to engage in a process of designing things based on experience. I also have no idea what the patent space for interrupt controllers looks like. I wouldn’t be surprised if there are things related to virtualisation that are patented by Arm or Intel in different variants; negotiating around that is probably not fun.

                                                                MIPS was even worse because every vendor wanted to ship something that was, generally, R4K + some extra stuff. Privileged mode on MIPS was not part of the core spec (it was a separate coprocessor, though generally not a physical coprocessor on later machines), and so anything that didn’t just copy R4K had different ways of manipulating virtual memory (some of the Cavium parts had hardware page-table walkers, R4K had a purely software-managed TLB, but some R4K derivatives had different variants on the software-managed TLB). Some booted the same code on every core and required them to read the core ID (using a coprocessor-0 instruction that wasn’t universally supported) and spin, if they were not core 0, until they received a notification that the initial thread had booted; others started only one core and left the rest idle until they received an interrupt.

                                                                Worse, on MIPS, most of the CPUs also had a load of ISA extensions. If you targeted MIPS III then code would run, but often at a fraction of the speed of targeting the supported extensions. If you targeted the extensions then your binaries weren’t portable. This made shipping a package repo basically impossible. This is likely to be the case with RISC-V as well. The privileged extension is not great, so I expect a lot of the Chinese vendors will see it as space for differentiation and do something better and ship their own patched Linux. The userspace ISA space has a huge number of extensions already. You can target some lowest common denominator (I think the Linux distros are aiming for RV32IADMFC or similar) but a bunch of things get a big speedup from the various bit-manipulation extensions or either the vector or SIMD extensions. If you don’t require them then your binaries will be slow; if you do, then you’re reducing the number of cores that can run them.
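
                                                                To give a concrete feel for that dilemma at the C level (the __riscv_* feature macros are real and compiler-defined, but the framing of the trade-off here is mine):

                                                                #include <stdint.h>

                                                                /* Count set bits.  Built with -march=rv64gc_zbb, the builtin compiles
                                                                 * to a single cpop instruction but the binary takes SIGILL on cores
                                                                 * without Zbb.  Built without it, every core can run the fallback
                                                                 * loop, just several times slower.  A package repo has to pick one:
                                                                 * the #ifdef is resolved at build time, long before the binary ever
                                                                 * meets a CPU. */
                                                                uint64_t popcount64(uint64_t x)
                                                                {
                                                                #ifdef __riscv_zbb
                                                                    return (uint64_t)__builtin_popcountll(x);
                                                                #else
                                                                    uint64_t n = 0;
                                                                    while (x) {
                                                                        x &= x - 1;   /* clear the lowest set bit */
                                                                        n++;
                                                                    }
                                                                    return n;
                                                                #endif
                                                                }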

                                                                I don’t know if fragmentation will kill RISC-V but it killed MIPS and it prevented Arm from attaining any kind of dominance for 20 or so years until they started aggressively killing it off (a few companies still have licenses that allow them to ship ARMv4 + arbitrary extensions but I don’t know of anyone who has done so for well over a decade). It will definitely not help grow a software ecosystem outside of platforms that have well-funded app stores that can recompile everything on the server for every target variant. That’s not something that a community-funded open-source operating system can afford. Especially for a platform that is so hostile to FreeBSD (all of the feedback from the folks like Andy and Ruslan doing FreeBSD bringup on RISC-V was rejected because ‘Linux doesn’t need it’).

                                                              1. 4

                                                                No, it hasn’t. The page is there in preparation for the release but it has not yet actually been released.

                                                                1. 5

                                                                  It hasn’t been properly announced, but sets/packages are already available on the mirrors, e.g. https://fastly.cdn.openbsd.org/pub/OpenBSD/7.1/

                                                                  1. 4

                                                                    I’d argue that if it’s not announced, it’s not released. Also, sets and packages could in theory still be overwritten.

                                                                    1. 1

                                                                      I’d argue further that if the release date on the page itself doesn’t yet exist, it hasn’t been released.

                                                                      Released May ?, 2022.

                                                                      1. 4

                                                                        They just actually released! :)

                                                                        Including an announcement from The de Raadt:

                                                                        https://marc.info/?l=openbsd-announce&m=165054715122282&w=2

                                                                        Maybe vermaden knew more?

                                                                        1. 2

                                                                          Haha. Whelp, I’ll go back to my corner.

                                                                          1. 1

                                                                            It’s all in the commits…

                                                                        2. 1

                                                                          Received an email from Theo (via the announce mailing list).

                                                                          We are pleased to announce the official release of OpenBSD 7.1.

                                                                          Therefore I consider it officially released now.

                                                                      2. 5

                                                                        I assumed that if the OpenBSD Webzine states that, then it’s released. Sorry, my bad :)

                                                                      1. 8

                                                                        I really like the SMP section, particularly:

                                                                        Implemented poll(2), select(2), ppoll(2) and pselect(2) on top of kqueue.
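
                                                                        For intuition, here’s a rough userspace sketch of that mapping, pollfd events onto kevent filters. This is not OpenBSD’s in-kernel implementation, just the shape of the translation, and it skips edge cases (POLLERR, duplicate fds, more than 64 ready events):

                                                                        #include <sys/types.h>
                                                                        #include <sys/event.h>
                                                                        #include <sys/time.h>
                                                                        #include <poll.h>
                                                                        #include <stdlib.h>
                                                                        #include <unistd.h>

                                                                        /* Emulate poll(2) with kqueue: register one kevent per requested
                                                                         * pollfd event, wait once, then fold the triggered events back
                                                                         * into revents.  Returns the number of ready pollfds, like poll. */
                                                                        int poll_via_kqueue(struct pollfd *fds, nfds_t nfds, int timeout_ms)
                                                                        {
                                                                            int kq = kqueue();
                                                                            if (kq == -1)
                                                                                return -1;

                                                                            struct kevent *changes = calloc(nfds * 2, sizeof(*changes));
                                                                            if (changes == NULL && nfds > 0) {
                                                                                close(kq);
                                                                                return -1;
                                                                            }

                                                                            int nchanges = 0;
                                                                            for (nfds_t i = 0; i < nfds; i++) {
                                                                                fds[i].revents = 0;
                                                                                if (fds[i].events & POLLIN)
                                                                                    EV_SET(&changes[nchanges++], fds[i].fd, EVFILT_READ,
                                                                                           EV_ADD, 0, 0, &fds[i]);
                                                                                if (fds[i].events & POLLOUT)
                                                                                    EV_SET(&changes[nchanges++], fds[i].fd, EVFILT_WRITE,
                                                                                           EV_ADD, 0, 0, &fds[i]);
                                                                            }

                                                                            struct timespec ts = { timeout_ms / 1000,
                                                                                                   (long)(timeout_ms % 1000) * 1000000 };
                                                                            struct kevent triggered[64];
                                                                            int n = kevent(kq, changes, nchanges, triggered, 64,
                                                                                           timeout_ms < 0 ? NULL : &ts);

                                                                            int ready = 0;
                                                                            for (int i = 0; i < n; i++) {
                                                                                struct pollfd *p = triggered[i].udata;
                                                                                if (p->revents == 0)
                                                                                    ready++;
                                                                                p->revents |= (triggered[i].filter == EVFILT_READ)
                                                                                                  ? POLLIN : POLLOUT;
                                                                                if (triggered[i].flags & EV_EOF)
                                                                                    p->revents |= POLLHUP;
                                                                            }

                                                                            free(changes);
                                                                            close(kq);
                                                                            return n < 0 ? -1 : ready;
                                                                        }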

                                                                        1. 15

                                                                          Ready for general use on Apple silicon Macs. That’s pretty cool!

                                                                          1. 9

                                                                            Personally more excited about the new “riscv64” RISC-V port.

                                                                            1. 2

                                                                              “The iron fist in the velvet glove.”

                                                                            1. 2

                                                                              The challenge now would be to move on to a less overkill hardware platform (like rp2040 or gd32v) and a thinner, more reliable software stack (an RTOS, e.g. Genode/seL4 or NuttX).

                                                                              This is much easier to do at this point, where you already got something that works.

                                                                              1. 3

                                                                                Would love to do that. I’ve built bare metal things in Rust on ARM Cortex microcontrollers using RTIC before but the need for Wi-Fi limits things at the moment. Progress is being made on the Espressif front—they hired someone to work on Rust support for their devices and the most recent update noted preliminary support for Wi-Fi so perhaps it’s not too far off.

                                                                                1. 2

                                                                                  esp-idf is a rather decent C SDK, Rust or no Rust. It would be a pretty easy project. You might not even need to allocate any heap memory (yourself) for this simple use case.
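
                                                                                  For a sense of scale, station bring-up with esp-idf is roughly the sketch below. The API names are esp-idf’s; the SSID/password are placeholders, and the event-handler wiring for reconnects is trimmed. Note that the app code itself never calls malloc:

                                                                                  #include "esp_err.h"
                                                                                  #include "esp_event.h"
                                                                                  #include "esp_netif.h"
                                                                                  #include "esp_wifi.h"
                                                                                  #include "nvs_flash.h"

                                                                                  /* Bring the Wi-Fi station interface up.  All configuration lives
                                                                                   * on the stack or in static SDK state; no heap use in app code. */
                                                                                  void wifi_start_station(void)
                                                                                  {
                                                                                      ESP_ERROR_CHECK(nvs_flash_init());
                                                                                      ESP_ERROR_CHECK(esp_netif_init());
                                                                                      ESP_ERROR_CHECK(esp_event_loop_create_default());
                                                                                      esp_netif_create_default_wifi_sta();

                                                                                      wifi_init_config_t cfg = WIFI_INIT_CONFIG_DEFAULT();
                                                                                      ESP_ERROR_CHECK(esp_wifi_init(&cfg));

                                                                                      wifi_config_t sta = {
                                                                                          /* placeholder credentials */
                                                                                          .sta = { .ssid = "my-ssid", .password = "my-password" },
                                                                                      };
                                                                                      ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_STA));
                                                                                      ESP_ERROR_CHECK(esp_wifi_set_config(WIFI_IF_STA, &sta));
                                                                                      ESP_ERROR_CHECK(esp_wifi_start());
                                                                                      ESP_ERROR_CHECK(esp_wifi_connect());
                                                                                  }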

                                                                                2. 1

                                                                                  Depends what you mean by reliable. I would just go for OpenWrt Linux. It will result in a much smaller system, and it has its own robust init and process-monitoring systems.

                                                                                  1. 1

                                                                                    I would avoid Linux. It has way too much code (both kernel and userspace, even for distributions like openwrt) to even talk about reliability.

                                                                                    Never mind Linux: for the task at hand, not even dynamic memory allocation is required or desirable.

                                                                                    1. 4

                                                                                      It just depends on what you are prioritizing. You mentioned several experimental projects like Genode and seL4; I would not waste my time unless I really cared about this or wanted to experiment with those projects specifically. OpenWrt would allow for quick iteration with a substantial size reduction.

                                                                                      1. 2

                                                                                        Yeah, it’s all about priorities. I’d jump at the chance in a setting with no time constraints, like a personal project such as this.

                                                                                1. 53

                                                                                  This article is a cautionary tale about dogma and religious belief in computing, and the role of “serverless” is only ancillary. Teams making hundreds of repos, copy-pasting code, chasing shiny new thingies configured with YAML/JSON/EDN/…: this can happen with almost any underlying deploy technology in the name of “Microservices”. The underlying issue is not the addictive nature of the cloud provider’s overhyped abstractions; it is this:

                                                                                  When it’s not okay to talk about the advantages and disadvantages of [The Way] with other engineers without fear of reprisal, it might be a cult. Many of these engineers say [The Way] is the only way to [Be] anymore.

                                                                                  In this article, The Way is “AWS lambda”, but it could easily be “Kubernetes”, “Event sourcing”, “Microkernels”, “Microservices”, “Cluster framework (akka/erlang/…)” or any other technology. Anything that can be a panacea that becomes a poison when dogma is too strong.

                                                                                  I enjoy cautionary tales, but I’d be more interested to hear how to solve the underlying cultural issue of dogma within an org, since changing minds and engineering practice is so much harder than changing code.

                                                                                  1. 17

                                                                                    That’s one takeaway, but an equally important one is that serverless has inherent limitations when compared with “non-scalable” solutions … like hosting your app on a Unix box. That includes the first one (testing), “microservice hell”, and “inventing new problems”.

                                                                                    The last one is particularly important. In more time-tested stacks there are “idioms” for each kind of problem you will encounter. Serverless only provides a very small part of the stack (stateless, fine-grained), and if your application doesn’t fit (which is very likely), then you will need to invent some custom solution.

                                                                                    And what he’s saying is that, in his experience, those custom solutions (“hacks”) ended up worse than the time-tested ones.

                                                                                    I agree the abuse of HTTP falls in the other category though. That is not inherent to serverless – if you got it wrong there, you would get it wrong everywhere.

                                                                                    1. 9

                                                                                      You could make a similar cautionary tale about Enterprise software set in the early 00s with the rise of Java and XML.

                                                                                      1. 3

                                                                                        Yep, if you squint, AWS serverless is similar in outcome to IT-managed WebSphere. Although, at least WebSphere tried to be J2EE-compliant. Lambda and other services are purely proprietary, with no attempt to provide open standards.

                                                                                        1. 13

                                                                                          Lambda is just Greek for “bean”.

                                                                                      2. 4

                                                                                        I think this is symptomatic of the problematic relationship we have with senior engineers. Software engineering has a tendency toward neophilia, where someone who criticizes a new idea is often dismissed as an “old timer” who’s afraid of change and whose opinions are irrelevant. My impression is that in other fields the opinions of seniors are taken very seriously because they have experience that juniors lack. Certainly seniors are less prone to being wooed by new technology and have an abundance of past experience that the technology can be evaluated against. It’s really hard to ask critical questions like “How would new technology X handle scenario Y?” if you’ve never had to deal with “scenario Y” before.

                                                                                        One idea would be to have some kind of “senior engineer advisory board” or “council of elders” that could weigh in on technical decisions. At Pixar they famously had a “brain trust” of experienced employees that would vet important decisions at various stages of the production. The point would be to institutionalize some kind of sanity checking with more experienced developers so that we don’t have to make the same mistakes twice, both as an organization and as a field.

                                                                                        I’m not advocating for letting senior engineers rule by fiat, just that we should look more to seniors for guidance and counsel, like pretty much every other industry seems to do.

                                                                                        1. 5

                                                                                          Microkernels

                                                                                          That item does not belong in that list.

                                                                                          1. 3

                                                                                            Why not? The point is that in a healthy engineering culture, no technology choice should be canon law, unable to be consciously weighed against alternatives, whether it’s your favourite pet or not.

                                                                                            1. 7

                                                                                              It lacks the cargo cult; everybody seems to hate ’em. For no good reason.

                                                                                              1. 6

                                                                                                I think that’s uncharitable, and also untrue. Uncharitable because clearly lots of people are interested in microkernels one way or another, and because like all technical choices it represents a trade-off between competing concerns; your expectations of system behaviour may well not match the expectations of somebody less enthusiastic about microkernels.

                                                                                                It’s untrue because I hear loud positive noises about microkernels, even just on this site, all the time; e.g., your own comment over here!

                                                                                                1. 1

                                                                                                  Yes, it is pretty much just me, and that is sad.

                                                                                                2. 4

                                                                                                  I think that’s untrue now, but it has alternated. In the early ’90s, there was a lot of dogma around microkernels. OSF/1 was going to change the UNIX landscape completely. Minix was a demonstration of how operating systems should be written. Linux was a terrible design, and *BSD was a legacy thing that would eventually just become a server on a microkernel (or, ideally, be ripped apart and have bits of it used to build multiple servers).

                                                                                                  Then people got performance data on Mach. System-call overheads were insanely high because Mach did port-rights checking on every message and handling a system call required at least two messages. The dogma shifted from “microkernels are unconditionally good” to “microkernels are unconditionally bad”.

                                                                                                  In the background, systems like L4 and QNX showed that microkernels can outperform monolithic kernels but that didn’t really take off until multicore systems became common and shared-everything concurrency in a single kernel proved to be a bottleneck. Most of the debate moved on because microkernels quietly won by rebranding themselves as hypervisors and running Linux / *BSD / Windows as a single-server isolation layer.

                                                                                                  These days, the L4 family is deployed on more devices than any other OS family, most monolithic kernels are run on top of a microkernel^Whypervisor, and anyone writing a new OS for production use is building a microkernel. Even monolithic kernels are adding userspace driver frameworks and gradually evolving towards microkernels. There’s a lot less dogma.