I hope the author gets the help they need, but I don’t really see how the blame for their psychological issues should be laid at the feet of their most-recent employer.
In my career I’ve seen managers cry multiple times, and this is one of the places that happened. A manager should never have to ask whether they’re a coward, but that happened here.
I dunno, doesn’t sound like they were the only person damaged by the experience.
Eventually my physicians put me on forced medical leave, and they strongly encouraged me to quit…
Seems pretty significant when medical professionals are telling you the cure for your issues is “quit this job”?
Seems pretty significant when medical professionals are telling you the cure for your issues is “quit this job”?
A number of years ago I developed some neurological problems, and stress made it worse. I was told by two different doctors to change or quit my job. I eventually did, and it helped, but the job itself was not the root cause, nor was leaving the sole cure.
I absolutely cannot speak for OP’s situation, but I just want to point out that a doctor advising you to rethink your career doesn’t necessarily imply that the career is at fault. Though, in this case, it seems like it is.
To clarify, I’m using “career change” in a general sense. I would include quitting a job as a career change, as well as leaving one job for another in the same industry/domain. I’m not using it in the “leave software altogether” sense.
I’m trusting the author’s causal assessment here, but employers (especially large businesses with the resources required) can be huge sources of stress and prevent employees from having the time or energy needed to seek treatment for their own needs, so they can both cause issues and worsen existing ones.
It’s not uncommon, for example, for businesses to encourage unpaid out-of-hours work for salaried employees by building a culture that emphasizes personal accountability for project success; this not only increases stress and reduces free time that could otherwise be used to relieve work-related stress, it teaches employees to blame themselves for what could just as easily be systemic failures. Even if an employee resists the social pressure to put in extra hours in such an environment, they’ll still be penalized with (real or imagined) blame from their peers, blame from themselves for “not trying hard enough”, and likely less job security or fewer benefits.
In particular, the business’s failure to support effective project management, manage workloads, or generally address problems that were repeatedly and clearly brought up to them is relevant here. These kinds of things typically fuel burnout. The author doesn’t go into enough detail for an outside observer to make a judgment call one way or the other, but if you trust the author’s account of reality then it seems reasonable to blame the employer for, at the least, negligently fueling these problems through gross mismanagement.
Arguably off-topic, but I think it might squeak by on the grounds that it briefly ties the psychological harm to the quality of a technical standard resulting from the mismanaged business process.
a culture that emphasizes personal accountability for project success; this not only increases stress and reduces free time that could otherwise be used to relieve work-related stress, it teaches employees to blame themselves for what could just as easily be systemic failures.
This is such a common thing. An executive or manager punts on actually organizing the work, whether from incompetence or laziness, and then tries to make the individuals in the system responsible for the failures that occur. It’s hardly new. Deming describes basically this in ‘The New Economics’ (look up the ‘red bead game’).
More cynically, is WebAssembly actually in Google’s interests? It doesn’t add revenue to Google Cloud. It’s going to make their data collection harder (provide Google analytics libraries for how many languages?). It was clearly a thing that was gaining momentum, so if they were to damage it, they would need to make sure they had a seat at the table and then make sure that the seat was used as ineffectually and disruptively as possible.
More cynically, is WebAssembly actually in Google’s interests?
I think historically the answer would have been yes. Google has at various points been somewhat hamstrung by shipping projects with slow front end JS in them and responded by trying to make browsers themselves faster. e.g. creating V8 and financially contributing to Mozilla.
I couldn’t say if Google now has any incentive to not make JS go fast. I’m not aware of one. I suspect still the opposite. I think they’re also pushing mobile web apps as a way to inconvenience Apple; I think Google currently want people to write portable software using web tech instead of being tempted to write native apps for iOS only.
That said, what’s good for the company is not the principal factor motivating policy decisions. What’s good for specific senior managers inside Google is. Otherwise you wouldn’t see all these damn self-combusting promo-cycle-driven chat apps from Google. A company is not a monolith.
‘The New Economics’
I have this book and will have to re-read at least this bit tomorrow. I have slightly mixed feelings about it, mostly about the writing style.
Making JS fast is one thing. Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?
Your point about the senior managers’ interests driving what’s done is on point, though. Google and Facebook especially are weird because ads fund the company, and the rest is all some kind of loss leader floating around divorced from revenue.
The only thing I’ll comment about Deming is that the chapter on intrinsic vs extrinsic motivation should be ignored, as that’s entirely an artifact despite its popularity. The rest of the book has held up pretty well.
Making JS fast is one thing. Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?
Google doesn’t need to maintain their analytics libraries in many other languages, only to expose APIs callable from those languages. All WebAssembly languages can call / be called by JavaScript.
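To make that concrete, here’s a minimal sketch (purely illustrative, not Google tooling): a tiny wasm module, hand-assembled here so the example is self-contained, though in practice the bytes come from a compiler. Its export is callable from JavaScript like any ordinary function, and by the same token any JS API can be passed into a wasm module as an import.

```javascript
// A minimal hand-assembled wasm module exporting add(i32, i32) -> i32.
// Real modules are produced by compilers (C++, Rust, ...), but the
// JS-side calling convention is identical.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

// Synchronous instantiation is fine for a module this small.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// JS calls the wasm export like a normal function.
console.log(instance.exports.add(2, 3)); // 5
```

The import direction works the same way: pass a `{ env: { someJsFunction } }` object as the second argument to `WebAssembly.Instance`, and the wasm code can call it, which is why an analytics API only ever needs one JS implementation.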
More generally, Google has been the biggest proponent of web apps instead of web services. Tim Berners-Lee’s vision for the web was that you’d have services that provided data with rich semantic markup. These could be rendered as web pages but could equally plug into other clients. The problem with this approach is that a client that can parse the structure of the data can choose to render it in a way that simply ignores adverts. If all of your ads are in an <advert provider="google"> block then an ad blocker is a trivial browser extension, as is something that displays ads but restricts them to plain text. Google’s web app push has been a massive effort to convince everyone to obfuscate the contents of their web pages. This has two key advantages for Google:
WebAssembly fits very well into Google’s vision for the web.
I used to work for a price-comparison site, back when those were actual startups. We had one legacy price information page that was a Java applet (remember those?). Supposedly the founders were worried about screen scrapers so wanted the entire site rendered with applets to deter them.
Making a target for many other languages, as opposed to maintaining analytics libraries and other ways of gathering data for one language?
This is something I should have stated explicitly but didn’t think to: I don’t think wasm is actually going to be the future of non-JS languages in the browser. I think that, for the next couple of decades at least, wasm is going to be used for compute kernels (written in other langs like C++ and Rust) that get called from JS.
I’m taking a bet here that targeting wasm from langs with substantial runtimes will remain unattractive indefinitely due to download weight and parsing time.
about Deming
I honestly think many of the points in that book are great but hoo boy the writing style.
That is exactly what I thought while reading this. I understand that to a lot of people, WebAssembly is very important, and they have a lot of emotion invested in its success. But to the author’s employer, it might not be as important, as it might not directly generate revenue. The author forgets that to the vast, vast majority of people on this earth, having the opportunity to work on such a technology at a company like Google is an unparalleled privilege. Most people on this earth do not have the opportunity to quit their job just because a project is difficult, or because meetings run long or it is hard to find consensus. Managing projects well is incredibly hard. But I am sure that the author was not living on minimum wage, so there surely was compensation for the efforts.
It is sad to hear that the author has medical issues, and I hope those get sorted out. And stressful jobs do exacerbate those kinds of issues. But that is not a good reason for finger pointing. Maybe the position just was not right for the author; maybe there are more exciting projects waiting in the future. I certainly hope so. But it is important not to blame one’s issues on others; that is not a good attitude in life.
Using the excuse that because there exist others less fortunate, it’s not worth fighting to make something better is also not a good attitude in life.
Reading between the lines, it feels to me like there was a lot that the author left unsaid, and that’s fine. It takes courage to share a personal story about mental wellbeing, and an itemized list of all the wrongs that took place is not necessary to get the point the author was trying to make across.
My point is that I’d be cautious about making assumptions about the author’s experiences as they didn’t exactly give a lot of detail here.
Using the excuse that because there exist others less fortunate, it’s not worth fighting to make something better is also not a good attitude in life.
This is true. It is worth fighting to make things better.
Reading between the lines, it feels to me like there was a lot that the author left unsaid, and that’s fine. It takes courage to share a personal story about mental wellbeing, and an itemized list of all the wrongs that took place is not necessary to get the point the author was trying to make across.
There are a lot of things that go into mental wellbeing. Some things you can control, some things are genetic. I don’t know what the author left out, but I have not yet seen a study showing that stressful office jobs give people brain damage. There might be things the author has not explained, but at the same time that is a very extreme claim. In fact, if that were true, I am sure that the author should receive a lot in compensation.
My point is that I’d be cautious about making assumptions about the author’s experiences as they didn’t exactly give a lot of detail here.
I agree with you, but I also think that if someone makes a very bold claim about an employer, especially about personal injury, these claims should be substantiated. There is a very big difference between “working there was hard, I quit” and “the employer acted recklessly and caused me personal injury”. And I don’t really know which one the author is saying, because the description could be interpreted as this just being a difficult project to see through.
In fact, if that were true, I am sure that the author should receive a lot in compensation.
By thinking about it for a few seconds you can realize that this can easily not happen. The OP itself says that they don’t have documented evidence from the time because of all the issues they were going through. And it’s easy to see why: if your mental health is damaged, your brain is not working right, would you be mindful enough to take detailed notes of every incident and keep a trail of evidence for later use in compensation claims? Or are you saying that compensation would be given out no questions asked?
All I’m saying is, there is a very large difference between saying “this job was very stressful, I had trouble sleeping and it negatively affected my concentration and memory” and saying “this job gave me brain damage”. Brain damage is relatively well-defined:
The basic definition of brain damage is an injury to the brain caused by various conditions such as head trauma, inadequate oxygen supply, infections, or intracranial hemorrhage. This damage may be associated with a behavioral or functional abnormality.
Additionally, there are ways to test for this, a neurologist can make that determination. I’m not a neurologist. But it would be the first time I heard that brain damage be caused by psychosomatic issues. I believe that the author may have used this term in error. That’s why I said what I said — if you, or anyone, has brain damage as a result of your occupation, that is definitely grounds for compensation. And not a small compensation either, as brain damage is no joke. This is a very different category from mere psychological stress from working for an apparently mismanaged project.
Via https://www.webmd.com/brain/brain-damage-symptoms-causes-treatments
Brain damage is an injury that causes the destruction or deterioration of brain cells.
Anxiety, stress, lack of sleep, and other factors can potentially do that. So I don’t see any incorrect use of the phrase ‘brain damage’ here. And anyway, you missed the point. Saying ‘This patient has brain damage’ is different from saying ‘Working in the WebAssembly team at Google caused this patient’s brain damage’. When you talk about causation and claims of damage and compensation, people tend to demand documentary evidence.
I agree brain damage is no joke, but if you look at society it’s very common for certain types of relatively-invisible mental illnesses to be downplayed and treated very lightly, almost as a joke. Especially by people and corporations who would suddenly have to answer for causing these injuries.
Anxiety, stress, lack of sleep and other factors cannot, ever, possibly, cause brain damage. I think you have not completely read that article. It states – as does the definition that I linked:
All traumatic brain injuries are head injuries. But head injury is not necessarily brain injury. There are two types of brain injury: traumatic brain injury and acquired brain injury. Both disrupt the brain’s normal functioning.
- Traumatic Brain Injury (TBI) is caused by an external force – such as a blow to the head – that causes the brain to move inside the skull or damages the skull. This in turn damages the brain.
- Acquired Brain Injury (ABI) occurs at the cellular level. It is most often associated with pressure on the brain. This could come from a tumor. Or it could result from neurological illness, as in the case of a stroke.
There is no kind of brain injury that is caused by lack of sleep or stress. That is not to say that these things are not also damaging to one’s body and well-being.
Mental illnesses can be very devastating and stressful on the body. But you will not get a brain injury from a mental illness, unless it makes you physically impact your brain (causing traumatic brain injury), ingest something toxic, or have a stroke. It is important to be very careful with language and not confuse terms. The term “brain damage” is colloquially often used to describe things that are most definitely not brain damage, like “reading this gave me brain damage”. I hope you understand what I’m trying to state here. Again, the author has possibly misused the term “brain damage”, or there is some physical trauma that happened that the author has not mentioned in the article.
I hope you understand what I am trying to say here!
Anxiety and stress raise adrenaline levels, which in turn cause short- and long-term changes in brain chemistry. It sounds like you’ve never been burnt out; don’t judge others so harshly.
Anxiety and stress are definitely not healthy for a brain. They accelerate aging processes, which is damaging. But brain damage in a medical context refers to large-scale cell death caused by genetics, trauma, stroke or tumors.
There seems to be a weird definitional slide here from “brain damage” to “traumatic brain injury.” I think we are all agreed that her job did not give her traumatic brain injury, and this is not claimed. But your claim that stress and sleep deprivation cannot cause (acquired) brain injury is wrong. In fact, you will find counterexamples by just googling “sleep deprivation brain damage”.
“Mental illnesses can be … stressful on the body.” The brain is part of the body!
I think you – and most of the other people that have responded to my comment – have not quite understood what I’m saying. The argument here is about the terms being used.
Brain Damage

Brain damage, as defined here, is damage caused to the brain by trauma, tumors, genetics or oxygen loss, such as during a stroke. This can lead to potentially large chunks of your brain dying off. This means you can lose entire brain regions and potentially permanently lose some abilities (facial recognition, speech, etc.).
Sleep Deprivation

See Fundamental Neuroscience, page 961:
The crucial role of sleep is illustrated by studies showing that prolonged sleep deprivation results in the disruption of metabolic processes and eventually death.
When you are forcibly sleep deprived for a long time, such as when you are being tortured, your body can lose the ability to use nutrients and finally you can die. You need to not sleep at all for weeks for this to happen; generally this is not something that happens to people voluntarily, especially not in western countries.
Stress

The cells in your brain only have a finite lifespan. At some point, they die (apoptosis) and new ones take their place. Chronic stress and sleep deprivation can speed up this process, accelerating aging.
Crucially, this is not the same as an entire chunk of your brain dying off because of a stroke. This is a very different process. It is not localized, and it doesn’t cause massive cell death. It is more of a slow, gradual process.
Summary

“Mental illnesses can be … stressful on the body.” The brain is part of the body!
Yes, for sure. It is just that the term “brain damage” is usually used for a very specific kind of pattern, and not for the kind of chronic, low-level damage done by stress and such. A doctor will not diagnose you with brain damage after you’ve had a stressful interaction with your coworker. You will be diagnosed with brain damage in the ICU after someone dropped a hammer on your head. Do you get what I’m trying to say?
I get what you are trying to say, I think you are simply mistaken. If your job impairs your cognitive abilities, then it has given you brain damage. Your brain, is damaged. You have been damaged in your brain. The cells and structures in your brain have taken damage. You keep trying to construct this exhaustive list of “things that are brain damage”, and then (in another comment) saying that this is about them not feeling appreciated and valued or sort of vaguely feeling bad, when what they are saying is that working at this job impaired their ability to form thoughts. That is a brain damage thing! The brain is an organ for forming thoughts. If the brain can’t thoughts so good no more, then it has been damaged.
The big picture here is that a stressful job damaged this person’s health. Specifically, their brain’s.
I understand what you are trying to say, but I think you are simply mistaken. We (as a society) have definitions for the terms we use. See https://en.wikipedia.org/wiki/Brain_damage:
Neurotrauma, brain damage or brain injury (BI) is the destruction or degeneration of brain cells. Brain injuries occur due to a wide range of internal and external factors. In general, brain damage refers to significant, undiscriminating trauma-induced damage.
This is not “significant, undiscriminating trauma-induced damage” (for context, trauma here refers to physical trauma, such as an impact to the head, not psychological trauma). What the author describes does not line up with any of the Causes of Brain Damage. It is simply not the right term.
Yes, the author has a brain, and there is self-reported “damage” to it. But just because I am a man and feel like I police the neighborhood does not make me a “police man”. Just because I feel like my brain doesn’t work right after a traumatic job experience does not mean I have brain damage™.
The Wikipedia header is kind of odd. The next sentence after “in general, brain damage is trauma induced” lists non-trauma-induced categories of brain damage. So I don’t know how strong that “in general” is meant to be. At any rate, “in general” is not at odds with the use of the term for non-trauma-induced stress/sleep-deprivation damage.
At any rate, if you click through to Acquired Brain Injury, it says “These impairments result from either traumatic brain injury (e.g. …) or nontraumatic injury … (e.g. listing a bunch of things that are not traumatic.)”
Anyway, the Causes of Brain Damage list is clearly not written to be exhaustive. “any number of conditions, including” etc.
There is some evidence that lack of sleep may kill brain cells: https://www.bbc.com/news/health-26630647
It’s also possible to suffer from mini-strokes due to the factors discussed above.
In any case, I feel like you’re missing the forest for the trees. Sure, it’s important to be correct with wording. But is that more important than the bigger picture here, that a stressful job damaged this person’s health?
the bigger picture here, that a stressful job damaged this person’s health
Yes, that is true, and it is a shame. I really wish that the process around WASM had been less hostile, and that this person had not been impacted negatively, even if stressful and hard projects are an unfortunate reality for many people.
I feel like you’re missing the forest for the trees.
I think that you might be missing the forest for the trees – I’m not saying that this person was not negatively impacted, I am merely stating that it is (probably, unless there is evidence otherwise) incorrect to characterize this impact as “brain damage”, because from a medical standpoint, that term has a narrower definition that damage due to stress does not fulfill.
Hello, you might enjoy this study.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4561403/
I looked through a lot of studies to try and find a review that was both broad and to the point.
Now, you are definitely mixing a lot of terms here… but I hope that if you read the research, you can be convinced, at the very least, that stress hurts brains (and I hope that reading the article and getting caught in this comment storm doesn’t hurt yours).
Sleep Deprivation and Oxidative Stress in Animal Models: A Systematic Review tells us that sleep deprivation can be shown to increase oxidative stress:
Current experimental evidence suggests that sleep deprivation promotes oxidative stress. Furthermore, most of this experimental evidence was obtained from different animal species, mainly rats and mice, using diverse sleep deprivation methods.
Although https://pubmed.ncbi.nlm.nih.gov/14998234/ disagrees with this. Furthermore, it is known that oxidative stress promotes apoptosis; see Oxidative stress and apoptosis:
Recent studies have demonstrated that reactive oxygen species (ROS) and the resulting oxidative stress play a pivotal role in apoptosis. Antioxidants and thiol reductants, such as N-acetylcysteine, and overexpression of manganese superoxide (MnSOD) can block or delay apoptosis.
The article that you linked Stress effects on the hippocampus: a critical review mentions that stress has an impact on the development of the brain and on its workings:
Uncontrollable stress has been recognized to influence the hippocampus at various levels of analysis. Behaviorally, human and animal studies have found that stress generally impairs various hippocampal-dependent memory tasks. Neurally, animal studies have revealed that stress alters ensuing synaptic plasticity and firing properties of hippocampal neurons. Structurally, human and animal studies have shown that stress changes neuronal morphology, suppresses neuronal proliferation, and reduces hippocampal volume
I do not disagree with this. I think that anyone would be able to agree that stress is bad for the brain, possibly by increasing apoptosis (accelerating ageing) and decreasing the availability of nutrients. My only argument is that the term brain damage is quite narrowly defined (for example here) as (large-scale) damage to the brain caused by genetics, trauma, oxygen starvation or a tumor, and it can fall into one of two categories: traumatic brain injuries and acquired brain injuries. If you search for “brain damage” on PubMed, you will find the term being used in that narrow sense.
You will not find studies or medical diagnoses of “brain damage due to stress”. I hope that you can agree that using the term brain damage in a context such as the author’s, without evidence of traumatic injury or a stroke, is wrong. This does not take away the fact that the author has allegedly experienced a lot of stress at their previous employer, one of the largest and highest-paying tech companies, and that this experience has caused the author personal issues.
On an unrelated note: what is extremely fascinating to me is that some chemicals such as methamphetamine (at low concentrations) or minocycline are neuroprotective, able to lessen brain damage, for example after a stroke. But obviously, at larger concentrations the opposite is the case.
How about this one then? https://www.sciencedirect.com/science/article/abs/pii/S0197458003000484
We can keep going, it is not difficult to find these… You’re splitting a hair which should not be split.
What’s so wrong about saying a bad work environment can cause brain damage?
You’re splitting a hair which should not be split.
There is nothing more fun than a civil debate. I would argue that any hair deserves being split. Worst case, you learn something new, or form a new opinion.
What’s so wrong about saying a bad work environment can cause brain damage?
Nothing is wrong with that, if the work environment involves heavy things, poisonous things, or the like. This is why OSHA compliance is so essential in protecting people’s livelihoods. I just firmly believe, and I think that the literature agrees with me on this, that “brain damage” as a medical definition refers to large-scale cell death due to trauma or stroke, and not chronic low-level damage caused by stress. The language we choose to use is extremely important, it is the only facility we have to exchange information. Language is not useful if it is imprecise or even wrong.
How about this one then?
Let’s take a look at what we’ve got here. I’m only taking a look at the abstract for now.
Stress is a risk factor for a variety of illnesses, involving the same hormones that ensure survival during a period of stress. Although there is a considerable ambiguity in the definition of stress, a useful operational definition is: “anything that induces increased secretion of glucocorticoids”.
Right, stress causes elevated levels of glucocorticoids, such as cortisol.
The brain is a major target for glucocorticoids. Whereas the precise mechanism of glucocorticoid-induced brain damage is not yet understood, treatment strategies aimed at regulating abnormal levels of glucocorticoids, are worth examining.
Glucocorticoids are useful in regulating processes in the body, but they can also do damage. I had never heard of the term glucocorticoid-induced brain damage, and searching for it in the literature only yields this exact article, so I considered this a dead end. However, in doing some more research, I did find two articles that somewhat support your hypothesis:
In Effects of brain activity, morning salivary cortisol, and emotion regulation on cognitive impairment in elderly people, it is mentioned that high cortisol levels are associated with hippocampus damage, supporting your hypothesis, but it only refers to elderly patients with Mild Cognitive Impairment (MCI):
Cognitive impairment is a normal process of aging. The most common type of cognitive impairment among the elderly population is mild cognitive impairment (MCI), which is the intermediate stage between normal brain function and full dementia.[1] MCI and dementia are related to the hippocampus region of the brain and have been associated with elevated cortisol levels.[2]
Cortisol regulates metabolism, blood glucose levels, immune responses, anti-inflammatory actions, blood pressure, and emotion regulation. Cortisol is a glucocorticoid hormone that is synthesized and secreted by the cortex of adrenal glands. The hypothalamus releases a corticotrophin-releasing hormone and arginine vasopressin into hypothalamic-pituitary portal capillaries, which stimulates adrenocorticotropic hormone secretion, thus regulating the production of cortisol. Basal cortisol elevation causes damage to the hippocampus and impairs hippocampus-dependent learning and memory. Chronic high cortisol causes functional atrophy of the hypothalamic-pituitary-adrenal axis (HPA), the hippocampus, the amygdala, and the frontal lobe in the brain.
Additionally, Effects of stress hormones on the brain and cognition: Evidence from normal to pathological aging mentions that chronic stress is a contributor to memory performance decline.
We might be able to find a few mentions of brain damage outside of the typical context (as caused by traumatic injury, stroke, etc.) in the literature, but at least we can agree that the term brain damage is quite unusual in the context of stress, can we not? Out of the 188,764 articles known to PubMed, only 18,981 mention “stress”, and of those, almost all are referring to “oxidative stress” (such as that experienced by cells during a stroke). I have yet to find a single study or article that directly states brain damage as being a result of chronic stress, in the same way that there are hundreds of thousands of studies showing brain damage from traumatic injuries to the brain.
Well, if anybody asks me I will tell them that too much stress at work causes brain damage… and now I can even point to some exact papers!
I agree that it’s a little hyperbolic, but it’s not that hyperbolic. If we were talking about drug use everyone would kind of nod and say, ‘yeah, brain damage’ even if the effects were tertiary and the drug use was infrequent.
But stress at work! Ohohoho, that’s just life my friend! Which really does not need to be the way of the world… OP was right to get out, especially once they started exhibiting symptoms suspiciously like the ones cited in that last paper (you know, the sorts of symptoms you get when your brain is suffering from some damage).
If someone tells me that they got brain damage from stress at work, I will laugh, tell them to read the Wikipedia article, and then move on. But that is okay, we can agree to disagree. I understand that there are multiple possible definitions for the term brain damage.
If we were talking about drug use everyone would kind of nod and say, ‘yeah, brain damage’ even if the effects were tertiary and the drug use was infrequent.
In my defense, people often use terms incorrectly.
OP was right to get out
I agree. Brain damage or not, Google employee or not, if you are suffering at work you should not stay there. We all have very basic needs, and one of them is being valued and being happy to work.
Anyways, I hope you have a good weekend!
I have not yet seen a study showing that stressful office jobs give people brain damage.
This is a bizarre and somewhat awful thread. Please could you not post things like this in future?
I disagree. The post seemed polite, constructive, and led to (IMO) a good conversation (including some corrections to the claims in the post).
Parent left a clear method for you to disprove them by providing a counter-example.
If you can point to some peer-reviewed research on the topic, by all means do so.
Yeah, but this is an obnoxious, disrespectful, and disingenuous way to conduct an argument. I haven’t read any studies proving anything about this subject one way or another, because I am not a mental health researcher. So it’s easy for me to make that claim and present it as something that matters, when really it’s a pointless claim that does not matter at all.
Arguing from an anecdotal position based on your own experience, yet demanding the opposing side provide peer-reviewed studies to contradict your anecdotal experience, places a disproportionate burden on them to conduct their argument. And whether intentional or not, it strongly implies that you have little to no respect for their experiences or judgement. That you will only care about their words if someone else says them.
Nice work @indygreg ;)
The prospect of retiring our pool of Mac minis sitting in a datacenter for the sole purpose of signing things is very enticing!
I’ll plug my colleague Mike Conley’s Joy of Coding series where he live streams Firefox development: https://mikeconley.github.io/joy-of-coding-episode-guide/
Tons of episodes in the backlog.
Neat! This might seem esoteric but knowing how to properly constrain your dependencies is a very hard problem with many solutions that reasonable minds can disagree on. Looking forward to testing this out.
Just to offer an alternative, I use “Dark Reader” [1] for Chrome, which tries to automatically apply a dark theme to websites. It’s not great for most websites (so I keep it as an opt-in per site), but it does a really good job with simple sites like lobsters.
[1] https://chrome.google.com/webstore/detail/dark-reader/eimadpbcbfnmbkopoojfekhnkhdbieeh?hl=en-US
Just be aware that these kinds of extensions get full access to everything you see and do in your browser, because they need it in order to function.
Is dark mode a reasonable tradeoff? That’s for you to decide.
For this specific extension, Dark Reader is recommended by Mozilla on AMO. This means it has passed an additional level of security / privacy review beyond what a typical extension receives.
Of course your point is still valid. But if you are a Firefox user who trusts Mozilla more than the Dark Reader dev(s), this may sway your decision.
A workable (IMO) middleground is to just grab (and ideally audit) the source and then load the unpacked extension on individual devices. This dodges the “I made an extension with justifiably broad permissions and am selling it to a party that will do Bad Things with those permissions for a shitload of money” threat.
Yup, but not many people do that.
I know how to do it but I didn’t. Used to use 2-3 extensions with this kind of access. Now I no longer use them, and simply accept that the web is not as comfortable as I’d like it to be.
Dark reader also lets you apply custom styling. So you can take the CSS in this post and copy it in the Dev Tools panel in Dark reader to use it.
It would help if Firefox would actually make a better product that’s not a crappy Chrome clone. The “you need to do something different because [abstract ethical reason X]” doesn’t work with veganism, it doesn’t work with chocolate sourced from dubious sources, it doesn’t work with sweatshop-based clothing, doesn’t work with Free Software, and it sure as hell isn’t going to work here. Okay, some people are going to do it, but not at scale.
Sometimes I think that Mozilla has been infiltrated by Google people to sabotage it. I have no evidence for this, but observed events don’t contradict it either.
It would help if Firefox would actually make a better product that’s not a crappy Chrome clone. The “you need to do something different because [abstract ethical reason X]” doesn’t work with veganism, it doesn’t work with chocolate sourced from dubious sources, it doesn’t work with sweatshop-based clothing, doesn’t work with Free Software, and it sure as hell isn’t going to work here. Okay, some people are going to do it, but not at scale.
I agree, but the deck is stacked against Mozilla. They are a relatively small nonprofit largely funded by Google. Structurally, there is no way they can make a product that competes. The problem is simply that there is no institutional counterweight to big tech right now, and the only real solutions are political: antitrust, regulation, maybe creating a publicly-funded institution with a charter to steward the internet in the way Mozilla was supposed to. There’s no solution to the problem merely through better organizational decisions or product design.
I don’t really agree; there’s a lot of stuff they could be doing better, like not pushing out updates that change the colour scheme in such a way that it becomes nigh-impossible to see which tab is active. I don’t really care about “how it looks”, but this is just objectively bad. Maybe if you have some 16k super-HD IPS screen with perfect colour reproduction at full brightness in good office conditions it’s fine, but I just have a shitty ThinkPad screen and the sun in my home half the time (you know, like a normal person). It’s darn near invisible for me, and I have near-perfect eyesight (which not everyone has). I spent some time downgrading Firefox to 88 yesterday just for this – which it also doesn’t easily allow, not if you want to keep your profile anyway – because I couldn’t be arsed to muck about with userChrome.css hacks. Why can’t I just change themes? Or why isn’t there just a setting to change the colour?
There’s loads of other things; one small thing I like to do is not have a “x” on tabs to close it. I keep clicking it by accident because I have the motor skills of a 6 year old and it’s rather annoying to keep accidentally closing tabs. It used to be a setting, then it was about:config, then it was a userChrome.css hack, now it’s a userChrome.css hack that you need to explicitly enable in about:config for it to take effect, and in the future I probably need to sacrifice a goat to our Mozilla overlords if I want to change it.
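For reference, that hack is quite small; here is a sketch of a userChrome.css that hides the tab close button. The selector is the commonly cited one and may change between Firefox versions, and recent Firefox also requires flipping toolkit.legacyUserProfileCustomizations.stylesheets to true in about:config before userChrome.css is loaded at all:

```css
/* chrome/userChrome.css inside the Firefox profile directory.
   Hides the close button on tabs; selector may vary by version. */
.tabbrowser-tab .tab-close-button {
  display: none !important;
}
```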
I also keep accidentally bookmarking stuff. I press ^D to close terminal windows and sometimes Firefox is focused and oops, new bookmark for you! Want to configure keybinds for Firefox? Firefox say no; you’re not allowed, mere mortal end user; our keybinds are perfect and work for everyone, there must be something wrong with you if you don’t like it! It’s pretty darn hard to hack around this too – more time than I was willing to spend on it anyway – so I just accepted this annoyance as part of my life 🤷
“But metrics show only 1% of people use this!” Yeah, maybe; but 1% here and 5% there and 2% somewhere else, and before you know it you’ve annoyed half (if not more) of your userbase with a bunch of stuff like that. It’s the difference between software that’s tolerable and software that’s a joy to use. Firefox is tolerable, but not a joy. I’m also fairly sure the metrics are biased, as many power users in particular disable them; so while useful, blindly trusting them is probably not a good idea (I keep telemetry enabled for this reason, to give some “power user” feedback too).
Hell, I’m not even a “power user” really; I have maybe 10 tabs open at the most, usually much less (3 right now) and most settings are just the defaults because I don’t really want to spend time mucking about with stuff. I just happen to be a programmer with an interest in UX who cares about a healthy web and knows none of this is hard, just a choice they made.
These are all really simple things; not rocket science. As I mentioned a few days ago, Firefox seems to have fallen victim to a mistaken and fallacious mindset in their design.
Currently Firefox sits in a weird limbo that satisfies no one: “power users” (who are not necessarily programmers and the like; plenty of people with other jobs are interested in computers and/or use them many hours every day) are annoyed because Firefox keeps taking away capabilities, and “simple” users are annoyed because, quite frankly, Chrome gives a better experience in many ways (this, I do agree, is not an easy problem to solve, but Chrome does work “well enough” for most). And hey, even “simple” users occasionally want to do “difficult” things like change something that doesn’t work well for them.
So sure, while there are some difficult challenges Firefox faces in competing against Google, a lot of it is just simple every-day stuff where they just choose to make what I consider to be a very mediocre product with no real distinguishing features at best. Firefox has an opportunity to differentiate themselves from Chrome by saying “yeah, maybe it’s a bit slower – it’s hard and we’re working on that – but in the meanwhile here’s all this cool stuff you can do with Firefox that you can’t with Chrome!” I don’t think Firefox will ever truly “catch up” to Chrome, and that’s fine, but I do think they can capture and retain a healthy 15%-20% (if not more) with a vision that consists of more than “Chrome is popular, therefore, we need to copy Chrome” and “use us because we’re not Chrome!”
Speaking of key bindings, Ctrl + Q is still “quit without any confirmation”. Someone filed a bug requesting that this be changeable (not even that the default be changed); that bug is now 20 years old.
It strikes me that this would be a great first issue for a new contributor, except the reason it’s been unfixed for so long is presumably that they don’t want it fixed.
A shortcut to quit isn’t a problem; losing user data when you quit is a problem. Safari has this behaviour too, and I quite often hit command-Q and accidentally quit Safari instead of the thing I thought I was quitting (since someone on the OS X 10.8 team decided that the big visual cues differentiating the active window from the others were too ugly and removed them). It doesn’t bother me, because when I restart Safari I get back the same windows, in the same positions, with the same tabs, scrolled to the same position, with the same unsaved form data.
I haven’t used Firefox for a while, so I don’t know what happens with Firefox, but if it isn’t in the same position then that’s probably the big thing to fix, since it also impacts experience across any other kind of browser restart (OS reboots, crashes, security updates). If accidentally quitting the browser loses you 5-10 seconds of time, it’s not a problem. If it loses you a load of data then it’s really annoying.
Firefox does this when closing tabs (restoring closed tabs usually restores form content etc.) but not when closing the window.
The weird thing is that it does actually have a setting to confirm when quitting, it’s just that it only triggers when you have multiple tabs or windows open and not when there’s just one tab 🤷
The weird thing is that it does actually have a setting to confirm when quitting, it’s just that it only triggers when you have multiple tabs or windows open and not when there’s just one tab
Does changing browser.tabs.closeWindowWithLastTab in about:config fix that?
I have it set to false already, I tested it to make sure and it doesn’t make a difference (^W won’t close the tab, as expected, but ^Q with one tab will still just quit).
I quite often hit command-Q and accidentally quit Safari
One of the first things I do when setting up a new macOS user for myself is adding alt-command-Q in Preferences → Keyboard → Shortcuts → App Shortcuts for “Quit Safari” in Safari. Saves my sanity every day.
Yes, it changes the binding on the OS level, so the shortcut hint in the menu bar is updated to show the change
You can do this on Windows for Firefox (or any browser) too with an AutoHotkey script. You can set it up to catch and handle a keypress combination before it reaches any other application. This will be global, of course, and will disable the Ctrl-Q hotkey in all your applications, but if you want to get into detail and write a more complex script you can actually check which application has focus and only block the combination for the browser.
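A minimal sketch of that per-application approach (AutoHotkey v1 syntax; the exe name is an assumption if your build is named differently):

```autohotkey
; Swallow Ctrl+Q only while Firefox has focus; every other
; application still receives the keypress. AutoHotkey v1 syntax.
#IfWinActive ahk_exe firefox.exe
^q::Return
#IfWinActive  ; end the context-sensitive section
```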
This sounds like something Chrome gets right - if I hit CMD + Q I get a prompt saying “Hold CMD+Q to Quit” which has prevented me from accidentally quitting lots of times. I assumed this was MacOS behaviour, but I just tested Safari and it quit immediately.
Disabling this shortcut with browser.quitShortcut.disabled works for me, but I agree that bug should be fixed.
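For anyone who prefers not to click through about:config, Firefox also reads a user.js file from the profile directory at startup; here is a sketch setting the two prefs mentioned in this thread (pref names as discussed above; behavior may differ between versions):

```js
// user.js in the Firefox profile directory; applied at every startup.
user_pref("browser.quitShortcut.disabled", true);        // ignore Ctrl+Q
user_pref("browser.tabs.closeWindowWithLastTab", false); // keep last tab open
```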
Speaking of key bindings, Ctrl + Q is still “quit without any confirmation”.
That was fixed a long time ago, at least on Linux. When I press it, a modal says “You are about to close 5 windows with 24 tabs. Tabs in non-private windows will be restored when you restart.” ESC cancels.
That’s strange. I’m using latest Firefox, from Firefox, on Linux, and I don’t ever get a prompt. Another reply suggested a config tweak to try.
I had that problem for a while but it went away. I have browser.quitShortcut.disabled as false in about:config. I’m not sure if it’s a default setting or not.
quitShortcut
It seems that this defaults to false. The fact that you have it set to false but don’t experience the problem is counter-intuitive to me. Anyway, the other poster’s suggestion was to flip this, so I’ll try that. Thanks!
That does seem backwards. Something else must be overriding it. I’m using Ubuntu 20.04, if that matters. I just found an online answer that mentions the setting.
On one level, I disagree – I have zero problems with Firefox. My only complaint is that websites built to be Chrome-only sometimes don’t work, which isn’t really Firefox’s problem, but the ecosystem’s problem (see my comment above about antitrust, etc). But I will grant you that Firefox’s UX could be better, and that there are ways the browser could be improved in general. However, I disagree here:
retain a healthy 15%-20% (if not more)
I don’t think this is possible given the amount of resources Firefox has. No matter how much they improve Firefox, there are two things that are beyond their control:
Even the best product managers and engineers could not reverse Firefox’s design. We need a political solution, unless we want the web to become Google Web (tm).
Why can’t I just change themes?
You can. The switcher is at the bottom of the Customize Toolbar… view.
Hm, last time I tried this it didn’t do much of anything other than change the colour of the toolbar to something else or a background picture; but maybe it’s improved now. I’ll have a look next time I try mucking about with 89 again; thanks!
https://color.firefox.com/ to save the trouble of searching.
I agree with Firefox’s approach of choosing mainstream users over power-users - that’s the only way they’ll ever have 10% or more of users. Firefox is doing things with theming that I wish other systems would do - they have full “fresco” themes (images?) in their chrome! It looks awesome! I dream about entire DEs and app suites built from the ground up with the same theme of frescoes (but with an different specific fresco for each specific app, perhaps tailored to that app). Super cool!
I don’t like the lack of contrast on the current tab, but “give users the choice to fix this very specific issue or not” tends to be extremely shortsighted - the way to fix it is to fix it. Making it optional means yet another maintenance point on an already underfunded system, and doesn’t necessarily even fix the problem for most users!
More importantly, making ultra-specific options like that is usually pushing decisions onto the user as a method of avoiding internal politicking/arguments, and not because pushing it to the user is the optimal solution for that specific design aspect.
As for the close button, I am like you. You can set browser.tabs.tabClipWidth to 1000. Dunno if it is scheduled to be removed.
As for most of the other gripes: adding options and features to cater to the needs of a small portion of users has a maintenance cost. Maybe adding the option is only one line, but then every new feature needs to work with the option both enabled and disabled. Removing options is just a way to keep the code lean.
My favorite example in the distribution world is Debian. Debian tries to be the universal OS. We are drowning in having to support everything. For example, supporting many init systems is more work. People will come to you if there is a bug in an init system you don’t use. You spend time on this. In the end, people who don’t like systemd are still unhappy and switch to Devuan, which supports fewer init systems. I respect Mozilla for keeping a tight ship and maintaining only the features they can support.
Nobody would say anything if their strategy worked. The core issue is that their strategy obviously doesn’t work.
adding options and features to cater for the needs of a small portion of users
It’s not even about that.
It’s removing things that worked and users liked by pretending that their preferences are invalid. (And every user belongs to some minority that likes a feature others may be unaware of.)
See the recent debacle of gradually blowing up UI sizes, while removing options to keep them as they were previously.
Somehow the saved cost to support some feature doesn’t seem to free up enough resources to build other things that entice users to stay.
All they do with their condescending arrogance on what their perfectly spherical idea of a standard Firefox user needs … is making people’s lives miserable.
They fired most of the people that worked on things I was excited about, and it seems all that’s left are some PR managers and completely out-of-touch UX “experts”.
As for most of the other gripes: adding options and features to cater to the needs of a small portion of users has a maintenance cost. Maybe adding the option is only one line, but then every new feature needs to work with the option both enabled and disabled. Removing options is just a way to keep the code lean.
It seems to me that having useful features is more important than having “lean code”, especially if this “lean code” is frustrating your users and making them leave.
I know it’s easy to shout stuff from the sidelines, and I’m also aware that there may be complexities I may not be aware of and that I’m mostly ignorant of the exact reasoning behind many decisions (most of us here are, really, although I’ve seen a few Mozilla people around), but what I do know is 1) that Firefox as a product has been moving in a certain direction for years, 2) that Firefox has been losing users for years, 3) that I know few people who truly find Firefox an amazing browser that’s a joy to use, and, in light of that, 4) that continuing to do the same thing you’ve been doing for years is probably not a good idea, and 5) that doing the same thing but harder is probably an even worse idea.
I also don’t think that much of this stuff is all that much effort. I am not intimately familiar with the Firefox codebase, but how can a bunch of settings add an insurmountable maintenance burden? These are not “deep” things that reach in to the Gecko engine, just comparatively basic UI stuff. There are tons of projects with a much more complex UI and many more settings.
Hell, I’d argue that even removing RSS support was a mistake – they should have improved it instead, especially since there was a huge missed opportunity after Google Reader’s demise – and although it’s a maintenance-burden trade-off I can understand better, it also demonstrates a lack of vision to just say “oh, it’s old crufty code, not used by many (not a surprise, it sucked), so let’s just remove it; people can install an add-on if they really want it”. This also contradicts Firefox’s mantra of “most people use the defaults, and if it’s not used a lot we can just remove it”. Well, if that’s true then you can ship a browser with hardly any features at all, and since most people will use the defaults they will use a browser without any features.
Browsers like Brave and Vivaldi manage to do much of this; Vivaldi has an entire full-blown email client. I’d wager that a significant portion of the people leaving Firefox are actually switching to those browsers, not Chrome as such (but they don’t show up well in stats as they identify as “Chrome”). Mozilla nets $430 million/year; it’s not a true “giant” like Google or Apple, but it’s not small either. Vivaldi has just 55 employees (2021, 35 in 2017); granted, they do less than Mozilla, but it doesn’t require a huge team to do all of this.
And every company has limited resources; it’s not like the Chrome team is a bottomless pit of resources either. A number of people in this thread express the “big Google vs. small non-profit Mozilla” sentiment, but it doesn’t seem that clear-cut. I can’t readily find a size for the Chrome team on the ’net, but I checked out the Chromium source code and let some scripts loose on it: there are ~460 Google people with non-trivial commits in 2020, although quite a bit seems to be for ChromeOS rather than the browser strictly speaking, so my guesstimate is more like 300 people. A large team? Absolutely. But Mozilla’s $430 million/year can match this with ~$1.5m/year per developer. My last company had ~70 devs on much less revenue (~€10m/year). Basically they have the money to match the Chrome dev team person-for-person. Mozilla does more than just Firefox, but they can still afford to let a lot of devs loose on Gecko/Firefox (I didn’t count the number of devs for it, as I have some other stuff I want to do this evening as well).
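For what it’s worth, a committer count along those lines can be approximated with plain git; assuming a local Chromium checkout, something like this (a rough proxy only, counting distinct @google.com author addresses, which as noted overcounts non-browser work):

```shell
# Distinct @google.com author emails with commits during 2020.
# Run inside a Chromium checkout; --since/--until filter on commit date.
git log --since=2020-01-01 --until=2021-01-01 --format='%ae' \
  | grep '@google\.com$' | sort -u | wc -l
```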
It’s all a matter of strategy; history is littered with large or even huge companies that went belly up just because they made products that didn’t fit people’s demands. I fear Firefox will be in the same category. Not today or tomorrow, but in five years? I’m not so sure Firefox will still be around to be honest. I hope I’m wrong.
As for your Debian comparison: an init system is a fundamental part of the system; it would be analogous to Firefox supporting different rendering or JS engines. It’s not even close to the same as “a UI to configure key mappings”, or “a bunch of settings for stuff you can already kind-of do but with hacks that you need to explicitly search for and most users don’t know exist”, or even “a built-in RSS reader that’s really good and a great replacement for Google Reader”.
I agree with most of what you said. Notably the removal of RSS support. I don’t work for Mozilla and I am not a contributor, so I really can’t answer any of your questions.
Another example of maintaining a feature would be ALSA support. It has been removed, and this upsets some users, but for me this is understandable, as they don’t want to handle bug reports around it or have the code get in the way of other features or refactors. Of course, I use PulseAudio, so I am quite biased.
I think ALSA is a bad example; just use Pulseaudio. It’s long since been the standard, everyone uses it, and this really is an example of “147 people who insist on having an überminimal Linux on Reddit being angry”. It’s the kind of technical detail with no real user-visible changes that almost no one cares about. Lots of effort with basically zero or extremely minimal tangible benefits.
And ALSA is a not even a good or easy API to start with. I’m pretty sure that the “ALSA purists” never actually tried to write any ALSA code otherwise they wouldn’t be ALSA purists but ALSA haters, as I’m confident there is not a single person that has programmed with ALSA that is not an ALSA hater to some degree.
Pulseaudio was pretty buggy for a while, and its developer’s attitude surrounding some of this didn’t really help, because clearly if tons of people are having issues then all those people are just “doing it wrong” and is certainly not a reason to fix anything, right? There was a time that I had a keybind to pkill pulseaudio && pulseaudio --start because the damn thing just stopped working so often. The Grand Pulseaudio Rollout was messy, buggy, broke a lot of stuff, and absolutely could have been handled better. But all of that was over a decade ago, and it does actually provide value. Most bugs have been fixed years ago, Poettering hasn’t been significantly involved since 2012, yet … people still hold an irrational hatred towards it 🤷
ALSA sucks, but PulseAudio is so much worse. It still doesn’t even actually work outside the bare basics. Firefox forced me to put PA on and since then, my mic randomly spews noise and sound between programs running as different user ids is just awful. (I temporarily had that working better though some config changes, then a PA update - hoping to fix the mic bug - broke this… and didn’t fix the mic bug…)
I don’t understand why any program would use the PA api instead of the alsa ones. All my alsa programs (including several I’ve made my own btw, I love it whenever some internet commentator insists I don’t exist) work equally as well as pulse programs on the PA system… but also work fine on systems where audio actually works well (aka alsa systems). Using the pulse api seems to be nothing but negatives.
Not sure if this will help you but I absolutely cannot STAND the default Firefox theme so I use this: https://github.com/ideaweb/firefox-safari-style
I stick with Firefox over Safari purely because its devtools are 100x better.
There’s also the fact that web browsers are simply too big to reimplement at this point. The best Mozilla can do (barely) is try to keep up with the Google-controlled Web Platform specs, and try to collude with Apple to keep the worst of the worst from being formally standardized (though Chrome will implement them anyway). Their ability to do even that was severely impacted by their layoffs last year. At some point, Apple is going to fold and rebase Safari on Chromium, because maintaining their own browser engine is too unprofitable.
At this point, we need to admit that the web belongs to Google, and use it only to render unto Google what is Google’s. Our own traffic should be on other protocols.
For a scrappy nonprofit they don’t seem to have any issues paying their executives millions of dollars.
I mean, I don’t disagree, but we’re still talking several orders of magnitude less compensation than Google’s execs.
A shit sandwich is a shit sandwich, no matter how low the shit content is.
(And no, no one is holding a gun to Mozilla’s head forcing them to hire in high-CoL/low-productivity places.)
Product design can’t fix any of these problems because nobody is paying for the product. The more successful it is, the more it costs Mozilla. The only way to pay the rent with free-product-volume is adtech, which means spam and spying.
I don’t agree that this is a vague ethical reason. The problem with those causes is that they involve concerns like deforestation (and the destruction of habitats for smaller animals) to ship almond milk across the globe, and sewing as an alternative to poverty and prostitution, etc.
The browser privacy question is very quantifiable and concrete, the source is in the code, making it a concrete ethical-or-such choice.
ISTR there even being a study or two where people were asked about willingness to being spied upon, people who had no idea their phones were doing what was asked about, and being disconcerted after the fact. That’s also a concrete way to raise awareness.
At the end of the day none of this may matter if people sign away their rights willingly in favor of a “better” search-result filter bubble.
I don’t think they’re vague (not the word I used) but rather abstract; maybe that’s not the best word either, but what I mean by it is that it’s a “far from my bed show”, as we would say in Dutch. Doing $something_better on these topics has zero or very few immediate tangible benefits, only more abstract long-term ones. In addition, it’s really hard to feel that you’re making a difference as a single individual. I agree with you that these are important topics; it’s just that this type of argument is simply not all that effective at making a meaningful impact. Perhaps it should be, but it’s not, and exactly because it’s important we need to be pragmatic about the best strategy.
And if you’re given the choice between “cheaper (or better) option X” vs. “more expensive (or inferior) option Y with abstract benefits but no immediate ones”, then I can’t really blame everyone for choosing X either. Life is short, lots of stuff that’s important, and can’t expect everyone to always go out of their way to “do the right thing”, if you can even figure out what the “right thing” is (which is not always easy or black/white).
My brain somehow auto-conflated the two, sorry!
I think we agree that the reasoning in these is inoptimal either way.
Personally I wish these articles weren’t so academic, and maybe not in somewhat niche media, but instead mainstream publications would run “Studies show people do not like to be spied upon yet they are - see the shocking results” clickbaity stuff.
At least it wouldn’t hurt for a change.
It probably wasn’t super-clear what exactly was intended with that in the first place so easy enough of a mistake to make 😅
As for articles, I’ve seen a bunch of them in mainstream Dutch newspapers in the last two years or so, so there is some amount of attention being given to this. But as I expanded on in my other, lengthier comment, I think the first step really ought to be making a better product. Not only is this by far the easiest thing to do and within our (the community’s) power, I strongly suspect it may actually be enough, or at least go a long way.
It’s like investing in public transport is better than shaming people for having a car, or affordable meat alternatives is a better alternative than shaming people for eating meat, etc.
I agree to an extent. Firefox would do well to focus on the user experience front.
I switched to Firefox way back in the day, not because of vague concerns about the Microsoft hegemony, or even concerns about web standards and how well each browser implemented them. I switched because they introduced the absolutely groundbreaking feature that is tabbed browsing, which gave a strictly better user experience.
I later switched to Chrome when it became obvious that it was beating Firefox in terms of performance, which is also a factor in user experience.
What about these days? Firefox has mostly caught up to Chrome on the performance point. But you know what’s been the best user experience improvement I’ve seen lately? Chrome’s tab groups feature. It’s a really simple idea, but it’s significantly improved the way I manage my browser, given that I tend to have a huge number of tabs open.
These are the kinds of improvements that I’d like to see Firefox creating, in order to lure people back. You can’t guilt me into trying a new browser, you have to tempt me.
But you know what’s been the best user experience improvement I’ve seen lately? Chrome’s tab groups feature. It’s a really simple idea, but it’s significantly improved the way I manage my browser, given that I tend to have a huge number of tabs open.
Opera had this over ten years ago (“tab stacking”, added in Opera 11 in 2010). Pretty useful indeed, even with just a limited number of tabs. It even worked better than Chrome groups IMO. Firefox almost-kind-of has this with container tabs, which are a nice feature actually (even though I don’t use it myself), and with a few UX enhancements on that you’ve got tab groups/stacking.
Opera also introduced tabbed browsing by the way (in 2000 with Opera 4, about two years before Mozilla added it in Phoenix, which later became Firefox). Opera was consistently way ahead of the curve on a lot of things. A big reason it never took off was because for a long time you had to pay for it (until 2005), and after that it suffered from “oh, I don’t want to pay for it”-reputation for years. It also suffered from sites not working; this often (not always) wasn’t even Opera’s fault as frequently this was just a stupid pointless “check” on the website’s part, but those were popular in those days to tell people to not use IE6 and many of them were poor and would either outright block Opera or display a scary message. And being a closed-source proprietary product also meant it never got the love from the FS/OSS crowd and the inertia that gives (not necessarily a huge inertia, but still).
So Firefox took the world by storm in the IE6 days because it was free and clearly much better than IE6, and when Opera finally made it free years later it was too late to catch up. I suppose the lesson here is that “a good product” isn’t everything or a guarantee for success, otherwise we’d all be using Opera (Presto) now, but it certainly makes it a hell of a lot easier to achieve success.
Opera had a lot of great stuff. I miss Opera 😢 Vivaldi is close (and built by former Opera devs) but for some reason it’s always pretty slow on my system.
This is fair and I did remember Opera being ahead of the curve on some things. I don’t remember why I didn’t use it, but it being paid is probably why.
I agree, I loved the Presto-era Opera and I still use the Blink version as my main browser (and Opera Mobile on Android). It’s still much better than Chrome UX-wise.
I haven’t used tab groups, but it looks pretty similar to Firefox Containers which was introduced ~4 years ahead of that blog post. I’ll grant that the Chrome version is built-in and looks much more polished and general purpose than the container extension, so the example is still valid.
I just wanted to bring this up because I see many accusations of Firefox copying Chrome, but I never see the reverse being called out. I think that’s partly because Chrome has the resources to take Mozilla’s ideas and beat them to market on it.
Disclaimer: I’m a Mozilla employee
One challenge for people making this kind of argument is that predictions of online-privacy doom and danger often don’t match people’s lived experiences. I’ve been using Google’s sites and products for over 20 years and have yet to observe any real harm coming to me as a result of Google tracking me. I think my experience is typical: it is an occasional minor annoyance to see repetitive ads for something I just bought, and… that’s about the extent of it.
A lot of privacy advocacy seems to assume that readers/listeners believe it’s an inherently harmful thing for a company to have information about them in a database somewhere. I believe privacy advocates generally believe that, but if they want people to listen to arguments that use that assumption as a starting point, they need to do a much better job offering non-circular arguments about why it’s bad.
I think it has been a mistake to focus on loss of privacy as the primary data collection harm. To me the bigger issue is that it gives data collectors power over the creators of the data and society as a whole, and drives destabilizing trends like political polarization and economic inequality. In some ways this is a harder sell because people are brainwashed to care only about issues that affect them personally and to respond with individualized acts.
> In some ways this is a harder sell because people are brainwashed to care only about issues that affect them personally and to respond with individualized acts.
I’m not @halfmanhalfdonut but I don’t think that brainwashing is needed to get humans to behave like this. This is just how humans behave.
things like individualism, solidarity, and collaboration exist on a spectrum, and everybody exhibits each to some degree. so saying humans just are individualistic is tautological, meaningless. everyone has some individualism in them regardless of their upbringing, and that doesn’t contradict anything in my original comment. that’s why I asked if there was some disagreement.
to really spell it out, modern mass media and culture condition people to be more individualistic than they otherwise would be. that makes it harder to make an appeal to solidarity and collaboration.
> to really spell it out, modern mass media and culture condition people to be more individualistic than they otherwise would be. that makes it harder to make an appeal to solidarity and collaboration.
I think we’re going to have to agree to disagree. I can make a complicated rebuttal here, but it’s off-topic for the site, so cheers!
I think you’re only seeing the negative side (to you) of modern mass media and culture. Our media and culture also promote unity, tolerance, respect, acceptance, etc. You’re ignoring that so that you can complain about Google influencing media, but the reality is that the way you are comes from those same systems of conditioning.
The fact that you even know anything about income inequality and political polarization is entirely FROM the media. People on the whole are not as politically divided as the media would have you believe.
sure, I only mentioned this particular negative aspect because it was relevant to the point I was making in my original comment
I agree with everything you’ve written in this thread, especially when it comes to the abstractness of pro-Firefox arguments as of late. Judging from the votes it seems I am not alone. It is sad to see Mozilla lose the favor of what used to be its biggest proponents, the “power” users. I truly believe they are digging their own grave – faster and faster it seems, too. It’s unbelievable how little they seem to be able to just back down and admit they were wrong about an idea, if only for a single time.
Firefox does have many features that Chrome doesn’t have: container tabs, tree style tabs, better privacy and ad-blocking capabilities, some useful dev tools that I don’t think Chrome has (multi-line JS and CSS editors, fonts), isolated profiles, better control over the home screen, reader mode, userChrome.css, etc.
Jujutsu is and will remain a completely unusable project for me until it has support for pre-commit hooks, unfortunately. I enjoyed what I saw when I demoed it a bit to learn, but every repo I’ve worked with in my 12-year career has had at least one pre-commit hook in it (including personal projects which generally have 2-3!), and running them manually completely defeats the entire purpose of them especially in a professional setting.
I keep checking in on the issue and the tracker for it, but still no luck.
I think given that the original issue starts with
shows the completely different universe the devs of `jj` live in, and it leads me to believe that this feature just won’t get much priority because obviously they never use it, and all projects like this are (rightfully, usually) targeting fixing problems for the people making it. I have my hopes still, but until then… back to `git`.

I’m curious why people like pre-commit hooks. I run my makefile far more frequently than I commit, and it does the basic testing and linting. And the heavyweight checks are done on the server after pushing. So there doesn’t seem much point to me in adding friction to a commit when the code has already been checked, and will be checked and reviewed again.
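To make the makefile-driven workflow described above concrete, here is a minimal sketch; the target names and tool choices (ruff, pytest) are illustrative assumptions, not taken from the original comment:

```make
# Illustrative Makefile: run `make check` habitually instead of a pre-commit hook.
.PHONY: check lint test

check: lint test

lint:
	ruff check .

test:
	pytest -q
```

The point of the comment stands either way: if `make check` is run far more often than commits happen, a hook at commit time mostly re-checks already-checked code.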
To take an example from my jj-style patch-manipulation-heavy workflows:
`main` commit (in a trunk-based development workflow).

One should definitely distinguish between “checks that should run on each commit” and “pre-commit checks”.
I use pre-commit hooks extensively, to ensure that the code I’m pushing meets all kinds of project requirements. I use formatters, and linters, and check everything that can be checked. For one thing, it does away with the endless battles over meaningless nonsense, like where commas belong in SQL statements, or how many spaces to use for indenting. Another is it just reduces the load on the CI/CD systems, trading a small fraction of my time locally for expensive time we pay for by the cycle.
I’ll never go without them again.
ETA: but, based on sibling comments, it seems that the jj folks are On It, and yeah, it won’t be a “pre-commit” hook, the same way, but as long as it can be automagically run … ok, I’m in.
As others here have stated, I think the fundamental issue is that commits in jj are essentially automatic every time you save. There are a few consequences to this such as:
I care about these things too, but they’re tested in CI, once I am ready to integrate them, rather than locally.
Horses for courses. I’d rather use my compute than The Cloud but I am notoriously a cranky greybeard.
For sure, I’m just saying it is possible to care about those things and not use hooks. You should do what’s best for you, though.
Aren’t those all already available in the IDE? I get my red squiggles as I write, instead of waiting for either the pre-commit hook or the CI.
Not everyone uses an IDE, or the same IDE, or the same settings in the IDE. I think that computers should automatically do what they can, and using pre-commit hooks (or whatever the jj equivalent will be) is a way to guarantee invariants.
Pre-commit hooks are really easy to enforce across a large team, while any sort of IDE settings are not. Some I’ve used before:
You can do all of these in other ways, but pre-commit makes it easy to do exactly the same thing across the entire team
I totally agree with you all that stuff is super important to run before changes make it to the repo (or even a PR). The problem is that pre-commit hooks (with a lower case “p”) fundamentally don’t mesh with Jujutsu’s “everything is committed all the time model”. There’s no index or staging area or anything. As soon as you save a file, it’s already committed as far as Jujutsu is concerned. That means there’s no opportunity for a tool to insert itself and say “no, this commit shouldn’t go through”.
The good news is that anything you can check in a pre-commit hook, works just as well in a pre-push hook, and that will work once the issues 3digitdev linked are fixed. In the meantime, I’ve made myself a shell alias that runs
`pre-commit -a && jj git push` and that works well enough for me *shrug*.

Not everything runs that easily. E.g. https://github.com/cycodehq/cycode-cli has a `pre_commit` command that is designed specifically to run the security scan before commit. It doesn’t work the same before push because at that point the index doesn’t contain the stuff you need to scan.

Hm, I’m not sure if `cycode-cli` would work with Jujutsu in that case. I’m sure if there’s enough demand someone would figure out a way to get it working. Even now, I’ve seen people MacGyvering their own pre-commit hooks by abusing their $EDITOR variable. E.g. `export EDITOR="cycode-cli && vim"`.

Many people never use an editor to write a commit message, they use the `-m` flag directly on the command line 🙂

But yeah, at that point we could mandate that we have to use a shell script wrapper for `git commit` that does any required checks.

Jujutsu does at least have a setting for the maximum size of new files: https://martinvonz.github.io/jj/latest/config/#maximum-size-for-new-files
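For reference, that setting lives in the jj config file. A sketch, with the key name taken from the linked docs and the value purely illustrative:

```toml
# ~/.config/jj/config.toml (location varies by platform)
[snapshot]
# Refuse to auto-track newly added files larger than this (illustrative value).
max-new-file-size = "1MiB"
```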
In my years I’ve had exactly one repo that used makefiles.
The issue with makefiles has nothing to do with makefiles – the issue is that, without pre-commit, running the checks requires deliberate action.
If I’m on a team of 12 developers, and we have a linter and a formatter which must be run so we can maintain code style/correctness, I am NOT going to hope that all 12 developers remembered to run the linter before they made their PR. Why? Because I FORGET to do it all the time too. Nothing irritates me more or wastes more money than pushing to a repo, making a PR, and having it fail over and over until I remember to lint/format. Why bother with all of that? Set up pre-commit hooks once, and then nobody ever has to think about it ever again. Commit, push, PR, etc., add new developers, add new things to run – it’s all just handled.
Can you solve this with a giant makefile that makes it so you have just one command to run before a PR? Yes. But that’s still a point of failure. A point of failure that could be avoided with existing tools that are genuinely trivial to setup (most linters/etc have pre-commit support and hooks pre-built for you!). Why let that point of failure stand?
Edit: Also, keep in mind the two aren’t mutually exclusive. You like your makefile? Fine, keep it. Run it however many times you want before committing. But if everyone must follow this step at least once before they commit… why wait for them to make a mistake? Just do it for them.
Generally, I believe jj developers and users are in favor of the idea of defining and standardizing “checks” for each commit to a project, but the jj model doesn’t naturally lend itself to running them specifically at pre-commit time. The main problems with running hooks at literally pre-commit time:
The `jj fix` command can run on in-memory commits, I believe, but this also means that it’s limited in capability and doesn’t support arbitrary commands.

The hook situation for jj is not fully decided, but I believe the most popular proposal is:

- `jj fix`: primarily for single-file formatters and linters
- `jj run`: can run arbitrary commands by provisioning a full working copy

For the workflows you’ve specified, I think the above design would still work. You can standardize that your developers run certain checks and fixes before submitting code for review, but in a way that fits the jj model better, and might also have better throughput and latency characteristics for critical workflows like making local commits.
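As a sketch of what the `jj fix` side of that proposal looks like in practice, fixers are declared in the jj config; the key names here follow the jj documentation, and the rustfmt invocation is illustrative:

```toml
# A fixer jj applies to changed files when you invoke `jj fix`.
[fix.tools.rustfmt]
# Reads file content on stdin, writes the fixed content to stdout.
command = ["rustfmt", "--emit", "stdout"]
# Which files the tool applies to.
patterns = ["glob:'**/*.rs'"]
```

Because such tools run over commit content rather than a working copy, they fit the "everything is already committed" model in a way literal pre-commit hooks cannot.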
This does sound promising.
@arxanas is speaking from experience and you can see this in action today if you want: it’s built into git-branchless (which he built) as
`git test`: https://github.com/arxanas/git-branchless/wiki/Command:-git-test

git-branchless is a lovely, git-compatible UI like jj, inspired by Mercurial but with lots of cool things done even better (like `git test`).

Once on every clone, across the entire team if a hook changes or is added?
How do you manage hooks? Just document them in the top level readme?
We had at least one conversation at a Mercurial developer event discussing how to make hg configs (settings, extensions, hooks, etc) distributed by the server so clones would get reasonable, opinionated behavior by default. (It’s something companies care about.)
We could never figure out how to solve the trust/security issues. This feature is effectively a reverse RCE vulnerability. We thought we could allowlist certain settings. But the real value was in installing extensions. Without that, there was little interest. Plus if you care this much about controlling the client endpoint, you are likely a company already running software to manage client machines. So “just” hook into that.
I’m not entirely sure I follow what you’re asking. My guess here is that you’re unfamiliar with pre-commit, so I’ll answer from that perspective. Sorry if I’m assuming wrong
Pre-commit hooks aren’t some individualized separate thing. They are managed/installed inside the repo, and defined by a YAML file at the root of your repo. If you add a new one, it will get installed (once), then run the next time you run
`git commit` by default.

As long as you have pre-commit installed on your system, you can wipe away the repo folder and re-clone all you want, and nothing has to change on your end.
If a new dev joins, all they have to do is clone, install pre-commit (single terminal command, run once), and then it just…works.
If a pre-commit hook changes (spoiler: they basically never do…) or is added/removed, you just modify the YAML, make a PR to the repo, and it’s merged. Everyone pulls the latest `master` and boom, they have the hook.

There is no “management”. No need to really document them, even (although you can). They should be silent and hidden and just run on every commit, making it so nobody ever has to worry about them unless their code breaks the checks (linters, etc).
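To make the setup concrete, here is a minimal sketch of the `.pre-commit-config.yaml` described above. The hook repo and ids are standard ones shipped by the pre-commit project; the `rev` pin is illustrative:

```yaml
# .pre-commit-config.yaml, committed at the repo root.
# Each developer runs `pre-commit install` once after cloning.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0            # illustrative pin; teams pick a real tag
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
```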
Thank you.
You’re talking about?:
https://pre-commit.com/
Not?:
https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
I take it the former runs code on pre-commit for any cloned git repo (trusted or not) when installed locally, while the latter needs setup after initially cloning to work?
So it changes git behavior to run code pre-commit on every repo - but doesn’t directly execute code, rather parses a yaml file?
Yes I am talking about the former. To be clear though I don’t know exact internals of pre-commit, but I don’t THINK it modifies git behavior directly. Instead it just has a hook that runs prior to the actual
`git commit` command executing, and it DOES execute code, but does so in a containerized way, I believe. The YAML file is just a config file that acts sorta like a Helm chart does: providing configuration values and what command to run – it’s different for each hook.

If you’re curious, you can see an example in one of my personal repos where I am defining a pre-commit hook to run the “Ruff” tool that does linting/formatting on my project.
Also, a note: Pre-commit hooks only execute against the files inside the commit too! So they tend to be quite performant. My ruff checks add maybe like….100ms to the runtime of the commit command. This can obviously vary – I’ve had some that take a few seconds, but in any case they never feel like they’re “blocking” you.
FWIW I’d consider 100ms to be quite bad given that jj commits tend to take < 10ms.
I agree with folks in general that in jj, almost no commits made by developers will pass pre-commit checks, so a rethinking/different model is required. (For most of my professional life I’ve just used a combination of IDE/LSP feedback and CI for this, along with a local script or two to run. I also locally commit and even push a lot of broken/WIP code.)
I’m also curious how the developer experience is maintained with pre-commit hooks. I was on the source control at Meta for many years and there was an intense, data-driven focus on performance and success metrics for blocking operations like commit. Do most authors of pre-commit hooks bring a similar level of care to their work?
There has been some interesting discussion in response to my question about pre-commit hooks.
The thing I’m (still) curious about is how to choose the point in the workflow where the checks happen. From the replies, I get the impression that pre-commit hooks are popular in situations where there isn’t a build step, so there isn’t a hook point before developers run the code. But do programmers in these situations run the tests before commit? If so, why not run the lints as part of the tests? If the lints are part of the standard tests, then you can run the same tests in CI, which suggests to me that the pre-commit hook configuration is redundant.
I’m asking these questions because pre-commit hooks imply something about the development workflow that I must be missing. How can you get to the point of pushing a branch that hasn’t gone through a build and test script? Why does it make sense to block a commit that no-one else will see, instead of making the developer’s local tests fail, and blocking the merge request because of the lint failure?
For auto-formatting in particular, I don’t trust it to produce a good result. I would not like a pre-commit hook that reformats code and commits a modified version that I had not reviewed first. (The likelihood of weird reformatting depends somewhat on the formatter: clang-format and rustfmt sometimes make a mess and need to be persuaded to produce an acceptable layout with a few subtle rephrasings.)
For auto-formatting you can choose to either have it silently fix things, or block the commit and error out. But yes, generally you can never rely on pre-commit hooks to enforce checks, because there’s no way to ensure all developers have them installed. You need to rely on CI for that either way.
The benefit to running the checks before pushing to a pull request for instance, is that it reduces notifications, ensures reviewers don’t waste time, reduces the feedback / fix loop, etc. Generally just increases development velocity.
But all those benefits can also be achieved with pre-push. So I see the argument for running the checks prior to pushing to the remote. I fail to see the argument for running the checks prior to committing. Someone elsewhere in the thread pointed out a checker that apparently inherently needs to run pre-commit, so I guess there’s that? I don’t know the specifics of that checker, but seems like it’s poorly designed to me.
We always have one dev that introduces Husky to the repo and then everybody else has to add
`--no-verify` to their commits, because that’s easier than getting into an argument with the kind of dev who would introduce Husky to the repo.

Most devs underestimate (as in have absolutely no idea as to the amount of rigor necessary here) what goes into creating pre-commit checkers that are actually usable and improve the project without making the developer experience miserable.
I mentioned this in another comment, but
`pre-commit` (the hook) will never be feasible, for the simple fact that “committing” isn’t something you explicitly do in Jujutsu. It’s quite a mindshift for sure, but it’s one of those things that once you get used to it, you wonder how you ever did things the old way.

The good news is that `pre-push` is totally possible, and as you noted, there is work underway to make the integration points there nicer. AIUI the issues you linked are getting closer to completion (just don’t expect them being fixed to mean that you’ll have the ability to run `pre-commit` hooks).

Find a Rust-capable dev, start hacking on it, and engage with the Jujutsu crowd on IRC (#jujutsu at libera) or Discord. The developers are very approachable and can use additional PRs. That way everyone can eventually benefit from a completed hooks implementation.
Conveniently, absorb has just been merged, literally yesterday \o/
And authored by a long time Mercurial maintainer by the looks of it.
Mercurial’s DNA is all over Jujutsu and I’m loving watching Jujutsu’s evolution.
Yeah, I saw that! I’ll update my post once it makes it to a release.
Thanks for this heads up, was a nice ping to update my jj; I could really make use of this feature atm!
My son’s school has a greenhouse and he wanted to be able to monitor the outside and inside temperatures. So we are currently working on an ESP8266 with two thermometers connected, which will talk to a Raspberry Pi running Home Assistant OS. I remember dabbling with this stuff years back, but thanks to the hard-working hackers behind this stuff, now everything is just dead simple and it all Just Works. If I didn’t have a thousand other projects in the queue already, I would love to go whole-hog on home automation. (It’s something I’ve been dreaming about for decades.)
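A hypothetical ESPHome-style config for a setup like this might look as follows. The board, pin, and sensor platform are assumptions; the original comment only specifies ESP8266 + two temperature probes + Home Assistant:

```yaml
# greenhouse.yaml: hypothetical ESPHome config, two DS18B20 probes on one 1-wire bus.
esphome:
  name: greenhouse
esp8266:
  board: nodemcuv2          # assumed board
api:                        # lets Home Assistant discover the device
wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password
one_wire:
  - platform: gpio
    pin: GPIO4              # assumed data pin
sensor:
  - platform: dallas_temp
    address: 0x0000000000000000   # placeholder; use the address ESPHome logs at boot
    name: "Greenhouse Inside Temperature"
  - platform: dallas_temp
    address: 0x0000000000000001   # placeholder
    name: "Greenhouse Outside Temperature"
```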
Honestly, I recommend not going whole-hog and instead dipping your toe in.
It’s the perfect hobby to have on the side over the long term and only dabble with here and there as time permits. My journey has basically been:
It’s taken me about a year and a half to get to step 7. I probably average ~15-30 minutes / week chipping away at it. As a parent with almost no free time, it’s been a great little hobby!
Yet another Jujutsu post :)
The intended audience for this is mainly my colleagues at Mozilla, but I haven’t yet seen posts comparing Jujutsu to Mercurial, so figured this might be useful more generally.
I started using Jujutsu recently, and it looks very promising, especially for splitting up a big PR into smaller conflict-free PRs. I never used Mercurial, but I like the idea of tracking change sets.
The similarity to cargo is what drew me to poetry. So I can imagine how something actually intended to be “cargo for python” might be appealing… Just now, though, I’m happy with poetry and don’t quite feel the pull to try something new to get the same stuff done.
Can anyone offer a short summary of why someone who’s happy with poetry might prefer to move to uv, without discussing how uv is faster? Speed is not a pain point, IMO, with poetry.
My attempt at a short summary would be: it’s pyenv+poetry+pipx. It can install python, manage dependencies for codebases, and install tools into their own dedicated virtual environments.
This; it replaces the need to maintain, learn, and update 3 or 4 separately installed tools with just one tool that you can drop into ~/.local/bin.
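As a sketch, the three tools’ jobs map onto uv subcommands roughly like this; the commands follow uv’s documented CLI, while the version and package names are illustrative:

```shell
# pyenv's job: install and pin a Python version
uv python install 3.12

# poetry's job: manage a project's dependencies and lockfile
uv add requests
uv sync

# pipx's job: install CLI tools into isolated environments
uv tool install ruff
```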
Oddly enough, I think it’s the learning (or really the explaining) part that will sway me first despite understanding and liking my workflow with pyenv and poetry.
I definitely plan to keep an eye on uv, and I suspect I’ll consider using it for each project I start, about the time I need to write the “Getting Started” section of the README. Downloading uv and letting it handle the rest given a properly decorated
`pyproject.toml` at the root of my repository does seem much easier to explain than my current bootstrap process.

Speed is like display resolution. You don’t really notice it until you upgrade and then holy shit you can never go back.
But a few other reasons I’m going to be migrating my projects from poetry to uv:
Between this and PDM is there a clear winner yet?
uv, for a few reasons:
So IMO, even if there are things that PDM does better today, I think it’s a matter of (very short) time before uv catches up. At that point, the fundamental benefits that uv has from being written in rust will give it the edge. PDM would need a rewrite to catch up, and by the time that happens it’s already too late. uv just has too much momentum.
The only way I see PDM remaining relevant is if uv doesn’t implement a plugin system and that is something that is important to enough people.
The pace of non-development for the 5-10 years before this was also pretty insane.
Lol true. Maybe the pace of development only seems insane compared to the last decade
Interesting idea, but I fear it won’t play as nicely in the modern ecosystem of automated dependency upgrades and the compatible-release pinning that they depend on. The article correctly notes that SemVer isn’t perfect, so ultimately your CI system is key for automated dependency upgrades either way. But I can’t help but feel that the efficacy of these tools will suffer if they can’t make any assumptions about whether a given version is compatible or not.
Which leads me to my other point, I kind of wish they didn’t make this scheme compatible with SemVer because compatible release operators aren’t going to work well with it and people almost certainly aren’t going to check whether the project is using SemVer or EffVer before using them.
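For context, the compatible-release operators in question assume SemVer-style semantics. A sketch in pip requirements syntax, with a made-up package name:

```
# Compatible-release pin: allows any 2.x >= 2.1, on the assumption
# that nothing before 3.0 breaks compatibility.
somepkg ~= 2.1        # equivalent to: somepkg >= 2.1, == 2.*
# Under EffVer, a 2.2 release may legitimately require "some effort" to adopt,
# so this constraint can pull in upgrades the scheme never promised were seamless.
```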
This looks pretty nifty! I’ve been using utterances on my blog for comments for years, and this seems to be extending that idea further. If I actually blogged more, I might try this out.
I really dislike this project for its misleading statements about no tracking, no lock-in when clearly you are limiting comments to users with Microsoft GitHub accounts along with the ToS & tracking Microsoft does with the contents on its platform.
utterances is pretty cool, too! My secret to write more, tho, is not to fiddle with code 😅
boring is good, because all fancy stuff will break sooner or later.
for dependency management i heartily recommend pip-tools, giving you
`pip-compile` and `pip-sync` commands that work on requirements files, which are the ‘native’ format used by `pip` (de facto standard python package installer). this is also a good read on the topic: https://hynek.me/articles/python-app-deps-2018/

for venv management on local dev machines (not elsewhere), use direnv, which is a generic solution. unfortunately often overlooked and hence underappreciated, because many people look for python-specific things. with direnv, an `.envrc` file containing a single line (`layout python` or `layout pyenv 3.11.1`) is enough to get automagic venv creation+activation. and it also works in editors. boring is good, again.

I am the author of the “boring dependency management” article linked near the bottom of the post, and pip-tools is the only thing I recommend that isn’t one of the big three standard Python packaging tools (pip, setuptools, venv). I also only recommend it because it’s a relatively simple thing that could be quickly replaced with a script that calls `pip freeze`/`pip download`/`pip hash`/etc. if I ever needed to. The convenience is that I don’t have to write that script myself when I use pip-tools.

I have the opposite experience. Our team uses `pip-compile` and we’re constantly running into problems. Someone invariably generates the lockfile with a different environment, or `pip-compile` fails to find a resolution that Poetry does. It’s gotten to the point where we’ve created a Docker image that contains a custom script just so we can generate our lockfile consistently. In my mind that is not boring. The boring thing would be to use the tool that is popular and just works.
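For readers unfamiliar with the pip-tools workflow being debated, a minimal sketch; the commands are pip-tools’ documented entry points and the filenames follow its conventions:

```shell
# requirements.in lists only top-level deps; pip-compile pins the full tree.
pip-compile requirements.in        # writes requirements.txt with exact pins
pip-sync requirements.txt          # makes the venv match the lockfile exactly
```

The resolution step is what the comment above says must run in a consistent environment, since the pinned tree can differ by platform and Python version.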
I’m struggling to understand, if you’re already containerized, why your entire dependency management workflow wouldn’t be run in the container. I certainly run
`pip-compile` in the container where I intend to `pip install`, for example.

If the requirements are shared across a large team, not everyone will have the desire, knowledge, or even ability (i.e., there are people using Windows) to do their development in a Docker container.
Docker works on Windows. And given that it provides a consistent environment and allows for the local and production environments to be as identical as possible, I’m a huge fan of just Dockerizing everything from the start. People can still write code in their favorite editor/IDE, just run it in Docker.
Yeah not disagreeing with the value of containerizing things, but Docker works on Windows if your company pays for a Docker Desktop subscription.
requirement files can also contain platform-specific things, so generating
`requirements.txt` from `requirements.in` (via `pip-compile`) requires running inside the same (or a similar) environment. containers to the rescue, and you can also leverage them in your CI environment. see https://peps.python.org/pep-0496/#examples

Their source code is in a git repo, but as a set of patches against releases, which makes contributing to them hard. This is particularly odd given their complaints about the Thunderbird workflow.
I’d love to see something a bit more explicit about why they feel the need to maintain a fork rather than working with upstream. There are some hints, but a lot of them could be interpreted as ‘we’re a bunch of people that no one wants to work with who are convinced that we’re always right’.
None of their bug fixes are particularly compelling for me. The things I’d like to see from Thunderbird are:
There is some context about their need to maintain a fork, a link at the bottom of the FAQ. It looks like this is the outcome of some conflict?
Heh, wow he even publishes the e-mails from the Thunderbird Council containing the accusations against him (not sure he comes out looking as good as he may think he does there). Tl;dr, he was temporarily banned from participation in Thunderbird for being abrasive, broke the conditions of said ban, then got banned permanently from the entire Mozilla community. He now blames cancel culture.
Source: https://betterbird.eu/faq/moz-governance.pdf if you scroll down to the attached Email
I think the patch-based approach is very brittle. And I’m not sure what to think of such accusations, which are directly related and affecting the project.
Part II: https://betterbird.eu/faq/permanent-ban.pdf
I feel like Bazel is soooooo cool in principle, but the thing that would be even more amazing is if someone extracted the sandboxing so that people could like… just write Python build scripts with caching.
Ultimately the sandboxing is magic, the caching is pretty amazing, but Starlark and a lot of Bazel restrictions are built around people having so much stuff that even reading a bunch of config files is costly. But there are loads of people who have “reasonably”-sized codebases…. but have relatively simple needs.
We used Bazel at $FORMERJOB to implement test caching stuff. Cutting the CI bill 60-70% (and of course improving throughput) was amazing! Things like “readme updates cause a full CI run” disappear, but without having to hack your way to that.
Tup might fit what you’re looking for if you don’t mind writing your build scripts with Lua instead of Python.
Nice, looks very similar to a dot-file manager I wrote for myself except actually documented and fleshed out. I may need to look into it to see if I can switch :)
Oh this is very nice! I like how you’re noting using different package managers for tasks. May have to rip this off :P
This is super cool, but I'm not 100% on the use case: you still need to build on an Apple OS since you can't legally cross-compile to macOS, right¹? So why not do the signing/notarisation then?
¹ Xcode and Apple SDKs Agreement:
We have a pool of macOS hardware to run builds/tests, and another pool to perform signing. The latter is a security risk, so it's very tightly controlled and locked down. Both are a PITA to maintain.
Being able to sign on Linux will allow us to reuse the existing signing infrastructure we use for literally every platform other than macOS. It'll be more secure and much less maintenance.
Ahh ok separate build vs sign envs makes sense. Thanks for the insight.
Yes, but you can also build software for macOS that doesn't use Apple's SDKs, for example CLI applications. I do know that in Go, for instance, you can cross-compile to darwin, which works, but you still need to "notarize" your binary before it can run.
I can't stand Apple's code signing. Their tooling seems to be designed only to upload from the Xcode GUI to the Mac App Store, and everything else is left half-assed, buggy, and undocumented. There are tons of things that can go wrong with signing, but the error messages are vague and unhelpful. So I'm happy there's another tool.
My use case: the Apple uploader generates (on my uplink) ~350MB of upstream traffic to upload a <50MB iOS .ipa payload, and it takes over an hour. I would be more than happy with an upload tool that wasn't this crappy in non-high-end environments. So much for sustainability and double standards.
The (Swift) .ipa for testing is ~15MB, by the way, and the previous ObjC version was 500K. This is how you ruin the planet with an upload obesity crisis.
The post by the author (who I believe is a lobste.rs member) ends on a sad note. I don't think the publicity alone caused this crash in enthusiasm; I'm guessing the internet was the internet and people were unkind to them, which I can totally see killing enthusiasm for an endeavor, especially if the spotlight was shone too early.
To the author - I hope, once this 15min of hell has passed, your motivation comes back, and you keep working on it, since there must have been interesting problems in that space you wanted to solve.
Generally I’d agree with this sentiment.
But the author is known for being rather obnoxious and rude towards other projects he disagrees with, and was even banned from lobsters for this reason. So in this case I don’t feel too bad.
He’s also made significant effort - and improvement! - on those fronts.
I have first-hand experience of interacting with him on IRC, as a paying customer with questions about his products. I wish all vendors were as approachable, polite, and direct as he is.
Re. the note on his ban: I too find myself disappointed in the world (of software) at times, as do many of my friends and colleagues. I note, though, that few people take the step of launching their own commercial products as a means of improving it.
commercial and ethical products
They might be opinionated, but they are still free software. That’s really not typical nowadays.
I have noticed some introspection, e.g. https://drewdevault.com/2022/07/09/Fediverse-toxicity.html.
I too have issues dealing with my frustration and textual interactions don’t make it any easier. Without easily accessible peers to discuss things with, it falls to the online community to help people cultivate their opinions.
I am thankful that many people here have the patience.
Agreed; and that’s a large part of the reason I made the switch to sourcehut from GitLab.
In this case you are the one being obnoxious and rude. You don’t know the guy, don’t spread rumors and hate.
I agree that it’s time lobsters moved on from this and stopped bringing up DeVault’s past mistakes.
However, this isn’t a “rumor” or “hate”. They were simply stating a well-known fact about Drew’s aggressiveness and rudeness, one which I’ve also experienced and seen others experience. (To be fair, I’ve noticed good behavior has improved a lot over the past 12 months.)
Jeez, I really look forward to the day when lobsters can discuss Drew’s work before dragging up shit from 1 year ago.
I think it certainly is hate. These comments seem a lot like targeted harassment to me. Most of the commenters don’t seem to have first hand experience with what they are talking about. They also appear whenever drew does something good which just detracts from everything.
The reasons were not made public and it’s bad form to attack someone who can’t respond.
Ah, I am no longer as active on lobste.rs as I used to be and I missed that Drew got banned. I just searched through his history but didn’t find the smoking gun that got him banned. Anyhoo, sad all around.
There’s some context in this thread, though it doesn’t provide an exact reason.
I had a long response to his Wayland rant because I think the generalizations in that post were simply insulting at best and it drove me crazy.
He is a clever engineer, but he has a tendency to invite controversy and alienate people for no reason. After that rant of his, I lost any desire to ever engage with him again or use his products if I can help it, which may be extreme, but after numerous similar exchanges I think it’s unfortunately necessary.
This is both a red herring distracting from the actual issue and untrue. Firstly, I try to prioritize local companies and small businesses over large online retailers. Secondly, it’s possible to use a product (like an iPhone or Android device) and not completely agree with the company.
With Drew, I simply haven’t had a single good interaction with him and don’t have a compelling enough reason to look past that and use his products.
Yeah, I’m surprised and somewhat sad. He’s difficult and abrasive sometimes, but I respect his engineering.
I’m so tired of this sentiment.
Saying you’re tired of another person’s take without giving any reason is a pretty vacuous and unnecessary comment. The button for minimizing threads is there for a reason.
I’m also tired of the sentiment that allows someone to be shitty just because they’re good at solving a problem.
Unfortunately (?) you can’t disallow someone from being shitty.
One can certainly exclude them from a group of friends one cares for.
This comment is inappropriate. I am sure the tone and attitude here are not a fit for the community we are aiming for on lobsters.
The opposite leads to bad engineering decisions.
Health care and related fields have a concept of the quality-adjusted life year, which is used to measure impacts of various treatments, or policies, by assigning a value to both the quantity and quality of life. There are grounds for critiquing the way the concept is used in those fields, but the idea probably ports well to our own field where we could introduce the concept of the quality-adjusted code unit. Let’s call it QALC to mirror QALY for life-years.
The gist of the argument here is that while there are some people who produce an above-average number of QALCs, if they are sufficiently "abrasive" they may well end up driving away other people who would also have produced some number of QALCs. So suppose that a is the number of QALCs produced by such a person, and l is the number lost by their driving away of other people. The argument, then, is that in many cases l > a, or, more simply, that the person's behavior causes a net loss overall, even when taking quality (or "good engineering" or whatever synonym you prefer) into account.
My own anecdotal experience of involvement in various open-source projects is that we often drastically overestimate the "abrasive" person's QALCs and underestimate the QALCs of those who are driven away, making it almost always a net loss to tolerate such behavior.
It's not about "refusing to respect good engineering". It's saying that if we can get only n QALCs from "respecting" (i.e., tolerating the behavior of) this person versus k > n QALCs from not, then "not" is the correct engineering choice because it leads to the greatest number of QALCs.
Or expressed differently: in general, the number of mega-hyper-genius programmers whose contributions are so far beyond what any other person or group of people could ever achieve… rounds to zero, and as such we do not need to entertain claims that some particular person's misbehavior must be tolerated on grounds of their being such a programmer.
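The inequality can be made concrete with toy numbers. These figures are purely illustrative (nothing in the thread supplies real ones): even a contributor producing at triple the average rate is a net loss once they drive away a handful of average contributors.

```python
# Purely illustrative numbers for the l > a argument above.
average_qalc = 100               # QALCs an average contributor produces
a = 3 * average_qalc             # the "abrasive genius" at 3x average
driven_away = 4                  # average contributors who leave or never join
l = driven_away * average_qalc   # QALCs lost to the community

net_effect = a - l               # net change from tolerating the behavior
# l > a here, so net_effect is negative: a net loss overall.
```

The point of the toy calculation is how small driven_away needs to be before the inequality flips, even under a generous multiplier.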
To me the opposite of “I respect his engineering” is “I don’t respect his engineering.”
If the question is whether people like Drew should be banned from discussion, I don’t think he is really that abrasive or disrespectful. Only the admins know what they banned him for, but I haven’t seen anyone claim that it was much worse than other things people can point to.
by what metric
I have no opinions on Drew but you are being astoundingly obnoxious.
I’m 100% OK with bad engineering decisions (within reason) if it means my life is more pleasant. If hanging out with brilliant assholes makes your life more pleasant, then by all means, go for it!
It took me 20 minutes to pay for something on my iPhone today because the app wouldn’t let me scroll down to the “submit” button, and the website wouldn’t either until I looked up how to hide the toolbar on Safari. That doesn’t make my life more pleasant.
Besides, you aren’t forced to hang out with people just because they are allowed to post.
By allowing them to post you allow them to hang out in your and the other users’ brains.
there is no tradeoff
we don’t have to accept abusive or toxic people in our communities
I think this mindset is what has led to the success of the Rust project in such a short span of time. It turns out that having a diverse community of respectful individuals invites more of them and leads to better problem solving.
Are you implying that only difficult and abrasive engineers do good work? Because I have personal experience of the opposite, not to speak of numerous historical accounts.
No.