i liked this in general - but i think that “boring” is a moving target. when i opt for boring tech, i mostly mean well-understood tech. the author clearly likes golang - i do too - but i consider it a Boring choice these days.
also, staffing is a real concern - it’s not practical for a medium/large company to hire a ton of “hackers”. there simply aren’t enough out there - but there is a surplus of react devs who are totally capable of working on your stack.
Agree with boring being a moving target. I’ll always remember being told to choose “boring” Visual Basic over flash-in-the-pan Python. After all, even if we can find a Python programmer today, would anyone even still know what it is in 2010?
To be fair, Ruby was more of a flash in the pan than Python. Nobody would’ve known in advance that Python would continue to rise and rise in popularity like it has, mostly due to lots of scientific/HPC stuff coming out and now the ML stuff being available. And probably nobody would’ve expected Perl’s demise, either.
Ruby is still rapidly gaining popularity and seeing a lot of investment.
And I’m not sure Perl can ever die. But it’s less popular for new people than it used to be I agree.
I feel like this is one of those personal discipline issues: Whenever I try to read most CS papers, I find them utterly impenetrable.
I can usually get through the summary fine, but the actual body of the paper might as well be written in greek.
I’ve even tried the tack of saying “OK, I will stop and go look up each new term as I see it” but then I end up losing the thread and coming back unable to absorb the actual containing thought.
I KNOW this is about me and not about the papers, and I suspect there are lots more like me who understandably aren’t willing to be so open about their foibles :)
In my experience it depends on the topic. In general, the more theoretical or mathematical, the more opaque. I can’t follow most papers on PLT, both because they use ultra-concise notation and I quickly lose track of what ξ represents, and because they’re assuming understanding of books’-worth of concepts like lambda calculus.
On the other hand, I’ve been reading a lot about garbage collectors and memory allocators, and those papers are pretty clear. It may take some careful re-reading to understand a clever algorithm, but it’s worth it. Ditto for papers on data structures like hash tables or tries.
What always gets me about the math-heavy papers is how often they go into great detail explaining the easy stuff (e.g. matrix multiplication) and then skim over the hard stuff (e.g. how to calculate the matrix). Half the CS papers I’ve seen have some variation of the equation
Θ = Σaᵢbᵢ
The values of the vectors a and b will then be buried in prose, often with one vector only being referred to by a proper noun that does not occur in any of the references. What is the presumed audience that can instantly figure out the complicated incantations the authors vaguely allude to, but needs a daily reminder on how to calculate a dot product?
The thing that bugs me the most about the math notation is having to represent everything by a single letter (Roman or Greek). I quickly lose track of what B and ε refer to. Meanwhile, every language since early BASIC has allowed multi-letter variable names.
It’s not you. It’s inherent to the process. Academic papers are written by domain experts for other domain-adjacent experts. Much of the content is often focused on very nuanced details that only confuse a relative outsider. It can take years of full time reading and experience in the domain to be able to read and understand the average (or at least the well written) academic papers from that domain.
Unfortunately, there’s very little incentive for academics to write slightly more accessible expositions. There’s incentive to write layman summaries and have their University’s PR group publish them, but very little incentive to write anything between those two extremes. Unless you count writing textbooks, but those usually aren’t free (as in beer).
I disagree. I’ve read some incredible papers that were rejected from top-tier venues because the authors explained their ideas so clearly that they seemed obvious.
A lot of the time, problems are only hard because they are poorly understood. Once you properly nail down the problem that you’re trying to solve, you constrain the set of solutions such that it becomes obvious. My favourite example of this is the spin lock paper, which started with a simple test-and-set lock and went through the cache coherency overhead and proposed increasingly good refinements. The thing I loved about this paper was that I came up with each of their approaches before they explained it, because their analysis of the flaws in each previous approach made it obvious. It would be very hard to publish something like that now because reviewers would say it was too obvious.
They’re also usually horribly written. As someone elsewhere in the thread pointed out, papers are written to score points in a game, not to convey information. (Unscalable 2-column TeX PDF should have been our first clue…) For many articles, my reward for figuring out how to evade the paywall and plowing through the jargon and greek letters is to find obvious bugs in the code/pseudo-code.
All of which is to say: You’re not alone, far from it. And I don’t think you’re missing much. These days I skip most papers and just read blog articles.
arXiv papers can be read as HTML if you replace the X with a 5: https://ar5iv.labs.arxiv.org/html/2010.00029
Their repo has a page that answered several of the questions I had (e.g. how does this compare to Dhall).
It’s okay but still not very thorough. I would’ve expected it to show a code comparison: they show some complex Dhall code but no corresponding Nickel code.
This seems kind of dumb. $88,000 is the annual salary of a junior software engineer in the US. If it will take more than 1/4 of the time of a senior engineer (whose fully loaded cost runs several times a junior’s salary) to make monitoring work as well as it does now without Datadog, that’s probably a net loss. Certainly you’ll pay the opportunity cost of spending engineering resources on redoing all your monitoring.
I’m surprised by your stats of $88k for a junior developer. Do you have a source for that? I can believe that might be the case in California or New York, but it feels off for a national average. Our junior devs make less than half that. Heck, I’m only making a little over half that. My boss’s boss’s boss isn’t making $88k and I’m not entirely sure her boss is making that much.
Don’t get me wrong, I know we’re underpaid, but, out of the last three places I interviewed at, no one was offering more than $60k for senior devs. And one of those was in New York.
I made $65k my first year out of school working a government-ish job at a university research center 20 years ago. $45k for a dev in the US in 2023 is WILDLY underpaid.
Yes, junior devs here in Minnesota (low cost of living area) are routinely getting offers for $100k or so. There’s a ton of data on levels.fyi.
Cool as this is, NIST is not actually a trustworthy authority for secure coding. This is the same NIST that at least once allowed backdoors to be put into its recommended crypto algorithms.
Call me when the US Department of Transportation recommends it; when they certify code, people die if it fucks up.
I have friends at NIST. While the encryption team has made mistakes, they are still exceptionally trustworthy in their reference work, such as their famous peanut butter standard reference material: https://shop.nist.gov/ccrz__ProductDetails?sku=2387&cclcl=en_US
Looks like there’s been some progress towards automotive requirements: https://ferrous-systems.com/blog/the-ferrocene-language-specification-is-here/
There has! Ferrous Systems are doing God’s work, the lack of a specification is a major hurdle in any field with accountability requirements.
(Independent of why accountability is necessary, if we’re being cynical :-P).
Fair enough. But this is still helpful in making the case for using Rust as an alternative to C, C++ and Ada for functional safety.
Oh, certainly. It’s good publicity! Maybe even some funding. But not, on its own, going to make engineering in Rust more trustworthy.
I am skeptical of things I read on the internet. I am even more skeptical of articles that seem to be written specifically to draw a large number of views. I admit that I clicked on the link, I admit that I read the first few paragraphs. At that point I stopped reading.
You’re very right that this post doesn’t do a very good job when it comes to the topic of how widespread the problem is; it’s based entirely on anecdata and conjecture.
However, if you’re looking for a more rigorous treatment of the topic, David Graeber’s book Bullshit Jobs is a great read: https://theanarchistlibrary.org/library/david-graeber-bullshit-jobs Graeber’s analysis shows this to be a much more widespread problem than your experience might suggest, probably because you’ve been able to avoid getting hired by the kind of people who perpetuate these patterns.
Thank you very much for that link. I started to read it, but I have to ask, since you have read it: it reads a bit like “I know what jobs should exist (manufacturing jobs) and these are not manufacturing jobs, so they are bullshit jobs.” Everything he lists seems like a quite reasonable job. I mean it may feel right to say these jobs shouldn’t exist, but thankfully we’re not a command economy and are insulated from that particular kind of hubris. Someone is paying for that service/good, which is why it exists.
It’s a good question, and I can see why you would think that by reading the first few bits. There is a section of the book that addresses the question of defining it, because it obviously is very subjective. The key is that he defines it according to the judgement of the person doing the job:
Final Working Definition: a bullshit job is a form of paid employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence even though, as part of the conditions of employment, the employee feels obliged to pretend that this is not the case.
Someone is paying for that service/good, which is why it exists.
This is true in some sense, but one of the key points of the book is that many jobs exist specifically in order to boost the prestige of executives and managers. In certain kinds of organizations, having a large number of underlings is a source of political power, in a way that is completely disconnected from those people doing productive work. As you might expect, organizations with that particular dysfunction tend to be larger than they otherwise would be, meaning these jobs are also more numerous than you would expect.
Someone is paying for that service/good, which is why it exists.
Not quite. Some services exist solely to seek rent from the economy. Landlords, tax preparers, insurance firms, payday loan sharks, timeshare resellers, multi-level marketers, and more; these lines of work only exist to leech money without providing useful goods or services in return. Worse, many of them provide useless goods and services!
Landlords: Pay money so I don’t have to worry about snow removal, mowing and repairing the roof? Not a bad deal. Or do you mean bad landlords? That requires moving to a less crowded place.
These things are often not done by the landlord, they are done by the landlord’s agent. The landlord takes money from you, keeps some, and uses the rest to pay someone to perform these tasks. Their income derives solely from having capital, not from doing any work. This is pretty much the purest sense of ‘rent seeking’ as an economics term.
Someone has something that other people want and they earn a living off providing that. Where’s the problem? What kind of world would it be if you could not do that? If people just took your stuff from you? Not a world I want to live in.
I think you need to read a lot more about economics before we can usefully have this conversation. I did not make a value judgement at all in my post, I attempted to explain a concept in economics. Your reply reads like you didn’t read the link that @Corbin posted at all and just want to have a political argument. This is not the right forum for that argument.
these lines of work only exist to leech money without providing useful goods or services in return.
Sounds like a value judgement to me.
My post was in response to the post that said that. You made a reply to my reply. I assumed that your post had more meaning than “Landlords who ask for rent are rent seeking.” but perhaps I was mistaken and that was indeed all you wrote and you had no opinion on the post I was originally responding to, re: Landlords (and other) behaviors.
https://lobste.rs/s/mbgpma/i_ve_been_employed_tech_for_years_i_ve#c_ykdxfv
Landlords don’t provide housing supply. Indeed, landlords have the ability to decrease housing supply by refusing to lease or sublet.
You really ought to take an introductory economics course; housing obeys most rules of economics, as a necessary (normal) good, and so any rent-seeking behavior is going to do the normal thing: raise prices, decrease supply, distort market.
I’m trying to think if I would like to live in a world where people are prohibited or heavily regulated from employing capital. Taking your example: If I own a house and do not live in it, do you envision a world where I must sell the house? Would I be barred from renting it? Would it stop at houses or also extend to the tools I own? Would your world stop Home Depot from renting out tools? What about my money? Could I rent it out?
if you could not do that? If people just took your stuff from you?
You’re making quite a leap here. “No rent-seeking” does not automatically translate to a chaotic frenzy where people can just take your stuff. But to answer your question, one possible world without landlords is a world where everyone is housed.
one possible world without landlords is a world where everyone is housed.
Not sure how that follows. Landlords aren’t in the game to keep houses empty. They have an enormous incentive to rent out the home and house someone.
I would rather predict that a world where people can’t rent out housing they own would lead to housing being kept empty as owners wait out housing downturns to sell units they no longer need. Or a thriving black market in rentals.
Or do you envision a world where the state has a monopoly on housing?
If you are joking, well, you have your joke. If you are serious, I think I can’t learn any more from you, but thank you for the conversation.
Payday loans are outright illegal in some parts of the USA. MLMs are heavily regulated federally; pyramid schemes are illegal, and lotteries are generally either illegal, state-run, or heavily regulated.
A world without insurance is pretty simple; just give public oversight to management of risk funds. Then, all members of society can directly have their risks automatically hedged at all times. The only remaining needs for insurance are business-to-business and can be reformulated as service contracts. (If this sounds silly, consider the contrapositive: would you want a world where e.g. food stamps are privatized into “hunger insurance”? If everybody gets sick and dies, why do we need “health insurance” or “life insurance”?)
Will we have nationalized insurance?
I am skeptical of a political entity being able to run such insurance at a scale covering 350 million people in different markets. In theory it’s great because you have a giant pool, so the premiums (taxes) needed for the fund will go down, but management is key in such a large organization. I don’t know that politically appointed organizations are great at such management.
For medical insurance specifically: Britain’s NHS and Canada’s system are of course nearby examples for health. It is not clear to me that this is worth the upheaval. I have never heard convincing arguments that innovation in health will continue to occur under a nationalized system. I suspect, like the NHS, there will be stagnation at best.
My core concern with any nationalized system is that it’s a monopoly. Monopolies are very hard to change or hold accountable, and they have zero incentive for efficiency.
For medical insurance specifically: Britain’s NHS and Canada’s system are of course nearby examples for health.
The UK (and maybe Canada, I’m not familiar with the system) have nationalized health providers. This is separate from national insurance, like in Switzerland, where everyone must have medical insurance (and the state subsidizes it for very poor people) but the actual services are a mix of state and private care.
It makes a huge amount of sense from a political economy standpoint to ensure that basic health coverage is spread among all citizens. The US system, which combines substandard coverage with enormous costs, is a paradise for rent-seekers and hell for everyone else.
I’m not familiar with the Swiss system; however, the system you describe sounds like the system in MA: you can have any insurance you want, but you have to have insurance (you pay a fine if you don’t), and the state will subsidize your insurance if you can’t pay. This is a state mandate and a means-based subsidy. The insurance carriers are still independent, private entities.
Do you envision such a state mandate + needs-based subsidy for each category that I listed, or just for health?
Or is it a Medicare-for-all (single-payer) kind of system?
I was just pointing out that mandatory health insurance can coexist with a mix of health providers. The entire system does not have to be run by one provider.
Ok, so you are a proponent of a single-payer system (like Medicare). There are bargaining advantages (for the people), but it is not immediately clear to me what the long-term effects of such a single-payer system are. What is the motivation for innovation, for example?
I suppose one motivation is for a bigger piece of the pie: offer better services for the same fixed price everyone gets, or offer cheaper services. The problem starts to arise when you can’t pick providers (insurance companies already restrict the providers they work with, a national insurance corporation will most certainly do the same).
I personally believe it is an idea worth exploring, perhaps by gradually increasing coverage (say state by state, or income group by group). Just not clear to me what happens long term.
Someone is paying for that service/good, which is why it exists.
Is that really the same as a service/good being valuable though?
Once again, I am thankful I don’t live in a command economy where individuals get to decide what I should find valuable.
I’ve never worked a job where my employer didn’t dictate what was valuable to them and I’ve never lived in a democratic economy that wasn’t organized top-down. Sounds nice though.
Really. What fresh hell do you live in? I live in the United States, where I find that anyone with energy and imagination can make a go of it. I lived in India in the 1990s and even there, despite the heavy government involvement in everything I found people making their own independent ways. Of course, the people got tired of that and got rid of a lot of the red tape in the 2000s.
I also live in the US, where I find that people can make a go of it as long as it’s profitable. In my estimation it’s the prioritization of profit over actual value to society that drives the proliferation of bullshit jobs (because people in charge value prestige, among other things), as well as more dire things like pollution and carbon emissions.
Good questions. There are lots of ways to measure this and not one single source for all of it. We would need to rely on the experts in various fields to gather and interpret the data for us. But I would argue that some useful metrics would include global temperature (i.e. mitigating climate change as much as possible), life expectancy (currently in decline in the US), wealth equality (in sharp decline in the US for the last few decades), infant mortality (rising in the US), regional biodiversity (in decline pretty much everywhere), pollution levels, criminal recidivism rates, racial equity in the education/medical/housing/prison/etc. systems, the gender pay gap…I’m sure there are a lot of other things but these are just off the top of my head.
In terms of who gets to decide, we all do–or should. And of course “how” is a very large question with no concise answer–there is a lot of valid discussion to be had about the relevance and nuances of any given metric. The point is that profitability is far and away the #1 driver of which problems people have the resources to work on (unless they want to be relegated to the non-profit sector, which has its own problems). But for the first time in history, I think we have the logistical and technological capacity to provide for people’s basic needs so that we can start to tackle these various aspects of quality of life more directly, and focus less on this indirection of aligning with the profit-motive. That’s the purpose of the distinction.
Good questions. There are lots of ways to measure this and not one single source for all of it. We would need to rely on the experts in various fields to gather and interpret the data for us. But I would argue that some useful metrics would include global temperature (i.e. mitigating climate change as much as possible), life expectancy (currently in decline in the US), wealth equality (in sharp decline in the US for the last few decades), infant mortality (rising in the US), regional biodiversity (in decline pretty much everywhere), pollution levels, criminal recidivism rates, racial equity in the education/medical/housing/prison/etc. systems, the gender pay gap…I’m sure there are a lot of other things but these are just off the top of my head.
These are your values. Excellent. They do not have to be everyone’s values.
In terms of who gets to decide, we all do–or should
We do so currently with currency. It’s the quickest and most honest way to vote.
No. As I said, there is a level of indirection where if you care about solving any of these you first have to align your mission with the profit motive. The idea of profitable solutions to things like climate change and wealth inequality is laughable. Are you seriously arguing that the profit motive does not dilute any worthwhile endeavors?
The profit motive is shaped by legislation and by fashions, which determine what is profitable and not.
For example, in MA, a certain amount of our tax money is going into putting solar panels on people’s roofs. People who would otherwise not use solar are employing private companies to install solar capacity.
It’s not clear to me that this is a good solution to anything, but it is what the people have decided. I may not think it’s worthwhile (as opposed to a large solar power plant in one of our deserts, for example, piping power further north) but I don’t get to, alone, decide that.
What is not worthwhile to you is worthwhile to someone else. I think it is important that we all remember this.
I am aware of how legislation shapes markets under capitalism. The other side of the equation is how markets (or rather, billionaires who currently control the markets) shape legislation. In fact, a Princeton study found zero correlation between what the majority of Americans support and what actually gets signed into law. This is a logical and unsurprising consequence of “voting with your dollar”: a lot of people get no votes while a small minority get almost all of them.
So yeah. I completely agree that my values are not everyone’s values, and that maybe most people don’t care about biodiversity, for example, but that is beside the point. The point is that most people’s values, whatever they are, are not able to be fully expressed in the current order.
I’m 100% serious about the state, by the way, although I have no illusions that it’s at all likely to happen in our lifetimes! Thanks to you as well. Even if you decided not to learn anything, it was a fun exercise. I’ll leave you with this quote from Ursula K. Le Guin:
We live in capitalism. Its power seems inescapable. So did the divine right of kings.
I am certain that you do live in such a “command” economy. I’m going to assume that you live in a place where there is at least one person who sells glass windows. That person would find it valuable for someone to go around your neighbourhood and throw rocks through all the windows. The rock tosser gets a salary and the glass maker sells more of their product. Both individuals find this transaction beneficial, but I strongly suspect that this would be illegal where you live.
As a society, we intentionally ban profitable actions with negative externalities (e.g. hitman, arsonist, thief). However, our legislature moves slowly and new such occupations (e.g. paid fake Yelp reviews) pop up quickly. We cannot yet call these jobs criminal, but they are bullshit.
I got a different definition of bullshit jobs that made more sense to me earlier up this thread which defines “bullshit jobs” as a catchy term for sinecures.
The type of jobs you define as “bullshit” I would call “shady”. That sweet spot where it’s clear harm is being done to many and benefit to the few but legislation and enforcement haven’t caught up yet.
Some thoughts:
with a local tool like pass
Sadly it’s never explained what pass stands for.
I love the idea of a standardized way to provide login credentials to websites, so you hopefully don’t need browser/password-manager-injected scripts anymore, making it much more secure to use them. It will also hopefully make all the login-detection games irrelevant; currently it can be pretty annoying to get KeePassXC to recognize certain websites. Especially when they first show your username, and then switch the input to password via JS (why..).
In the face of new APIs and standards, the process of attempting to manage secrets with an external manager will become exceedingly challenging
Only if there is no standardized API to supply the browser with passwords from external password managers. I do not want to rely on the password storage of my browser; I want an external tool: cross-platform, standardized format, cross-OS and -device, completely open source. I store way more things in there than just user/password. I also do not want any kind of forced browser login just to synchronize credentials. I can synchronize my kdbx file regardless of browsers, devices or operating systems. I can back up the file and use any software that can read/write KDBX (there are at least 6). And I do not need the cloud for any of that (syncthing).
The article mentions “normal users” multiple times, but seems content to throw out everything else just to ease the login flow of “normal users”. I think you don’t need to remove the possibility of power users just to allow an admittedly much better typical login experience for the default flow: Allow external credential providers. Don’t criminalize people who do not log in via browser-supplied login systems. Think about offline-capable ways to store these passwords, and please acknowledge the fact that family members will share accounts, if only because you have a Netflix family subscription. Make it hard to prevent people from logging into services without a browser storing anything; otherwise you are creating a second coming of 2FA SMS tokens: if you do not want to give the service your phone number, you cannot use it. This also reminds me of the problems for people losing their phone, being homeless, or using shared devices (family, children, ..).
For 2FA there is another good reason to exist, apart from being something better than your typical password: brute-force prevention. I don’t think many services actually have good login brute-force detection, but a TOTP can help out (you don’t need to rate limit much to make it impossible to find the correct TOTP in time, while you also won’t have to block the whole IP, something you don’t want to do). I agree that storing it in the same password manager makes the TOTP useless, but the fact that I’m using it is mostly because some kind of service requires me to and is not capable of using FIDO. I seriously hope that I eventually won’t have to fill out captchas anymore after I’ve entered user + password + TOTP.
Many of the tools users rely on to manage all their secrets aren’t frequently audited or if they are, any security assessment of their stack isn’t being published.
I’m pretty sure that’s true for browsers too; I want to see the audit of Firefox’s 27.3 million LOC of C, C++, Rust, and JS. The author also gives another reason why you maybe don’t want to use the browser password storage:
The entire browser stack is under constant attack because it has effectively become the new OS we all run.
If you are going to use a password manager, there are only two options: 1Password and Bitwarden
Depends on what you need; very wrong for me. I’m not using any of these (cloud) recommendations.
Like it or not you are gonna start to rely on the browser password manager a lot soon, so might as well get started now.
Sounds like the built-in login for Microsoft in Edge and Google in Chrome. The sarcastic part of me is waiting for one-browser-per-login-provider.
I want you to go to this website and I want you to type in your parents’ password
I thought we were trying to make it better for “normal users”, and now you ask me to do the one scammy thing we’re trying to prevent our family from doing?
“Pass” is likely a reference to https://www.passwordstore.org/ Out of the various password managers that I’ve used, it’s the one that I’ve had the best luck with.
My desk isn’t worth mentioning but I’m inappropriately proud of my desktop.
I’m impressed by all the environmental data you’ve got on your desktop (humidity, pressure, visibility, etc.). Are you just pulling that from your local weather station? If so, what API do you use?
All the weather credit goes to Igor Chubin for developing wttr.in. If you’ve never encountered it before, running
curl wttr.in
will give you the local weather, beautifully formatted in your terminal. To get everything in convenient JSON format, it’s just
curl "wttr.in/?format=j1"
That is the most comprehensive (free) weather information collection that I’ve encountered. Added bonus: it also does the geolocation for me and gets an approximate latitude and longitude. Close enough to be useful, wrong enough that I can post the screenshot without doxxing myself.
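If you want to script against it, here’s a minimal Python sketch; the current_condition and temp_C keys are what I remember from the j1 payload, so double-check them against your own curl output:

import json
import urllib.request

# Pull the j1 (JSON) report; wttr.in geolocates the request for us.
with urllib.request.urlopen("https://wttr.in/?format=j1") as resp:
    report = json.load(resp)

# current_condition is a one-element list of "right now" readings.
current = report["current_condition"][0]
print(f"{current['temp_C']}°C, {current['humidity']}% humidity")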
My sister certainly enjoyed playing games on my family’s shared computers (first an Apple IIGS, later a 486 PC), though not the same games that my brothers played. I wasn’t much into playing games myself, mostly because I didn’t discover the good text adventures (e.g. Infocom) until much later. The fact that, of the 4 of us, only I was interested in programming merely means that it’s something that only a minority of computer users get into.
I was somewhat disappointed that the author didn’t have an answer for the question “Why did computer science see a downturn in female applicants during the home computer boom?”. Anyone else have insights on this?
Why did computer science see a downturn in female applicants during the home computer boom? Why was home computing such a boys’ club?
A study from 2018 summarized in this Atlantic article asks:
So what explains the tendency for nations that have traditionally less gender equality to have more women in science and technology than their gender-progressive counterparts do?
And suggests that:
“Countries with the highest gender equality tend to be welfare states,” they write, “with a high level of social security.” Meanwhile, less gender-equal countries tend to also have less social support for people who, for example, find themselves unemployed. Thus, the authors suggest, girls in those countries might be more inclined to choose STEM professions because they offer a more certain financial future than, say, painting or writing.
It’s just one study, but it’s a good reminder that people exercise agency.
Yes: there was no downturn unless you only look at gender ratios and ignore absolute numbers.
Around the early 1980s, as with the dotcom boom, there was an increase in interest from both men and women in the field. Then, men’s interest remained steady, while women’s interest dropped back. This is something that feminists ignore because it ruins their narrative of men driving women out. But just go look up the curves, and you will see this effect clear as day in the bachelor’s degrees. Because there was both an absolute and a relative shift, the relative graph is meaningless.
Computing was biased towards women because computers were used for rote clerical work and women needed them for their jobs. When home computers took off, along with self-directed hacking, programming, and more, men discovered they loved it, and women did not.
I am not a sociologist or psychologist or any other kind of -ist so I don’t know if this is accurate or meaningful, but I read somewhere that home computers turned computer science into a largely solitary activity (whereas it was originally a more communal one) and that, for whatever reason, that was less appealing to some demographics.
I’ve also read that once CS started paying really well, women were discouraged from pursuing it, as a form of structural sexism to reserve higher paying jobs for men.
Again, I am not knowledgeable in this sort of thing so don’t take this as something true or accurate. These are just two explanations that I’ve seen proposed.
I can contribute a small bit of second-hand oral history to this matter. My father used to sell computer systems to mid-sized institutions (e.g. regional banks, county governments) in the ’70s and ’80s. He told me how many institutions followed the same chain of “logic”: 1) the computer equipment is far too expensive to risk damaging; 2) static discharge from nylon stockings can fry a terminal; 3) the dress code requires women to wear nylons; therefore, no women in the computer room.
The idea that #3 could be removed with a stroke of the pen from management was never discussed. Similarly, a salesman could walk into the server room in a polyester leisure suit, fry a terminal, and it was just a call to support. Meanwhile, a female accountant who entered the room to grab her printout and left without incident was fired for insubordination.
The nylons thing wasn’t the only example. One bank did have a woman for their lead system operator. She had a decade of experience and had personally worked with Grace Hopper. She had three assistants, each fresh out of college with no coding experience or ability. All three assistants were paid fifty percent more than she was. The rationale was that she couldn’t be paid more than her husband, who was on the bank’s maintenance staff. After she left the firm for a higher-paying position elsewhere, the CEO fired her husband (for failing to keep her at the bank) and made multiple comments to the board about not allowing women in management positions on account of their being flighty and disloyal.
I guess that, if I have any thesis to share, it’s that the push to gender computing as a male activity might have occurred around the time of the home computer revolution, but that corporate computing was already headed in that direction without the consumer market.
Edit: Fixed typos
Before the PC era there were taboos about nearly anyone entering the computer room. The computer operators were widely called a “priesthood”. But you didn’t need to enter to use the computer, because you could use a keypunch machine or, later, a timeshare terminal. So I don’t think the nylons factor alone would have kept women out, except as operators. (And back then a lot of women did work on computers doing data entry.)
I was somewhat disappointed that the author didn’t have an answer for the question “Why did computer science see a downturn in female applicants during the home computer boom?”. Anyone else have insights on this?
Yeah, I don’t think a bunch of advertisements will give you an accurate depiction of the market.
I use Nix (or rather, home-manager + Nix) to manage my own computer. NixOS scares me too much (I have tried to do the install on a VM a couple of times and don’t really have a high success rate of getting it to boot).
I tried to get Nix to be useful for a Python-based thing at work, but at the end of the day Docker and docker-compose did what I needed and were more along the beaten path (with VS Code integration in particular being very powerful). If someone were able to make a “turn this Dockerfile into some nix expressions” thing, that would be very fun and cool, though.
We got a looooot of value out of using Bazel for testing and the like, and I feel like the learning curve is easier (imagine that lol). I think if you have certain kinds of tech stacks it can be an easy onboarding process.
I’m not sure whether I’ve understood your use case properly, but it is possible to write a Nix module that will take an existing Docker image and run it on your server. I use it for some packages that I’ve been too lazy to convert into proper nix expressions. For example, the FreshRSS aggregator was as simple as:
virtualisation.oci-containers.containers = {
freshrss = {
image = "linuxserver/freshrss";
ports = [ "8124:80" ];
autoStart = true;
};
};
This project feels very exciting for multiple projects that I’ve been working on. If I might ask a couple of questions:
How unsupported would it be for me to write a backend that sent something besides emails? For example, a backend that sent and received private messages through the REST API of some site. Is this idea merely misguided, or is the idea that the messages are e-mails a fundamental invariant of the library that would cause the code to fail if the backend didn’t fully support the relevant RFCs?
Is there any support (or a roadmap to support) IMAP authentication via OAuth? My employer’s IMAP server recently switched to only supporting OAuth-based authentication, which has killed 90% of the software I had been using for e-mail. If it’s not on the roadmap, would it at least be a welcome patch?
Are GUI builders really that useful? I did some GUI tools and a GUI app in the past and I’ve never understood what makes a GUI builder a better tool than just writing widget code directly in the editor. Assuming that the programmer creates a non-trivial application that contains GUI that is actually based on some state (shown/hidden panes, splitters, animation), is it really an advantage to have some windows in the GUI Builder, and some directly in the code?
New GUI-oriented frameworks actually deliberately skip having a GUI builder (e.g. Flutter), and design their tools around writing the GUI in the code, and using the hot-reload pattern to instantly apply the code to a graphical output. So I’m wondering: are GUI builders a pattern that is worth investing in?
I used to feel the same way, but I’ve softened a bit. I’m now finding that putting the GUI purely in code has a bit of a bathtub curve to it. Like you, I find that GUIs that escape “Hello World” levels of complexity are often easier to handle in code than in a builder, since you hit a level of interactivity that escapes the model of the builder. However, as the GUI continues to grow, there’s often a growing amount of boilerplate code for defining the GUI. Code that would be better expressed in a DSL. The breakthrough moment for me was realising that the native file format of the GUI builder can be that DSL. Now, depending on the builder and its format (some of which are truly execrable), you can get quite a bit of complexity before you reach the point where the builder is worthwhile. However, in the best case, reading the diffs of the builder files while doing a code review can be quicker and clearer than reading the diffs in the code, simply because you’re using a language designed for the job.
One big advantage is iteration cycles, especially if you’re new to a GUI library. I’ve recently been writing a GUI app with GTK and rust. While I’m not new to GTK overall, I am new to GTK4 and some of the GTK3 components I’m relying on. Waiting for rust linking between each iteration has been painful enough that I have considered switching to Glade. Unfortunately Glade’s support for the components seems perpetually less than full coverage. Whenever I try it I can only make about 80% of the GUI using the builder and then I have a mix of both approaches in my project, which can be confusing in its own way.
I encountered the same slow linking as you, and fwiw I had great success leaning on mold as my linker: my change-rebuild cycle is less than a second.
My windows builds are still slow (even with LLVM’s linker), but I’m tempted to try this technique posted by Robert Krahn. Maybe it’ll make your situation much better.
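If you want to try mold without editing any build config, it also ships a wrapper mode; assuming a Linux setup, something like

mold -run cargo build

just runs the given command with mold swapped in as the linker.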
then it can help you make your code more concise and readable.
# Reuse result of "func" without splitting the code into multiple lines
result = [y := func(x), y**2, y**3]
I don’t get it. Is this just to avoid writing one extra line, like this?
y = func(x)
result = [y, y**2, y**3]
The article explicitly addresses this objection:
You might say, “I can just add y = func(x) before the list declaration and I don’t need the walrus!”, you can, but that’s one extra, unnecessary line of code and at first glance - without knowing that func(x) is super slow - it might not be clear why the extra y variable needs to exist.
The walrus version makes it clear the y only belongs to that statement, whereas the “one extra line” version pollutes the local scope, and makes your intention less clear. For short functions this won’t matter much, but the argument is sound.
for i in range(3):
    print(i)
print(i)  # prints 2
Local scope gets polluted all the damn time in Python. I’m not saying that’s desirable, but it is part of the language, and that’s a worse argument than most.
y with the walrus, as written in the article, pollutes local scope anyway, by the by, as the result of func(x). No intentionality is lost and you’ve done literally the same damn thing.
Do what you can’t do and you’ve got my attention. (For my earlier post, using it within an any, at least, takes advantage of short-circuiting (though implicit!))
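Something like this sketch, where re and the log lines are just stand-in examples:

import re

pattern = re.compile(r"ERROR: (.*)")
lines = ["ok", "ERROR: disk full", "ok"]

# any() stops at the first truthy result, and PEP 572 makes the
# walrus bind m in the enclosing scope, so the match survives.
if any((m := pattern.match(line)) for line in lines):
    print(m.group(1))  # disk full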
y with the walrus, as written in the article, pollutes local scope anyway, by the by
You are correct, I should have tested myself before saying that. So yeah, the benefits look much weaker than I had supposed.
Yeah, I find the walrus operator redundant in almost every case I’ve seen it used. If I’m feeling generous, I’ll give partial credit for the loop-and-a-half and the short-circuiting any behavior— but it was wrong to add to the language and I’ve banned it from any Python I’m in charge of.
Edit: Also, the accumulator pattern as written is horrendous. Use fold (or, yes, itertools, fine) as McCarthy intended.
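For instance, here’s a quick sketch of both spellings for a running total (plain Python, nothing project-specific):

from functools import reduce
from itertools import accumulate

values = [1, 2, 3, 4]
print(list(accumulate(values)))  # running totals: [1, 3, 6, 10]
print(reduce(lambda acc, x: acc + x, values))  # plain fold: 10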
You can make the same argument with only minor modification of the examples against many things that now are considered not just acceptable but idiomatic Python.
For example, you always could just add the extra line to manually close a file handle instead of doing with open(some_file) as file_handle – so should we ban the with statement?
You could always implement coroutines with a bit more manual work via generators and the yield statement – should we ban async and await?
You could always add the extra lines to construct an exception object with more context – should we ban raise from?
Hopefully you see the pattern here. Why is this bit of syntax suddenly such a clear hard line for some people when all the previous bits of syntax weren’t?
I once had a coworker who felt that functions were an unnecessary abstraction created by architecture astronauts. Everything people did with functions could be accomplished with GOTO. In languages without a GOTO statement, a “professional” programmer would write a giant while loop containing a series of if statements. Those if statements compared against a line_number variable to see if the current line of code should be executed. The line_number was incremented at the end of the loop. You could then implement GOTO simply by assigning a new value to line_number. He argued that the resulting code was much more readable than having everything broken up into functions, since you could always see what the code was doing.
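In Python, his scheme would have looked something like this (my reconstruction, not his actual code):

# Each "line" of the program is an if-branch; line_number is a
# hand-rolled instruction pointer.
line_number = 1
count = 0
while line_number != 99:  # 99 acts as END
    if line_number == 1:
        count += 1
    elif line_number == 2:
        print(count)
    elif line_number == 3:
        if count < 3:
            line_number = 1  # GOTO 1
            continue
        line_number = 99  # GOTO END
        continue
    line_number += 1  # default: fall through to the next "line"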
“If the language won’t let us set the CPU’s IP register, we’ll just make our own in memory!”
Yikes.
with is a dedicated statement. It does two things, but only those, no more. You can’t add complexity by writing it inside another complex structure like a list comprehension. The walrus can be put everywhere; it’s vastly different IMHO.
I also find that in result = [y, y**2, y**3] it’s much clearer to parse what’s going on at a quick glance, while you need to think more when coming up on the walrus.
Even clearer might be result = [y**1, y**2, y**3].
Was there any further work on this? I’m abuzz with a dozen ways I could use this, but there’s a ton of implementation details that I don’t understand.
There is a distributed adaptable microkernel called Off++ which implements Boxes: https://lsub.org/off/
I began with an empty configuration file and only added the bits I needed. My ~/.zshrc is 200 lines of code and loading 4 plugins.
I’m sorry … 200 freaking lines after cutting out the excess? Sorry, I’m not experienced with zsh – is it just completely useless without huge amounts of configuration?
Without seeing the 200 lines, I would assume that it’s for completions, prompts, and other niceties. After I dipped my toes into the zsh config world, I moved right over to fish where all those nice “modern” features were enabled by default.
But no. Zsh is perfectly usable with the defaults. I believe it’s the default on macOS now.
This is what stops me from adopting anything but vscode. Endless configuration. At least vscode will sync my config
I’m curious about this, because it’s the opposite of my experience. We’ve always been a bring-your-own-editor firm and there’s quite a variety of editors in use (Emacs, CLion, Vim, Notepad++). With just a couple of commands (nothing fancy, just some cmake and conan), everyone can start writing code.
Recently, we’ve had a couple of developers come on board who preferred VS. They lost significant amounts of time because VS could not produce a working build without significant levels of configuration. They’d have questions (e.g. how to make a debug build) and I could tell them the cmake flag to pass, but neither of us knew how to get VS to do that. One developer eventually wrote a powershell script that configured VS Code to produce a working build. They’d have to blow the directory away once a month and run the script again after VS stopped producing builds.
The thing is, I want to understand VS Code. It’s obviously a different way of looking at development than anything else out there and I’ve never regretted learning a new way of thinking. I want to experience the efficiency that you are getting out of it. However, I’ve never really found a good explanation of the VS way of working. Every guide I’ve seen has been 99% about learning the concept of coding and not really about VS as an IDE. I’ve tried just using it as a text editor (open a file, edit the code, compile in the terminal), but this wasn’t any different than Notepad++. I know that it has so much more to offer, but I’ve just never found the key to unlocking it.
you say VS in some places and VS Code in other places…. they’re two entirely different products. (and i find them both pretty hard to use, though i do kinda like the VS debugger - i just drag exes into it and thus sidestep all the project configs etc).
VS is the full IDE. VS Code is a javascript scriptable editor that can host other pieces.
My vim config lives in ~/.config, which is a clone of a git repo, so setting it up on a new machine just requires a git clone.
Zsh is completely usable with the default configuration, as is vim, tmux, etc. 200 lines sounds a lot but if you have a look at my conf it’s only a few aliases/functions, some zsh options and some plugins being loaded. It’s actually very “minimal” in comparison to some zsh frameworks.
https://github.com/aymericbeaumet/dotfiles/blob/master/.zshrc
Whenever I see Facebook, LinkedIn, or Google (these days), etc. talking about “the customer” I’m reminded of this scene from Mars Attacks:
Or the classic Twilight Zone episode “To Serve Man”
Distressingly accurate. Remember: If you don’t have to pay for something, then you are not their customer, you are their product.
Also remember: Plenty of companies are still happy to charge you for the privilege of being their product. The promise of cable TV was that there wouldn’t be advertising, since the subscriber was the customer.
If you do have to pay for something, then you are not their (only) customer, AND you are also a premium product that has money.
What about the Linux Kernel? I have (unfortunately) never done anything to further progression in regards to Kernel development.
After a fractured wrist in my younger years, I had to switch to one of these. I’ve stuck with the model ever since and have even converted a few other users. I’m always surprised that there aren’t more knock-offs
My vague memory is that there was some kind of patent claim that scared companies away from making knock-offs.
I use a couple of Elecom EX-G now, which is similar to the Logitech M57*, but comes in a cabled model, which I prefer. They even take replacement balls that fit those similar Logitech devices, so at least Elecom are not scared away.
I’m currently using a wired Perixx myself, which is also very similar to the Logitech. I’ll keep Elecom in mind if/when the Perixx wears out.
I’ve run into versions of this far more often than I should. My favourite anecdote along these lines was my mother trying to cancel her cable account. The company stated that only my father could cancel the account, since it was in his name. My mother responded that my father had passed away, but the company stated that they needed a death certificate signed by the coroner to confirm this. This was arguably reasonable, but also a nuisance, as the local coroner had an eight-month backlog at the time.
Eventually, my mother obtained my father’s death certificate from the coroner and returned to the cable company office. At this point, the branch manager was brought in. He stated that he could not be sure that this death certificate was for my father and not just another man with the same name. They said that my father would need to confirm, in person, that that was his death certificate.
This is not Kolkata, India, by any chance? Actually, when I was closing down my parents’ house, all I needed were the death certificates. No one surrenders a telephone in Kolkata fraudulently, I guess.
In any case, you could ask if the cable company would like to supply free cable. When they ask why, you could tell them, because you will stop paying.
Surely the solution is to simply stop paying, and watch as they try to bill a dead man? If it was a shared account, start rejecting the charges and maintain a copy of them refusing to cancel the account.
I had similar issues halting our alarm service when Covid started - only my wife’s name was on the account - but they just accepted the cancellation with her signature (which wasn’t necessary to start off with). At least for an alarm service, though, you could make a strong argument for why such approval is necessary - but then it was trivially handled by me faxing (sigh) a random piece of paper I’d filled out, thus defeating the purpose.
I’m hoping your mother wasn’t charged during that period. (You did eventually get the account canceled, right?)
They said that my father would need to confirm, in person, that that was his death certificate.
In the long run, just hiring a necromancer will be cheaper than the cable bills.
If this really does happen, I hope Google does the right thing and spins off the Chromium project into a nonprofit, or a regulatory body forces them to do so. Having the world’s largest advertising business in complete control of the web is a negative for society as a whole.
If this really does happen, I hope Google does the right thing
Google, unsurprisingly, does the right thing by Google. They have a track record of supporting open standards just as long as it takes them to establish a nice captive userbase for a walled garden - then they ditch them. C.f. RSS and XMPP.
Yeah, they might force people to go through google to find a website. Most tech-illiterate (and even some tech-literate) people are already doing this - type “lobste.rs” in the Google search bar and click the top result.
For those who weren’t around for it, back in 2010, ReadWriteWeb posted an article about Facebook Connect which was briefly the top Google result for “Facebook Login”. The comments were then filled with hundreds of users who thought that the blog was just Facebook’s latest redesign and were angry that they couldn’t find their photos and messages.
I really really wish people would stop trusting Google and would start treating breaking up Google’s browser market dominance as an emergency threat that takes priority over other pet causes.
Suppose that tomorrow Apple is forced to allow any browser engine that wants to be on iOS. The day after, every single Google property starts blocking all other browsers and displaying “To continue, please use Chrome, the fastest and most secure browser…”. And then they tell Mozilla they will not be renewing their funding deal when it expires.
And that’s it. That’s the end of the web browser market, everything becomes a variant of Chrome, and Google probably just forces people onto actual Chrome because it’s simplest. If you then suddenly decide it might be time to start going after Google’s browser monopoly, there will not be a market to come back to years down the line if that action eventually succeeds, because the market will have died basically on day one of the full monopoly and there no longer are competing browser engines or vendors who can revive it, because rebooting that market is a multi-year project at that point.
But instead of recognizing this, people decide that, say, Apple’s iOS market share is the biggest threat and the thing that urgently has to be broken up, when it’s at best a Pyrrhic victory that will just hand permanent control of the web to Google. iOS will still be there to go after once Google is broken up.
I also really don’t get the people who think Apple is as large of a threat as Google here. It’s also not ideal, but their market share is small, and it’s trivial to avoid Apple products. Less so with Google’s.
The situation with browser engines makes it even more sad - mobile Safari might be the only real thing preventing a total Chrome monopoly.
I’m conflicted, because while Google presents a theoretical threat, Apple poses a current threat.
Sure, Google could do that and maybe there wouldn’t be enough antitrust / developer blowback to make them walk it back, but it seems unlikely. Plus, there’s actually nothing stopping them from doing this now with iOS Chrome: it’s still detectable by their servers, and it still (presumably) sends them user data despite the different browser engine. Sure, they’d love if they could get their own browser engine and FLOC shit etc. on iOS, but they’d still be happy if all iOS users were using iOS chrome even if it’s just reskinned Safari + tracking. Since they haven’t locked non-Chrome iOS users out of their webapps, presumably they feel that they can’t.
Meanwhile, Apple’s BS is anticompetitive and hurts me now: they specifically gimp their browser to make it difficult to compete with the $100/year + 30% take of the app store. And they don’t just not implement things like Web Bluetooth and the other weird stuff coming out of Google these days that massively increases the fingerprinting and attack surface of the browser. They don’t implement completely reasonable parts of the spec like certain CSS Grid properties and screen orientation! Safari is the only browser I need to think about separately when writing code for the web, all the other browsers (even Firefox!) just work for the reasonable subset of things I use.
I know some people pine for the days when the web was just plain-text black-on-white blog posts without Javascript, but like it or not the web is the only non-walled-garden, graphical app delivery platform we have. I write and distribute free games and the web is the only platform I can do so on without installing gigabytes of crappy vendor SDKs, buying a Mac, paying Apple’s exorbitant platform fees, and struggling with arbitrary app store takedowns. I care a lot more about Apple’s anticompetitive practices here than Google’s Chrome-based theoretical ones.
You realize that Google degrading experience on their web properties for users of non-Chrome browsers is not some sort of way-out-there-maybe-one-day hypothetical, right? It’s a thing that has actually literally happened in the past. And on top of that, the documents that came out about AMP and what they were actually doing with it should really get people to sit up and pay attention and stop being all “well maybe they might one day” — they are already actively working on how to abuse their monopoly positions.
And the reason why Google would benefit from actual factual Chrome rather than merely Safari-wearing-Chrome-UI is precisely all the adware and surveillance and intrusive data collection stuff that you acknowledge exists and that Apple and Mozilla have consistently refused to implement in their browsers but Google desperately wants and will ram through whether it gets standardized or not. So no, they are not happy with the status quo and would not be happy with it if they had a chance to put real actual Chrome on iOS.
I do not care how much you personally hate Apple, or how many times someone says “But Safari is the new IE!” The highest-priority threat right now is Google, and until that threat is dealt with you’re going to have to live with Safari not being as good as you’d like it to be, which is by far the lesser evil here. Like I said: break up Google’s monopoly first, and then you can go after iOS browser engines to your heart’s delight.
This is exactly how Google got so big in the first place - the tech nerds love all the shiny toys Google gives them and think the companies that can’t keep up are the evil ones. Google gave us better search results (and more ads), more storage space on e-mail (and more opportunities for tracking), more capable browsers (that also spy more), better analytics (and more information about web users, even those that don’t use Google stuff), easier “office” collaboration tools (and more lock-in to the Google ecosystem), etc etc.
I really really wish people would stop trusting Google and would start treating breaking up Google’s browser market dominance as an emergency.
Google learned the embrace, extend, and extinguish lesson very well from Microsoft, and they did it with a lot of free software. I’m wondering when they’re going to do an “extinguish” on Linux with their new Fucksya kernel or however that’s supposed to be spelled.
Really all of the tech conglomerates are merely surveillance organs with a side hustle. What people used to say at Google when I worked there was: “We’re the world’s biggest ad company which just happens to do some tech”. I started to exclude Apple from my blanket statement about tech conglomerates, and then I remembered that they randomly scan content on your devices now. So yeah, a pit of surveillance vipers, all of ’em.
Google may well be the largest threat long term, but we’re in a target-rich environment. All of GAFAM pose a clear and present danger, but I’m totally on board with starting with Google, considering just how much power they hold. Aside from Chrom(ium) being a browser monopoly, Google is also where the latest round of HTTP protocols came from. When I worked there, SPDY and QUIC were internal Google projects. They begat HTTP/2 and HTTP/3 respectively.
Google and FB are clearly the portions of “GAFAM” that are built on surveillance, and I say this as another ex-G engineer. Conflating them as being in any way similar to the rest (or “FAANG” because Netflix I think????) is at best disingenuous.
Apple goes to great lengths to detail that they don’t scan or monetize user data, so I’m not sure what you’re talking about?
https://www.theregister.com/2021/08/05/apple_csam_scanning/
Apple and MS are the least bad of the big 5 I think. I have less of a bone to pick with them than I do with Facebook and Google. I used to put Amazon in the MS and Apple camp, claiming, “All they wanna do is sell actual goods.” Yeah they’re terrible for other reasons, but for a long time I said that at least they weren’t a surveillance/ad company.
Then Alexa came out and I quickly changed my tune. My family has at least three of those things. There’s a “no Alexa” rule in my bedroom, but even so, the thing can hear voices back here when the door is open.
I don’t know why people lump Netflix in with the big 5, but they’re not on my radar.
The CSAM thing, which they didn’t end up shipping, also didn’t give them any access to any of your photos. The bulk of the concern was governments exploiting it.
There was never any question that Apple gained no information about user content.
So again, claiming Apple is scanning user content in any way similar to Google’s business model is BS.
I also have never been able to work out why Netflix became one of the five.
It’s not about whether they have access to the content. Read back a few comments: “surveillance organ”. An agent of espionage need not directly perform any espionage.
Pamela Fox demonstrates in this rare display of developer humility that one needn’t be a power user of their editor to get stuff done. I appreciate that, especially in contrast to the evasions of past guests who can’t seem to admit to having any practices they would personally cop to as bad.
Setting that aside, though, I’m trying to understand the advantage of (neo)vi(m) keybindings over the conventions of GUI text editing contexts. For example, on a Mac, command+arrow takes you to the beginning or end of a line (horizontally) or document (vertically). Option lets you move by word. Doing any of that and adding the shift key selects text. IIRC, Windows has similar equivalents. Combine those keyboard conventions with the command palette and multiple-select patterns popularized by Sublime Text and you have a pretty efficient workflow with a much shallower learning curve. I can imagine if a good portion of your day is spent in an environment without a GUI, it would make sense to get pretty good at vim or emacs, but I would genuinely like to know, what is the case for GUI users to learn vim deeply?
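For comparison, here is a rough, simplified mapping of those macOS conventions onto stock vim normal-mode motions (vim splits some of these further, e.g. w/b/e for word motions):

    cmd+left / cmd+right          0 / $     start / end of line
    cmd+up / cmd+down             gg / G    start / end of document
    option+left / option+right    b / w     previous / next word
    hold shift while moving       press v first, then move (visual selection)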
I’m sure there’s no particularly good argument in favor of it, and I no longer actually recommend it to anyone, but having typed `vimtutor` one day 18 years ago, I’m too far gone to learn anything else. If you never go through `vimtutor` and that’s not the path your life takes, you’ll probably end up happier and better-adjusted than I am.

I was a GUI user, but switched to vim(likes) for accessibility reasons.
So for me it’s not about what I can do, but how. I can do everything in my entire setup and never have to press more than one key per hand at a time, and rarely have to hold a modifier key at all.
My editor, web browser, window manager, email client, etc etc all work like that.
But yeah, I would never really recommend any of this to anyone who doesn’t need it.
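To give a rough sense of what modifier-free editing looks like, here is a minimal sketch using stock vim normal-mode keys; this is generic vim, not the commenter’s actual setup:

    dd       delete the current line (no ctrl/shift chord needed)
    yy       copy ("yank") the current line
    p        paste it below the cursor
    /term    search forward ("term" is a placeholder), then n hops between matches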
I think I can relate to that. Started using touch input and key-navigation because I had to reduce the wrist pain from using a mouse.
Another vote for accessibility here. I have cerebral palsy and was developing wrist pain when using modifier keys. I switched to a vim layout and made extensive customisations to eliminate the need for modifier keys. This is purely anecdotal, but the problems have not recurred since I made these changes.
I really need to do a full post on this, but some of the specific vim motions I miss in vscode:

- `vi{char}` selects everything inside a quote/parenthesis/curly braces/sentence.
- `"Ay` appends your selection to the `a` copy-register, which makes it easy to go through and collect a bunch of different little things to paste all at once.
- `zf` lets you create new ad hoc folds over arbitrary parts of the file.
- `guu` lowercases the entire line.

Each one of those is only a small boost to my productivity, but when I have 200 small boosts it adds up. That said, I avoid vim emulation in other editors. It’s easier for me to switch between two sets of muscle memory if they’re very different.
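To make the register trick concrete, here is a minimal sketch of the collect-then-paste workflow, assuming the cursor sits inside the parens of an invented line like result = compute(alpha, beta):

    vi(    visually select alpha, beta
    "Ay    append the selection to register a (uppercase A appends; "ay would overwrite)
    ...    move elsewhere and repeat vi( + "Ay as needed
    "ap    paste everything collected, all at once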
I’d love to read it. You have a great blog. Based on this summary alone, I’m not sure how or if I would use these features, but perhaps if I could see how you use them, in other words, in what context, that would be very interesting.
Have you tried reporting those? I got some vi behaviours fixed that way. `vi{char}` may be easy to add since I remember using `di{char}` and that worked fine.
Vim lets you use ctrl+arrow to jump by words as well. Useful in insert mode when you do not want to go back to the command mode for some reason. People should be able to discover that just by muscle memory, so it’s somewhat awkward that she did not stumble upon it. Yikes.
Vim’s [v]isual mode lets you move around your cursor to prepare a selection (instead of holding shift all the time) and then [d]elete it, [y]ank to [p]aste elsewhere. Among other possibilities. She would probably like it, since you can e.g. search using slash to locate where you want to place the end of the selection.
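A minimal sketch of that flow, with foo as a placeholder search term:

    v       enter visual mode at the cursor
    /foo    extend the selection up to the next match for foo (confirm with Enter)
    y       yank the selection (or d to delete it)
    p       paste it somewhere else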
Or one uses Pos 1, End, Ctrl, etc. when not on an Apple device.
I think they are just good, proper editors, of which there are actually rather few, GUI or not. Most editors are slow, resource hungry, inflexible, hard or impossible to configure, lack features one wants, or just haven’t been designed well. Sometimes they suffer from instability, as in new versions (minor or major) breaking things.
That’s a general theme with software. It also applies to editors of course and when you have something reliable then you stick with it. For many people that’s something like vim or emacs.
It’s often the first editor that’s “good enough”. And if you at any time SSH into something, chances are you get to learn vim or nvi in the end. Even if nowadays some systems have other defaults, they tend to not be really great.
And once you know it you end up using it locally now and then, then it becomes a habit.
I think it’s easy to overestimate how much choice there is when you have a few requirements for editors.