I sort of disagree about ChatGPT. I used Claude to workshop a resume for a job I was offered. I didn’t ask Claude to write it though because LLMs are still worse writers than me. I used it as a writing coach to bounce ideas off of. Here are some of my prompts:
Summarize this resume and make a hiring recommendation for the attached job position.
What are the downsides?
How is this version? Better or worse?
Do you see any opportunities to reword this paragraph to improve it, or is it fine as it is? Answer step by step:
How about the end of the paragraph?
Basically, Claude just gave me some areas to think about when I was writing for myself.
‘… than I.’ ;)
I nominative Moonchild for an award …
You mean “a award”.
If that’s a joke/bait, disregard and laugh at me… but if not, it does depend on your accent, because I say “an award” as well. Actually, I can’t imagine an accent where it would be “a award”, but I’m not very imaginative.
It’s a joke.
I think I find “than me” somewhat more acceptable than “than I”, actually, although neither form strikes me (native speaker of American English) as outright incorrect. Assuming that it’s ungrammatical for pronominal complements of the preposition “than” to be in the objective case is a bad analysis of how English grammar actually works.
The rule is that “as” and “than” are followed by a nominative (because of the following verb, which can be omitted), and “like” (being a preposition) is followed by an objective.
However, most reasonable linguists will point out that “than me” makes sense when not followed by a verb, just like “than I” makes sense when considering the following verb (even if it is only implicit).
The one that is really dying out fast in English is the subjunctive form. Were it to disappear, not many tears would be shed.
https://www.merriam-webster.com/grammar/than-what-follows-it-and-why 😁
Isn’t this a big distinction though? There’s a difference between using an LLM to help you refine what you’re going to write, and having the LLM write it for you.
I think the article covered this when it said to use spelling and grammar checkers but make sure that the writing was yours. There’s nothing wrong with using (human or mechanical) feedback tools to improve your writing, but make sure that the originality comes from you.
I find https://github.com/rhysd/actionlint, which catches a heck of a lot of common errors, invaluable.
Google Keep (will migrate off, someday…), Google Tasks (ditto), plain text files in Dropbox
Apple has a straightforward reason to do this – they don’t care about the $99 fee, but by keeping out hobby apps, people are more likely to download very expensive commercial apps of which Apple gets a 30% cut. For example, I needed a GPS-based speedometer recently, and (seeing no free option) ended up with one that charges $10 a month! Probably thousands of people have done that. On Android these types of very simple hobbyist-level apps tend to be free.
Though good luck finding one that isn’t riddled with ads and asks for a bunch of inappropriate permissions.
The F-Droid app store caters specifically to this. (Yes, the Google store is revolting)
Took me a few seconds.
That’s not on Apple.
Yes, that’s the point.
Perhaps I’m lucky, but I’ve actually had pretty good luck finding them. Putting “open source” into the search bar helps, and if that fails there’s often a side-loadable one on GitHub.
My guess is that the actual rationale is a bit less cynical. By keeping out hobby apps — which aren’t subject to review — Apple is (trying to) optimize for end-user experience. And it’s no secret that Apple optimizes for end-user experience over basically everything else.
I can’t really blame them for taking this position. Apple devices are better and more “luxurious” than Android devices in the market, and I think this kind of stuff is a major reason why.
I don’t understand. Who is the end user when a developer is trying to improve their own experience? There’s absolutely no distribution going on in OP.
That’s true, but the number of iOS users that use privately-developed apps which aren’t available on the App Store is statistically zero. Even among those users, the only ones that Apple cares about are those who will eventually publish their app to the App Store, and for those users the relevant restrictions are non-issues. I guess?
Don’t forget about enterprise users, but I think they’re kinda not what you’re actually referring to here :)
(If you’re a BigCo Apple will give you an enterprise profile your users can put on your phone to run privately built apps by BigCo. This is how they did things when I was at Amazon.)
FYI: The definition of ‘BigCo’ is ‘more than 100 employees’ (from their docs). That puts it out of reach of small businesses but you don’t need to be Amazon-big for it to work.
Unfortunately, iOS is pretty bad for BYOD enterprise use because there’s no isolation mechanism between the work and personal worlds. On Android, you can set up a work profile that runs in a different namespace, has a different encryption key for the filesystem encryption, and is isolated from any personal data (similarly, personal apps can’t access work data). iOS doesn’t have any mechanism like that, so there’s no way for a user to prevent work apps from accessing personal data, and things like Intune run with sufficient privilege that they can access all of your personal data on iOS.
I think you’re out of date on iOS capabilities. See https://www.apple.com/business/docs/resources/Managing_Devices_and_Corporate_Data.pdf#page4
Thanks. I vaguely remembered reading about that, but Intune didn’t support it and required full access. Has that improved?
I’m investigating this myself (need to set up BYOD at a new job) and haven’t checked on Intune yet much beyond an acronym level (e.g., it knows about Apple Business Manager which makes enrollment easyish).
The iOS and Android approaches are quite different—Android is kind of like having two phones in one box, whereas iOS separates the data but not the apps. Microsoft puts a layer on top that requires app support but gets you finer-grained control over data transfer across the boundary (like, can I copy/paste between personal and work apps).
Whoa boy, folks with strong feelings are REALLY not gonna love this take :)
But I agree with you, I do think uniformity of UX is a sizable reason for the $99 fee. It’s not so much “Hate hobbyists” as “Making it easy to sideload means Jane Sixpack will do so, brick her device, and flame Apple”.
How many people have ever sued Google because a sideloaded Android app bricked their device?
I’d be curious to see actual data on that.
Open Google Maps and it will automatically show you your speed.
https://support.google.com/maps/answer/9356324?hl=en
The option mentioned in the support FAQ you linked doesn’t appear to exist in Google Maps on iOS.
You could’ve bought a cheap Android device instead and it would’ve paid for itself in a few months.
I just searched the App Store for ‘Speedometer’ and about 5 out of the top ~15 results don’t show anything about costing money, though perhaps they show ads.
This one looks simple and says it has no ads: https://apps.apple.com/gb/app/quick-speedometer/id1564657242
Did I find something different from what you were looking for?
The cost and difficulty grow exponentially as coverage approaches 100%. IMHO it’s an extreme goal whose cost outweighs the benefits (unless you’re developing a Mars lander). It’s a badge of honor, but if you can settle on 95% coverage you get the same benefits, and get to skip the few bits that are not worth testing.
I thought so too, until I worked to get 100% branch coverage on a small-ish system at work (~12kSLOC Python). Some findings:
It wasn’t all that difficult. Nothing like exponentially more difficult the closer we got to 100%. More like “let’s check Stack Overflow for some tips before writing a test” difficult.
It detected several new bugs. Nothing major (none had shown up in prod), but the sort of thing that could bite at any time. This was surprising at first, but in hindsight, of course the code which is hard to test probably has more bugs.
A surprising benefit was that there’s no more asking whether a test is missing when something goes wrong in prod. Either the code is covered but not actually tested (a hot tip here is to mutation test your code at least once in a while to detect misleading coverage numbers), or a test isn’t actually verifying what it says it’s supposed to verify.
Definitely doing more of this in future code bases!
I really should get around to that blog post…
Can mutation testing be automated? What I was thinking (with my limited imagination) was checking out the code, making some edits that should make the code incorrect, and checking that tests fail - is that it?
Yes, that’s basically it, and there are tools that do just that for you.
Sure, I’ve so far only used mutmut for Python, but there are probably tools out there for all major languages.
Yeah, the error messages look like “Inverting this if condition does not cause any tests to fail”
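For the curious, here is roughly the loop such a tool automates, as a minimal hand-rolled sketch in Python. This is not mutmut’s actual mechanism (real tools generate many mutants via AST rewriting and cache results); mylib.py and the use of pytest are assumptions for illustration:

```python
# One crude "mutant": weaken a comparison operator in the module under
# test, rerun the suite, and expect at least one failure. If the suite
# still passes, the mutant "survived" and coverage there is misleading.
import subprocess
from pathlib import Path

src = Path("mylib.py")                     # hypothetical module under test
original = src.read_text()
mutant = original.replace("<=", "<", 1)    # the single mutation

if mutant == original:
    print("no <= in the source; try mutating a different operator")
else:
    src.write_text(mutant)
    try:
        result = subprocess.run(["pytest", "-q"])
        if result.returncode == 0:
            print("mutant survived: no test noticed the change")
        else:
            print("mutant killed: the suite caught it")
    finally:
        src.write_text(original)           # always restore the original source
```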
One problem with a gate on anything less than 100% is that it then becomes trivial to game by manipulating your code (eg add lines of otherwise dead code that get covered by a test to juke the calculation: a 100-line module with 8 untested lines sits at 92% and fails a 95% gate, but pad it with 80 trivially covered lines and it passes at roughly 96% with nothing new tested). And it becomes more tempting to do so as the difficulty of the “right way” goes up.
It would be nice to think that we would all respect the intent of such rules and not do something like that… but my experience in systems with such a gate has been that it’s pretty common that eventually someone decides that their situation is important or urgent enough to do just enough to get by for now. (Likely with the intent, though seldom the follow through, of coming back to fix it later.) The end result is worse production code than you might have had without the requirement, and a statistic that isn’t really a meaningful representation of what it claims to be.
Can’t you just do the same thing with test coverage though? At least for Ruby and simplecov, it only looks at whether or not a given line is traversed via some path in test code. You can make arbitrary tests to “cover” those lines without a meaningful test.
I mean ultimately you can game anything… it’s more about not creating/increasing perverse incentives to do things that are actively worse than the default.
I think your scenario is somewhat mitigated in that sense by:
Only/mainly affecting test code, not the code under test.
Arguably it isn’t even worse, just not good. Such a non-meaningful but covering test could be at least a step towards a meaningful one; it might be as easy as adding an assertion (see the sketch below).
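To make that concrete, a hypothetical Python example (the function and module names are invented; the same idea applies to simplecov in Ruby):

```python
# This test executes every line of total_price, so line coverage reports
# 100% for it -- but it asserts nothing, so it can never fail.
from mylib import total_price  # hypothetical function under test

def test_total_price_covered_but_meaningless():
    total_price([10, 20], discount=0.5)  # result ignored, no assert

# One assertion turns it into a meaningful test (and one that a
# mutation tester can no longer sneak past):
def test_total_price():
    assert total_price([10, 20], discount=0.5) == 15.0
```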
Tests have a cost for writing, managing, and maintaining. In return they have a benefit in catching errors. The usual rule is to add up your known costs of shipping that error, multiply by six for actual costs (that it is always six is a mystery of life), and target testing up to that cost to prevent the error. For example, with made-up numbers: if shipping a bug has a known cost of $1,000, assume its true cost is about 6 × $1,000 = $6,000, and spend up to that much on testing to prevent it.
Difficulty grows exponentially in testing. The really good testers write one truly annoying test a week. They figure out that your six character Unicode input field can overflow the 4K buffer you allocated, that your target acquisition can deadlock, or that system restarts mean your clients will ping in lockstep. None of these are one line fixes. Truly excellent testers are far more valuable than good engineers.
You touch upon the two keystones!
For me the interesting part of the second point is that those cases would not be caught by the 100% line/branch coverage. They require someone to think of system level edge cases.
This is basically what led me to write this post: https://lobste.rs/s/saaiyd/most_tests_should_be_generated. I’ve worked in codebases that mandated pretty high code coverage percentages, and it’s definitely effective for getting rid of silly errors. But bugs inevitably still creep through, and I was really curious why.
The short answer is: correctness is defined in terms of all data states, not just all code branches. And the number of data states in any program is practically infinite.
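That is where generated, property-based tests help: describe the data space and let the framework sample it, instead of enumerating branches by hand. A minimal sketch using Python’s Hypothesis library, with a hypothetical encode/decode pair standing in for real code:

```python
# Hypothesis generates hundreds of inputs (empty strings, odd Unicode,
# very long strings...) and shrinks any failure down to a minimal
# counterexample.
from hypothesis import given, strategies as st

from mylib import decode, encode  # hypothetical round-trip pair

@given(st.text())
def test_roundtrip(s):
    assert decode(encode(s)) == s
```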
My thinking is that within 5 years we will have LLMs that get you to 95% and us humans will do the last 5%.
Well, they specifically changed the name to make it not actually Perl 6 anymore :) Maybe “Raku used to be called Perl 6” would be more accurate.
and maybe to get rid of the negative associations with Perl.
I don’t think the broad Perl community, including Wall, had negative associations with Perl :) If anything, I’d say the opposite, some people wanted to get rid of negative associations with Perl 6.
Perl 6 development took ages to reach its initial release (2000-2015), and by that time Python and Ruby had already eaten its cake. Anyone interested in upgrading their Perl 5 codebase was migrating to those two instead of Perl 6. Many developers realized that Perl 6 was just not going to gain traction and wanted to continue developing Perl 5. One of the biggest reasons for the name change was to not sabotage their efforts - https://github.com/Raku/problem-solving/issues/81#issue-478241764
The whole Python 2 -> 3 ordeal probably also played a role in their thinking.
I tried to find out a little about why Perl was so popular once, but isn’t anymore.
One could argue that it’s precisely Perl 6 / Raku that killed it. Instead of gradually improving Perl 5, they focused all their efforts on an entirely new language and in the meantime people moved on to Python and Ruby.
So, Perl is dead, right?
Not entirely though. Perl 5 had enough momentum that it comes preinstalled with almost every UNIX-like system even today. Looking at GitHub, it’s probably more actively developed than Rakudo. Its module repository, CPAN, claims to contain over 200k modules, while the Raku modules directory shows around 2k.
These days I wouldn’t start a new project in either Perl 5 or Raku, but if I had to choose, I’d probably bet on Perl, because of its maturity and wider adoption.
Edit: it’s also worth noting that some time ago they announced Perl 7, based on, and backwards compatible with Perl 5. For now they seem to be extremely careful about when it happens.
One could argue that it’s precisely Perl 6 / Raku that killed it. Instead of gradually improving Perl 5, they focused all their efforts on an entirely new language and in the meantime people moved on to Python and Ruby.
“They” didn’t though. That’s where a lot of the shouting came from: Perl 5 development never stopped. The groups of people working on p5 and p6/raku have always been about 90-95% disjoint. There was a bit of a lull in Perl 5 development around 2002-2006 which probably did have a bit to do with p6, but continuous gradual improvement and modernization has been exactly the strategy since 5.10, and there’s been a new release with performance improvements, language additions, and deprecations every single year since 2010, which is not a bad cadence at all, considering.
What really pissed certain people off was the fact that that effort existed at all! “Hey, why are you wasting your time improving that old thing instead of coming over here and helping us?” If you followed Python 2->3 it’s a very similar story except with a different outcome. Any maintenance effort, and especially any feature improvement, on the old version was seen as a distraction from the development of the new one — and, far worse, it was seen as encouraging people not to migrate. In Python land, the 3 crowd won and the guys with the “2.8” shirts got spat on and publicly denounced in standards-ese. In Perl land… well, arguably no one won. Perl 6 didn’t take over the way Python 3 did, but it succeeded in sowing enough doubt to contribute to Perl 5’s decline in adoption.
The name change was one of those compromises that made no one happy (least of all Larry), but it served as an acknowledgment of that parallel development. People used to ask “oh, you’re using Perl 5? When are you going to upgrade to 6?” The Raku name makes clear that that’s in the same class of question as “oh, you’re using Lisp? When are you going to upgrade to Scheme?”
Not entirely though. Perl 5 had enough momentum that it comes preinstalled with almost every UNIX-like system even today.
This is a big one, and I’ve wanted to learn Perl 5 for that exact reason. I work with a bunch of really stripped down embedded Linux systems, and the only scripting language runtimes on those systems are usually Busybox’s shell … and Perl 5.
May I suggest bookmarking this for when you come back to that ambition? It’s a talk I gave about dependency-bundling tools and some other relevant techniques for people in exactly the place you are.
I just read that entire GitHub issue discussion and it was a wild ride. Strongly reminiscent of a medium-sized Wikipedia internal discussion. The 2019 vintage means it’s long enough ago to feel distinct in terms of tone, but recent enough to feel familiar. Thanks for the link.
Now I feel tempted to go and try out one or both of Perl 5 and Raku.
Subway Tooter ride or die. Its look and feel is… brutally functional, but I love it. Sorta like Reddit is Fun (is fun).
Exactly what I’ve been looking for! Fantastic. Will try using it now.
I fully believe that no one should let themselves be intimidated out of learning or pursuing excellence but I sincerely and in good faith disagree with this:
I gave a talk in 2016 where my most important point was that people erroneously believe that you have to be an expert to write an RFC or change Rust, but that I wasn’t, and you don’t need to be one either.
I really want people who suggest changes to languages to be experts in that language. It’s frustrating when changes are made to a language without consideration of the full gamut of cases that may benefit or be affected by the change. If the Rust community sincerely believes what I have quoted, then long term it will eventually corrupt itself.
I really want people who suggest changes to languages to be experts in that language. It’s frustrating when changes are made to a language without consideration of the full gamut of cases that may benefit or be affected by the change. If the Rust community sincerely believes what I have quoted, then long term it will eventually corrupt itself.
How about this as a compromise: anyone at all can suggest changes, but changes are only accepted after lots and lots of thought by (many) people who are experts?
I’m not trying just to quibble. I like the idea of a community open to good ideas from anyone, but also resistant to making changes too quickly or without sufficient consideration.
anyone at all can suggest changes, but changes are only accepted after lots and lots of thought by (many) people who are experts?
Sure, but that’s just superficial pandering to the novices in the community. We should just be honest. Novices are at most qualified to express frustrations with a language, only experts are qualified to suggest and make changes to address those frustrations. This is similar to the product design philosophy that “customers don’t know what they want.” It’s up to skilled professionals to design solutions to the problems that users have.
Sure, but that’s just superficial pandering to the novices in the community.
And how exactly does the community become self-sustaining in the production of a steady stream of “experts” and “skilled professionals”, if “novices” are treated as you suggest?
Well, people don’t become experts in a language by proposing poorly informed and misguided changes anyway. ;)
They become experts by writing that language extensively, designing and maintaining libraries, contributing to the implementation… newbies can certainly participate in the discussion of proposals, but it’s really important to know the limits of one’s expertise. “This will break my workflow if accepted” is useful, much-needed feedback that anyone can give. So is “this fails to account for my use case”. People can also get better at seeing flaws in proposals by doing exactly that.
Experts also must not dismiss proposals “because reasons”. The problem is that a rejection justification can take more time to write than a misguided proposal (a generalization of Brandolini’s law). Giving every newcomer the same amount of the time of current experts is thus counterproductive — there must be filters to weed out obviously bad proposals before they get to the level where the discussion is whether to actually add it to the language or not.
I’m not saying that newcomers cannot make language feature proposals. They (this includes me, in all areas where I’m a newcomer) should just be prepared to face the fact that their proposals may be completely misguided, and that they may need to learn a lot more just to understand why.
Giving every newcomer the same amount of the time of current experts is thus counterproductive
Plenty of projects allow newcomers to propose things. They aren’t guaranteed traction. They aren’t guaranteed detailed responses from “experts”. But they can still propose things, which is the difference between what I want, and what the parent comment suggested: “only experts are qualified to suggest and make changes”.
All novices should be treated with respect. Their concerns should be listened to and taken seriously. Their opinions on how the language should change should be taken with a grain of salt.
I don’t think allowing novices to take a driver’s seat in your language’s design is the best or only way for novices to become skilled professionals in your language. They become skilled through experience with the language.
And how exactly does the community become self-sustaining in the production of a steady stream of “experts” and “skilled professionals”, if “novices” are treated as you suggest?
Why are experts, novices, and skilled professionals in quotes? Do you not believe that those categories exist? Do you not believe that there is a difference in understanding of a language between people who have different amounts of experience with that language?
The whole point of this article is that different people tend to be experts and novices all at once, in different fields. You can be someone with 15 years of experience in memory management, lang dev and compiler dev, and still be a novice in Rust. Similarly, you can be an expert in Rust and not be an expert in memory management, lang dev, compiler dev, whatever.
All novices should be treated with respect. Their concerns should be listened to and taken seriously. Their opinions on how the language should change should be taken with a grain of salt.
Your original proposal was that novices should only be allowed to “express frustrations”. And I don’t see how your attempt to walk that back in this comment is any more respectful than that:
I don’t think allowing novices to take a driver’s seat in your language’s design is the best or only way for novices to become skilled professionals in your language.
Luckily there’s a wide range of additional options between putting “novices” in “a driver’s seat”, and not allowing them to make proposals at all.
Why are experts, novices, and skilled professionals in quotes? Do you not believe that those categories exist?
I am skeptical of your usage of these categories.
There’s a story often told from the Django community of a conference where several of the “core developers” (a group which has since been abolished in favor of a different approach to technical decision-making) were present at a post-conference sprint, which had the usual pile of “good first ticket” things set up for new people to work on. But one newcomer went straight for a truly gnarly problem deep in the guts of the ORM, and came up with a patch. It took two “experts” reviewing it together for much of an afternoon to understand that it was the correct fix. The newcomer ended up with a commit bit later on.
If Django had been run on your model, as I understand it, that never could have happened, because the “newcomer” would not have been taken seriously enough by the “experts”.
There are many cases when what seems like a simple fix doesn’t fix the root cause, fails to account for corner cases, or has non-obvious implications. It can certainly take a lot of time for very genuine experts to evaluate a solution.
Also, I don’t think anyone in this thread advocates for judging people solely by their time in that particular project. Experts tend to be relatively good at recognizing experts from related fields.
Also, I don’t think anyone in this thread advocates for judging people solely by their time in that particular project.
That appeared to be exactly the criterion advocated above – that not having enough time with a particular language should be an absolute bar to being allowed to suggest changes/improvements to the language.
Experts tend to be relatively good at recognizing experts from related fields.
The universally-loathed state of tech interviewing and hiring is a strong counterexample to this claim.
The standard reply until perhaps 10 years ago was that novices should be humble and take the heat until they are no longer novices. It’s possible that this is not the right answer, but it has served and keeps serving well in many fields.
We obviously need a more stringent way of selecting for competence, though. Otherwise, we wouldn’t keep asking why software is still slow despite improvements in hardware, with rants like this one being offered as answers. That link is uncomfortable for me to read because I worry that I’ll be unmasked as one of those impostors. But the incident with Casey Muratori and the Windows Terminal team, brought up by the author of that comment, shows that incompetence really is rampant in the industry.
We obviously need a more stringent way of selecting for competence,
Maybe take a cue from the embedded world? There are very many competent people there, doing great engineering. This doesn’t have the prestige among non-technical folks that working for FAANG has.
Another thing is that high salaries don’t always attract the most capable people, and even when they do, they may not motivate the capable people to do their best work. Because maybe the work they’re doing is actually BS, they know it, and they’re just punching a clock.
If I keep rambling along these lines, I’m going to get really off into the weeds, so I’ll stop now.
Otherwise, we wouldn’t keep asking why software is still slow despite improvements in hardware, with rants like this one being offered as answers.
Anyone who claims that “software is slow” because of incompetence is revealing their lack of competence in understanding why “software is slow”, which tends to be a multifaceted issue.
Becoming not-a-novice is generally a matter of self-identity. People are humble and take the heat until they decide that they’re not novices. In my experience, the more competent the person is, the more likely they are to self-identify as a novice, so you’re selecting for people who are confident in their ability, not people with actual ability. It works in other fields only where there is some rigorous external evaluation of competence (apprenticeships, training, and so on).
The approach, which I’ll name “lurk more”, leads to both false positives and false negatives.
People who are competent but lack self-confidence will feel like they’re not good enough to participate. Even if the regulars try to spot people like that and bring them into the community, they won’t get everyone.
People who have self-confidence but are incompetent will participate anyway. The regulars may try to push back on this somehow, but again they won’t get everyone, it won’t work on everyone, and it’ll cause friction.
Overall, it may work, and it’s definitely less annoying for some people to be a regular in such communities (because they’ll deal with fewer novice questions/comments, and the novices will be “better behaved” - those are scare quotes). It might even be unqualifiedly better for communities where people can choose to participate, or where there’s an alternative community that’s more welcoming.
The Rust community has, of course, emphatically come down against “lurk more” because they want a more welcoming community, and “lurk more” is not welcoming. As a self-centered person, I would prefer communities I’m not in to be welcoming, and those I’m already in to be “lurk more”. But as an altruistic person, that should persuade me that it’s better to use alternative means (e.g. invite-only groups) to control group behavior.
Novices are at most qualified to express frustrations with a language, only experts are qualified to suggest and make changes to address those frustrations
Definitely not true – I have gotten good suggestions on the Oil language from people who barely know it (which is most people, by definition)
There is simply not that hard a line between “novices” and “experts” when it comes to language design.
It really is true, just as the diagram in the blog post shows, that different people know different things.
Different people have different use cases, and those are valuable.
Nobody can possibly understand all the things a language is used for – somewhat famously, Guido van Rossum has never used NumPy and doesn’t really do scientific computing / linear algebra, yet there are lots of contributors that are not “language designers” who have given input and shaped the language. (This isn’t always a smooth process, but it happens, and Python is the most popular language in that domain.)
Sometimes the suggestions are impossible, or inconsistent with something that exists already, but those problems aren’t hard to uncover if there is a healthy dialogue.
Note also that GvR’s discomfort with performance-oriented design is why CPython still doesn’t have tagged pointers and PyPy still isn’t an official PSF project. Years of wasted effort on forks of CPython were the result. There are similar stories about asynchronous I/O. When a language designer refuses to explore a field of computer science, then it tends to hobble the entire community.
Eh, weird tangent, and also untrue on both counts. There are plenty of optimizations merged into CPython, just not ones that would break all C extensions, like tagged pointers would. Essentially every Python release has gotten faster.
Historically, PSF was basically for running Python conferences. PyPy is an amazing project, but I don’t see why the PSF would take it over. At one point >15 years ago there was the thought that it would replace CPython, but there are good reasons that that won’t happen.
You’re getting the point EXACTLY wrong – Guido is not a god, and he doesn’t come up with all the ideas and bless all efforts. Rather, if you care about tagged pointers and performance in CPython so much – YOU should work on it. Plenty of people have – e.g. I’ve noticed heroic efforts improving performance, with compatibility, from Victor Stinner.
Not to mention the recent huge speedups in Python 3.11 – which took outside FUNDING and organization (mostly from Microsoft it seems). PSF is not a wealthy organization.
I’d say that almost all suggestions based on a real problem have been useful, which is probably 80% of them.
Occasionally there is someone who has a “mind bug” idea they haven’t tried … e.g. “I read about some cool thing and I think it should apply to your project”, without really doing anything other than writing an e-mail.
But yeah, the overall point is that you need many different viewpoints to make a good language. You also need people to synthesize and simplify them – i.e. the reason people criticize languages like shell, PHP, and C++ is that they seemingly have every viewpoint, which leads to a lot of inconsistency.
This is similar to the product design philosophy that “customers don’t know what they want.” It’s up to skilled professionals to design solutions to the problems that users have.
Community Representation. Andrew said in 2015 that he hoped the proposal process would “make the process more accessible to anybody who really wants to get involved in the design of Go.” We definitely get many more proposals from outside the Go team now than we did in 2015, so in that sense it has succeeded. On the other hand, we believe there are over a million Go programmers, but only 2300 different GitHub accounts have commented on proposal issues, or a quarter of one percent of users. If this were a random sample of our users, that might be fine, but we know the participation is skewed to English-speaking GitHub users who can take the time to keep up with the Go issue tracker. To make the best possible decisions we need to gather input from more sources, from a broader cross-section of the population of the Go community, by which I mean all Go users. (On a related note, anyone who describes “the Go community” as having a clear opinion about anything must have in mind a much narrower definition of that group: a million or more people can’t be painted with a single brush.)
I think we are more or less on the same page so I won’t belabor the point too much. I don’t disagree with Mr. Cox there but I would only add that I think he’s referring to an explicit feedback step of a larger proposal process, where the initial proposal was likely made by someone qualified to suggest a language change. I consider the feedback step necessary to any process that governs change.
What I meant by pandering to novices was that in the situation where “anyone at all can suggest changes, but changes are only accepted after lots and lots of thought by (many) people who are experts” it’s really just lip service to the initial suggester that they had meaningful input to the proposed changes when in the end it was taken up, shepherded, and likely significantly changed by the experts. That’s fine as a description of the process but I feel that description is just pandering. My personal style is to be more straightforward and set an explicit expectation that language change suggestions are unlikely to be taken seriously from novices, instead of the alternative expectation that anyone at all can contribute at a high level.
It works out okay. RFC is a request for comments, so they do get feedback (sometimes too much) and the features are either shaped into something nice, or get postponed or rejected.
Not every feature is a PhD-level type juggling. Some proposals are just for new standard library functions, Cargo flags, or bikeshedding about syntax sugar.
Which, incidentally, means that all the documentation which describes UNIX time as “the number of seconds since 01/01/1970 UTC” is wrong. Wikipedia, for example, says that “It measures time by the number of seconds that have elapsed since 00:00:00 UTC on 1 January 1970”, which is incorrect. (Though the POSIX spec it links to seems vague enough to at least not be incorrect; it says the “Seconds since epoch” is “A value that approximates the number of seconds that have elapsed since the Epoch”.)
I spent a long time trying to figure out the best way to correctly convert between an ISO-8601 timestamp and a UNIX timestamp based on the assumption that UNIX time counted actual seconds since 01/01/1970 UTC, before I found through experimentation that everything I had read and thought I knew about UNIX time was wrong.
I would fix that Wikipedia article, but you (or the others in the discussion) seem to be better prepared to come up with correct wording, so I most humbly encourage someone to take a whack at it, in the spirit of encouraging people to get involved. Don’t worry, you won’t get reverted. (Probably. I’ll take a look if that happens.)
In Unix time, every day contains exactly 86400 seconds but leap seconds are accounted for. Each leap second uses the timestamp of a second that immediately precedes or follows it.
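For anyone who wants to convince themselves of that every-day-has-86400-seconds behavior, a quick check in Python (standard library only):

```python
# Unix time pretends leap seconds never happened: every UTC midnight is
# an exact multiple of 86400, even though 27 leap seconds had actually
# elapsed by the end of 2016.
from datetime import datetime, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
t = datetime(2017, 1, 1, tzinfo=timezone.utc)  # just after the 2016-12-31 leap second

print(int(t.timestamp()))        # 1483228800
print((t - epoch).days * 86400)  # 1483228800 -- identical; the leap seconds vanished
```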
Pretty neat. I’ve been gently pushing to use it at work. We’re on YAML right now for human-readability, but we’re using tags and other delights, which makes me think what we really want is something more complex. (Significant whitespace is not a concern for my teammates.)
Proposing a Wikipedia policy change: giving more users the power to block other users. Before the pitchforks come out, it’s very narrowly scoped: only new users who are making unconstructive edits at a rate faster than 1 per minute, and an admin still has to review the block. This is just to respond more quickly to people doing lots of vandalism quickly - we had one user make 79 edits in half an hour last year before an admin could show up.
I must pedantically point out that just because it wasn’t installed for the reasons the story describes, doesn’t mean it didn’t behave as the story describes. XD
I recently worked on debugging what turned out to be a hardware issue. A microcontroller on an I/O board occasionally crashed and took other components with it, and the best way to trigger it intentionally was to touch the system’s power button. Not press, just touch. I think it was eventually found to be a process flaw: something in the way that the PCB and the parts on it were manufactured, soldered, cleaned with solvent, and assembled was causing a shielding layer of epoxy or something on the board to be damaged just enough to occasionally let current leak through to places it shouldn’t be going. The big capacitor that is your body touching the (metal) power button made enough electrons move around to crash the microcontroller. Apparently sometimes you didn’t even have to touch it, just get your finger nearby. As far as I know, the fix was to use a different solvent during board assembly.
This reminds me of one of my favourite things about electronics. Sometimes an electronic engineer will design a circuit, test it, find out that it fails, try to debug it with an oscilloscope, find that it works reliably but only when the oscilloscope probes are in, and then finally give up and fix it permanently by adding a 22pF capacitor at each spot where the oscilloscope probes were connected, in order to simulate an oscilloscope being attached and thereby make the circuit behave how it did with the scope plugged in. :)
Yep, that sort of debugging happens everywhere. Building physical devices often has a “fit-and-finish” step, where you put everything together, try to make it work, and then by hand file down or smooth out any bits that don’t quite mesh together right. Getting your design and manufacturing process to the point where you don’t need this sort of manual fiddling all the time takes either a lot of hands-on experience or a lot of iteration or both. Sound familiar, software people designing complex systems?
But that sort of iteration is also necessary for automation and interchangeable parts to work, which lets you make majillions of those devices, and so the high up-front cost pays itself off by letting you scale up. The thing about software is that you can do that sort of iteration very fast and in very small pieces, and now with the internet you get the benefits almost instantly, and so the cost-benefit scale is very different.
But that sort of iteration is also necessary for automation and interchangeable parts to work, which lets you make majillions of those devices, and so the high up-front cost pays itself off by letting you scale up
This is why I think of engineering as the process of paying a very expensive person to make something else cheaper. :)
Oh, certainly! I found these new details a few weeks ago while looking up the story for our junior engineers, as this was always the classic lesson of “seemingly-unconnected devices can still influence each other at the electrical level”.
Or to put it another way, “next time, politely tell the internal helpdesk customer to turn off their desktop plasma globe when trying to use their company-issued yubikey”.
Interestingly, I’ve recently been thinking that Rust is redundant here. Most of the time, when I take a shortcut and use, eg, a tuple struct, I come to regret it later when I need to add a 4th field.
I now generally have the following rules of thumb:
Don’t use tuple structs or tuple enum variants. Rationale: if you need to tuple at least two fields, you’ll want to tuple three and four in the future, at which point refactoring to record syntax would be a chore. Starting with record syntax isn’t really all that verbose, especially now that we have field shorthand.
For newtype’d structs, use record syntax with raw / repr as a field name: struct Miles { raw: u32 }. Rationale: x.0 looks ugly at the call-site, x.raw reads nicer. It also gives a canonical name for the raw value.
“Newtype” enum variants are OK. Rationale: you don’t use .0 on an enum variant; there’s only pattern matching (trivia: originally, the only way to work with a tuple struct was to pattern-match it; the .0 “cute” hack was added later (which, several years down the line, required breaking the lexer/parser boundary to make foo.0.1 work)).
In fact, probably most enums which are not just a private impl detail of some module should contain only unit/newtype variants. Rationale: no strong rationale here, this is the highest-FPR rule of all, but for larger enums you’ll often find that you need to work with a specific variant, and splitting off an enum variant into a dedicated type often messes up naming/module structure.
My rule of thumb has been that tuples (struct or otherwise) should only very rarely cross module boundaries. However, if I’m just wiring a couple of private methods together, it’s often not worth the verbosity to define a proper struct.
That’s also reasonable! My comment was prompted by today’s incident, where I had to add a bool field to a couple of tuple-variants to a private enum used in tests :)
My experience directly, almost word-for-word. I think the importance of having a convention for the “default” name instead of .0 makes all the difference and removes the biggest source of inertia when using a record struct instead of a tuple struct (you picked .raw, I picked .value - tomato tomato).
In fact, if I had to pick globally, I would take anonymous strongly-typed types (see C#) over tuples in a heartbeat. We just need a shorthand for how to express them as return types (some sort of impls?)
As an aside, again to steal prior art from C# (which, for its time and place is incredibly well-designed in many aspects) , record types and tuple types don’t have to be so different. Recent C# versions support giving names to tuple members that act as syntactic sugar for .0, .1, etc. If present at the point of declaration they supersede the numbered syntax (intellisense shows names rather than numbers) but you can still use .0, .1, etc if you want; they also remain “tuples” from a PLT aspect, eg the names don’t change the underlying type and you can assign (string s, int x) to a plain/undecorated (string, int) or a differently named (string MyString, int MyInt) (or vice-versa).
This illustrates for me the kinds of decisions Swift makes in the language that you would otherwise have to settle on by policy and diligence. If you can’t combine two features in a certain way because there’s a better way to solve all the relevant real world problems, they would consider that a benefit to the programmer. That’s especially true when safety is involved, like Rust, but also just ease of use, readability, etc. I think it’s partly a reaction to C++ in which you can do anything and footguns are plentiful.
Honestly, that’s my least favorite part of Rust. Since there’s always a bunch of ways to do the same thing, you have to be very careful to simultaneously avoid bikeshedding and also avoid a wildly inconsistent codebase.
I don’t think this kind of advice is good for docs: it’s very opinionated, and Rust’s philosophy is very pluralistic. It’s also a bit dangerous, as it needs nuance to not be misapplied.
I do find that I’ve accumulated a lot of similar rules of thumb and heuristics which help clarify thinking and make decisions when coding in the small; perhaps I’ll compile them into some long read one day.
The distinction between “newtype” and “tuple” is an interesting one I hadn’t considered, and I agree with it. Just(T) seems fine; my example with (T, U, V) not so much. I don’t share the dislike for newtype structs, but I also get where you’re coming from; Foo { field } is fine for both destructuring in a pattern match and for constructing an instance, so I agree that it doesn’t add that much these days.
I’m not entirely sure I follow your last bullet point; the stated preference and the argument seem to contradict each other, but probably I’m just misreading you?
If you do the latter, you might find yourself wanting to refactor it later. But, again, this is a very weak guideline, as adding an extra struct is very verbose. Eg, rust-analyzer doesn’t do this for expressions.
Ah, so basically your rule here would be “just do what Swift requires you to do here.” I understand but… well, see the original post; I obviously disagree. 😂 I get why, from the refactoring POV and the “it’s useful to name/use the inner type sometimes” POV, but I find that to be much more of a judgment call on how the type is used. I think I would find that design constraint more compelling as an option if there were a way to limit where you can construct a type that gets inherently exposed that way, while still allowing it to be matched against. You’d need something like Swift’s ability to overload its ~= to make that actionable, though, I think. 🤔
To each their own. I personally hated having songs with no artist/album/etc when I did this before moving to a streaming platform a couple of years ago.
MusicBrainz Picard works pretty well in my experience. It can work with acoustic fingerprints, but almost always it is smart enough to find the right data just by looking at folders, filenames, grouping of files, etc.
I sort of disagree about ChatGPT. I used Claude to workshop a resume for a job I was offered. I didn’t ask Claude to write it though because LLMs are still worse writers than me. I used it as a writing coach to bounce ideas off of. Here are some of my prompts:
Basically, Claude just gave me some areas to think about when I was writing for myself.
‘… than I.’ ;)
I nominative Moonchild for an award …
You mean “a award”.
If that’s a joke/bait, disregard and laugh at me… but if not, it does depend on your accent, because I say “an award” as well. Actually, I can’t imagine an accent where it would be “a award”, but I’m not very imaginative.
It’s a joke.
I think I find than me somewhat more acceptable than than I, actually, although neither form strikes me (native speaker of American English) as outright incorrect. Assuming that it’s ungrammatical for pronominal complements of the preposition than to be in the objective case is a bad analysis of how English grammar actually works.
The rule is that “as” and “than” are followed by a nominative (because of the following verb, which can be omitted), and “like” (being a preposition) is followed by an objective.
However, most reasonable linguists will point out that “than me” makes sense when not followed by a verb, just like “than I” makes sense when considering the following verb (even if it is only implicit).
The one that is really dying out fast in English is the subjunctive form. Were it to disappear, not many tears will be shed.
https://www.merriam-webster.com/grammar/than-what-follows-it-and-why 😁
Isn’t this a big distinction though? There’s a difference between using an LLM to help you refine what you’re going to write, and having the LLM write it for you.
I think the article covered this when it said to use spelling and grammar checkers but make sure that the writing was yours. There’s nothing wrong with using (human or mechanical) feedback tools to improve your writing, but make sure that the originality comes from you.
I find https://github.com/rhysd/actionlint, which catches a heck of a lot of common errors, invaluable.
Google keep (will migrate off, someday…), google tasks (ditto), plain text files in dropbox
Apple has a straightforward reason to do this – they don’t care about the $99 fee, but by keeping out hobby apps, people are more likely to download very expensive commercial apps of which Apple gets a 30% cut. For example, I needed a GPS-based speedometer recently, and (seeing no free option) ended up with one that charges $10 a month! Probably thousands of people have done that. On Android these types of very simple hobbyist-level apps tend to be free.
Though good luck finding one that isn’t riddled with ads and asks for a bunch of inappropriate permissions.
The F-droid app store is catered specifically for this. (Yes the Google store is revolting)
Took me a few seconds.
That’s not on Apple.
Yes, that’s the point.
Perhaps I’m lucky, but I’ve actually had pretty good luck finding them. Putting “open source” into the search bar helps, and if that fails there’s often a side-loadable one on GitHub.
My guess is that the actual rationale is a bit less cynical. By keeping out hobby apps — which aren’t subject to review — Apple is (trying to) optimize for end-user experience. And it’s no secret that Apple optimizes for end-user experience over basically everything else.
I can’t really blame them for taking this position. Apple devices are better and more “luxurious” than Android devices in the market, and I think this kind of stuff is a major reason why.
I don’t understand. Who is the end user when a developer is trying to improve their own experience? There’s absolutely no distribution going on in OP.
That’s true, but the number of iOS users that use privately-developed apps which aren’t available on the app store are statistically zero. Even among those users, the only ones that Apple cares about are those which will eventually publish their app to the app store, and for those users the relevant restrictions are non-issues. I guess?
Don’t forget about enterprise users, but I think they’re kinda not what you’re actually referring to here :)
(If you’re a BigCo Apple will give you an enterprise profile your users can put on your phone to run privately built apps by BigCo. This is how they did things when I was at Amazon.)
FYI: The definition of ‘BigCo’ is ‘more than 100 employees’ (from their docs). That puts it out of reach of small businesses but you don’t need to be Amazon-big for it to work.
Unfortunately, iOS is pretty bad for BYOD enterprise uses because there’s no isolation mechanism between the work and personal worlds. On Android, you can set up a work profile that runs in a different namespace, has a different encryption key for the filesystem encryption, and is isolated from any personal data (similarly, personal apps can’t access work data). iOS doesn’t have any mechanism like that, so there’s no way for a user to prevent work apps from accessing personal data and things like InTune run with sufficient privilege that they can access all of your personal data on iOS.
I think you’re out of date on iOS capabilities. See https://www.apple.com/business/docs/resources/Managing_Devices_and_Corporate_Data.pdf#page4
Thanks. I vaguely remembered reading about that, but InTune didn’t support it and required full access. Has that improved?
I’m investigating this myself (need to set up BYOD at a new job) and haven’t checked on Intune yet much beyond an acronym level (e.g., it knows about Apple Business Manager which makes enrollment easyish).
The iOS and Android approaches are quite different—Android is kind of like having two phones in one box, whereas iOS separates the data but not the apps. Microsoft puts a layer on top that requires app support but gets you finer-grained control over data transfer across the boundary (like, can I copy/paste between personal and work apps).
Whoa boy, folks with strong feelings are REALLY not gonna love this take :)
But I agree with you, I do think unoformity of UX is a sizable reason for the $99 fee. It’s not so much “Hate hobbyists” as “Making it easy to sideload means Jane Sixpack will do so, brick her device, and flame Apple”.
How many people have ever sued Google because a sideloaded Android app bricked their device?
i’d be curious to see actual data on that.
Open Google Maps and it will automatically show you your speed.
https://support.google.com/maps/answer/9356324?hl=en
The option mentioned in the support FAQ you linked doesn’t appear to exist on Google Maps iOS.
You could’ve bought a cheap Android device instead and it would’ve paid for itself in a few months.
I just searched the App Store for ‘Speedometer’ and about 5 out of the top ~15 results don’t show anything about costing money, though perhaps they show ads.
This one looks simple and says it has no ads: https://apps.apple.com/gb/app/quick-speedometer/id1564657242
Did I find something different from what you were looking for?
The cost and difficulty grows exponentially as the coverage approaches 100%. IMHO it’s an extreme case that outweighs the benefits (unless you’re developing a Mars lander). It’s a badge of honor, but if you can settle on 95% coverage you get the same benefits, and get to skip the few bits that are not worth testing.
I thought so too, until I worked to get 100% branch coverage on a small-ish system at work (~12kSLOC Python). Some findings:
Definitely doing more of this in future code bases!
I really should get around to that blog post…
Can mutation testing be automated? What I was thinking (with my limited imagination) was checking out the code, making some edits that should make the code incorrect, and checking that tests fail - is that it?
Yes, that’s basically it and there are tools that do just that for you.
Sure, I’ve so far only used mutmut for Python, but there’s probably tools out there for all major languages.
Yeah, the error messages look like “Inverting this if condition does not cause any tests to fail”
One problem with a gate on anything less than 100% is that it then becomes trivial to game by manipulating your code (eg add lines of otherwise dead code that get covered by a test to juke the calculation.) And more tempting to do so as the difficulty of the “right way” goes up.
It would be nice to think that we would all respect the intent of such rules and not do something like that… but my experience in systems with such a gate has been that it’s pretty common that eventually someone decides that their situation is important or urgent enough to do just enough to get by for now. (Likely with the intent, though seldom the follow through, of coming back to fix it later.) The end result is worse production code than you might have had without the requirement, and a statistic that isn’t really a meaningful representation of what it claims to be.
Can’t you just do the same thing with test coverage though? At least for ruby and simplecov, it only looks at whether or not a given line is traversed via some path in test code. You can make arbitrary tests to “cover” those lines without a meaningful test.
I mean ultimately you can game anything… it’s more about not creating/increasing perverse incentives to do things that are actively worse than the default.
I think your scenario is somewhat mitigated in that sense by:
You touch upon the two keystones!
For me the interesting part of the second point is that those cases would not be caught by the 100% line/branch coverage. They require someone to think of system level edge cases.
This is basically what led me to write this post: https://lobste.rs/s/saaiyd/most_tests_should_be_generated. I’ve worked in codebases that mandated pretty high code coverage percentages, and it’s definitely effective for getting rid of silly errors. But bugs inevitably still creep through, and I was really curious why.
The short answer is: correctness is defined in terms of all data states, not just all code branches. And the number of data states in any program is practically infinite.
My thinking is that withing 5 years we will have llms that get you to %95 and us humans will fo the last %5.
Well, they specifically changed the name to make it not actually Perl 6 anymore :) Maybe “Raku used to be called Perl 6” would be more accurate.
I don’t think the broad Perl community, including Wall, had negative associations with Perl :) If anything, I’d say the opposite, some people wanted to get rid of negative associations with Perl 6.
Perl 6 development took ages to the initial release (2000-2015) and by that time Python and Ruby already ate all of its cake. Anyone interested in upgrading their Perl 5 codebase was migrating to those two, instead of Perl 6. Many developers realized that Perl 6 is just not going to gain traction and wanted to continue developing Perl 5. One of the biggest reasons for the name change was to not sabotage their efforts - https://github.com/Raku/problem-solving/issues/81#issue-478241764
The whole Python 2 -> 3 ordeal probably also played a role in their thinking.
One could argue that it’s precisely Perl 6 / Raku that killed it. Instead of gradually improving Perl 5, they focused all their efforts on an entirely new language and in the meantime people moved on to Python and Ruby.
Not entirely though. Perl 5 had enough momentum that it comes preinstalled with almost every UNIX-like system even today. Looking at GitHub, it’s probably more actively developed than Rakudo. It’s module repository CPAN claims to contain over 200k modules while Raku modules shows around 2k.
These days I wouldn’t start a new project in neither Perl 5 nor Raku, but if I had to choose, I’d probably bet on Perl, because of its maturity and wider adoption.
Edit: it’s also worth noting that some time ago they announced Perl 7, based on, and backwards compatible with Perl 5. For now they seem to be extremely careful about when it happens.
“They” didn’t though. That’s where a lot of the shouting came from: Perl 5 development never stopped. The groups of people working on p5 and p6/raku have always been about 90-95% disjoint. There was a bit of a lull in Perl 5 development around 2002-2006 which probably did have a bit to do with p6, but continuous gradual improvement and modernization has been exactly the strategy since 5.10, and there’s been a new release with performance improvements, language additions, and deprecations every single year since 2010, which is not a bad cadence at all, considering.
What really pissed certain people off was the fact that that effort existed at all! “Hey, why are you wasting your time improving that old thing instead of coming over here and helping us?” If you followed Python 2->3 it’s a very similar story except with a different outcome. Any maintenance effort, and especially any feature improvement, on the old version was seen as a distraction from the development of the new one — and, far worse, it was seen as encouraging people not to migrate. In Python land, the 3 crowd won and the guys with the “2.8” shirts got spat on and publicly denounced in standards-ese. In Perl land… well, arguably no one won. Perl 6 didn’t take over the way Python 3 did, but it succeeded in sowing enough doubt to contribute to Perl 5’s decline in adoption.
The name change was one of those compromises that made no one happy (least of all Larry), but it served as an acknowledgment of that parallel development. People used to ask “oh, you’re using Perl 5? When are you going to upgrade to 6?” The Raku name makes clear that that’s in the same class of question as “oh, you’re using Lisp? When are you going to upgrade to Scheme?”
This is a big one, and I’ve wanted to learn Perl 5 for that exact reason. I work with a bunch of really stripped down embedded Linux systems, and the only scripting language runtimes on those systems are usually Busybox’s shell … and Perl 5.
May I suggest bookmarking this for when you come back to that ambition? It’s a talk I gave about dependency-bundling tools and some other relevant techniques for people in exactly the place you are.
I just read that entire GitHub issue discussion and it was a wild ride. Strongly reminiscent of a medium-sized Wikipedia internal discussion. The 2019 vintage means it’s long enough ago to feel distinct in terms of tone, but recent enough to feel familiar. Thanks for the link.
Now I feel tempted to go and try out one or both of Perl 5 and Raku.
Subway Tooter ride or die. Its look and feel is… brutally functional, but I love it. Sorta like Reddit is Fun (is fun).
Exactly what I’ve been looking for! Fantastic. Will try using it now.
I fully believe that no one should let themselves be intimidated out of learning or pursuing excellence but I sincerely and in good faith disagree with this:
I really want people who suggest changes to languages to be experts in that language. It’s frustrating when changes are made to a language without considering the full gamut of cases that may benefit from or be affected by the change. If the Rust community sincerely believes what I have quoted, then long term it will eventually corrupt itself.
How about this as a compromise: anyone at all can suggest changes, but changes are only accepted after lots and lots of thought by (many) people who are experts?
I’m not trying just to quibble. I like the idea of a community open to good ideas from anyone, but also resistant to making changes too quickly or without sufficient consideration.
Sure, but that’s just superficial pandering to the novices in the community. We should just be honest. Novices are at most qualified to express frustrations with a language, only experts are qualified to suggest and make changes to address those frustrations. This is similar to the product design philosophy that “customers don’t know what they want.” It’s up to skilled professionals to design solutions to the problems that users have.
And how exactly does the community become self-sustaining in the production of a steady stream of “experts” and “skilled professionals”, if “novices” are treated as you suggest?
Well, people don’t become experts in a language by proposing poorly informed and misguided changes anyway. ;)
They become experts by writing that language extensively, designing and maintaining libraries, contributing to the implementation… newbies can certainly participate in the discussion of proposals, but it’s really important to know the limits of one’s expertise. “This will break my workflow if accepted” is useful, much-needed feedback that anyone can give. So is “this fails to account for my use case”. People can also get better at seeing flaws in proposals by doing exactly that.
Experts also must not dismiss proposals “because reasons”. The problem is that a rejection justification can take more time to write than a misguided proposal (a generalization of Brandolini’s law). Giving every newcomer the same amount of the time of current experts is thus counterproductive — there must be filters to weed out obviously bad proposals before they get to the level where the discussion is whether to actually add it to the language or not.
I’m not saying that newcomers cannot make language feature proposals. They (this includes me, in all areas where I’m a newcomer) just should be prepared to face the fact that their proposals are completely misguided and they may need to learn a lot more just to understand why they are misguided.
Plenty of projects allow newcomers to propose things. They aren’t guaranteed traction. They aren’t guaranteed detailed responses from “experts”. But they can still propose things, which is the difference between what I want, and what the parent comment suggested: “only experts are qualified to suggest and make changes”.
All novices should be treated with respect. Their concerns should be listened to and taken seriously. Their opinions on how the language should change should be taken with a grain of salt.
I don’t think allowing novices to take the driver’s seat in your language’s design is the best or only way for novices to become skilled professionals in your language. They become skilled through experience with the language.
Why are experts, novices, and skilled professionals in quotes? Do you not believe that those categories exist? Do you not believe that there is a difference in understanding of a language between people who have different amounts of experience with that language?
The whole point of this article is that different people tend to be experts and novices all at once, in different fields. You can be someone with 15 years of experience in memory management, lang dev and compiler dev, and still be a novice in Rust. Similarly, you can be an expert in Rust and not be an expert in memory management, lang dev, compiler dev, whatever.
We are all novices.
Your original proposal was that novices should only be allowed to “express frustrations”. And I don’t see how your attempt to walk that back in this comment is any more respectful than that:
Luckily there’s a wide range of additional options between putting “novices” in “a driver’s seat”, and not allowing them to make proposals at all.
I am skeptical of your usage of these categories.
There’s a story often told from the Django community of a conference where several of the “core developers” (a group which has since been abolished in favor of a different approach to technical decision-making) were present at a post-conference sprint, which had the usual pile of “good first ticket” things set up for new people to work on. But one newcomer went straight for a truly gnarly problem deep in the guts of the ORM, and came up with a patch. It took two “experts” reviewing it together for much of an afternoon to understand that it was the correct fix. The newcomer ended up with a commit bit later on.
If Django had been run on your model, as I understand it, that never could have happened, because the “newcomer” would not have been taken seriously enough by the “experts”.
There are many cases when what seems like a simple fix doesn’t fix the root cause, fails to account for corner cases, or has non-obvious implications. It can certainly take a lot of time for very genuine experts to evaluate a solution.
Also, I don’t think anyone in this thread advocates for judging people solely by their time in that particular project. Experts tend to be relatively good at recognizing experts from related fields.
That appeared to be exactly the criterion advocated above – that not having enough time with a particular language should be an absolute bar to being allowed to suggest changes/improvements to the language.
The universally-loathed state of tech interviewing and hiring is a strong counterexample to this claim.
The standard reply until perhaps 10 years ago was that novices should be humble and take the heat until they are no longer novices. It’s possible that this is not the right answer, but it has served and keeps serving well in many fields.
The problem with this approach is that it selects for arrogance, not competence.
We obviously need a more stringent way of selecting for competence, though. Otherwise, we wouldn’t keep asking why software is still slow despite improvements in hardware, with rants like this one being offered as answers. That link is uncomfortable for me to read because I worry that I’ll be unmasked as one of those impostors. But the incident with Casey Muratori and the Windows Terminal team, brought up by the author of that comment, shows that incompetence really is rampant in the industry.
Maybe take a cue from the embedded world? There are very many competent people there, doing great engineering. This doesn’t have the prestige among non-technical folks that working for FAANG has.
Another thing is that high salaries don’t always attract the most capable people, and even when they do, they may not motivate the capable people to do their best work. Because maybe the work they’re doing is actually BS, they know it, and they’re just punching a clock.
If I keep rambling along these lines, I’m going to get really off into the weeds, so I’ll stop now.
Anyone who claims that “software is slow” because of incompetence is revealing their lack of competence in understanding why “software is slow”, which tends to be a multifaceted issue.
I’m not sure that’s the case…can you elaborate?
Becoming not-a-novice is generally a matter of self-identity. People are humble and take the heat until they decide that they’re not novices. In my experience, the more competent the person is, the more likely they are to self-identify as a novice, so you’re selecting for people who are confident in their ability, not people with actual ability. It works in other fields only where there is some rigorous external evaluation of competence (apprenticeships, training, and so on).
The approach, which I’ll name “lurk more”, leads to both false positives and false negatives.
Overall, it may work, and it’s definitely less annoying for some people to be a regular in such communities (because they’ll deal with fewer novice questions/comments, and the novices will be “better behaved” - those are scare quotes). It might even be unqualifiedly better for communities where people can choose to participate, or where there’s an alternative community that’s more welcoming.
The Rust community has, of course, emphatically come down against “lurk more” because they want a more welcoming community, and “lurk more” is not welcoming. As a self-centered person, I would prefer communities I’m not in to be welcoming, and those I’m already in to be “lurk more”. But if I’m being altruistic, that asymmetry should persuade me that it’s better to use alternative means (e.g. invite-only groups) to control group behavior.
Definitely not true – I have gotten good suggestions on the Oil language from people who barely know it (which is most people, by definition).
There is simply not that hard a line between “novices” and “experts” when it comes to language design.
It really is true, just as the diagram in the blog post shows, that different people know different things.
Different people have different use cases, and those are valuable.
Nobody can possibly understand all the things a language is used for – somewhat famously, Guido van Rossum has never used NumPy and doesn’t really do scientific computing / linear algebra, yet there are lots of contributors who are not “language designers” but who have given input and shaped the language. (This isn’t always a smooth process, but it happens, and Python is the most popular language in that domain.)
Sometimes the suggestions are impossible, or inconsistent with something that exists already, but those problems aren’t hard to uncover if there is a healthy dialogue.
Note also that GvR’s discomfort with performance-oriented design is why CPython still doesn’t have tagged pointers and PyPy still isn’t an official PSF project. Years of wasted effort on forks of CPython were the result. There are similar stories about asynchronous I/O. When a language designer refuses to explore a field of computer science, then it tends to hobble the entire community.
Eh, weird tangent, and also untrue on both counts. There are plenty of optimizations merged into CPython, just not ones that would break all C extensions, like tagged pointers. Essentially every Python release has gotten faster.
Historically, PSF was basically for running Python conferences. PyPy is an amazing project, but I don’t see why the PSF would take it over. At one point >15 years ago there was the thought that it would replace CPython, but there are good reasons that that won’t happen.
You’re getting the point EXACTLY wrong – Guido is not a god, and he doesn’t come up with all the ideas and bless all efforts. Rather, if you care about tagged pointers and performance in CPython so much – YOU should work on it. Plenty of people have – e.g. I’ve noticed heroic efforts improving performance, with compatibility, from Victor Stinner.
Not to mention the recent huge speedups in Python 3.11 – which took outside FUNDING and organization (mostly from Microsoft, it seems). The PSF is not a wealthy organization.
It might be helpful to know the rough ratio of useful to non-useful feature requests from both experienced and brand-new oilers.
I’d say that almost all suggestions based on a real problem have been useful, which is probably 80% of them.
Occasionally there is someone who has a “mind bug” idea they haven’t tried … e.g. “I read about some cool thing and I think it should apply to your project”, without really doing anything other than writing an e-mail.
But yeah, the overall point is that you need many different viewpoints to make a good language. You also need people to synthesize and simplify them – i.e. the reason people criticize languages like shell, PHP, and C++ is that they seemingly have every viewpoint, which leads to a lot of inconsistency.
I’m familiar with this approach, but I’m not a fan. I prefer something like this paragraph from Russ Cox, in a thread about Go’s proposal process. I don’t think it’s (actually or intended to be) pandering.
Also, for whatever it’s worth, I started a thread on Cox’s posts about the Go proposal process. It’s an important issue for any community.
I think we are more or less on the same page so I won’t belabor the point too much. I don’t disagree with Mr. Cox there but I would only add that I think he’s referring to an explicit feedback step of a larger proposal process, where the initial proposal was likely made by someone qualified to suggest a language change. I consider the feedback step necessary to any process that governs change.
What I meant by pandering to novices was that in the situation where “anyone at all can suggest changes, but changes are only accepted after lots and lots of thought by (many) people who are experts” it’s really just lip service to the initial suggester that they had meaningful input to the proposed changes when in the end it was taken up, shepherded, and likely significantly changed by the experts. That’s fine as a description of the process but I feel that description is just pandering. My personal style is to be more straightforward and set an explicit expectation that language change suggestions are unlikely to be taken seriously from novices, instead of the alternative expectation that anyone at all can contribute at a high level.
It works out okay. An RFC is a request for comments, so they do get feedback (sometimes too much), and the features are either shaped into something nice or get postponed or rejected.
Not every feature is a PhD-level type juggling. Some proposals are just for new standard library functions, Cargo flags, or bikeshedding about syntax sugar.
But a request for comment is designed so experts can weigh in and help improve proposals, no?
A related useful fact I’ve learned recently:
Conversion from a Unix timestamp (a number) to a date/time in UTC (year-month-day h:m:s) is a pure function; it needn’t think about leap seconds.
As a corollary, software can use human-readable times in config without depending on OS timezone information.
https://blog.reverberate.org/2020/05/12/optimizing-date-algorithms.html
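To make that concrete, here’s a minimal Rust sketch of the pure conversion, adapted from Howard Hinnant’s well-known civil-from-days algorithm (the function name and tuple return type are mine, purely for illustration):

```rust
// Unix timestamp -> (year, month, day, hour, minute, second) in UTC.
// Pure integer arithmetic: no timezone database, no leap-second table.
fn utc_from_unix(ts: i64) -> (i64, u32, u32, u32, u32, u32) {
    let days = ts.div_euclid(86_400);
    let secs = ts.rem_euclid(86_400);
    let (hh, mi, ss) = (secs / 3_600, (secs % 3_600) / 60, secs % 60);

    // Hinnant's civil_from_days: shift the epoch so 400-year eras
    // begin on 0000-03-01, which makes leap-day handling uniform.
    let z = days + 719_468;
    let era = z.div_euclid(146_097);
    let doe = z.rem_euclid(146_097); // day of era [0, 146096]
    let yoe = (doe - doe / 1_460 + doe / 36_524 - doe / 146_096) / 365; // year of era [0, 399]
    let doy = doe - (365 * yoe + yoe / 4 - yoe / 100); // day of year [0, 365]
    let mp = (5 * doy + 2) / 153; // March-based month [0, 11]
    let d = doy - (153 * mp + 2) / 5 + 1;
    let m = if mp < 10 { mp + 3 } else { mp - 9 };
    let y = yoe + era * 400 + if m <= 2 { 1 } else { 0 };
    (y, m as u32, d as u32, hh as u32, mi as u32, ss as u32)
}

fn main() {
    assert_eq!(utc_from_unix(0), (1970, 1, 1, 0, 0, 0));
    assert_eq!(utc_from_unix(951_782_400), (2000, 2, 29, 0, 0, 0)); // a leap day
}
```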
Which, incidentally, means that all the documentation which describes UNIX time as “the number of seconds since 01/01/1970 UTC” is wrong. Wikipedia, for example, says that “It measures time by the number of seconds that have elapsed since 00:00:00 UTC on 1 January 1970”, which is incorrect. (Though the POSIX spec it links to seems vague enough to at least not be incorrect; it says the “Seconds since epoch” is “A value that approximates the number of seconds that have elapsed since the Epoch”.)
I spent a long time trying to figure out the best way to correctly convert between an ISO-8601 timestamp and a UNIX timestamp based on the assumption that UNIX time counted actual seconds since 01/01/1970 UTC, before I found through experimentation that everything I had read and thought I knew about UNIX time was wrong.
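Going the other way makes the discrepancy easy to see. Below is a sketch of the inverse (Hinnant’s days-from-civil, again with names of my own choosing): because every day is treated as exactly 86,400 seconds, the real leap second at 2016-12-31T23:59:60Z simply has no representation.

```rust
// Broken-down UTC time -> Unix timestamp. Every day counts as 86,400 s,
// so the leap seconds that have actually elapsed are silently uncounted.
fn unix_from_utc(y: i64, m: u32, d: u32, hh: u32, mi: u32, ss: u32) -> i64 {
    let y = if m <= 2 { y - 1 } else { y };
    let era = y.div_euclid(400);
    let yoe = y.rem_euclid(400); // year of era [0, 399]
    let (m, d) = (m as i64, d as i64);
    let doy = (153 * (if m > 2 { m - 3 } else { m + 9 }) + 2) / 5 + d - 1;
    let doe = yoe * 365 + yoe / 4 - yoe / 100 + doy; // day of era [0, 146096]
    let days = era * 146_097 + doe - 719_468;
    days * 86_400 + 3_600 * hh as i64 + 60 * mi as i64 + ss as i64
}

fn main() {
    // The second after 23:59:59 is 00:00:00 of the next day;
    // the leap second 2016-12-31T23:59:60Z cannot be expressed.
    assert_eq!(unix_from_utc(2016, 12, 31, 23, 59, 59), 1_483_228_799);
    assert_eq!(unix_from_utc(2017, 1, 1, 0, 0, 0), 1_483_228_800);
}
```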
I would fix that Wikipedia article, but you (or the others in the discussion) seem to be better prepared to come up with correct wording, so I most humbly encourage someone to take a whack at it, in the spirit of encouraging people to get involved. Don’t worry, you won’t get reverted. (Probably. I’ll take a look if that happens.)
Quoting from that article:
Well, that’s certainly one way to handle them…
Yeah, exactly the same story here.
My favourite versions of these functions are on my blog: broken-down date to day number and day number to broken-down date. Including the time as well as the date (in POSIX or NTP style) is comparatively trivial :-)
Good article. Reads almost like a Code Complete chapter.
Pretty neat. I’ve been gently pushing to use it at work. We’re on YAML right now for human-readability, but we’re using tags and other delights, which makes me think what we really want is something more complex. (Significant whitespace is not a concern for my teammates.)
I’m going to write a new tool to count how many edits a Wikipedia user has made. The current state of the art can’t handle more than 500k edits, which is unhelpful given that the top 42 users by number of edits are all over that threshold (https://en.wikipedia.org/wiki/Wikipedia:List_of_Wikipedians_by_number_of_edits#1%E2%80%931000).
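One possible shape for such a tool, sketched under assumptions: page through the MediaWiki API’s usercontribs list (up to 500 contributions per request for ordinary accounts) and keep a running count, rather than fetching everything at once. The function name is hypothetical, and I’m assuming the reqwest (with the blocking and json features) and serde_json crates:

```rust
use std::collections::HashMap;

use serde_json::Value;

// Count a user's edits by walking the `list=usercontribs` API with
// continuation tokens. Slow for 500k+ edits (1000+ requests), but
// it has no hard cap.
fn count_edits(user: &str) -> Result<u64, Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();
    let mut total = 0u64;
    let mut cont: HashMap<String, String> = HashMap::new();
    loop {
        let mut params: Vec<(String, String)> = vec![
            ("action".into(), "query".into()),
            ("format".into(), "json".into()),
            ("list".into(), "usercontribs".into()),
            ("ucuser".into(), user.into()),
            ("uclimit".into(), "max".into()),
            ("ucprop".into(), "ids".into()),
        ];
        // Echo back the continuation parameters from the previous response.
        params.extend(cont.iter().map(|(k, v)| (k.clone(), v.clone())));

        let resp: Value = client
            .get("https://en.wikipedia.org/w/api.php")
            .query(&params)
            .send()?
            .json()?;

        total += resp["query"]["usercontribs"]
            .as_array()
            .map_or(0, |batch| batch.len() as u64);

        // Stop when the API no longer hands back a `continue` object.
        match resp["continue"].as_object() {
            Some(c) => {
                cont = c
                    .iter()
                    .filter_map(|(k, v)| Some((k.clone(), v.as_str()?.to_string())))
                    .collect();
            }
            None => break,
        }
    }
    Ok(total)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("{}", count_edits("Jimbo Wales")?);
    Ok(())
}
```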
They Might Be Giants concert!
Proposing a Wikipedia policy change: giving more users the power to block other users. Before the pitchforks come out, it’s very narrowly scoped: only new users who are making unconstructive edits at a rate faster than 1 per minute, and an admin still has to review the block. This is just to respond more quickly to people doing lots of vandalism quickly - we had one user make 79 edits in half an hour last year before an admin could show up.
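For the flagging condition itself, here’s a tiny sketch of one way to express “faster than 1 per minute” over a run of recent edits. The window size is illustrative only, not from any actual policy text:

```rust
// Flag a user if their last `min_edits` edits (Unix-second timestamps,
// sorted ascending) arrived at an average rate above one per minute.
fn exceeds_rate(edits: &[i64], min_edits: usize) -> bool {
    if edits.len() < min_edits {
        return false;
    }
    let recent = &edits[edits.len() - min_edits..];
    let span = recent[recent.len() - 1] - recent[0];
    // n edits at one-per-minute pace span 60 * (n - 1) seconds;
    // anything shorter means the rate was faster than that.
    span < 60 * (min_edits as i64 - 1)
}

fn main() {
    // 79 edits in half an hour (one every ~23 s) trips a 10-edit check:
    let spree: Vec<i64> = (0..79).map(|i| i * 23).collect();
    assert!(exceeds_rate(&spree, 10));
    // A normal pace (one edit every 5 minutes) does not:
    let calm: Vec<i64> = (0..20).map(|i| i * 300).collect();
    assert!(!exceeds_rate(&calm, 10));
}
```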
Just found myself thinking hard about when I actually used WP the last time.
Around March, during lockdown, to decipher an acronym.
Sadly, the “more magic” story seems to have been debunked by Tom Knight himself in 1996.
That is, the part about crashing the computer and not the actual switch itself.
I must pedantically point out that just because it wasn’t installed for the reasons the story describes, doesn’t mean it didn’t behave as the story describes. XD
I recently worked on debugging what turned out to be a hardware issue. A microcontroller on an I/O board occasionally crashed and took other components with it, and the best way to trigger it intentionally was to touch the system’s power button. Not press, just touch. I think it was eventually found to be a process flaw: something in the way that the PCB and the parts on it were manufactured, soldered, cleaned with solvent, and assembled was damaging a shielding layer of epoxy or something on the board just enough to occasionally let current leak through to places it shouldn’t be going. The big capacitor that is your body, touching the (metal) power button, made enough electrons move around to crash the microcontroller. Apparently sometimes you didn’t even have to touch it, just get your finger nearby. As far as I know, the fix was to use a different solvent during board assembly.
This reminds me of one of my favourite things about electronics. Sometimes an electronic engineer will design a circuit, test it, find out that it fails, try to debug it with an oscilloscope, find that it works reliably but only when the oscilloscope probes are in, and then finally give up and fix it permanently by adding a 22pF capacitor at each spot where the oscilloscope probes were connected, in order to simulate an oscilloscope being attached and thereby make the circuit behave how it did with the scope plugged in. :)
Yep, that sort of debugging happens everywhere. Building physical devices often has a “fit-and-finish” step, where you put everything together, try to make it work, and then by hand file down or smooth out any bits that don’t quite mesh together right. Getting your design and manufacturing process to the point where you don’t need this sort of manual fiddling all the time takes either a lot of hands-on experience or a lot of iteration or both. Sound familiar, software people designing complex systems?
But that sort of iteration is also necessary for automation and interchangeable parts to work, which lets you make majillions of those devices, and so the high up-front cost pays itself off by letting you scale up. The thing about software is that you can do that sort of iteration very fast and in very small pieces, and now with the internet you get the benefits almost instantly, and so the cost-benefit scale is very different.
This is why I think of engineering as the process of paying a very expensive person to make something else cheaper. :)
Oh, certainly! I found these new details a few weeks ago while looking up the story for our junior engineers, as this was always the classic lesson of “seemingly-unconnected devices can still influence each other at the electrical level”.
Or to put it another way, “next time, politely tell the internal helpdesk customer to turn off their desktop plasma globe when trying to use their company-issued yubikey”.
That’s so cool that we can see the switch!
Interestingly, I’ve recently been thinking that Rust is redundant here. Most of the time, when I take a shortcut and use, eg, a tuple struct, I come to regret it later when I need to add a `.4`th field. I now generally have the following rules of thumb:

- Use `raw`/`repr` as a field name: `struct Miles { raw: u32 }`. Rationale: `x.0` looks ugly at the call-site; `x.raw` reads nicer. It also gives a canonical name for the `raw` value (sketched below).
- No `.0` for an enum variant - there’s only pattern matching (trivia: originally, the only way to work with a tuple struct was to pattern-match it; the `.0` “cute” hack was added later (which, several years down the line, required breaking the lexer/parser boundary to make `foo.0.1` work)).
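A minimal Rust sketch of that first rule, contrasting the two newtype spellings (the tuple-struct name `MilesTuple` is my own, for illustration):

```rust
// Tuple-struct newtype: the field can only be reached as `.0`.
struct MilesTuple(u32);

// Record-struct newtype: the wrapped value gets a canonical name.
struct Miles {
    raw: u32,
}

fn main() {
    let a = MilesTuple(5);
    let b = Miles { raw: 5 };

    println!("{}", a.0);   // `x.0` looks ugly at the call-site...
    println!("{}", b.raw); // ...`x.raw` reads nicer.
}
```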
My rule of thumb has been that tuples (struct or otherwise) should only very rarely cross module boundaries. However, if I’m just wiring a couple of private methods together, it’s often not worth the verbosity to define a proper struct.
That’s also reasonable! My comment was prompted by today’s incident, where I had to add a `bool` field to a couple of tuple-variants of a private enum used in tests :)

My experience directly, almost word-for-word. I think the importance of having a convention for the “default” name instead of `.0` makes all the difference and removes the biggest source of inertia when using a record struct instead of a tuple struct (you picked `.raw`, I picked `.value` - tomato, tomato).
In fact, if I had to pick globally, I would take anonymous strongly-typed types (see C#) over tuples in a heartbeat. We just need a shorthand for how to express them as return types (some sort of impls?)
As an aside, again to steal prior art from C# (which, for its time and place, is incredibly well-designed in many aspects), record types and tuple types don’t have to be so different. Recent C# versions support giving names to tuple members that act as syntactic sugar for `.0`, `.1`, etc. If present at the point of declaration, they supersede the numbered syntax (IntelliSense shows names rather than numbers), but you can still use `.0`, `.1`, etc. if you want; they also remain “tuples” from a PLT aspect, e.g. the names don’t change the underlying type and you can assign `(string s, int x)` to a plain/undecorated `(string, int)` or a differently named `(string MyString, int MyInt)` (or vice-versa).

This illustrates for me the kinds of decisions Swift makes in the language that you would otherwise have to settle by policy and diligence. If you can’t combine two features in a certain way because there’s a better way to solve all the relevant real-world problems, they would consider that a benefit to the programmer. That’s especially true when safety is involved, like in Rust, but also just ease of use, readability, etc. I think it’s partly a reaction to C++, in which you can do anything and footguns are plentiful.
Honestly, that’s my least favorite part of Rust. Since there’s always a bunch of ways to do the same thing, you have to be very careful to simultaneously avoid bikeshedding and also avoid a wildly inconsistent codebase.
Could we put this in the docs somewhere?
I don’t think this kind of advice is good for docs: it’s very opinionated, and Rust’s philosophy is very pluralistic. It’s also a bit dangerous, as it needs nuance to not be misapplied.
I do find that I’ve accumulated a lot of similar rules of thumb and heuristics which help clarify thinking and make decisions when coding in the small; perhaps I’ll compile them into a long read one day.
Huh, if only a tool existed that made refactoring these things easy and quick! (Jk, I bet rust-analyzer can do it!)
The distinction between “newtype” and “tuple” is an interesting one I hadn’t considered, and I agree with it. `Just(T)` seems fine; my example with `(T, U, V)` not so much. I don’t share the dislike for newtype structs, but I also get where you’re coming from; `Foo { field }` is fine for both destructuring in a pattern match and for constructing an instance, so I agree that it doesn’t add that much these days.

I’m not entirely sure I follow your last bullet point; the stated preference and the argument seem to contradict each other, but probably I’m just misreading you?
Yeah, that’s confusing, more directly:
If you do the latter, you might find yourself wanting to refactor it later. But, again, this is a very weak guideline, as adding an extra struct is very verbose. Eg, rust-analyzer doesn’t do this for expressions:
https://github.com/rust-lang/rust-analyzer/blob/2836dd15f753a490ea5b89e02c6cfcecd2f32984/crates/hir-def/src/expr.rs#L175-L179
But it does this for a whole bunch of other structs, to the point where we have a dedicated macro there:
https://github.com/rust-lang/rust-analyzer/blob/2836dd15f753a490ea5b89e02c6cfcecd2f32984/crates/stdx/src/macros.rs#L22-L47
Ah, so basically your rule here would be “just do what Swift requires you to do here.” I understand, but… well, see the original post; I obviously disagree. 😂 I get why, from the refactoring POV and the “it’s useful to name/use the inner type sometimes” angle, but I find that to be much more of a judgment call on how the type is used. I think I would find that design constraint more compelling as an option if there were a way to limit where you can construct a type that gets inherently exposed that way, while still allowing it to be matched against. You’d need something like Swift’s ability to overload its `~=` to make that actionable, though, I think. 🤔

I see it’s gotten a bunch more chapters since I last looked. Very nice.
To each their own. I personally hated having songs with no artist/album/etc. when I did this, before moving to a streaming platform a couple of years ago.
I’ve been wanting a script to just Shazam (or similar) a bunch of metadata-less music files and write in appropriate metadata. Might be tricky.
MusicBrainz Picard works pretty well in my experience. It can work with acoustic fingerprints, but almost always it is smart enough to find the right data just by looking at folders, filenames, grouping of files, etc.
Winamp used to have this feature. Winamp is dead now. Shazam has a public API though, might be achievable.