I’m not a programmer by trade but I wrote for myself a wiki-like note management system. I made a lot of what must seem like weird, backward decisions from the outset and figured that if the project ever took off (it hasn’t), I would get a lot of questions about why I did (or didn’t do) things a certain way. Like, why doesn’t it support tags, or page hierarchies, or namespaces? Why not just dump everything into flat files? And so on. So I wrote up all those reasons: https://blog.bityard.net/articles/2022/December/the-design-of-silicon-notes-with-cartoons
Not that people are going to magically start reading docs before asking questions, but at least if it ever comes up I can point to the answers instead of typing them over and over again.
I really don’t like the “subscribe to my newsletter” begging pop-up in the middle of the article and didn’t finish reading it.
I agree. As the author of this blog, I tried everything to remove it, even using their settings page to inject some code through the Google Analytics tag (unfortunately it’s well escaped; well, at least they are good at their job :)).
For once, I wanted to spend my time writing rather than managing a blog platform, so I went to Substack.
But little by little, I’m trying to decouple from it. First, I moved to a custom domain.
Then, I will likely move to self-hosted.
However, self-hosting is a lot of work, and that’s time I cannot invest in writing, so I will do it very slowly. It will probably take a year.
Meanwhile, I will lose a lot of readers, because your reaction is very common. I regularly get downvoted to oblivion on Reddit and ignored on HN or Lobsters because of this.
And I get it, as a reader I feel the same when I face those subscribe walls.
Have you tried buttondown? That’s what I use for my newsletter, and there’s no pop-up wall.
Substack offers a full RSS feed of content from https://www.bitecode.dev/feed.rss
One reasonably quick way to turn this into a self-hosted site would be to use GitHub Actions to read that file and turn it into flat files in a repo, then publish the results using GitHub Pages.
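A minimal sketch of the fetch-and-commit half, assuming a scheduled workflow and the feed URL mentioned above (the workflow name, schedule, and bot identity are placeholders; the HTML generation and Pages publishing steps are left out):

name: mirror-substack
on:
  schedule:
    - cron: '0 6 * * *'
  workflow_dispatch:
permissions:
  contents: write
jobs:
  mirror:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Fetch the full-content feed and commit it only if it changed
      - run: curl -fsS https://www.bitecode.dev/feed.rss -o feed.rss
      - run: |
          git config user.name "feed-mirror-bot"
          git config user.email "bot@example.com"
          git add feed.rss
          git diff --cached --quiet || git commit -m "Update feed"
          git push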
I have at least part of that setup here: https://github.com/simonw/simonwillisonblog-backup/blob/6fd9222dc34693171a68a0820b91b017c6dc235a/.github/workflows/backup.yml#L70-L86
Just saw that the rss contains the full article.
It means that if any of you want to bypass the whole Substack shenanigans, you can use the RSS feed to read it comfortably on your machine, which is quite nice.
It’s still great content; maybe switch to a different blogging provider, but please keep writing.
Thanks, even if I’m one of those lucky people who enjoy writing, having readers that appreciate your work makes it a hundred times better.
I recently moved to bearblog.dev (after 14 years of self-hosted/GH pages Jekyll), which is paid and seems no nonsense.
No affiliation of any kind, just discovered them recently and I am happy so far.
You don’t need to self host necessarily. You can use gohugo, ci/cd, and github pages instead. It really takes very little time to set up.
This annoys the hell out of me, I mostly leave the site after that. I dislike Substack for that, reminds me of Medium.
Well worth reading from an anthropological point of view. It’s a great example of toxic gatekeeping! Of a very specific kind with a very deliberate logic behind it, that is described explicitly:
I’m a bastard. I have absolutely no clue why people can ever think otherwise. Yet they do. People think I’m a nice guy, and the fact is that I’m a scheming, conniving bastard who doesn’t care for any hurt feelings or lost hours of work if it just results in what I consider to be a better system. […]
And I’m not just saying that. I’m really not a very nice person.
Kinda amazing that Linux ever got popular tbh. On the flip side, this kind of attitude is great at clamping down on bikesheds, useless design debates and other social antipatterns that make project management more difficult. If you take this attitude and someone is still willing to tell you you’re wrong about some technical thing, then a) they better be very sure and b) they better really know what they’re talking about. Ruthlessly weeding out the weak and ineffective (along with anyone unwilling or unable to argue with you) does get you a community with a certain kind of acumen and low overhead, as long as you have a steady stream of candidates lining up for the firing squad. It just, you know, does lots of other things too, like destroy people with high potential but low confidence, or normalize abuse based on certain criteria into abuse on whatever criteria one might feel like.
And if you take this kind of management approach, you better really know what you’re talking about as well, and you really, really better be right as often as you think you are. Otherwise you’re nothing but a demagogue who talks the talk but has no idea how to walk the walk, as is the case of plenty of recent popular figures I won’t bother naming.
(I am reminded of stories of Admiral King, the head of the US Navy for most of World War 2. My favorite comment about his personality was that he metaphorically “shaved with a blowtorch”. Incredibly competent and effective guy, largely because he generally had at least three other incredibly competent and effective people doing diplomacy and spin-control for him at any given time to keep up with all the people he pissed off along the way.)
It’s also worth noting a few other things…
Linus has since publicly acknowledged that his online communication skills were often more harmful than helpful. He has softened his tone tremendously and while he is still blunt about things he doesn’t like, he doesn’t really insult contributors anymore. This could partially be because he’s mostly a release manager these days.
The reason Linus kept a following at all was that even though he was a jerk in a lot of ways, he was almost always right. And people like a confident leader. I’m sure he (and other contributors at the time) justified the somewhat toxic culture by telling themselves that the quality of the kernel code was improved if contributors had (or quickly learned to develop) a thick skin against critical code reviews. Was it justified? IMO, probably not, since they could always refuse to merge a patch they think is dogshit without spewing insults. “The overall quality of this code is currently below our minimum standards, please feel free to resubmit after improvements are made.” Not sure this has the same ring to it.
People who know him or have talked to him in real life always say he is actually quite soft-spoken and nice to talk to; it was just his online presence that was abrasive.
From my recollection of that era, a lot of the credit goes to folks like Alan Cox. He has a high tolerance for Linus’ nonsense, but is a genuinely helpful and positive person. A lot of contributions were merged via his tree and folks working with him almost certainly had a much more positive experience than anyone working directly with Linus. As I recall, he was already at Red Hat then, which is probably a part of the reason they were able to have so much control over the young Linux ecosystem.
One aspect of this is that Linus has always meant for the group of people he directly interacts with to be small. There was a long period of development, contemporaneous with this mailing list post, where most users of Linux were not using the mainline kernel. Distros all kept their own forks and those were what most people used. Many of the people who maintained those forks were also subsystem maintainers and they would funnel things up to the mainline. I’m not a Linus biographer but I’ve never read anything from him expressing a desire for that to end. I think that was a big part of why Linux got popular.
And if you take this kind of management approach, you better really know what you’re talking about as well, and you really, really better be right as often as you think you are.
I’d argue that this would be generally preferable, but I of course also know that this is not how it works and certainly not how people end up in management (or other) positions a lot of the time.
It’s a great example of toxic gatekeeping!
Out of curiosity. How would you define that? Do you differentiate between gatekeeping and toxic gatekeeping? I don’t know enough about it, so I am curious about “good” or “non-toxic” gatekeeping and whether that exists.
Well worth reading from an anthropological point of view.
I have another take, then. To me, “I am a bastard” and “I’m really not a very nice person” feel horrendously unrelated to the topic, so they don’t really work as an argument. The part about debuggers being about details rather than the overall picture, as well as learning to be careful, kind of makes sense to me. I don’t know if I agree or disagree (I’d assume the best would be to be careful and be able to use a debugger), but it does relate to the topic, whereas Linus talking about his character traits feels out of place.
How would you define that? Do you differentiate between gatekeeping and toxic gatekeeping?
Good question! I hadn’t really thought about it in those terms. I suppose gatekeeping could be charitably defined as “preventing useless people from wasting my time” (and likely theirs as well), while toxic gatekeeping would be “gatekeeping by abusing the people I want to avoid until they go away”.
If you don’t look at the context, sure.
To me, “I am a bastard” and “I’m really not a very nice person” feel horrendously unrelated to the topic…
There are people out there that consider any form of gatekeeping toxic (I’m not one of them). In my view, “gatekeeping” is something like “That idea won’t work because …” or “That’s an interesting idea, do you have a proof-of-concept we can look at?”. Borderline “toxic gatekeeping” would be “That’s a stupid idea, because …” (you’re not calling the person stupid, just the idea, but some people can’t or won’t see the distinction). “Toxic gatekeeping” would be “What are you? Stupid? That’s a horrible idea!” and end the discussion.
I wouldn’t call those first examples gatekeeping at all. When people talk about gatekeeping, they are usually talking about the exclusion of people from a community whose ideas or demographic don’t align with those of the group.
Bringing it back on-topic, the LKML used to be a somewhat toxic place but there was never any gatekeeping. Anyone was welcome to contribute improvements to the kernel no matter who they were, the only qualification being that you understand the kernel and the code you are writing, because if either fell short, you would certainly hear about it. That’s still somewhat true, but it’s a nicer place now.
We actually do the inverse of this where I work. Copying SSH keys to all of the hosts that hundreds of users may need to access was a chore and very error-prone. So I wrote a web UI that users can (securely) log into and upload their SSH key. There’s also a bit of access management stuff, but the net result is that each SSH server looks up user keys from the key server via AuthorizedKeysCommand. (Rather than just curl, this is actually a small Python program that caches the key as well so that if the key server goes down, people can still log in.) It works really well.
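For the curious, the shape of it looks roughly like this; note this is a simplified shell sketch rather than our actual Python program, and the key server URL and paths are made up:

# /etc/ssh/sshd_config
AuthorizedKeysCommand /usr/local/bin/fetch-authorized-keys %u
AuthorizedKeysCommandUser nobody

# /usr/local/bin/fetch-authorized-keys
#!/bin/sh
# Ask the key server for this user's keys; fall back to the last good copy.
user="$1"
cache="/var/cache/ssh-keys/$user"
if curl -fsS -m 2 "https://keyserver.example.com/keys/$user" -o "$cache.tmp"; then
    mv "$cache.tmp" "$cache"
fi
cat "$cache" 2>/dev/null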
Regarding the difficulties of SSH certificates, there are basically two kinds: host certs and user certs. Host certs are actually pretty easy to use if they are part of your host provisioning process. Users then only need to add one line to their ~/.ssh/known_hosts file to trust all hosts signed by that cert. If that’s too much work, they can just keep calm and TOFU on.
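For example, signing a host key and trusting the CA looks something like this (the CA filename, hostname, and domain are placeholders, and the CA public key is elided):

# On the CA machine: sign the host's public key
ssh-keygen -s host_ca -I web1.example.com -h -n web1.example.com /etc/ssh/ssh_host_ed25519_key.pub
# On the host, point sshd at the resulting certificate (sshd_config):
HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub
# The one line users add to ~/.ssh/known_hosts to trust every host signed by that CA:
@cert-authority *.example.com ssh-ed25519 AAAA...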
User certs are the reverse of course, with the added step that users have to install the signed certificate to their .ssh directory. Where I work, we have some non-technical users that must use SSH for various tasks so that is a non-starter in this particular environment. But I’m told it scales very well in places where all SSH users are technical enough to configure their own SSH client.
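The user-side flow is similar (user name, validity period, and paths are again placeholders):

# On the CA machine: sign the user's public key; the cert lands next to the key
ssh-keygen -s user_ca -I alice -n alice -V +52w ~/.ssh/id_ed25519.pub
# On each server, trust user certificates signed by that CA (sshd_config):
TrustedUserCAKeys /etc/ssh/user_ca.pub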
Managing authorized_keys is provided by e.g. FreeIPA & sssd.
I would hope that this sort of host key via well-known mechanism would help when connecting to hosts that aren’t managed by the same org.
Really glad to hear about improvements on the multi-monitor front. It was precisely this that turned me off of Plasma. Gonna make this the next Software Thing I try out.
For the last decade or so, I would give KDE a try roughly once a year, and kept going back to XFCE or GNOME because although they were annoying in their own ways, they were at least stable enough to sit in the background and not annoy me too much while I went about my day. Usually the issues with KDE were glaring bugs in my workflow that nobody cared to fix (especially around multi-monitor support, docking/undocking my laptop, and sleep/resume), crazy design decisions (why can’t I delete that stupid cashew?), or outright daily crashes.
However, KDE is actually pretty good now. I switched to it in October for both my personal and work machines and am pleased to report that basically all of the things that pushed me away previously all work fine now. I have been running KDE on Debian bookworm and multi-monitor support in particular works just great. There are occasional crashes/hangs but they are infrequent enough that they don’t bug me too much, and some of that is to be expected from running an unstable distribution.
Now is a great time to give KDE another try.
I have been in the industry long enough that I agree with both points of view, but for different things. At the end of the day, naming things is hard and you’ll never be 100% happy with any method you pick in the long run. There are always trade-offs, one-offs, and gross hacks. The only reasonable goal is to seek to minimize them.
Giving hosts and devices cute names is perfect for small, self-contained deployments (like my home lab). But they fall down in large heterogeneous environments like a datacenter. The biggest problem I have seen is that a box with a specific role (say, a firewall) might get a cute name like “thor”. Eventually “thor” becomes obsolete and is replaced with a new, more powerful firewall which is named “zeus” but nobody bothers to update any of the docs. A new admin is brought in and a few weeks after they start, zeus stops forwarding packets over a critical weekend and now the poor admin is trying to understand the docs… they talk about thor being the firewall but they have only known zeus… are they both firewalls for different things? Is thor supposed to be online, but isn’t, and that is part of the problem? Only a phone call with someone who has been around longer than the admin could clarify this, but it’s 3 a.m. and the manager is on PTO…
Yes, the problem is ultimately poor documentation and onboarding, but I’ve seen it often enough as a contributing factor to a larger problem that I actively oppose cute names for critical infrastructure if I have any say in it.
On the flip side, I have seen environments that try to package as much metadata as possible into a hostname, which means you end up having to log into a host called ‘web9-c23-r1-det.eng.lab.example.com’. Whoops, sorry, you accidentally rebooted web9-c22-r1-det.eng.lab.example.com.
My personal opinion is that once you start encoding more than two pieces of metadata into a hostname, that is the point at which you should just give up, assign the thing a randomly-generated hostname, keep your metadata in some kind of DCIM, and then build your tooling around the DCIM.
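To sketch what “build your tooling around the DCIM” can look like, assuming a hypothetical DCIM with a plain REST API (the URL and endpoint are made up; NetBox and friends all offer something along these lines):

# Which boxes are firewalls? Ask the DCIM instead of decoding hostnames.
curl -fsS 'https://dcim.example.com/api/devices?role=firewall' | jq -r '.[].name'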
Big release! A lot of good stuff there. I’m excited to try it out. However, …blogspot? I thought that was killed
Nope, Google even uses it for a lot of their own project announcements.
HELO blog.bityard.net
MAIL FROM:<[email protected]>
RCPT TO:<[email protected]>
DATA
From: [email protected]
To: [email protected]
Subject: Originally, I didn't like having a beard.
I hate it when websites do that. Yes I have Javascript disabled, why do you ask?
Interesting! I had no idea that would happen. The site itself doesn’t use any Javascript but it is hosted on Cloudflare Pages which apparently injects its own Javascript to obfuscate email addresses.
I don’t know if it will work any better for you, but you can also try reading the article directly on GitHub: https://github.com/cu/blog/blob/master/content/testing-smtp.md
You can disable that in cloudflare
I expect these kinds of problems are simply a result of the complexity of modern hardware. This is basically what you get when the hardware is poorly (or at least incomprehensibly) designed or implemented, and there are so many combinations of peripherals and configurations that they can’t all be tested. USB and Thunderbolt docks, to pick an extreme example, have been around for years and they are still a compatibility and interoperability shitshow on every OS, not just Linux.
Linux devs do their best to paper over poor design decisions while not inadvertently breaking existing setups, but it’s a perilous line to walk.
You get used to it since most OEMs don’t care about Linux support, even the ones that “love” Linux.
I didn’t watch the video, but I read the article. Or as much of it as I could: the images are pretty distracting and many didn’t seem to have anything to do with the content. It also seems weird to spend so much time bashing C when Linux (and pretty much every other major modern OS) is written in C. It’s like going to France and complaining about all the oddities of French.
PAM is complex because it is powerful. It has a learning curve, just like lots of other things to do with systems administration and programming. I will grant that the documentation of PAM is lacking, but one of the most well-regarded books on the topic is PAM Mastery by Michael Lucas. I didn’t see any reference to it, so I assume the author either didn’t come across it while researching PAM (which would be a little surprising) or didn’t read it.
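To give a flavor of that power, this is roughly the auth stack a stock Debian ships in /etc/pam.d/common-auth; the bracketed result=action syntax is where most of the flexibility (and the confusion) lives:

auth    [success=1 default=ignore]    pam_unix.so nullok
auth    requisite                     pam_deny.so
auth    required                      pam_permit.so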
Basically the article can be summed up as, “meh, PAM sucks.” Frankly, this comes off as just a rant about the kinds of things I end up dealing with every day at work. I feel like this talk/article was a missed opportunity. It would be far more interesting to talk about not only PAM’s weaknesses but how they might be improved, even incrementally. Designing a replacement for PAM would be a massive challenge given its decades of inertia, but even just a rough outline or thought experiment for what one might look like would be better than nothing. There is an unlimited number of things in this world to complain about; complaints have little to no value. But well thought-out solutions do.
I agree the documentation can be lacking some, but I never found it a deal breaker. I never found PAM to be all that hard to handle and I’ve written a few PAM auth plugins.
I’ve never felt terror when dealing with PAM.
insanely detailed post, i have administrated the matrix homeserver for cyberia.club + maintained the matrix marketplace app on DO for quite awhile, and i just learned a ton about the ecosystem. thanks a lot for writing this!
it really does feel like matrix, and the folks who develop it, are severely stunted by how many projects they have going. the golang SDK, the rust SDK, dendrite, synapse, mjolnir, element (web, android, iphone), hydrogen, etc. in my opinion, they ought to focus on a few things (acknowledging their pitfalls) and work on the speed of the darn thing - matrix still feels laggy and slow compared to any modern chat app, and especially compared to IRC. read receipts and online detection cause a ton of CPU usage out of the box on synapse, and imho they should be turned off / deprecated entirely.
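fwiw, presence at least has an off switch in recent synapse versions’ homeserver.yaml (afaik there’s no equivalent switch for read receipts):

presence:
  enabled: false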
in short, i feel like there’s way too much cruft in matrix right now, it’s hard to see a future where the weight of their projects doesn’t simply crush them. i hope for their sake that i’m wrong!
And those few things include these two:
If they can manage to do just these two things, the community would handle all the rest. That’s how IRC worked and that’s how XMPP kinda worked. Buut I’m guessing they also wanna make some money so they have to branch out a bit.
Not only that, but it’s always felt to me like it’s being pushed in too many directions, many of which seem to conflict with each other. On the one hand, they want full decentralization and federation. Anyone can run a matrix server, even one they wrote themselves. But on the other hand, they want strong privacy. And on the third hand, moderation controls. I applaud them for trying to tackle all of these at once but I have my doubts that it’s even possible.
At least with IRC, the implications are clear. Servers do not typically store messages (but they could!), and once your message hits the client of everyone in the channel, there’s no way to redact it. You have to assume that anyone could be logging anything you say, even in private messages. All technical measures to enforce the ability to redact messages and cancel users would be theatrics at best because at the end of the day, anyone can make a screenshot.
Even IRC is moving in the direction where matrix is going. Check out IRCcloud and IRCv3
That implies IRC is moving - IRCv3 features never got much adoption in servers and clients outside of IRCcloud.
I had never heard of a phone line simulator, but I really like this setup and have dreamt of doing something similar (having found a new old stock rotary dial phone) at home.
If you want to take it a step further and merge the old and the new, you can buy an ATA (Analog Telephone Adapter) that turns any regular phone into a Voice-over-IP phone. With a VoIP server like Asterisk installed on a Raspberry Pi, you can do all kinds of wacky things.
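For a taste of the wacky things, here is a minimal, hypothetical extensions.conf context where dialing 100 from the rotary phone answers and plays a stock greeting:

[rotary]
exten => 100,1,Answer()
same => n,Playback(hello-world)
same => n,Hangup()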
I DID NOT NEED TO KNOW ABOUT THIS WONDERFUL IDEA
The hilarity of this (and it is hilarious) is kinda ruined by the first few frames of the GIF showing the (assumed) developer in underwear (even though it is blurry for some reason).
;)
I ran out of patience to finish reading the article due to the exasperated tone throughout and the author’s steadfast insistence that they know better than decades of system programmers that came before them, but I’ll state what I think should be the obvious:
If you want to build something on a system with a legacy design, you shouldn’t be too surprised that you have to use legacy tools and interfaces to get the job done.
The author states several times that the people who wrote C and C-based OSes made bad choices and invented bad designs, which is simply not true. Those people were not idiots, they were designing things according to existing constraints, concerns, and goals all colored by the state of the art at the time. Where the state of the art was generally, “Oh, you want that program written for this computer to run on that other computer too? Have fun re-writing the whole thing from scratch!”
The person who wrote this article is missing entire decades of context, and without that context, it’s very easy to dismiss mistakes in design as obvious oversights or incompetence. I look forward to the day that someone looks at the author’s code a few years from now and says, “wow, what a flaming pile of yuck!”
Granted, hindsight is 20/20, and we should not chastise past effort just because they lacked our hindsight.
But.
Hindsight is 20/20, how about exploiting this? While it is normal for legacy systems to suck by current standards, they still suck by current standards. Insisting that we be nice to legacy designers turns our attention away from the fact that they lacked our hindsight. Insisting that legacy systems used to be good hides the fact that they are now bad.
If we want to have a chance of disentangling ourselves from legacy crap, we first need to recognise that it is legacy crap, and build up the emotional energy necessary to take action and try & make it better. I don’t care that past giants did the best they could. The best they could is no longer enough, and we should stand on their shoulders and do better.
The best they could is no longer enough, and we should stand on their shoulders and do better.
Nobody is saying we shouldn’t! The way I see it, the problems here are obvious to those paying attention. Complaining about the problems, whether the tone is dispassionate or angry, doesn’t actually help form a solution, and becoming angry over the problems helps even less by emotionally exhausting everyone.
As @andyc alluded to in their sibling comment, this is a common pattern in computing. Much like the C ABI creates a form of crufty legacy glue between applications, HTTP and TCP have formed a similar bottleneck in networking. Every couple weeks another internet loud-person comes to the realization in anger that the reason why so much stuff gets piped over HTTP is because it’s the least common denominator let through by middleboxes. And as much as I’m sympathetic when another person comes to this well-known conclusion in anger, it doesn’t change the reality: I can’t use SCTP because of Middleboxes; I’m stuck on port 443 because of Middleboxes; Latency is really high on my video call because I’m NATed/Middleboxes. You can be angry at the middleboxes or accept/try to work with reality, it’s your choice.
The way I see it, the problems here are obvious to those paying attention. Complaining about the problems, whether the tone is dispassionate or angry, doesn’t actually help form a solution
Not everyone is paying attention. Complaining raises awareness, which is a necessary step towards forming a solution. If no one complains, few will ever know. If no one knows, no one will care. If no one cares, the problem does not get fixed.
Important problems need to be complained about.
Much like the C ABI creates a form of crufty legacy glue between applications, HTTP and TCP have formed a similar bottleneck in networking.
There’s a huge difference between C and HTTP. C is basically the only way for languages to talk to each other in the same process. It’s bad and crufty and legacy, but it’s also all we have.
HTTP on the other hand is not the only thing we have. We have IP. We have UDP. We have TCP. And those middle boxes are forcing me to use complex HTTP tunnels where I could have sent UDP packets instead. In many cases this kills performance to such an extent that some programs that would have been possible with UDP, simply cannot be done with the tunnel. And bottleneck wise, IP, TCP, and UDP are much narrower than HTTP.
You can be angry at the middleboxes or accept/try to work with reality, it’s your choice.
I’m not sure you realise how politically charged this statement is. What you just wrote suspiciously sounds like “There is no alternative”. Middle boxes aren’t like gravity. Humans put them there, and humans can remove them. If they’re a problem, complaining about them can raise awareness, and hopefully cause people to make better middle boxes.
On the other hand, if everyone thinks middle boxes are “reality”, and that the only choice is to work with them, that will make it so. I can’t have that, so I’ll keep complaining whenever I have to do some base64-JSON->deflate->HTTP insanity just to copy some Protobuf from one machine to another (real story).
Not everyone is paying attention. Complaining raises awareness, which is a necessary step towards forming a solution. If no one complains, few will ever know. If no one knows, no one will care. If no one cares, the problem does not get fixed.
The people with the knowledge and ability to fix the situation, or to provide workarounds, are often the people who are aware of the problem. I firmly believe that the endless complaining in technical circles on the internet doesn’t actually help raise awareness to folks unaware of or uninterested in the issue; once you become aware you understand the issue fairly quickly. I’ve always viewed it as a form of venting rather than an honest attempt to fix things, all about healing the self and not about fixing the problem.
Reality has a surprising amount of detail and I can guarantee you I can find a domain expert in any domain who can breathlessly fire off a list of things broken about their domain. Yet you or I who are in no position to change those things nor really have much more than a surface-level interest in them don’t need to be aware of every one of those problems. If every problem was shouted from every rooftop, I’m pretty sure humanity would go deaf.
I’m not sure you realise how politically charged this statement is. What you just wrote suspiciously sounds like “There is no alternative”. Middle boxes aren’t like gravity. Humans put them there, and humans can remove them. If they’re a problem, complaining about them can raise awareness, and hopefully cause people to make better middle boxes.
I don’t mean to draw any parallel to world politics, though humans being human, there will always be overlap. That being said, I think understanding why it’s not trivial to remove middleboxes from the equation is a very important part of understanding the problem here, and exactly why I find so many rants unhelpful. The reality is that hardware manufacturers are trying to cut costs and hire cheap, understaffed development teams who make crappy middleboxes, which are then used by ISPs who will attempt to use a middlebox forever until it either breaks on them or someone threatens them with legal action, because margins are so low. This is exacerbated by the ecosystem of ISPs in an area. There’s more, it’s a complicated topic, but all of that gets lost if you’re angrily ranting. It’s helpful to understand the incentives/problems that created this broken state so we don’t inadvertently create another set of broken incentives when the time/opportunity comes to fix them. In networking that time is around the corner, as QUIC/HTTP3 is increasingly being proposed as the way forward to allow the sorts of applications that the old Internet envisioned. Understanding the problem well here is key so we don’t run into yet more ossification.
On the other hand, if everyone thinks middle boxes are “reality”, and that the only choice is to work with them, that will make it so. I can’t have that, so I’ll keep complaining whenever I have to do some base64-JSON->deflate->HTTP insanity just to copy some Protobuf from one machine to another (real story).
Accepting that something is “reality” doesn’t stop folks from trying to improve the situation. You don’t constantly need to beat the drum of how broken something is just to fix it. Accepting reality is to also empathize with the past decisions that brought us into the current state. For example, I think IPv6 should be adopted by everyone and everywhere; NAT is a silly crutch stopping middleboxes from having to leave IPv4 addresses (and raising the value of existing blocks owned by certain entities, but I digress.) But writing a long, angry rant about how NAT sucks doesn’t help anyone; it doesn’t help my family overcome their NATs nor does it help me come up with a less complicated network topology. In the meantime Wireguard, Zerotier, and Yggdrasil are taking matters into their own hands and helping bring the full internet back despite middleboxes. That doesn’t mean I’ll ever stop pushing netops to support IPv6 nor will I stop pushing netops to let non-TCP and non-UDP traffic through their middleboxes. But there’s something to be said about actually solving a problem and not just complaining about it. In fact, I’d say that trying to solve a problem despite the broken state of the problem is perhaps the strongest statement on how broken things are. “Look at thing!” I say, “it sucks so much I had to route around it”.
Having said that, I run up against my own screed. Programmers love to complain and rant, more so than any other domain that I’m familiar with. I need to accept the reality that this hasn’t changed in the past and will not change going forward. Still, I voice my opinion from time to time about the fact. Overall I’m happy that this site has a rant tag because I can filter out the rants from my headspace and only view them when I want to (like now.)
The people with the knowledge and ability to fix the situation, or to provide workarounds, are often the people who are aware of the problem.
Political leaders are often the only folks who can make short term decisions on foreign policy. Yet their decisions are often influenced by what they believe their people will think of their decision. If they anticipate that a given decision will be unpopular, they are more likely to not do it. Thus, discussing foreign policy in a bar or on online forums does influence foreign policy. The effect is very very diffuse, but it’s real.
People with knowledge and ability to fix the situation, if they’re not psychopaths, are likely to empathise with whatever they believe the “normies” would feel about it, making it a similar situation to politicians.
Accepting that something is “reality” doesn’t stop folks from trying to improve the situation.
The choice of words is important. You didn’t just say “accept reality”, you also said “work with reality”, which generally implies not only accepting what reality is, but also accepting that you’re powerless to change it. Directed at someone else, it also tends to chastise them for being idealistic fools.
Thus, discussing foreign policy in a bar or on online forums does influence foreign policy. The effect is very very diffuse, but it’s real.
This is where we disagree. You think it’s real but I think it’s not. I think the world is full of people being unhappy by things and without a concerted political front you’ll just be that person on their soapbox ranting at crowds; the silent majority ignores the soapbox ranter. Anyway this is straying out of technology into politics so I’ll stop here.
Directed at someone else, it also tends to chastise them for being idealistic fools.
Fools no, but idealistic, yes. I know that’s anathema here on Lobsters where everyone wants to resonate with their code and have their personal values reflected in and pushed by their work, but I’m comfortable with that not being the case for myself. I’m very happy not having opinions about most things and accepting that there’s a Chesterton’s Fence to most issues in reality.
Yes definitely, I agree we should try to do better but not denigrate the work of the past …
Although in thinking about this more, I think there is a pretty important difference between networking and software. The incentives are mixed in both cases, but I’d say:
In networking the goal is to interoperate … so people make it happen, even the companies trying to make money.
In software interoperability is often an anti-goal. There is a big incentive to create walled gardens
But yeah overall I really hope everyone writing software thinks about the system as a whole, the ecosystem, and how to interoperate. Not just the concerns of their immediate work
The person who wrote this article is missing entire decades of context, and without that context, it’s very easy to dismiss mistakes in design as obvious oversights or incompetence. I look forward to the day that someone looks at the author’s code a few years from now and says, “wow, what a flaming pile of yuck!”
I can’t express enough how much I agree with this. C was designed for a specific problem and solved it well. Now, 40 years later, people complain about its deficiencies, yet barely question the fact that we (as software developers) haven’t come up with any usable and widely accepted alternative to binary interfaces. Apparently this industry isn’t as innovative as it likes to perceive itself…
Yes. Today we have byte-addressable two’s-complement machines, but back when C was first designed? There were computers with addressable units from 9 to 66 bits in size, and the C compilers that K&R put out were retargeted (by others) for such machines. By the time 1989 rolled around, the standards committee didn’t want to break any existing C code, so we got the standard we got. It was a different time back then.
This is pretty neat! I may never need a spreadsheet again.
Edit: I was going to contribute some trivial grammar fixes (in the Tips & Tricks) but it doesn’t look like the full app is on GitHub, is that true or am I just missing something? And there’s no index.js?
Author here. Agreed, it’s entirely useless. I threw this post together for fun one day a few years ago, and it’s meant to be entirely tongue-in-cheek. I don’t use it myself because it’s pointless, and it’s one extra thing I’d need to set up on a new system for no gain.
Wacky dual-head display stuff is one of the main things that drove me to just sticking with GNOME plus some UI tweaks instead of spending hours crafting my own bespoke desktop environment. GNOME 2 from waaay back in the day had excellent multi-monitor support for the time (even better than Windows and Mac) and GNOME 3 had its issues over the past few years but is now pretty tolerable for my day-to-day stuff.
Even better, don’t write a Dockerfile at all. Use one of the existing official Node images, which let you specify both the Debian version and the Node version you want.
Those images have node set as the CMD, which means it will open the node REPL instead of a shell. You can either do docker run -it node:16-buster-slim /bin/bash to execute bash (or another shell of your choice) instead, or you can make a Dockerfile using the node image as your FROM and add an ENTRYPOINT or CMD instead to eliminate the need to invoke the shell.
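For example, the Dockerfile variant is only a few lines (the tag and workdir here are just placeholders):

# Reuse the official image; only change the default command
FROM node:16-buster-slim
WORKDIR /app
CMD ["/bin/bash"]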
Incidentally, following up now that I’ve remembered to write this: one reason it’s common for images to use CMD in this way is that it makes it easier to use docker run as a sort-of drop-in replacement for uncontained CLI tools.
With an appropriate WORKDIR set, you can do stuff like
alias docker-node='docker run --rm -v $PWD:/pwd -it my-node-container node'
alias docker-npm='docker run --rm -v $PWD:/pwd -it my-node-container npm'
and you’d be able to use them just like they were node/npm commands restricted to the current directory, more or less. It wouldn’t preserve stuff like cache and config between runs, though.
I have to agree with this. I tend toward “OS” docker images (debian and ubuntu usually) for most things because installing dependencies via apt is just too damn convenient. But for something like a node app, all of your (important) deps are coming from npm anyway so you might as well use the images that were made for this exact use case.
It creates 3 layers instead of one. You can only have 127 layers in a given docker image so it’s good to combine multiple RUN statements into one where practical.
Also the 3 layers will take unnecessary space. You can follow the docker best practices and remove the cache files and apt lists afterwards - that will ensure your container doesn’t have to carry them at all.
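The usual pattern is to do the install and the cleanup in a single RUN so the cache never lands in a layer (the package name is a placeholder):

RUN apt-get update \
    && apt-get install -y --no-install-recommends some-package \
    && rm -rf /var/lib/apt/lists/*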
I was more into Python at the time, but I read _why’s Poignant Guide to Ruby just for the entertainment. I know quite a few people who got their successful development careers started from that guide. (Usually coming from helpdesk roles, or systems/network administration.) And I exist in a relatively small bubble, so I’m sure the number of lives he markedly improved is well into the tens or hundreds of thousands.
I wish there were more funny and inspirational guides not just for programming, but all technical topics.
I’m not a programmer by trade but I wrote for myself a wiki-like note management system. I made a lot of what must seem like weird, backward decisions from the outset and figured that if the project ever took off (it hasn’t), I would get a lot of questions about why I did (or didn’t do) things a certain way. Like, why doesn’t it support tags, or page hierarchies, or namespaces? Why not just dump everything into flat files? And so on. So I wrote up all those reasons: https://blog.bityard.net/articles/2022/December/the-design-of-silicon-notes-with-cartoons
Not that people are going to magically start reading docs before asking questions, but at least if it ever comes up I can point to the answers instead of typing them over and over again.
I really don’t like the “subscribe to my newsletter” begging pop-up in the middle of the article and didn’t finish reading it.
I agree, as the author of this blog, I tried everything to remove it, even using their setting pages to inject some code through the google analytic tags (unfortunatly it’s well escaped, well at least they are good at their job :)).
For once I wanted to write more than managing a blog platform, so I went to substack.
But little by little, I’m trying to decoralate from it. First, I moved to a custom domain.
Then, I will likely move to self-hosted.
However, self-hosted is a lot of work, and time I cannot invest in writing, so I will do it very slowly. It will probably a take a year.
Meanwhile, I will loose a lot of readers, because your reaction is very common. I regularly get downvoted to oblivion on reddit and ignored on HN or lobster because of this.
And I get it, as a reader I feel the same when I face those subscribe walls.
Have you tried buttondown? That’s what I use for my newsletter, and there’s no pop-up wall.
Substack offer a full RSS feed of content from https://www.bitecode.dev/feed.rss
One reasonably quick way to turn this into a self-hosted site would be to use GitHub Actions to read that file and turn it into flat files in a repo, then publish the results using GitHub Pages.
I have at least part of that setup here: https://github.com/simonw/simonwillisonblog-backup/blob/6fd9222dc34693171a68a0820b91b017c6dc235a/.github/workflows/backup.yml#L70-L86
Just saw that the rss contains the full article.
It means if any of you want to bypass the whole stubstack shenanigan, you can use the RSS feed to read it comfortably on your machine, which is quite nice.
Its still great content, maybe switch to a different blogging provider, but please keep writing
Thanks, even if I’m one of those lucky person that enjoys writing, having readers that appreciate your work makes it a hundred times better.
I recently moved to bearblog.dev (after 14 years of self-hosted/GH pages Jekyll), which is paid and seems no nonsense.
No affiliation of any kind, just discovered them recently and I am happy so far.
You don’t need to self host necessarily. You can use gohugo, ci/cd, and github pages instead. It really takes very little time to set up.
This annoys the hell out of me, i mostly leave the site after that. I dislike substack for that, reminds me of medium
Well worth reading from an anthropological point of view. It’s a great example of toxic gatekeeping! Of a a very specific kind with a very deliberate logic behind it, that is described explicitly:
Kinda amazing that Linux ever got popular tbh. On the flip side, this kind of attitude is great at clamping down on bikesheds, useless design debates and other social antipatterns that make project management more difficult. If you take this attitude and someone is still willing to tell you you’re wrong about some technical thing, then a) they better be very sure and b) they better really know what they’re talking about. Ruthlessly weeding out the weak and ineffective (along with anyone unwilling or unable to argue with you) does get you a community with a certain kind of acumen and low overhead, as long as you have a steady stream of candidates lining up for the firing squad. It just, you know, does lots of other things too, like destroy people with high potential but low confidence, or normalize abuse based on certain criteria into abuse on whatever criteria one might feel like.
And if you take this kind of management approach, you better really know what you’re talking about as well, and you really, really better be right as often as you think you are. Otherwise you’re nothing but a demagogue who talks the talk but has no idea how to walk the walk, as is the case of plenty of recent popular figures I won’t bother naming.
(I am reminded of stories of Admiral King, the head of the US Navy for most of World War 2. My favorite comment about his personality was that he metaphorically “shaved with a blowtorch”. Incredibly competent and effective guy, largely because he generally had at least three other incredibly competent and effective people doing diplomacy and spin-control for him at any given time to keep up with all the people he pissed off along the way.)
It’s also worth noting a few other things…
Linus has since publicly acknowledged that his online communication skills were often more harmful than helpful. He has softened his tone tremendously and while he is still blunt about things he doesn’t like, he doesn’t really insult contributors anymore. This could partially be because he’s mostly a release manager these days.
The reason Linus kept a following at all was that even though he was a jerk in a lot of ways, he was almost always right. And people like a confident leader. I’m sure he (and other contributors at the time) justified the somewhat toxic culture by telling themselves that the quality of the kernel code was improved if contributors had (or quickly learned to develop) a thick skin against critical code reviews. Was it justified? IMO, probably not, since they could always refuse to merge a patch they think is dogshit without spewing insults. “The overall quality of this code is currently below our minimum standards, please feel free to resubmit after improvements are made.” Not sure this has the same ring to it.
People who know or talked to him in real life always say he is actually quite soft-spoken and nice to talk to, it was just his online presence that was abrasive.
From my recollection of that era, a lot of the credit goes to folks like Alan Cox. He has a high tolerance for Linus’ nonsense, but is a genuinely helpful and positive person. A lot of contributions were merged via his tree and folks working with him almost certainly had a much more positive experience than anyone working directly with Linus. As I recall, he was already at Red Hat then, which is probably a part of the reason they were able to have so much control over the young Linux ecosystem.
One aspect of this is that Linus has always meant for the group of people he directly interacts with to be small. There was a long period of development, contemporaneous with this mailing list post, where most users of linux were not using the mainline kernel. Distros all kept their own forks and those were what most people used. Many of the people who maintained those forks were also subsystem maintainers and they would funnel things up to the mainline. I’m not a Linus biographer but I’ve never read anything from him expressing a desire for that to end. I think that was a big part of why Linux got popular.
I’d argue that this would be generally preferable, but I of course also know that this is not how it works and certainly not how people end up in management (or other) positions a lot of the time.
Out of curiosity. How would you define that? Do you differentiate between gatekeeping and toxic gatekeeping? I don’t know enough about it, so I am curious about “good” or “non-toxic” gatekeeping and whether that exists.
I have another take then. To me the topic of “I am a bastard” and “I’m really not a very nice person.” feel horrendously unrelated to the topic, so not really like an argument, because it doesn’t seem to relate to the topic at all. The part about the debuggers being about details and not the overall picture, as well as learning to be careful kind of makes sense to me. I don’t know if I agree or disagree (I’d assume the best would be to be careful and be able to use a debugger), but it does relate to the topic, wheres Linus talking about his character traits feels out of place.
Good question! I hadn’t really thought about it in those terms. I suppose gatekeeping could be charitably defined as “preventing useless people from wasting my time” (and likely theirs as well), while toxic gatekeeping would be “gatekeeping by abusing the people I want to avoid until they go away”.
If you don’t look at the context, sure.
There are people out there that consider any form of gatekeeping toxic (I’m not one of them). In my view, “gatekeeping” is something like “That idea won’t work because …” or “That’s an interesting idea, do you have a proof-of-concept we can look at?”. Borderline “toxic gatekeeping” would be “That’s a stupid idea, because … “ (you’re not calling the person stupid, just the idea, but some people can’t or won’t see the distinction). “Toxic gatekeeping” would be “What are you? Stupid? That’s a horrible idea!” and end the discussion.
I wouldn’t call those first examples gatekeeping at all. When people talk about gatekeeping, they are usually talking about the exclusion of people from a community whose ideas or demographic don’t align with those of the group.
Bringing it back on-topic, the LKML used to be a somewhat toxic place but there was never any gatekeeping. Anyone was welcome to contribute improvements to the kernel no matter who they were, the only qualification being that you understand the kernel and the code you are writing, because if either fell short, you would certainly hear about it. That’s still somewhat true, but it’s a nicer place now.
We actually do the inverse of this where I work. Copying SSH keys to all of the hosts that hundreds of users may need to access was a chore and very error prone. So I wrote a web UI that users can (securely) log into and upload their SSH key. There’s also a bit of access management stuff but the net result is that each SSH server looks up user keys from the host via
AuthorizedKeysCommand
. (Rather than justcurl
, this is actually a small Python program that caches the key as well so that if the key server goes down, people can still log in.) It works really well.Regarding the difficulties of SSH certificates, there are basically two kinds: host certs and user certs. Host certs are actually pretty easy to use if they are part of your host provisioning process. Users then only need to add one line to their
~/.ssh/known_hosts
file to trust all hosts signed by that cert. If that’s too much work, they can just keep calm and TOFU on.User certs are the reverse of course, with the added step that users have to install the signed certificate to their
.ssh
directory. Where I work, we have some non-technical users that must use SSH for various tasks so that is a non-starter in this particular environment. But I’m told it scales very well in places where all SSH users are technical enough to configure their own SSH client.Managing authorized_keys is provided by e.g. FreeIPA & sssd.
I would hope that this sort of host key via well-known mechanism would help when connecting to hosts that aren’t managed by the same org.
Really glad to hear about improvements on the multi-monitor front. It was precisely this that turned me off of Plasma. Gonna make this the next Software Thing I try out.
For the last decade or so, I would give KDE a try roughly once a year, and kept going back to XFCE or GNOME because although they were annoying in their own ways, they were at least stable enough to sit in the background and not annoy me too much while I went about my day. Usually the issues with KDE were glaring bugs in my workflow that nobody cared to fix (especially around multi-monitor support, docking/undocking my laptop, and sleep/resume), crazy desgin decisions (why I can’t I delete that stupid cashew?), or outright daily crashes.
However, KDE is actually pretty good now. I switched to it in October for both my personal and work machines and am pleased to report that basically all of the things that pushed me away previously all work fine now. I have been running KDE on Debian bookworm and multi-monitor support in particular works just great. There are occasional crashes/hangs but they are infrequent enough that they don’t bug me too much, and some of that is to be expected from running an unstable distribution.
Now is a great time to give KDE another try.
I have been in the industry long enough that I agree with both points of view, but for different things. At the end of the day, naming things is hard and you’ll never be 100% happy with any method you pick in the long run. There are always trade-offs, one-offs, and gross hacks. The only reasonable goal is to seek to minimize them.
Giving hosts and devices cute names is perfect for small, self-contained deployments (like my home lab). But they fall down in large heterogeneous environments like a datacenter. The biggest problem I have seen is that a box with a specific role (say, a firewall) might get a cute name like “thor”. Eventually “thor” becomes obsolete and is replaced with a new, more powerful firewall which is named “zeus” but nobody bothers to update any of the docs. A new admin is brought in and a few weeks after they start, zeus stops forwarding packets over a critical weekend and now the poor admin is trying to understand the docs… they talk about thor being the firewall but they have only known zeus… are they both firewalls for different things? Is thor supposed to be online, but isn’t, and that is part of the problem? Only a phone call with someone who has been around longer than the admin could clarify this, but it’s 3 a.m. and the manager is on PTO…
Yes, the problem is ultimately poor documentation and onboarding, but I’ve seen it often enough as a contributing factor to a larger problem that I actively oppose cute names for critical infrastructure if I have any say in it.
On the flip side, I have seen environments that try to package as much metadata as possible into a hostname, which leads you to end up having to log into a host called ‘web9-c23-r1-det.eng.lab.example.com’. Whoops sorry, you accidentally rebooted web9-c22-r1-det.eng.lab.example.com.
My personal opinion is that if once you start encoding more than two pieces of metadata into a hostname, that is the point at which you should just give up, assign the thing a randomly-generated hostname, keep your metadata in some kind of DCIM, and then build your tooling around the DCIM.
Big release! A lot of good stuff there. I’m excited to try it out. However, …blogspot? I thought that was killed
Nope, Google even uses it for a lot of their own project announcements.
I hate it when websites do that. Yes I have Javascript disabled, why do you ask?
Interesting! I had no idea that would happen. The site itself doesn’t use any Javascript but it is hosted on Cloudflare Pages which apparently injects its own Javascript to obfuscate email addresses.
I don’t know if it will work any better for you, but you can also try reading the article directly on GitHub: https://github.com/cu/blog/blob/master/content/testing-smtp.md
You can disable that in cloudflare
I expect these kinds of problems are simply a result of the complexity of modern hardware. This is basically what you get when the hardware is poorly (or at least incomprehensibly) designed or implemented, and there are so many combinations of peripherals and configurations that they can’t all be tested. USB and Thunderbolt docks, to pick an extreme example, have been around for years and they are still a compatibility and interoperability shitshow on every OS, not just Linux.
Linux devs do their best to paper over poor design decisions while not inadvertently breaking existing setups, but it’s a perilous line to walk.
You get used to it since most OEMs don’t care about Linux support, even the ones that “love” Linux.
I didn’t watch the video, but I read the article. Or as much of it as I could: the images are pretty distracting and many didn’t seem to have anything to do with the content. It also seems weird to spend so much time bashing C when Linux (and pretty much every other major modern OS) is written in C. It’s like going to France and complaining about all the oddities of French.
PAM is complex because it is powerful. It has a learning curve just like lots of other things to do with systems administration and programming. I will grant that the documentation of PAM is lacking, but one of the most well-regarded books on the topic is PAM Mastery by Michael Lucas. I didn’t see any reference to it, so I assume the author didn’t come across any reference to it while researching PAM (which would be a little surprising) or didn’t read it.
Basically the article can be summed up as, “meh, PAM sucks.” Frankly, this comes off as a just a rant about the kinds of things I end up dealing with every day at work. I feel like this talk/article was a missed opportunity. It would be far more interesting to talk about not only PAM’s weaknesses but how they might be improved, even incrementally. Designing a replacement for PAM would be a massive challenge given its decades of inertia, but even just a rough outline or thought experiment for what one might look like would be better than nothing. There is an unlimited number of things in this world to complain about, complaints have little to no value. But well thought-out solutions do.
I agree the documentation can be lacking some, but I never found it a deal breaker. I never found PAM to be all that hard to handle and I’ve written a few PAM auth plugins.
I’ve never felt terror when dealing with PAM.
insanely detailed post, i have administrated the matrix homeserver for cyberia.club + maintained the matrix marketplace app on DO for quite awhile, and i just learned a ton about the ecosystem. thanks a lot for writing this!
it really does feel like matrix, and the folks who develop it, are severely stunted by how many projects they have going. the golang SDK, the rust SDK, dendrite, synapse, mjolnir, element (web, android, iphone), hydrogen, etc. in my opinion, they ought to focus on a few things (acknowledging their pitfalls) and work on the speed of the darn thing - matrix still feels laggy and slow compared to any modern chat app, and especially compared to IRC. read receipts and online detection cause a ton of CPU usage out of the box on synapse, and imho they should be turned off / deprecated entirely.
in short, i feel like there’s way too much cruft in matrix right now, it’s hard to see a future where the weight of their projects doesn’t simply crush them. i hope for their sake that i’m wrong!
And those few things include these two:
If they can manage to do just these two things, the community would handle all the rest. That’s how IRC worked and that’s how XMPP kinda worked. Buut I’m guessing they also wanna make some money so they have to branch out a bit.
Not only that, but it’s always felt to me like it’s being pushed into too many directions, many of which seem to conflict with each other. On the one hand, they want full decentralization and federation. Anyone can run a matrix server, even one they wrote themselves. But on the other hand, they want strong privacy. And on the third hand, moderation controls. I applaud them for trying to tackle all of these at once but I have my doubts that it’s even possible.
At least with IRC, the implications are clear. Servers do not typically store messages (but they could!), and once your message hits the client of everyone in the channel, there’s no way to redact it. You have to assume that anyone could be logging anything you say, even in private messages. All technical measures to enforce the ability to redact messages and cancel users would be theatrics at best because at the end of the day, anyone can make a screenshot.
Even IRC is moving in the direction matrix is going. Check out IRCcloud and IRCv3.
That implies IRC is moving - IRCv3 features never got much adoption in servers and clients outside of IRCcloud.
I had never heard of a phone line simulator, but I really like this setup and have dreamt of doing something similar (having found a new old stock rotary dial phone) at home.
If you want to take it a step further and merge the old and the new, you can buy an ATA (Analog Telephone Adapter) that turns any regular phone into a Voice-over-IP phone. With a VoIP server like Asterisk installed on a Raspberry Pi, you can do all kinds of wacky things.
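To give a flavor of the wackiness: once the ATA registers with Asterisk, the old phone is just another extension you can script in the dialplan. A made-up extensions.conf fragment (the context name and extension number are arbitrary):

```
; dialing 100 from the rotary phone answers the call, plays Asterisk's
; stock hello-world prompt, and hangs up
[internal]
exten => 100,1,Answer()
 same => n,Playback(hello-world)
 same => n,Hangup()
```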
I DID NOT NEED TO KNOW ABOUT THIS WONDERFUL IDEA
The hilarity of this (and it is hilarious) is kinda ruined by the first few frames of the GIF showing the (assumed) developer in underwear (even though it is blurry for some reason).
;)
I ran out of patience trying to finish the article, due to the exasperated tone throughout and the author’s steadfast insistence that they know better than the decades of systems programmers who came before them, but I’ll state what I think should be obvious:
If you want to build something on a system with a legacy design, you shouldn’t be too surprised that you have to use legacy tools and interfaces to get the job done.
The author states several times that the people who wrote C and C-based OSes made bad choices and invented bad designs, which is simply not true. Those people were not idiots, they were designing things according to existing constraints, concerns, and goals all colored by the state of the art at the time. Where the state of the art was generally, “Oh, you want that program written for this computer to run on that other computer too? Have fun re-writing the whole thing from scratch!”
The person who wrote this article is missing entire decades of context, and without that context, it’s very easy to dismiss mistakes in design as obvious oversights or incompetence. I look forward to the day that someone looks at the author’s code a few years from now and says, “wow, what a flaming pile of yuck!”
Granted, hindsight is 20/20, and we should not chastise past efforts just because their authors lacked our hindsight.
But.
Hindsight is 20/20, so how about exploiting it? While it is normal for legacy systems to suck by current standards, they still suck by current standards. Insisting that we be nice to legacy designers turns our attention away from the fact that they lacked our hindsight. Insisting that legacy systems used to be good hides the fact that they are now bad.
If we want to have a chance of disentangling ourselves from legacy crap, we first need to recognise that it is legacy crap, and build up the emotional energy necessary to take action and try & make it better. I don’t care that past giants did the best they could. The best they could is no longer enough, and we should stand on their shoulders and do better.
Nobody is saying we shouldn’t! The way I see it, the problems here are obvious to those paying attention. Complaining about the problems, whether the tone is dispassionate or angry, doesn’t actually help form a solution, and becoming angry over the problems helps even less by emotionally exhausting everyone.
As @andyc alluded to in their sibling comment, this is a common pattern in computing. Much like the C ABI creates a form of crufty legacy glue between applications, HTTP and TCP have formed a similar bottleneck in networking. Every couple of weeks another internet loud-person comes to the angry realization that the reason so much stuff gets piped over HTTP is that it’s the least common denominator let through by middleboxes. And as much as I’m sympathetic when another person comes to this well-known conclusion in anger, it doesn’t change the reality: I can’t use SCTP because of middleboxes; I’m stuck on port 443 because of middleboxes; latency is really high on my video call because I’m NATed behind middleboxes. You can be angry at the middleboxes or accept and try to work with reality; it’s your choice.
Not everyone is paying attention. Complaining raises awareness, which is a necessary step towards forming a solution. If no one complains, few will ever know. If no one knows, no one will care. If no one cares, the problem does not get fixed.
Important problems need to be complained about.
There’s a huge difference between C and HTTP. C is basically the only way for languages to talk to each other in the same process. It’s bad and crufty and legacy, but it’s also all we have.
HTTP on the other hand is not the only thing we have. We have IP. We have UDP. We have TCP. And those middle boxes are forcing me to use complex HTTP tunnels where I could have sent UDP packets instead. In many cases this kills performance to such an extent that some programs that would have been possible with UDP simply cannot be done with the tunnel. And bottleneck-wise, IP, TCP, and UDP are much narrower than HTTP.
I’m not sure you realise how politically charged this statement is. What you just wrote sounds suspiciously like “There is no alternative”. Middle boxes aren’t like gravity. Humans put them there, and humans can remove them. If they’re a problem, complaining about them can raise awareness, and hopefully cause people to make better middle boxes.
On the other hand, if everyone thinks middle boxes are “reality”, and that the only choice is to work with them, that will make it so. I can’t have that, so I’ll keep complaining whenever I have to do some base64-JSON->deflate->HTTP insanity just to copy some Protobuf from one machine to another (real story).
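For the curious, that insanity looks roughly like the following; the URL and field name are made up, and gzip stands in for raw deflate:

```sh
# compress the protobuf, base64 it, wrap it in JSON, and POST it over HTTPS,
# because port 443 is the only thing the middleboxes reliably let through
blob=$(gzip -c message.pb | base64 -w0)
curl -sS -X POST https://example.com/api/ingest \
     -H 'Content-Type: application/json' \
     -d "{\"payload\":\"$blob\"}"
```

Three layers of encoding to move one binary blob between two machines that both speak TCP perfectly well.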
The people with the knowledge and ability to fix the situation, or to provide workarounds, are often the people who are aware of the problem. I firmly believe that the endless complaining in technical circles on the internet doesn’t actually help raise awareness to folks unaware of or uninterested in the issue; once you become aware you understand the issue fairly quickly. I’ve always viewed it as a form of venting rather than an honest attempt to fix things, all about healing the self and not about fixing the problem.
Reality has a surprising amount of detail and I can guarantee you I can find a domain expert in any domain who can breathlessly fire off a list of things broken about their domain. Yet you or I who are in no position to change those things nor really have much more than a surface-level interest in them don’t need to be aware of every one of those problems. If every problem was shouted from every rooftop, I’m pretty sure humanity would go deaf.
I don’t mean to draw any parallel to world politics, though humans being human, there will always be overlap. That being said, I think understanding why it’s not trivial to remove middleboxes from the equation is a very important part of understanding the problem here, and exactly why I find so many rants unhelpful. The reality is that hardware manufacturers are trying to cut costs and hire cheap, understaffed development teams who make crappy middleboxes, which are then used by ISPs who will attempt to use a middlebox forever until it either breaks on them or someone threatens them with legal action, because margins are so low. This is exacerbated by the ecosystem of ISPs in an area. There’s more, it’s a complicated topic, but all of that gets lost if you’re angrily ranting. It’s helpful to understand the incentives/problems that created this broken state so we don’t inadvertently create another set of broken incentives when the time/opportunity comes to fix them. In networking, that time is around the corner as QUIC/HTTP3 is increasingly being proposed as the way forward to allow the sorts of applications that the old Internet envisioned. Understanding the problem well here is key so we don’t run into yet more ossification.
Accepting that something is “reality” doesn’t stop folks from trying to improve the situation. You don’t constantly need to beat the drum of how broken something is just to fix it. Accepting reality is to also empathize with the past decisions that brought us into the current state. For example, I think IPv6 should be adopted by everyone and everywhere; NAT is a silly crutch stopping networks from having to leave IPv4 addresses behind (and raising the value of existing blocks owned by certain entities, but I digress). But writing a long, angry rant about how NAT sucks doesn’t help anyone; it doesn’t help my family overcome their NATs, nor does it help me come up with a less complicated network topology. In the meantime, Wireguard, Zerotier, and Yggdrasil are taking matters into their own hands and helping bring the full internet back despite middleboxes. That doesn’t mean I’ll ever stop pushing netops to support IPv6, nor will I stop pushing them to let non-TCP and non-UDP traffic through their middleboxes. But there’s something to be said for actually solving a problem and not just complaining about it. In fact, I’d say that trying to solve a problem despite its broken state is perhaps the strongest statement on how broken things are. “Look at this thing!” I say, “it sucks so much I had to route around it”.
Having said that, I run up against my own screed. Programmers love to complain and rant, more so than any other domain that I’m familiar with. I need to accept the reality that this hasn’t changed in the past and will not change going forward. Still, I voice my opinion from time to time about the fact. Overall I’m happy that this site has a rant tag, because I can filter the rants out of my headspace and only view them when I want to (like now).
Political leaders are often the only folks who can make short term decisions on foreign policy. Yet their decisions are often influenced by what they believe their people will think of their decision. If they anticipate that a given decision will be unpopular, they are more likely to not do it. Thus, discussing foreign policy in a bar or on online forums does influence foreign policy. The effect is very very diffuse, but it’s real.
People with knowledge and ability to fix the situation, if they’re not psychopaths, are likely to empathise with whatever they believe the “normies” would feel about it, making it a similar situation to politicians.
The choice of words is important. You didn’t just say “accept reality”, you also said “work with reality”, which generally implies not only accepting what reality is, but also accepting that you’re powerless to change it. Directed at someone else, it also tends to chastise them for being idealistic fools.
This is where we disagree. You think it’s real but I think it’s not. I think the world is full of people being unhappy about things, and without a concerted political front you’ll just be that person on their soapbox ranting at crowds; the silent majority ignores the soapbox ranter. Anyway, this is straying out of technology into politics, so I’ll stop here.
Fools no, but idealistic, yes. I know that’s anathema here on Lobsters where everyone wants to resonate with their code and have their personal values reflected in and pushed by their work, but I’m comfortable with that not being the case for myself. I’m very happy not having opinions about most things and accepting that there’s a Chesterton’s Fence to most issues in reality.
Yes definitely, I agree we should try to do better but not denigrate the work of the past …
Although, in thinking about this more, I think there is a pretty important difference between networking and software. The incentives are mixed in both cases, but I’d say:
But yeah overall I really hope everyone writing software thinks about the system as a whole, the ecosystem, and how to interoperate. Not just the concerns of their immediate work
I can’t express enough how much I agree with this. C was designed for a specific problem and solved it well. Now, 40 years later, people complain about its deficiencies, yet barely question the fact that we (as software developers) haven’t come up with any usable and widely accepted alternative to binary interfaces. Apparently this industry isn’t as innovative as it likes to perceive itself…
Yes. Today we have byte-addressable two’s-complement machines, but back when C was first designed? There were computers with addressable units from 9 to 66 bits in size, and the C compilers that K&R put out were retargeted (by others) for such machines. By the time 1989 rolled around, the standards committee didn’t want to break any existing C code, so we got the standard we got. It was a different time back then.
https://numbr.dev - a web version of Soulver: a calculator with currency rates.
This is pretty neat! I may never need a spreadsheet again.
Edit: I was going to contribute some trivial grammar fixes (in the Tips & Tricks), but it doesn’t look like the full app is on GitHub. Is that true, or am I just missing something? And there’s no index.js?
Thanks! Right now only the core is published. The UI is still in development, so I will release it later. :)
It wasn’t neglected at all. Insults are the definition of useless bloat.
This is what is known as “dry humor”
Yeah, I thought that was obvious when I was writing it, but apparently not. Clarified my own opinion on it in another comment.
Touché
Author here. Agreed, it’s entirely useless. I threw this post together for fun one day a few years ago, and it’s meant entirely tongue-in-cheek. I don’t use it myself because it’s pointless, and it would be one extra thing I’d need to set up on a new system for no gain.
Every time someone creates a new Markdown variant, a kitten dies :(
You monster!
Wacky dual-head display stuff is one of the main things that drove me to just sticking with GNOME plus some UI tweaks instead of spending hours crafting my own bespoke desktop environment. GNOME 2 from waaay back in the day had excellent multi-monitor support for the time (even better than windows and Mac) and GNOME 3 had its issues over the past few years but is now pretty tolerable for my day-to-day stuff.
Congrats on writing a Dockerfile.
A few suggestions:
Even better, don’t write a Dockerfile at all. Use one of the existing official Node images, which let you specify both which Debian and which Node version you want.
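The tags encode both choices, so the top of the Dockerfile can be a single line, e.g. using the tag that comes up later in this thread:

```dockerfile
# Node 16 on Debian buster, slim variant; the tag pins both versions
FROM node:16-buster-slim
```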
I tried this, but I didn’t get a shell; it would be nice to get it working.
Those images have `node` set as the `CMD`, which means it will open the node REPL instead of a shell. You can either do

```
docker run -it node:16-buster-slim /bin/bash
```

to execute bash (or another shell of your choice) instead, or you can make a Dockerfile using the node image as your `FROM` and add an `ENTRYPOINT` or `CMD` instead to eliminate the need to invoke the shell.

Incidentally, to follow up as I remembered to write this, one reason it’s common for images to use `CMD` in this way is that it makes it easier to use `docker run` as a sort-of-drop-in replacement for uncontained CLI tools. With an appropriate `WORKDIR` set, you can do stuff like

```
alias docker-node='docker run --rm -v $PWD:/pwd -it my-node-container node'
alias docker-npm='docker run --rm -v $PWD:/pwd -it my-node-container npm'
```
and you’d be able to use them just like they were node/npm commands restricted to the current directory, more or less. It wouldn’t preserve stuff like cache and config between runs, though.
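If the lost cache bothers you, one possible tweak (the volume name here is arbitrary) is to mount a named volume over npm’s cache directory:

```sh
# npm keeps its cache in ~/.npm, which is /root/.npm in the official images,
# since they run as root by default
alias docker-npm='docker run --rm -v $PWD:/pwd -v npm-cache:/root/.npm -it my-node-container npm'
```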
I have to agree with this. I tend toward “OS” docker images (debian and ubuntu usually) for most things because installing dependencies via apt is just too damn convenient. But for something like a node app, all of your (important) deps are coming from npm anyway so you might as well use the images that were made for this exact use case.
what problems?
It creates 3 layers instead of one. You can only have 127 layers in a given docker image, so it’s good to combine multiple `RUN` statements into one where practical.

Also the 3 layers will take unnecessary space. You can follow the docker best practices and remove the cache files and apt lists afterwards - that will ensure your container doesn’t have to carry them at all.
Check out the apt-get section in the best practice guide: https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
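Concretely, the pattern from that guide looks something like this (the package names are just examples):

```dockerfile
# update, install, and clean up in a single RUN so the apt cache and lists
# never get baked into a layer of the final image
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates \
    && rm -rf /var/lib/apt/lists/*
```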
I was more into Python at the time, but I read _why’s Poignant Guide to Ruby just for the entertainment. I know quite a few people who got their successful development careers started from that guide. (Usually coming from helpdesk roles, or systems/network administration.) And I exist in a relatively small bubble, so I’m sure the number of lives he markedly improved is well into the tens or hundreds of thousands.
I wish there were more funny and inspirational guides, not just for programming but for all technical topics.
You might enjoy Julia Evans’ zines! Not quite what you’re describing, but seems closer than most other reference material.