I owned one of these as my first work laptop and I cannot agree; it’s a decent laptop, but far from the best one. What I disliked the most was its abysmal display: dark, low resolution, bad color reproduction. As usual with Lenovo, the screen is a lottery, and you cannot infer from the model number which manufacturer the panel comes from. The keyboard was pretty good, though, even if it had a lot of flex and felt pretty cheap compared to what you get nowadays. Also, I don’t get the point of carrying another battery pack: to swap it out you need to power down the machine. HP’s EliteBook 8460[w/p] models could be configured with a 9-cell battery and an optional battery slice, which gave them almost a full day of battery life. Those EliteBooks were built like a tank, but at the same time very heavy. Compared to the X220 they’re the better laptops, in my opinion.
However, the best laptop is an Apple silicon MacBook Air. It’s so much better than anything else available that it’s almost unfair. No fan noise, all-day battery life, instant power-on, and it’s very powerful. It would be great if it could run any Linux distribution, but macOS just works and is good enough for me.
I totally disagree, and I have both an X220 and an M1 MacBook Air.
I much prefer the X220. In fact, I have 2 of them, and I only have the MBA because work bought me one. I would not pay for it myself.
I do use the MBA for travel sometimes, because at a conference it’s more important to have something very portable, but it is a less useful tool in general.
I am a writer. The keyboard matters more than almost anything else. The X220 has a wonderful keyboard and the MBA has a terrible keyboard, one of the worst on any premium laptop.
Both my X220s have more RAM, 1 or 2 aftermarket SSDs, and so on. That is impossible with the MBA.
My X220s have multiple USB 2 ports, multiple USB 3 ports, plus DisplayPort plus VGA. I can have it plugged in and still run 3 screens, a keyboard, and a mouse, and still have a spare port. On the MBA this means carrying a hub, and thus its thinness and lightness go away.
I am 6’2”. I cannot work on a laptop in a normal plane seat. I do not want to have to carry my laptop on board. But you cannot check in a laptop battery. The X220 solves this: I can just unplug its battery in seconds, and take only the battery on board. I can also carry a charged spare, or several.
The X220 screen is fine. I am 55. I owned 1990s laptops. I remember 1980s laptops. I remember greyscale passive-matrix LCDs and I know why OSes have options to help you find the mouse cursor. The X220 screen is fine. A bit higher-res would be fine but I cannot see 200 or 300ppi at laptop screen range so I do not need a bulky GPU trying to render invisibly small pixels. It is a work tool; I do not want to watch movies on it.
I have recently reviewed the X13S Arm Thinkpad, and the Z13 AMD Thinkpad, and the X1 Carbon gen 12.
My X220 is better than all of them, and I prefer it to all of those and to the MacBook Air.
I say all this not to say YOU ARE WRONG because you are entitled to your own opinions and choices. I am merely trying to clearly explain why I do not agree with them.
… And why it really annoys me that you and your choices have so limited the market that I have to use a decade-old laptop to get what I want in a laptop because your choices apparently outweigh mine and nobody makes a laptop that does what I want in a laptop any more, including the makers of my X220.
Probably because your requirements are very specific and “developer” laptops are a niche market.
I don’t think my requirements are very specific.
I am not a developer, and I don’t know what a “developer” laptop is meant to be.
I don’t think it’s that niche:
a. Mine is a widely-held view
b. The fact there is such a large aftermarket in classic Thinkpads and parts for them, even upgrade motherboards, falsifies this claim.
It was not a specialist tool when new; it was a typical pro-grade machine. It’s not a niche product.
This change in marketing is not about ignoring niche markets. It’s about two things: reducing cost, and thus increasing margin; and about following trends and not doing customer research.
Comparison: I want a phone with a removable battery, a headphone socket, physical buttons I can use with gloves on, and at least 2 SIM slots plus a card slot. These are all simple easy requirements which were ubiquitous a decade ago, but are gone now, because everyone copies the market leaders, without understanding what makes them the market leader.
Whereas ISTM that your argument amounts to “if people wanted that they’d buy it, so if they don’t, they mustn’t want it”. Which is trivially falsified: this does not work if there is no such product to buy.
But there used to be, same as I used to have a wide choice of phones with physical buttons, headphone sockets, easily augmented storage, etc.
In other markets, companies are thriving by supplying products that go counter to industry trends. For instance, the Royal Enfield company supplies inexpensive, low-powered motorcycles that are easily maintained by their owners, which goes directly counter to the trend among Japanese motorcycles of constantly increasing power, lowering weight, and removing customer-maintainability by making highly-integrated devices with sealed, proprietary electronics controlling them.
Framework laptops are demonstrating some of this for laptops.
When I say the major brands lack innovation, are derivative, and copy one another, this is hardly even a controversial statement. Calling it a conspiracy theory is borderline offensive and I am not happy with that.
Margins in the laptop business are razor-thin. Laptops are seen as a commodity. The biggest buyers are businesses who simply want to provide their employees with a tool to do their jobs.
These economic facts do tend to converge available options towards a market-leader sameness, but that’s simply how the market works.
Motorcycles are different. They’re consumer/lifestyle products. You don’t ride a Royal Enfield because you need to, you do it because you want to, and you want to signal within the biker community what kind of person you are.
This is the core point. For instance, my work machine, which I am not especially fond of, is described in reviews as being a standard corporate fleet box.
I checked the price when reviewing the newer Lenovos, and it was about £800 in bulk.
These are, or were when new, all ~£2000 premium devices, some significantly more.
And yet, my budget-priced commodity fleet Dell has more ports than any of them, even the flagship X1C – that has 4 USB ports, but the Dell, at about a third of the price, has all those and HDMI and Ethernet.
This is not a cost-cutting thing at the budget end of the market. These are premium devices.
And FWIW I think you’re wrong about the Enfields, too. The company is Indian, and survived decades after the UK parent company died, outcompeted by cheaper, better-engineered Japanese machines.
Enfield faded from world view, making cheap robust low-spec bikes for a billion Indian people who couldn’t afford cars. Then some people in the UK noticed that they still existed, started importing them, and the company made it official, applied for and regained the “Royal” prefix and now exports its machines.
But the core point that I was making was that in both cases, it is the budget machines at the bottom of the market which preserve the ports. It is the expensive premium models which are the highly-integrated, locked-down sealed units.
This is not cost-cutting; it is fashion-led. Like women’s skirts and dresses without pockets, it is designed for looks, not practicality, and sold at premium prices.
Basically, what I am reading from your comments is that Royal Enfield motorcycles (I knew about the Indian connection, btw, but didn’t know they’d made a comeback in the UK) and chunky black laptops with a lot of ports are for people without a lot of money, or who prefer not to spend a lot of money on bikes or laptops.
Why there are not more products aimed at this segment of the market is left as an exercise to the reader.
ISTM that you are adamantly refusing to admit that there is a point here.
Point Number 1:
This is not some exotic new requirement. It is exactly how most products used to be, in laptops, in phones, in other sectors. Some manufacturers cut costs, sold it as part of a “fashionable” or “stylish” premium thing, everyone else followed along like sheep… And now it is ubiquitous, and some spectators, unable to follow the logic of cause and effect, say “ah well, it is like that because nobody wants those features any more.”
And no matter how many of us stand up and say “BUT WE WANT THEM!” apparently we do not count for some reason.
Point Number 2:
more products aimed at this segment of the market
That’s the problem. Please, I beg you, give me links to any such device available in the laptop market today, please.
I don’t doubt there are people who want these features. They’re vocal enough.
But there are not enough of them (either self-declared, or found via market research) for a manufacturer to make the bet that they will make money making products for this market.
It’s quite possible that a new X220-like laptop would cost around $5,000. Would such a laptop sell enough to make money back for the manufacturer?
“Probably because your requirements are very specific and “developer” laptops are a niche market.”
I’d suggest an alternate reason. Yes, developer laptops are a niche market. But I’d propose that laptops moving away from the X220 is a result of chasing “thinner and lighter” above all else, plus lowering costs. And when the majority of manufacturers all chase the same targets, you get skewed results.
Plus: User choice only influences laptop sales so much. I’m not sure what the split is, but many laptops are purchased by corporations for their workforce. You get the option of a select few laptops that business services / IT has procured, approved, and will support. If they are a Lenovo shop or a Dell shop and the next generation or three suck, it has little impact on sales because it takes years before a business will offer an alternative. If they even listen to user complaints.
And if I buy my own laptop, new, all the options look alike - so there’s no meaningful way to buy my preference and have that influence product direction.
“Neither I, nor anyone else who bought an Apple product is responsible for your choice of a laptop.”
Mostly true. The popularity of Apple products has caused the effect I described above. When Apple started pulling ahead of the pack, instead of (say) Lenovo saying “we’ll go the opposite direction”, the manufacturers chased the Apple model. In part due to repeated feedback that users want laptops like the Air, so we get the X1 Carbons. And ultimately all the Lenovo models get crappy chiclet keyboards, many get soldered RAM, fewer ports, etc. (As well as Dell, etc.)
(Note I’m making some pretty sweeping generalizations here, but my main point is that the market is limited not so much because the OP’s choices are “niche” but because the market embraces trends way too eagerly and blindly.)
When Apple started pulling ahead of the pack, instead of (say) Lenovo saying “we’ll go the opposite direction”, the manufacturers chased the Apple model.
This reminds me a great deal of my recurring complaint that it’s hard to find a car with a manual transmission anymore. Even down to the point that, last time I was shopping, I looked at German-designed/manufactured vehicles, knowing that the prevailing sentiment last time I visited Germany was that automatic transmissions were for people who were elderly and/or disabled.
This, but Asahi still has a long, long way to go before it can be considered stable enough to be a viable replacement for macOS.
For the time being, you’re pretty much limited to running macOS as the host OS and virtualizing Linux on top of it, which is good enough for 90% of use cases anyway. That’s what I do and it works just fine, most of the time.
Out of curiosity, what are you using for virtualization? The options for arm64 virtualization seemed slim last I checked (UTM “works” but was buggy. VMWare Fusion only has a tech preview, which I tried once and also ran into problems). Though this was a year or two ago, so maybe things have improved.
VMware and Parallels have full versions out supporting Arm now, and there are literally dozens of “light” VM runners out now, using Apple’s Virtualisation framework (not to be confused with the older, lower level Hypervisor.framework)
I’m using UTM to run FreeBSD and also have Podman set up to run FreeBSD containers (with a VM that it manages). Both Podman (open source) and Docker Desktop (free for orgs with fewer than, I think, 250 employees) can manage a Linux VM for running containers. Apple exposes a Linux binary for Rosetta 2 that Docker Desktop uses, so it can run x86 Linux containers.
I’m not speaking for @petar, but I use UTM when I need full fat Linux. (For example, to mount an external LUKS-encrypted drive and copy files.) That said, I probably don’t push it hard enough to run into real bugs. But the happy path for doing something quick on a Ubuntu or Fedora VM has not caused me any real headaches.
It feels like most of the other things I used to use a Linux VM for work well in Docker desktop. I still have my ThinkPad around (with a bare metal install) in case I need it, but I haven’t reached for it very often in the past year.
I’ve interacted with the LLVM project only once (an attempt to add a new clang diagnostic), and my experience with Phabricator was a bit painful (in particular, the arcanist tool). Switching to GitHub will certainly reduce friction for (new) contributors.
However, it’s dismaying to see GitHub capture even more critical software infrastructure. LLVM is a huge and enormously impactful project. GitHub has many eggs in their basket. The centralization of so much of the software engineering industry into a single mega-corp-owned entity makes me more than a little uneasy.
There are so many alternatives they could have chosen if they wanted the pull/merge request model. It really is a shame they ended up where they did. I’d love to delete my Microsoft GitHub account just like I deleted my Microsoft LinkedIn account, but the lock-in of all these projects means that to participate in open source I need to keep a proprietary account that trains on all of our data, upsells things we don’t need, & turns a code forge into a social media platform: reactions + green graphs to induce anxiety + READMEs you can’t read anymore since it’s all about marketing (inside their GUI) + Sponsors, which should be good but they’re skimming their cut of course, + etc.
It really is a shame they ended up where they did.
If even 1% of the energy that’s spent on shaming and scolding open-source maintainers for picking the “wrong” infrastructure was instead diverted into making the “right” infrastructure better, this would not be a problem.
Have you used them? They’re all pretty feature complete. The only difference really is alternatives aren’t a social network like Microsoft GitHub & don’t have network effect.
It’s the same with chat apps—they can all send messages, voice/video/images, replies/threads. There’s no reason to be stuck with WhatsApp, Messenger, Telegram, but people do since their network is there. So you need to get the network to move.
The only difference really is alternatives aren’t a social network like Microsoft GitHub & don’t have network effect.
And open-source collaboration is, in fact, a social activity. This suggests an area where alternatives need to focus some time and effort, rather than (again) scolding and shaming already-overworked maintainers who are simply going where the collaborators are.
Breaking out the word “social” from “social media” isn’t even talking about the same thing. It’s a social network à la Facebook/Twitter, with folks focusing on how many stars they have, how green their activity bars are, how flashy their RENDERME.md file is, scrolling feeds, avatars, Explore—all to keep you on the platform. And as a result you can hear anxiety in many developers about how their Microsoft GitHub profile looks—as much as you hear folks obsessing over their TikTok or Instagram comments. That social anxiety should have little place in software.
Microsoft GitHub’s collaboration system isn’t special & doesn’t even offer a basic feature like threading; replying to an inline-code comment via email puts a new reply on the whole merge request, and there are other bugs. For collaboration, almost all of the alternatives have a ticketing system, with some having Kanban & additional features—but even then, a dedicated (hopefully integrated) ticketing system, forum, mailing list, or libre chat option can offer a better, tailored experience.
My suggestion is that open source dogfooding on open source leads to better open source & more contributions, rather than allowing profit-driven entities to gobble up the space. In the case of these closed platforms, you as a maintainer are blocking off an entire part of your community—those who value privacy/freedom, or those blocked by sanctions—while helping centralization. The alternatives are in the good-to-good-enough category, so there’s nothing to lose, and switching opens up collaboration to a larger audience.
But I’ll leave you with a quote
Choosing proprietary tools and services for your free software project ultimately sends a message to downstream developers and users of your project that freedom of all users—developers included—is not a priority.
In the case of these closed platforms, you as a maintainer are blocking off an entire part of your community—those who value privacy/freedom, or those blocked by sanctions—while helping centralization.
The population of potential collaborators who self-select out of GitHub for “privacy/freedom”, or “those blocked by sanctions”, is far smaller than the population who actually are on GitHub. So if your goal is to make an appeal based on size of community, be aware that GitHub wins in approximately the same way that the sun outshines a candle.
And even in decentralized protocols, centralization onto one, or at most a few, hosts is a pretty much inevitable result of social forces. We see the same thing right now with federated/decentralized social media – a few big instances are picking up basically all the users.
But I’ll leave you with a quote
There is no number of quotes that will change the status quo. You could supply one hundred million billion trillion quadrillion octillion duodecillion vigintillion Stallman-esque lectures per femtosecond about the obvious moral superiority of your preference, and win over zero users in doing so. In fact, the more you moralize and scold the less likely you are to win over anyone.
If you genuinely want your preferred type of code host to win, you will have to, sooner or later, grapple with the fact that your strategy is not just wrong, but fundamentally does not grasp why your preferences lost.
Some folks do have a sense of morality to the decisions they make. There are always trade-offs, but I fundamentally do not agree that the trade-offs for Microsoft GitHub outweigh the issues with using it. Following the crowd interests me less than being the change I & others would like to see. Sometimes I have run into maintainers who would like to switch but are afraid of whether folks would follow them, & are then reassured that the project & collaboration will continue. I see a lot of positive collaboration on SourceHut ‘despite’ not having the social features and doing collaboration via email + IRC, & it’s really cool. It’s possible to overthrow the status quo—and if the status quo is controlled by a US megacorp, yeah, let’s see that change.
Sometimes I have run into maintainers who would like to switch but are afraid of whether folks would follow them, & are then reassured that the project & collaboration will continue.
But this is a misleading statement at best. Suppose that on Platform A there are one million active collaborators, and on Platform B there are ten. Sure, technically “collaboration will continue” if a project moves to Platform B, but it will be massively reduced by doing so.
And many projects simply cannot afford that. So, again, your approach is going to fail to convert people to your preferred platforms.
I don’t see caring about user privacy/freedoms & shunning corporate control as merely a preference like choosing a flavor of jam at the market. And if folks aren’t voicing an opinion, then the status quo would remain.
the social network part is more harmful than good.
I think you underestimate the extent to which social features get and keep people engaged, and that the general refusal of alternatives to embrace the social nature of software development is a major reason why they fail to “convert” people from existing popular options like GitHub.
To clarify, are you saying that social gamification features like stars and colored activity bars are part of the “social nature of software development” which must be embraced?
Assuming they wanted to move specifically to Git & not a different DVCS, LLVM probably would have the resources to run a self-hosted Forgejo instance (what ‘powers’ Codeberg). Forgejo supports the pull/merge request model, and they are working on the ForgeFed protocol, which as a bonus would allow federation: folks wouldn’t even have to create an account to open issues & participate in merge requests, which is a common criticism of these platforms (i.e. moving from closed, proprietary, megacorp Microsoft GitHub to open-core, publicly-traded, VC-funded GitLab is in many ways a lateral move at present, even if self-hosted, since an account is still required). If pull/merge request + Git isn’t a requirement, there are more options.
(i.e. moving from closed, proprietary, megacorp Microsoft GitHub to open-core, publicly-traded, VC-funded GitLab is in many ways a lateral move at the present even if self-hosted since an account is still required)
How do they manage to require you to make an account for self-hosted GitLab? Is there a fork that removes that requirement?
Self-hosting GitLab does not require any connection to GitLab computers. There is no need to create an account at GitLab to use a self-hosted GitLab instance. I’ve no idea where this assertion comes from.
One does need an account to contribute on a GitLab instance. There is integration with authentication services.
Alternatively, one could wait for the federated protocol.
In my personal, GitHub-avoiding, experience, I’ve found that using mail to contribute usually works.
One does need an account to contribute on a GitLab instance.
That’s what I meant: an account is required for the instance. With ForgeFed & mailing lists, no account on the instance is required. But there was news 1–2 weeks ago about trying to get some form of federation into GitLab. The complaint was really about needing to create accounts on all of the self-hosted options.
However, it’s dismaying to see GitHub capture even more critical software infrastructure. LLVM is a huge and enormously impactful project. GitHub has many eggs in their basket. The centralization of so much of the software engineering industry into a single mega-corp-owned entity makes me more than a little uneasy.
I think the core thing is that projects aren’t in the “maintain a forge” business, but the “develop a software project” business. Self-hosting is not something they want to be doing, as you can see from the maintenance tasks mentioned in the article.
Of course, then the question is, why GitHub instead of some other managed service? It might be network effect, but honestly, it’s probably because it actually works mostly pretty well - that’s how it grew without a network effect in the first place. (Especially on a UX level. I did not like having to deal with Phabricator and Gerrit last time I worked with a project using those.)
I would not be surprised if GitHub actively courted them as hostees. It’s a big feather in GH’s cap and reinforces the idea that GH == open source development.
I think the move started on our side, but GitHub was incredibly supportive. They added a couple of new features that were deal breakers and they waived the repo size limits.
There are Codeberg & others running Forgejo/Gitea, as well as SourceHut & GitLab, which are all Git options without needing Microsoft GitHub or self-hosting. There are others for non-Git DVCSs. The Microsoft GitHub UI is slow, breaks all my browser shortcuts, and has upsell ads all throughout. We aren’t limited to “if not Microsoft GitHub, then self-host”.
Call me sceptical, but I actually don’t believe in any privacy policy.
I’ve worked in 30-employee startups and FAANGs, and the privacy policies were always just random text with no meaning.
In small start-ups it’s written by one of the execs with a law degree, for example the CEO or the COO. They’re totally disconnected from the tech team. And one day, the tech team goes “oh… we have this data! We could use it for this feature.” or “oh… we could log this type of activity for this feature.” This happens because nobody is checking features against the privacy policy, which was written years ago, and the COO is totally disconnected from technical decisions; they’re just trying to grow the company.
In FAANGs, the privacy policy is written by some random lawyer from the legal team. But nobody ever checks whether it actually holds, because there are hundreds of 10-engineer teams working on the overall product. Unless you want to spend the next 5 years trying to figure out whether everything is in order, they just talk to a few managers and team leads and hope for the best. New features and launches need “legal approval”, but the legal review will be done by a totally different lawyer, who will take a look at it, get pressured by the product manager to give their seal of approval, and then the feature will go through with few checks on whether it respects the privacy policy or not.
Maybe there is some way? If they stored the date up to which you have paid as a signed cookie (for users on the unlimited plan), they wouldn’t need to have you logged in just to support the non-customized search service. In fact, some of the config could probably be stored client-side as well.
Even customized, I don’t see why a system similar to Mullvad’s billing wouldn’t work. Only information retained is an account number, a “paid up until” date and billing info for the duration that retention is required (only really needed for credit cards).
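A minimal sketch of that idea, assuming an HMAC-signed token carrying only a “paid until” date (the key, names, and token format here are all hypothetical; a real deployment would also want key rotation and a token expiry):

```python
import hashlib
import hmac
import json

# Hypothetical server-side secret; in reality this would be stored securely.
SECRET_KEY = b"server-side-secret"

def make_paid_until_token(paid_until: int) -> str:
    """Issue a token asserting the subscription is paid up to a Unix timestamp."""
    payload = json.dumps({"paid_until": paid_until}).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload.hex() + "." + sig

def is_paid_up(token: str, now: int) -> bool:
    """Verify the signature and the date, with no account lookup at all."""
    try:
        payload_hex, sig = token.split(".")
        payload = bytes.fromhex(payload_hex)
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    return json.loads(payload)["paid_until"] >= now

token = make_paid_until_token(paid_until=1_900_000_000)
print(is_paid_up(token, now=1_700_000_000))  # True: still paid up at that time
print(is_paid_up(token, now=2_000_000_000))  # False: subscription has lapsed
```

The point is that the server verifies the cookie without any account lookup, so the only thing it learns from the token is that some subscription is current.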
Mullvad makes sense, because all they do is take money and route (mostly) HTTPS traffic.
The issue with a search engine is that the search engine needs to know what I’m searching in cleartext.
They can see someone is searching for “indian restaurant metropolis” and the next month “japanese restaurant metropolis”, so they can assume this person lives in Metropolis. And then a year later this same person (they know it from the account number) is searching “retirement home smallville” or “geriatric hospital near smallville”, so they are able to deduce this person lives in Metropolis and, most likely, has an old relative in Smallville. They can then pinpoint that this person is Clark Kent, even though it’s just account number 01234565.
And that would be keeping logs on queries (which Kagi claims not to do)… just like Mullvad claims not to keep traffic logs. “Just use HTTPS” doesn’t hide everything: access times, inbound and outbound IP addresses, and SNI headers alone are enough to deanonymize all traffic… Obviously there is no way the public can verify that any VPN, or Kagi, doesn’t log, either…
Kagi is in the business of selling search, not the business of selling your data. That is what the subscription is for and it is what ensures the alignment of incentives.
You can always use a fake email address (Kagi does not verify it) and pay with Bitcoin/Lightning if you want to be completely anonymous and still enjoy the benefits of better search.
You can always use a fake email address (Kagi does not verify it) and pay with Bitcoin/Lightning if you want to be completely anonymous and still enjoy the benefits of better search.
If you know the search terms of an individual you can always pinpoint who they are. I mean, I search stuff about my current large city, and then a few days later, when my mom has an issue and asks for my help, I’m searching stuff related to the small town where I’m from. I would imagine there is only one person in the world currently living in a big eastern German city and searching about a particular 1k-inhabitant small town in western France.
Forgive the off-topic question, but I couldn’t find an answer on the site itself. Why is the “th” digraph represented as ð in some places and þ in others?
ð is the ‘th’ in the, this, that, other (voiced dental fricative)
þ is the ‘th’ in both, thing, thought, three (voiceless dental fricative)
Historically English used both symbols interchangeably, but most words (that aren’t “with”) don’t use the sounds interchangeably. This setup is also how Icelandic uses Ð & Þ. If English were to reintroduce these symbols (I’m personally in favor), I would prefer this setup, as it disambiguates the sounds for ESL speakers/readers and gives English a strong typographic identity (think how ñ makes you immediately think of Spanish) for one of its unique characteristics: using dental fricatives.
Noteworthy: English has words like Thailand, Thomas, and Thames with a ‘th’ that isn’t a dental fricative, and this setup helps disambiguate those as well—before we get another US president saying “Thighland” based on spelling.
Additionally ‘ð’ on its own is “the” allowing the definite article to have a single-symbol representation like the indefinite article “a”. (I made this up, but I like the symmetry).
A more historically authentic way to compress “the” into a single column would be to put the “e” atop the “th”-symbol… although I don’t know that that would render legibly on an eth, as opposed to overlying the eth’s ascender.
Yes. 😅 Historically “&” was a part of the alphabet, but throwing even more symbols onto a keyboard makes less sense if it can be helped. I suppose a twin-barred “ð” could work, but at smaller resolutions, good luck. I would still value ð being the base tho, since it is voiced, & I think following the ð/þ split has more benefit than choosing þ to cover both the voiced & voiceless sounds.
If curious, you can try to read that linked post where the whole content uses ð & þ. It doesn’t take long for it to ‘click’ & personally I think it reads smoothly, but for a broad audience (which that post is not), I wouldn’t put such a burden on the copy. But around the periphery & in personal stuff, I don’t mind being the change I would like to see.
I now have a slightly ridiculous desire to build a “shadow page” into my in-progress site generator that rewrites English to use this so that every post has a version available in this mode. It is surprisingly delightful!
It could get tricky to maintain because it’s not as simple as s/th/þ/g. I’m actually a bit surprised someone enjoyed, let alone preferred, reading like that. I figured most would be annoyed.
Yeah, the fun part would be building some actual understanding of the underlying language issues. The dumb-but-probably-actually-works version is just a regex over words, where you can add to it over time. The smarter version would actually use some kind of stronger language-aware approach that actually has the relevant linguistic tokens attached. Fun either way!
(I suspect the number of people who appreciate this is indeed nearly zero, but… I and three of my friends are the kind of people to love it. 😂)
In modern English, th has two different sounds (think vs this) but before that we used proper letters to distinguish those two sounds. It would be þink and ðis if we still used them.
Is this really “making illegal states unrepresentable”? Maybe I have a different interpretation of that phrase, but I’ve always understood it to mean that “illegal states” will not even compile, which is why usually it’s said in the context of languages with very strong type systems (Rust included).
However, in this example all of the “illegal states” are checked at runtime. But why is that any different from traditional error checking and validation? I can do everything done in this example in C++:
#include <stdexcept>
#include <string>

class Username {
public:
    Username(const std::string &name) {
        if (name.empty()) {
            throw std::runtime_error{"name cannot be empty"};
        }
        name_ = name;
    }
private:
    std::string name_;
};
Side note: the author of “Parse, don’t validate” published another text: https://lexi-lambda.github.io/blog/2020/11/01/names-are-not-type-safety/ . There she argues that the described Rust approach with Username is wrong (and I agree with her, i.e. yes, in Haskell one should try to use truly type-level guarantees, but I think that in Rust her “no newtypes” position is too abstract).
Perhaps the post is a misuse of the phrase, but I understood it as having different utility: letting you write downstream functions that assume things about your data - and have that be guaranteed when they are used.
Here’s an example of some Python code I might’ve written in times past.
def cool_username(username):
    return username[0].islower()

def validate_username(username):
    return len(username) > 0

def main():
    username = input('enter your username')
    if not validate_username(username):
        exit('invalid username')
    print('Your username is', 'cool' if cool_username(username) else 'uncool')
But let’s say my application is more complex and I forget to put in the validation in one path. Then I would run into a runtime error if I got an empty username - or worse, the code would successfully use an empty username.
I could solve this for example by taking a boolean in cool_username that tells us if the username is valid
def cool_username(username, is_valid):
    if is_valid:
        return username[0].islower()
    else:
        exit('invalid username')
or by maybe adding a validate_username check in the function itself. But that’s kludgy.
What I interpreted this post as saying is to enforce safety by requiring functions that return untrusted data to instead return a Result - and guarantee that the Ok case is going to be valid.
I don’t know how this looks in Python so this is probably invalid code:
def get_username():
    username = input('enter your username')
    if len(username) > 0:
        return Ok(username)
    else:
        return Err('invalid username')

def main():
    username_result = get_username()
    match username_result:
        case Ok(username):
            print('Your username is', 'cool' if cool_username(username) else 'uncool')
        case Err(error):
            exit(error)
When you require that any function returning untrusted data returns a Result, then you can only use your downstream code on usernames that pass validation. In either case you have to do runtime validation - but the former method statically enforces that you don’t make any mistakes by passing an unvalidated username to code that expects a valid one.
I imagine your interpretation of “make illegal states unrepresentable” is something like NonEmptyString which would enforce statically that the string you are accessing is not empty. Which I think we would agree is even more useful for downstream functions because then they can only be called on valid strings no matter where the strings come from. But I would argue that something like Result is still useful because it might be cumbersome to encode all validation checks into the type system.
It’s sometimes easy to forget that all of the software that we use is built and maintained by individuals. Even tools that feel like they’ve been around for ages. (Relevant XKCD: https://xkcd.com/2347/)
Does anyone know what this means for the project and community? Are other people able to take over management or was it a one man show? I remember an argument for neovim being that it was community driven whereas vim had a BDFL.
Christian is a long time contributor (and co-maintainer) of Vim and has commit access to the GitHub repo. It looks like he will step up as lead maintainer.
It has been a very long time since Bram was the sole developer on Vim; many of the changes are contributed by the community (with some notable exceptions). Bram would always commit the patches, but I would be very surprised if Bram’s passing significantly impacts the project.
I’m glad that people have access and can continue the work. Very sad that Christian didn’t get a chance to meet him in person though. You never know how much time you have with people.
Right now the EU is slowly starting to acknowledge this problem and pours a bit of cash into critical projects that are underfunded / maintained by a few.
Interestingly, a foundation called NLnet has an important role in selecting projects that get funded. AFAIK, Bram was an employee of NLnet.
Seeing multi-object iteration make it into the language is nice, as well as the other changes that make Zig more ergonomic to use. (Still waiting for a better syntax for anonymous functions, though.)
Async not making it into 0.11 was disappointing, but understandable. My 0.9.1 code can wait another year I suppose :)
All in all, awesome work Zig devs. I’m excited to see Zig slowly work its way to a 1.0 and a stable language.
Did you know Zig had bound functions?
I, uh, hope this change doesn’t affect fields that contain functions? E.g.:
struct {
    f: fn (foo, bar) baz,
}
Because I tend to use that pattern extensively for gamedev purposes (allowing items and weapons to carry references to their own hooks/triggers/callbacks is really useful and much easier than using an enum).
Ah, true, I forgot 0.10 made that change. It’s quite trivial to fix, though, and wasn’t really the point of my question – I was checking that fields that hold function pointers were still allowed. Good to know that’s the case.
Note that this particular issue (returning a pointer to stack allocated memory) has been much discussed in Zig’s issue tracker, and it looks like this will likely be something that will be caught either by Zig’s compiler or via runtime safety checks at some point:
Any software that lasts long enough eventually accumulates some number of bugs which critics will endlessly post “You haven’t fixed that yet?!??!?! And how long has it been open??!?!!???!!??!??!?!?!” comments about.
At this point I take such comments as a sign of successful mature software projects.
I never said these bugs don’t have significant user impact.
Just that any sufficiently long-lived/mature software project will have some number of these, to such an extent that I tend to increase my confidence in a piece of software if I see this type of complaint comment about it.
Besides agreeing with the notion that these things can just be learned, I think there’s another problem.
People are scared of using other tools. If you want something centralized, go for SVN; if you want something that might better fit your mindset and has everything built in, use Fossil; if you don’t want to use a relational database, use one of the many alternatives.
Because I think doing so leads to a very bad thing: piling huge amounts of confusing abstractions on top of it. The world doesn’t need more ORMs, nor more Git GUIs where “merge” has a completely different meaning than in git merge.
I’m sure there are many good ones, but the concept of layering abstractions to avoid learning what you could and should learn is something that causes a lot of bad software/projects/products to be out there.
This also extends to topics like security, and to annoying discussions. Whenever you have some technical and interesting discussion on containers, deployment, systems, Kubernetes, etc. online, you are bound to have someone creep in and basically argue that their lack of knowledge is proof that a system is bad. I’m usually (not always) on the contra side of these things, I think often because I have enough experience to have seen them fail. But any comment drowns in “but it’s hard to learn” comments. The other side of that is “it’s best practice”.
And here, I think, lies the problem with Git. There are a lot of people who feel like Git is somehow best practice, or somehow the best or only serious way to do things, yet they hop to whatever the trendiest tool is and never even understand the distributed part, nor why it’s even called a Pull Request on GitHub. Oh, and conflating Git and GitHub is another topic.
That said: Git (just like PostgreSQL) has amazing official documentation - high quality and up to date. Please make use of it.
Or like I said before. Maybe see if there’s a better solution for you than Git.
For sure - don’t use Git. That’s great advice that people often forget is an option, even when they’re the tech lead. There’s a reason successful large software development businesses still pay for Perforce.
What is that reason, exactly? The main argument I’ve seen is that Perforce or Subversion can act as a system of record, but any centralized repository can do that, including several git-oriented workflows.
(Also, which large firm are you thinking of, exactly? I think of Google, which developed Android using git instead of their monolithic Perforce installation.)
I have read (but don’t have direct experience) that Perforce is more common in some industries (such as game development) because it handles tracking binary files/assets better than git does.
In particular, p4 can ingest large files without choking, and it has a locking mechanism so that an artist can take ownership of a file while editing it - necessary because there is no merge algorithm so optimistic concurrency doesn’t work.
I am not convinced that some of the advantages are actually net wins. For example, no LSP configuration. So how does it know which clangd to start? I have several versions installed on my dev machine and none of them is both called clangd and in my default PATH. The same applies to the ‘config is not code’ point. For example, one of the things I have in my Vim config tells ALE to use a different clangd from the default one when I am editing files in any of the places where I have clones of CHERIoT-RTOS, because I want a version that knows about the extra attributes and won’t be confused by the command-line flags that control this target. This is pretty common for clang-format configuration, because the output is not stable and so most projects that use it in CI require a specific version. Hopefully the no-code config can be extended with scripting?
The multi-selection feature sounds great. Faking this a little bit is why I use vim in a terminal even locally and turn off mouse mode: so mouse selections and the system clipboard don’t interfere with visual mode selections and the vim clipboard. It sounds as if Helix actually does this properly. The other examples of this looked great.
One of the reasons I stick with vim is that my muscle memory works elsewhere. For example, bash and zsh have vi modes (and even VS Code has a reasonable vim mode now). Is anyone working on a Helix mode for any shell?
There’s a workaround for this specific problem. Helix allows you to specify a language server command in the config, so having a per-project config where needed would work.
Your point stands though, the more niche your need is, the less likely it is to be supported, and there is no scripting. There are plans for a plugin interface in the future that will allow you to extend the editor without patching the source, but it’s at an early stage, and progress is slow.
I think the lack of a scripting/plugin system in these early stages of helix’s development has actually been a good thing though, as it’s pushed people to contribute to making helix support a wide array of use cases well out of the box, instead of writing a script and being done with it. My personal pet peeve right now is the lack of session history, in neovim I have a hack for getting it to behave the way I want to, but it looks like helix will soon support it in a way that doesn’t require a fragile script.
There’s a workaround for this specific problem. Helix allows you to specify a language server command in the config, so having a per-project config where needed would work.
That sounds like an exciting security hole. What happens if I put a helix config in my git repo that tells it that the LSP command is rm -rf ~ and you open a file from a git clone of my project in helix? Other editors used to have this functionality and removed it for precisely this reason.
I think the lack of a scripting/plugin system in these early stages of helix’s development has actually been a good thing though, as it’s pushed people to contribute to making helix support a wide array of use cases well out of the box, instead of writing a script and being done with it.
The flip side of this is that everyone has different requirements. You either add a lot of bloat to support features that no one will use (except the people who love the editor for having that feature), or you add an extension interface that allows everyone to implement their own bits and risk the situation where a load of people implement almost the same thing and duplicate work. Neither approach is ideal and starting with a solid core is probably the right approach early on.
I don’t know about helix, but emacs and direnv both solve the issue by prompting the user to allow the config file to be “executed”, and will do so every time it changes.
For example, the law of diminishing returns means that at some point “it’s not worth it” is guaranteed to be true, but it’s highly context dependent exactly when. It could only be “ridiculous” if people are saying “it’s never worth worrying about performance”, but no one is saying that.
The same is true of most of the these “excuses” - the weak form of them is always true at some point, and no one is actually saying the strong form.
EDIT: another example
“Performance only matters in small, isolated sectors of the software industry. If you don’t work on game engines or embedded systems, you don’t have to care about performance, because it doesn’t matter in your industry.”
The weak form of this is “most software does not have to care about performance to the same degree as niches like gaming”, and this is true. There are millions of devs writing business apps and web sites who can happily not worry about the cost of virtual method calls and all the other things that upset Casey.
The strong form is “only gaming has to care about performance at all”, and no one is really saying that - every web developer knows that a web page that loads in 10sec is not acceptable.
every web developer knows that a web page that loads in 10sec is not acceptable.
And yet it is still a frequent occurrence.
If you pay attention to how non technical people talk about computers (or at least, consumer facing software), you start to notice a trend. Typically it is complaints that the computer is fickle, unreliable, and slow. “The computer is thinking” is a common idiom. And yet I would guess that something like 90% of the software that most people use on a daily basis is not IO or compute intensive; that is, it has no basis for being slow. And it’s slow anyway.
I hear quite frequently colleagues and collaborators say things like “premature optimization is the root of all evil”, “computers are so fast now, we don’t need to worry about how fast this is”, or responding to any form of performance suggestion with “let’s wait until we do some profiling before we make performance decisions”. But rarely do I ever see that profiling take place. These phrases are all used as cover to say, essentially, “we don’t need to care about performance”. So while no one comes out and says it out loud, in practice people’s actions often do say “it’s never worth worrying about performance”.
I appreciate Casey’s takes, even if they are a little hot sometimes, because it’s good to have the lone voice in the wilderness that counterbalances the broader industry’s tendency towards performance apathy.
every web developer knows that a web page that loads in 10sec is not acceptable.
And yet it is still a frequent occurrence
I would submit that most of such web sites are made by amateur web designers who just cobble together a site from Wordpress plugins and/or businesses who choose to use bottom-of-the-barrel cheap shared hosting. I’ve certainly seen this in practice with friends who can design and decided to build websites for other friends, and there’s little you can do about it besides advising them to ask a proper web development agency to build the site and pay more for hosting (which small businesses might not want to do).
I would submit that most of such web sites are made by amateur web designers who just cobble together a site from Wordpress plugins and/or businesses who choose to use bottom-of-the-barrel cheap shared hosting.
Except Atlassian. Jira and Confluence - big websites backed by a big budget - still manage to frustrate me with how slow they are on a daily basis.
Jira is like Windows, though, although to a lesser extent. Yeah, it kinda sucks, and it’s full of just plain weird behavior, but it’s also kinda impressive in how it serves a billion use cases that most people have never heard of, and is basically essential to a whole bunch of industries - and the first thing is kind of a consequence of the second.
The weak form of this is “most software does not have to care about performance to the same degree as niches like gaming”, and this is true. There are millions of devs writing business apps and web sites who can happily not worry about the cost of virtual methods calls and all the other things that upset Casey.
As I bring up every time Casey goes on a rant, and have already brought up elsewhere in this thread, game developers are empirically at least no better than other fields of programming, and often are worse because gamers are on average much more willing to buy top-end hardware and stay on a fast upgrade treadmill. So they can more easily just tell users to buy a faster SSD, buy more RAM, buy the latest video card, etc. rather than actually set and stick to a performance budget. There’s perhaps an argument that console game dev does better with this just because the hardware upgrade treadmill is slower there, but modern console titles have an iffy track record on other measures of quality (like “does the supposed release build actually work at all or does it require a multi-gigabyte patch on launch day”).
The difference for game developers, I suspect, is the binary nature of performance failures. If a game runs at under a certain frame rate and jitter rate, you cannot play it. If another desktop application pauses periodically, you can still use it, it’s just annoying. I use a few desktop apps that regularly pause for no obvious reason (yes, Thunderbird, I’m looking at you doing blocking IO on the main thread), if these things were games then I just couldn’t use them.
This probably gives people a skewed opinion because they never play games that fail to meet the required performance bar for their hardware, whereas they do use other kinds of program that fail to meet the desired performance target. For consoles, this testing is easy and a game that can’t meet the perf target for a particular console is never supported on that console. Or, in quite a few cases I’ve seen recently, is launched on the console a year after the PC version once they’ve made it fast enough.
As to thinking about performance in other domains, I have a couple of anecdotes that I think contradict Casey’s world view:
Many years ago now, I was working on a Smalltalk compiler and writing some GUI apps using it. For debugging, I added a simple AST interpreter. To improve startup times, I moved the JIT to a shared library so that it could be loaded after process start and we could shift over to the JIT’d code later. At some point, I had a version mismatch in the shared library that prevented it from loading. For about two weeks, all of my code was running in the slow (probably two orders of magnitude slower than the JIT) interpreter. I did not notice, performance was fine. This was on a 1 GHz Celeron M.
When I got the ePub version of my first book back, I realised that they’d lost all of the semantic markup on their conversion so I wrote a tool for my second book that would parse the LaTeX subset that I use and generate good HTML. I intentionally wrote this in a clear and easy to debug style, aiming to optimise it later. The first time I ran it, it took around 200ms to process the entire book (typesetting it in LaTeX to generate the PDF took about two minutes). I did find one loop where a load of short-lived objects were created and stuck an autorelease pool around it, which dropped peak memory usage by about 90%, but I never bothered to do anything to improve performance beyond that.
The difference for game developers, I suspect, is the binary nature of performance failures.
People always say things like this, and then I go look again at Minecraft, which is the best-selling video game of all time, and I scratch my head a bit. It has a whole third-party industry of mods whose sole purpose is to bring the game’s performance up to basic playable levels, because Minecraft’s performance on average hardware (i.e., not “gaming rigs”) is so abysmal.
So I still don’t really buy into the idea that there’s some unique level of caring-about-performance in game dev.
These are good rules. We use all of them in my current job which involves writing flight software.
Re: rule 7: annoyingly, neither Clang nor GCC has a compiler flag to issue a warning for ignoring function return values universally (there is -Wunused-result, but this only issues warnings for functions with an explicit “must use the return value” annotation).
That’s an interesting thought. It would be far too noisy on a lot of C/C++ code, but would be very useful for safety-critical things (I have some places that would probably enable it if it existed). It would be fairly easy to add to clang, you just change the check for [[nodiscard]] to also check for the flag and diagnose if that warning is enabled. Probably a total of around ten lines of code to add (most of it plumbing the flag through), and a test that’s about as long as the code.
I’d be happy to review this if someone wants to try it - it’s a good first-clang-patch thing.
Awesome, thanks! This machine doesn’t seem to have my phabricator credentials, so I’ll do a proper review later. Other folks will probably want to bikeshed the flag name.
The only thing that’s obviously missing is the negative cases in the test: do we get warnings without the flag passed?
In C++, collection templates in the standard library are parameterized by an allocator. You specify what type of allocator to use at compile time, when the collection template is specialized. This isn’t normally visible in C++ code because the allocator parameter defaults to the standard global allocator, and this default is rarely overridden. The contrast for Zig is that the allocator is specified when you perform an operation on the collection (if needed), and not when you declare the collection variable.
Put another way, the allocator is not part of the type of the collection in Zig, it is simply a struct field. Whereas in C++ (and I guess Rust, though I don’t have experience there), the type of allocator used is part of the type of the collection itself.
That is an interesting choice, since it means that the calls to the allocator can’t be inlined. I’d expect that to have non-trivial overhead when used with simple/fast allocators, for example a bump allocator (would that be FixedBufferAllocator in Zig?).
comptime_int seems very odd; it looks like Rust’s {number} pseudo-type, but {number} is like the one place that Rust will silently convert a number into a concrete type for you. Having var x = 0; infer as comptime_int by default then seems like the opposite of what you usually want. Anyone have any explanation for why Zig requires this stricter type?
I am not sure how it could’ve worked differently. 0 is a comptime_known expression, it needs to evaluate to something at compile time, and evaluating to the natural number 0 seems like the most reasonable choice.
Then, in var x = { some natural number which is a result of comptime computation } you need to pick a type for the runtime value of x on the stack. You can’t use HM-style “collect constraints, then solve equations” inference to get the intended type from the use of x - the type of x very much affects what type constraints you would get. There’s no static function body - the actual code is always the result of some compile-time evaluation, where types are ordinary values. You could default to i32 or something, but that’d be horrible, it seems.
Besides the “can’t”, there’s also “shouldn’t” angle: Zig is all about (say it with me) “Communicating Intent Precisely!”. Agnostic of the way var x = 0 could be made to work, that very much does not communicate the intended type of x at all.
I think one thing that Zig could’ve done in the current design is adding literals of specific numeric types, like var x = 0u128. My guess is that here we hit Zig’s minimalism. Why would you want that, when you already have one too many ways to spell this: var x: u128 = 0 and var x = @as(u128, 0)?
It infers it from the surrounding function, and yells at you with an error if it can’t infer it down to a single concrete type. So in section 9 with comptime_int messing up the test:
fn add(a: i64, b: i64) i64 {
    return a + b;
}

test "add" {
    try std.testing.expectEqual(5, add(2, 3));
}
In Rust this would be:
fn add(a: i64, b: i64) -> i64 {
    return a + b;
}

#[test]
fn test_add() {
    assert_eq(5, add(2, 3)); // pretending it is a function assert_eq<T>(lhs: T, rhs: T) instead of a macro for simplicity
}
Rust looks at the 5 in test_add(), says “it’s {number}”, looks at the return type of add(), says “it’s i64”, then looks at the arg types for assert_eq() and says “are this {number} and this i64 the same type? They are now!” So the type of 5 becomes i64.
This is a special case of subtyping, really, and subtyping in general is really good at breaking type inference, but in this one case it seems to work out pretty well. If Rust did C-style implicit promotion of numbers to wider types, it would probably be horrible to code and horrible to use.
I was specifically referring to your example var i = 0; where the variable is not passed to another function (eg when it is used as a loop counter). There is no information on what the type should be, so the programmer has to specify explicitly.
It’s inferred from the surrounding constraints (mostly the sink). Though IIRC Rust ends up defaulting it to i32 if it cannot constrain it to a single concrete type.
There are also cases where it refuses to do that, but I’m less clear on those; I know that trait contexts are a common issue (e.g. sum requires an annotation of some sort if a concrete type can’t be inferred).
Is the problem really Alpine, or musl? I mean, yeah, Alpine uses musl, but it’s even mentioned in the article that DNS over TCP isn’t enabled by design - why not explore that a little more in the article?
It’s a flaw in musl, but using musl outside Alpine is … extremely rare, as far as I can tell.
The real question in my mind is why people continue to use musl and Alpine when it has such a serious flaw in it. Are they unaware of the problem, or do they just not care?
I don’t know that I’d call it a “flaw” rather than a “design choice”.
The DNS APIs in libc (getaddrinfo and gethostbyname) are poorly designed for the task of resolving DNS names (they are blocking and implementation-defined). musl implements these in a simple manner for simple use cases, but for anything more involved the recommendation of the musl maintainers is to use a dedicated DNS resolver library.
This article goes into a bit more depth, but at the end of the day I think it’s a reflection of the different philosophy behind musl more generally (which is why I call it a “design choice” instead of a “flaw”).
Better is different doesn’t imply that different is better. The getaddrinfo function is the only moderately good way of mapping names to hosts without embedding knowledge of the lookup mechanism in the application. Perhaps a modern Linux system could have a DBUS service to do this, but that would add a lot more to containers (if containers had a sane security model, this is how it would work, code outside the container would do the lookup, and the container would not be able to create sockets except by asking this service, but I digress).
The suggestion to use a DNS library misses the point: DNS should be an implementation detail. The application should not know if the name is resolved via a hosts file, a DNS, WINS, or something custom for micro service deployments. The decision on Alpine means that you need to encode that as custom logic in every program.
The decision on Alpine means that you need to encode that as custom logic in every program.
I think that’s a bit dramatic. Most applications won’t do a query that returns a DNS response bigger than 512 bytes, because setting up TCP takes at least three times as long as the UDP exchange, and that pisses off most users - so most sites try to make sure this isn’t necessary to show a website to sell people things, and very, very few people outside of the containerverse will ever see it happen.
Most applications just do a gethostbyname and connect to whatever the first thing is. There’s no reason for that to take more than 512 bytes, and so it’s hard to lament: Yes yes, if you want 200 IP addresses for your service, you’ll need more than 512 byte packets, but 100 IP addresses will fit, and I absolutely wonder about the design of a system that wants to use gethostbyname to get more than 100 IP addresses.
The reason why, is because gethostbyname isn’t parallel, so an application that wants to use it in parallel service will need to use threads. Many NSS providers behave badly when threaded, so desktop applications that want to connect to multiple addresses in parallel (e.g. the happy eyeballs protocol used by chrome, firefox, curl, etc) avoid the NSS api completely and either implement DNS directly or use a non-blocking DNS client library.
Most applications won’t do a query that returns a DNS response bigger than 512 bytes
Most software that I’ve written that does any kind of name lookup takes address inputs that are not hard coded into the binary. As a library or application developer, I don’t know the maximum size of a host or domain name that users of my code are going to use. I don’t know if they’re going to use DNS at all, or whether they’re going to use host files, WINS via Samba, or something else. And the entire point of NSS is that I don’t have to know or care. If they want to use some exciting Web3 Blockchain nonsense that was invented after I wrote my code for looking up hosts, they can as long as they provide an NSS plugin. If I have to care about how host names provided by the user are mapped to network addresses as a result of using your libc, your libc has a bug.
Most applications just do a gethostbyname and connect to whatever the first thing is.
Hopefully not, anything written in the last 20 years should be using getaddrinfo and then it doesn’t have to care what network protocol it’s using for the connection. It may be IPv6, it may be something legacy like IPX (in which case the lookup definitely won’t be DNS!), it may be something that hasn’t been invented yet.
The reason is that gethostbyname isn’t parallel, so an application that wants to use it in parallel will need threads.
That is a legitimate concern, and I’d love to see an asynchronous version of getaddrinfo.
As a library or application developer, I don’t know the maximum size of a host or domain name that users of my code are going to use.
Yes you do, because we’re talking about Alpine and Alpine use-cases, and in those use-cases it tunnels DNS into the NSS API. RFC 1035 is clear on this: names are capped at 255 octets.
There’s absolutely nothing you or any of your users who are using Alpine can do on a LAN serving a single A or AAAA record to get over 512 bytes.
It may be IPv6, it may be something legacy like IPX (in which case the lookup definitely won’t be DNS!),
No it won’t be IPX because we’re talking about Alpine and Alpine use-cases. Alpine users don’t use IPX.
it may be something that hasn’t been invented yet.
No it won’t. That’s not how anything works. First you write the code, then you can use it.
Yes you do, because we’re talking about Alpine and Alpine use-cases
I don’t write my code for Alpine, I write it to work on a variety of operating systems and in a variety of use cases. Alpine breaks it. I would hazard a guess that the amount of code written specifically targeting Alpine, rather than targeting Linux/POSIX and being run on Alpine, is a rounding error above zero.
I do not write my code assuming that the network is IPv4 or IPv6. I do not write my code assuming that the name lookup is a hosts file, that it’s DNS, WINS, or any other specific mechanism. I write my code over portable abstractions that let the user select the name resolution mechanism and let the name resolution mechanism select the transport protocol.
No it won’t. That’s not how anything works. First you write the code, then you can use it.
That is literally how the entire Berkeley socket API was designed: to allow code to be written without any knowledge of the network protocol and to move between them as required. This is how you wrote code 20-30 years ago that worked over DECNET, IPX, AppleTalk, or IP. The getaddrinfo function was only added about 20 years ago, so is relatively young, but added host resolution to this. Any code that was written using it and the rest of the sockets APIs was able to move to IPv6 with no modification (or recompile), to support mDNS when that was introduced, and so on.
These APIs were specifically designed to be future proof, so that when a new name resolution mechanism came along (e.g. DNS over TCP), or a new transport protocol, it could be transparently supported. If a new name lookup mechanism using a distributed consensus algorithm instead of hierarchical authority comes along, code using these APIs will work on any platform that decides that name resolution mechanism is sensible. If IPv7 comes along, as long as it offers stream and datagram connections, any code written using these APIs will be able to adopt it as soon as the kernel does, without a recompile.
Can you show a single example of a real-world environment that is broken by what Alpine is doing, and that isn’t some idiot trying to put more than one or two addresses in a response?
I don’t know if I agree or disagree with anything else you’re trying to say. I certainly would never say Alpine is “broken” because its telnet can’t reach IPX hosts on my LAN, but you can’t be complaining about that because that’d be moronic. Some of the futuristic protocols you mention sound nice, but they can tunnel their responses in DNS too and will work on Alpine just fine. If you don’t want to use Alpine, don’t use Alpine, but switching to it saved me around 60 GB of RAM, so I was willing to make some changes to support Alpine. This is not one of the changes I had to make.
You have no good options for DNS on Linux. You can’t static link the glibc resolver, so you can either have your binaries break every time anything changes, or use musl and have very slightly broken DNS.
There are some standalone DNS libraries but they’re enormous and have nasty APIs and don’t seem worth using.
There are a great many things I dislike about glibc, but binary compatibility is one thing that they do exceptionally well. I think glibc was the first library to properly adopt ELF symbol versioning and their docs are what everyone else refers to. If they need to make an ABI-breaking change, they add new versioned symbols and keep the old ones. You can easily run a binary that was created with a 10-year-old glibc with the latest glibc shared object. As I recall, the last time glibc broke backwards binary compat was when they introduced symbol versioning.
The glibc resolver is site-specific, which means it’s designed to be customised by the system administrator, and static linking would defeat its primary use-case. It also has nothing to do with DNS, except that it ships with a popular “fallback” configuration that tries looking up hosts on the Internet if they aren’t managed by the local administrator.
you can either have your binaries break every time anything changes
Nonsense: I upgrade glibc and my existing binaries still work. You’re doing something else wrong.
glibc has a quite stable ABI - it’s a major event when it breaks any sort of backwards compatibility. Sure, it’s not as stable as the Linux userspace ABI, but it’s still extremely rare to encounter breakage.
The real question in my mind is why people continue to use musl and Alpine when it has such a serious flaw in it. Are they unaware of the problem, or do they just not care?
I suppose I don’t care. I might even think of it as an anti-feature: I don’t want my kubernetes containers asking the Internet where my nodes are. It’s slow, it’s risky, and there’s no point, Kubernetes already has perfect knowledge anyway:
If you bind-mount an /etc/hosts file (or hosts.dbm or hosts.sqlite or whatever) that would be visible instantly to every client. This is a trivial controller that anyone can put in their cluster and it solves this “problem” (if you think it’s a problem) and more:
DNS introduces extra failure modes I don’t need and don’t want, and having /etc/resolv.conf point to trash lets me effectively prevent containers from doing DNS. DNS can be used to smuggle information in and out of a container, so a blanket no-DNS policy makes audits much easier.
I’ve seen people suggest that installing bind-tools with apk will magically solve the problem, but this doesn’t make sense to me… unless there’s some fallback to using host or dig for DNS lookups… ?
BUT, it’s really odd to me that seemingly so many people use Alpine for smaller containers, but no one has bothered to fix the issue. Have people “moved on”? Is there another workaround people use?
I have a shell script that whips up a hosts file and bind-mounts it into the container. This prevents all DNS lookups (and failure cases), is faster, and allows me to disable DNS on any container that doesn’t need access to the Internet (a cheap way to slow down attackers). It uses kubectl get -w to wait for updates so it isn’t spinning all the time.
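Not the commenter’s actual script, but a sketch of its core step in Python: rendering a name-to-address mapping (which the real script gathers with kubectl get -w; that step is omitted here) into /etc/hosts format for bind-mounting into containers. All names below are illustrative.

```python
# Sketch: turn a {service_name: address} mapping into /etc/hosts
# lines. The mapping source (kubectl, an API watch, etc.) is
# deliberately out of scope here.
def render_hosts(mapping, header="# generated -- do not edit"):
    lines = [header]
    for name, addr in sorted(mapping.items()):
        lines.append(f"{addr}\t{name}")
    return "\n".join(lines) + "\n"
```

Because every container sees the bind-mounted file directly, an update to it is visible instantly, with no resolver, cache, or TTL in the path.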
I can’t think of a single advantage to Kubernetes abusing DNS for service discovery (maybe there is one with Windows containers or something else I don’t use), but there are substantial performance and security disadvantages, so I don’t even bother with it.
If you’re interested in PLATO, The Friendly Orange Glow is a great book to read, although I felt it was a bit more rose-tinted than technical. It’s definitely an incredibly interesting and capable system.
It’s an absolute underdog (a Mac-only GUI IDE), but I love it and have loved it since it launched at the end of 2020, and it’s been receiving regular and substantial updates ever since.
It’s light on memory and energy, zippy, and opens big files happily
Has many high-quality extensions, many of which make use of an excellent built-in LSP client.
Thoughtful UI design that feels totally idiomatic to macOS.
A built-in debugger
A git GUI which handles the most common git tasks really well
Built-in preview browser
A licensing model I’m happy with (pay for the software + a year of updates, use it as long as you like)
vim mode
And there’s so many tiny great things, like how you can hover over sections of the minimap and see symbols of what you’re hovering over. Details!
I work mostly with JS/TS and use Nova with my own Deno extension. Deno’s Rust-based toolchain combined with Nova’s native performance just feels so light and great.
I was going to chime in that I did use Nova for a couple of years for embedded (ARM32/64) development, Python scripting, and shell scripting. I have a fairly complex setup where I need to compile inside Docker and then have a variety of scripts for running or debugging in QEMU or on hardware. I am a longtime Mac & iOS developer (20+ years) and really appreciate Panic and their apps, but I was getting random crashes once or twice a week (always reported) and they were killing my productivity when they happened. Combine that with the need to do some STM32 development and debugging, and VS Code’s built-in debugger support, and I fully switched over to VS Code; months in, I haven’t had a single crash. In addition, I find I am able to script the tasks, launchers, and hotkeys that I want faster and more flexibly, and there is a huge variety of debugger support. VS Code also feels faster. It pains me to leave behind a true native app but I have not looked back since VS Code. I feel like the sweet spot for Nova is web and scripting development, but when it comes to a lot of native development, it can’t really compete the same way VS Code can.
VS Code extensions I particularly like: CodeLLDB, Cortex-Debug, indent-rainbow, PDF Preview, PlatformIO, and Vim (25+ year Vim user as well, depending on task).
Usage detection and refactoring support is the thing that is pretty universally missing from “editors” that people end up calling “IDEs”.
I just downloaded the latest version, opened up a project and it could neither tell me where a symbol was being used, nor gave me any way to intelligently rename or otherwise alter that symbol with automatic changes being applied to the calling locations.
It also doesn’t help, IMO, that the extension system is apparently based on JavaScript, but from a quick check, at least some of the extensions then require Node.js and npm to be installed, even for extensions that have no specific ties to JS as a language.
I’m not familiar with Jetbrains IDEs. What do they offer beyond what LSP offers? I’ve just recently, over the last couple of years, started using LSP more. I’d be interested to know what I’m still missing compared to a “proper” IDE.
For reference, these are the LSP functions that I most often use:
Go to definition of symbol
Find references to symbol
Go to type definition
Find implementations
Rename symbol
Apply suggested bugfix (usually that’s adding an import)
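For context, each of the operations above maps to a plain JSON-RPC message defined by the LSP spec. This is roughly what an editor sends for “go to definition” (a Python sketch; the URI is made up, and line/character are zero-based per the spec):

```python
# Sketch of the JSON-RPC request behind "go to definition".
import json

def definition_request(req_id, uri, line, character):
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "textDocument/definition",
        "params": {
            "textDocument": {"uri": uri},            # file being edited
            "position": {"line": line,               # zero-based
                         "character": character},
        },
    })
```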
It’s been a long time since I’ve really used a proper IDE and I don’t think I’ve ever used a Jetbrains one. What else is there?
Well if those things you list actually worked in Nova maybe I’d be ok with that, but I couldn’t get it to work any more than a glorified “find substring in project” (more details here: https://lobste.rs/s/nbr9bl/what_s_your_editor_setup#c_s0ewvx)
Jetbrains does support a lot more though:
change method signature (i.e. add/remove/re-order method arguments) - this also works on interfaces, so implementing classes reflect the same change, as well as calls to the changed method;
extract variable/constant/class property/method from an expression (including duplicates);
extract interface from class (i.e. create an interface and replace applicable references to the concrete class, with the interface name);
inline a variable/method/constant/property;
move a symbol to a different namespace (including moving an entire namespace);
Sorry, I was hijacking this thread to ask about Jetbrains - I don’t know anything about Nova! Maybe Nova is buggy. For the things I listed, LSP works really well.
Thank you for the examples of other things that Jetbrains does. You’ve got me interested now: I wonder which IDE functionality is most used? Does Jetbrains collect telemetry? Come to think of it, I wonder which LSP functionality I actually use often. Naïvely, this seems like something that could actually be amenable to empirical study.
Yeah I’m not sure really - and it probably depends a lot on the language involved. I would imagine that things like this are much more heavily used, as you go up the scale in terms of how powerful the language’s type system is.
Yeah, you would think that a powerful static type system would allow more things like this to be automated. I have seen some impressive demonstrations with Idris but most IDE functionality seems to focus on Java-style OO languages rather than anything fancier. Is there anything Jetbrains-like for Haskell?
I don’t believe there’s any first-party support, but there seems to be at least one third party plugin for it - whether it supports full/any refactoring support or not I don’t really know, Haskell isn’t really in my wheelhouse.
I have all of these features, you just haven’t set anything up.
You need to install an extension which supports the type of project you’re opening (e.g. the Rust extension, the TypeScript / Deno extension, the Gleam extension, etc.). The extension is responsible for hooking up an LSP server to Nova’s built-in LSP client, which enables these features.
Because the extension API uses JS, some people build their extensions using the npm ecosystem. The final extension is run in a non-Node runtime. There’s certainly no requirement to have Node.js or npm installed for an extension to work; that’s on the extension author.
Ok, so I went back to it, because hey maybe I just missed something. Believe me, I’d love a lighter IDE than Jetbrains. I’d even accept some reduced functionality of esoteric things like vagrant support, or run-on-remote-interpreter support, if necessary.
The project in question happens to be PHP, a language Nova supports “out of the box”. The only extra extension I found that suggests it adds more “language support” for PHP is the one I mentioned above, which says, very clearly, in its description:
Intelephense requires some additional tools to be installed on your Mac:
Node.js
Anyway. So I tried with the built in language extension, because hey, it should work right?
“Go to definition” works.. it’s janky and kind of slow but it works (or so I thought).
As far as I can tell, that’s it.
I found “Jump to (Previous|Next) Instance” which just highlights the same class name (in a comment as it happens) in the currently open file. Not sure if that’s what it’s supposed to do, but it’s completely unintuitive what this is meant to do, or why I would want that.
I tried Cmd-Click just by reflex from JetBrains muscle memory - and it shows me results when invoked on methods.. but besides being ridiculously slow, it’s woefully inaccurate. It just seems to show any method call that’s the same method name, regardless of the class it’s invoked on. Further investigation shows that’s actually the same behaviour for “go to Symbol” on methods too.
I get it - some language extensions will have more functionality than others - but this is a language they list with built in support, and it has a lot of capability to specify types, right in the language. If it can’t infer the correct symbol reference from php8.x typed code, I can’t even begin to imagine what it’s like with a language like Javascript or Python or Ruby.
Maybe it all works wonderfully if you specify an interpreter path for the project? I don’t know. It doesn’t let me specify a remote interpreter (e.g. in a VM or Container, or an actual remote machine), and I’m not about to go down the rabbit hole of trying to get the same language runtime and extension versions installed on macOS just to test that theory.
Is there a timeline for qualifying against DO-178C for flight software?
I am 6’2”. I cannot work on a laptop in a normal plane seat. I do not want to have to carry my laptop on board. But you cannot check in a laptop battery. The X220 solves this: I can just unplug its battery in seconds, and take only the battery on board. I can also carry a charged spare, or several.
The X220 screen is fine. I am 55. I owned 1990s laptops. I remember 1980s laptops. I remember greyscale passive-matrix LCDs and I know why OSes have options to help you find the mouse cursor. The X220 screen is fine. A bit higher-res would be fine but I cannot see 200 or 300ppi at laptop screen range so I do not need a bulky GPU trying to render invisibly small pixels. It is a work tool; I do not want to watch movies on it.
I have recently reviewed the X13S Arm Thinkpad, and the Z13 AMD Thinkpad, and the X1 Carbon gen 12.
My X220 is better than all of them, and I prefer it to all of those and to the MacBook Air.
I say all this not to say YOU ARE WRONG because you are entitled to your own opinions and choices. I am merely trying to clearly explain why I do not agree with them.
… And why it really annoys me that you and your choices have so limited the market that I have to use a decade-old laptop to get what I want in a laptop because your choices apparently outweigh mine and nobody makes a laptop that does what I want in a laptop any more, including the makers of my X220.
That is not fair and that is not OK.
It’s perfectly fair to like the X220 and other older laptop models, that’s simply personal preference.
Probably because your requirements are very specific and “developer” laptops are niche market.
Neither I, nor anyone else who bought an Apple product is responsible for your choice of a laptop.
The core of my disagreement is with this line:
Comparison: I want a phone with a removable battery, a headphone socket, physical buttons I can use with gloves on, and at least 2 SIM slots plus a card slot. These are all simple easy requirements which were ubiquitous a decade ago, but are gone now, because everyone copies the market leaders, without understanding what makes them the market leader.
If there was a significant market for a new laptop with the features similar to the X220, there would be such a laptop offered for sale.
There’s no conspiracy.
I didn’t claim there was any conspiracy.
Whereas ISTM that your argument amounts to “if people wanted that they’d buy it, so if they don’t, they mustn’t want it”. Which is trivially falsified: this does not work if there is no such product to buy.
But there used to be, same as I used to have a wide choice of phones with physical buttons, headphone sockets, easily augmented storage, etc.
In other markets, companies are thriving by supplying products that go counter to industry trends. For instance, the Royal Enfield company supplies inexpensive, low-powered motorcycles that are easily maintained by their owners, which goes directly counter to the trend among Japanese motorcycles of constantly increasing power, lowering weight, and removing customer-maintainability by making highly-integrated devices with sealed, proprietary electronics controlling them.
Framework laptops are demonstrating some of this for laptops.
When I say major brands are lacking innovation, derivative, and copy one another, this is hardly even a controversial statement. Calling it a conspiracy theory is borderline offensive and I am not happy with that.
Margins in the laptop business are razor-thin. Laptops are seen as a commodity. The biggest buyers are businesses who simply want to provide their employees with a tool to do their jobs.
These economic facts do tend to converge available options towards a market-leader sameness, but that’s simply how the market works.
Motorcycles are different. They’re consumer/lifestyle products. You don’t ride a Royal Enfield because you need to, you do it because you want to, and you want to signal within the biker community what kind of person you are.
Still no.
This is the core point. For instance, my work machine, which I am not especially fond of, is described in reviews as being a standard corporate fleet box.
I checked the price when reviewing the newer Lenovos, and it was about £800 in bulk.
But I have reviewed the X1 Carbon as a Linux machine, the Z13 similarly, and the Arm-powered X13s both with Windows and with Linux.
These are, or were when new, all ~£2000 premium devices, some significantly more.
And yet, my budget-priced commodity fleet Dell has more ports than any of them, even the flagship X1C – that has 4 USB ports, but the Dell, at about a third of the price, has all those and HDMI and Ethernet.
This is not a cost-cutting thing at the budget end of the market. These are premium devices.
And FWIW I think you’re wrong about the Enfields, too. The company is Indian, and survived decades after the UK parent company died, outcompeted by cheaper, better-engineered Japanese machines.
Enfield faded from world view, making cheap robust low-spec bikes for a billion Indian people who couldn’t afford cars. Then some people in the UK noticed that they still existed, started importing them, and the company made it official, applied for and regained the “Royal” prefix and now exports its machines.
But the core point that I was making was that in both cases, it is the budget machines at the bottom of the market which preserve the ports. It is the expensive premium models which are the highly-integrated, locked-down sealed units.
This is not cost-cutting; it is fashion-led. Like women’s skirts and dresses without pockets, it is designed for looks not practicality, and sold at premium prices.
Basically, what I am reading from your comments is that Royal Enfield motorcycles (I knew about the Indian connection, btw, but didn’t know they’d made a comeback in the UK) and chunky black laptops with a lot of ports are for people without a lot of money, or who prefer not to spend a lot of money on bikes or laptops.
Why there are not more products aimed at this segment of the market is left as an exercise to the reader.
ISTM that you are adamantly refusing to admit that there is a point here.
Point Number 1:
This is not some exotic new requirement. It is exactly how most products used to be, in laptops, in phones, in other sectors. Some manufacturers cut costs, sold it as part of a “fashionable” or “stylish” premium thing, everyone else followed along like sheep… And now it is ubiquitous, and some spectators, unable to follow the logic of cause and effect, say “ah well, it is like that because nobody wants those features any more.”
And no matter how many of us stand up and say “BUT WE WANT THEM!” apparently we do not count for some reason.
Point Number 2:
That’s the problem. Please, I beg you, give me links to any such device available in the laptop market today, please.
I don’t doubt there are people who want these features. They’re vocal enough.
But there are not enough of them (either self-declared, or found via market research) for a manufacturer to make the bet that they will make money making products for this market.
It’s quite possible that a new X220-like laptop would cost around $5,000. Would such a laptop sell enough to make money back for the manufacturer?
The brown manual wagon problem: everyone who says they want one will only buy one used, seven years later.
“Probably because your requirements are very specific and “developer” laptops are niche market.”
I’d suggest an alternate reason. Yes, developer laptops are a niche market. But I’d propose that laptops moving away from the X220 is a result of chasing “thinner and lighter” above all else, plus lowering costs. And when the majority of manufacturers all chase the same targets, you get skewed results.
Plus: User choice only influences laptop sales so much. I’m not sure what the split is, but many laptops are purchased by corporations for their workforce. You get the option of a select few laptops that business services / IT has procured, approved, and will support. If they are a Lenovo shop or a Dell shop and the next generation or three suck, it has little impact on sales because it takes years before a business will offer an alternative. If they even listen to user complaints.
And if I buy my own laptop, new, all the options look alike - so there’s no meaningful way to buy my preference and have that influence product direction.
“Neither I, nor anyone else who bought an Apple product is responsible for your choice of a laptop.”
Mostly true. The popularity of Apple products has caused the effect I described above. When Apple started pulling ahead of the pack, instead of (say) Lenovo saying “we’ll go the opposite direction”, the manufacturers chased the Apple model. In part due to repeated feedback that users want laptops like the Air, so we get the X1 Carbons. And ultimately all the Lenovo models get crappy chiclet keyboards, many get soldered RAM, fewer ports, etc. (As well as Dell, etc.)
(Note I’m making some pretty sweeping generalizations here, but my main point is that the market is limited not so much because the OP’s choices are “niche” but because the market embraces trends way too eagerly and blindly.)
This reminds me a great deal of my recurring complaint that it’s hard to find a car with a manual transmission anymore. Even down to the point that, last time I was shopping, I looked at German-designed/manufactured vehicles, knowing that the prevailing sentiment last time I visited Germany was that automatic transmissions were for people who were elderly and/or disabled.
I think the reasons are very similar.
The move to hybrid and electric has also shrunk the market for manual transmissions.
I’ve done my time with manual. My dual-clutch automatic has at least as good fuel economy and takes a lot of the drudge out of driving.
All of this! Well said, Joe.
https://asahilinux.org ;)
This, but Asahi still has a long, long way to go before it can be considered stable enough to be a viable replacement for macOS.
For the time being, you’re pretty much limited to running macOS as a host OS and then virtualize Linux on top of it, which is good enough for 90% of use cases anyway. That’s what I do and it works just fine, most of the time.
Out of curiosity, what are you using for virtualization? The options for arm64 virtualization seemed slim last I checked (UTM “works” but was buggy. VMWare Fusion only has a tech preview, which I tried once and also ran into problems). Though this was a year or two ago, so maybe things have improved.
VMware and Parallels have full versions out supporting Arm now, and there are literally dozens of “light” VM runners out now, using Apple’s Virtualisation framework (not to be confused with the older, lower level Hypervisor.framework)
I’m using UTM to run FreeBSD and also have Podman set up to run FreeBSD containers (with a VM that it manages). Both Podman (open source) and Docker Desktop (free for orgs with fewer than, I think, 250 employees) can manage a Linux VM for running containers. Apple exposes a Linux binary for Rosetta 2 that Docker Desktop uses, so it can run x86 Linux containers.
I’m not speaking for @petar, but I use UTM when I need full fat Linux. (For example, to mount an external LUKS-encrypted drive and copy files.) That said, I probably don’t push it hard enough to run into real bugs. But the happy path for doing something quick on a Ubuntu or Fedora VM has not caused me any real headaches.
It feels like most of the other things I used to use a Linux VM for work well in Docker desktop. I still have my ThinkPad around (with a bare metal install) in case I need it, but I haven’t reached for it very often in the past year.
I’ve interacted with the LLVM project only once (an attempt to add a new clang diagnostic), and my experience with Phabricator was a bit painful (in particular, the arcanist tool). Switching to GitHub will certainly reduce friction for (new) contributors.
However, it’s dismaying to see GitHub capture even more critical software infrastructure. LLVM is a huge and enormously impactful project. GitHub has many eggs in their basket. The centralization of so much of the software engineering industry into a single mega-corp-owned entity makes me more than a little uneasy.
There are so many alternatives they could have chosen if they wanted the pull/merge request model. It really is a shame they ended up where they did. I’d love to delete my Microsoft GitHub account just like I deleted my Microsoft LinkedIn account, but the lock-in all of these projects take on means that to participate in open source, I need to keep a proprietary account: one that trains on all of our data, upsells things we don’t need, and makes a code forge a social media platform with reactions + green graphs to induce anxiety + READMEs you can’t read anymore since it’s all about marketing (inside their GUI) + Sponsors, which should be good but they’re skimming their cut of course + etc.
If even 1% of the energy that’s spent on shaming and scolding open-source maintainers for picking the “wrong” infrastructure was instead diverted into making the “right” infrastructure better, this would not be a problem.
Have you used them? They’re all pretty feature complete. The only difference really is alternatives aren’t a social network like Microsoft GitHub & don’t have network effect.
It’s the same with chat apps—they can all send messages, voice/video/images, replies/threads. There’s no reason to be stuck with WhatsApp, Messenger, Telegram, but people do since their network is there. So you need to get the network to move.
And open-source collaboration is, in fact, a social activity. This suggests an area where alternatives need to be focusing some time and effort, rather than (again) scolding and shaming already-overworked maintainers who are simply going where the collaborators are.
Breaking out the word “social” from “social media” isn’t even talking about the same thing. It’s a social network à la Facebook/Twitter, with folks focusing on how many stars they have, how green their activity bars are, how flashy their RENDERME.md file is, scrolling feeds, avatars, Explore—all to keep you on the platform. And as a result you can hear anxiety in many developers about how their Microsoft GitHub profile looks—as much as you hear folks obsessing about their TikTok or Instagram comments. That social anxiety should have little place in software.
Microsoft GitHub’s collaboration system isn’t special & doesn’t even offer a basic feature like threading; replying to an inline-code comment via email puts a new reply on the whole merge request, and there are other bugs. For collaboration, almost all of the alternatives have a ticketing system, with some having Kanban & additional features—but even then, a dedicated (hopefully integrated) ticketing system, forum, mailing list, or libre chat option can offer a better, tailored experience.
The suggestion is that open source dogfooding on open source leads to better open source & more contributions, rather than allowing profit-driven entities to try to gobble up the space. In the case of these closed platforms, you as a maintainer are blocking off an entire part of your community, those who value privacy/freedom or those blocked by sanctions, while helping centralization. The alternatives are in the good-to-good-enough category, so there’s nothing to lose, and switching opens up collaboration to a larger audience.
But I’ll leave you with a quote
— Matt Lee, https://www.linuxjournal.com/content/opinion-github-vs-gitlab
The population of potential collaborators who self-select out of GitHub for “privacy/freedom”, or “those blocked by sanctions”, is far smaller than the population who actually are on GitHub. So if your goal is to make an appeal based on size of community, be aware that GitHub wins in approximately the same way that the sun outshines a candle.
And even in decentralized protocols, centralization onto one, or at most a few, hosts is a pretty much inevitable result of social forces. We see the same thing right now with federated/decentralized social media – a few big instances are picking up basically all the users.
There is no number of quotes that will change the status quo. You could supply one hundred million billion trillion quadrillion octillion duodecillion vigintillion Stallman-esque lectures per femtosecond about the obvious moral superiority of your preference, and win over zero users in doing so. In fact, the more you moralize and scold the less likely you are to win over anyone.
If you genuinely want your preferred type of code host to win, you will have to, sooner or later, grapple with the fact that your strategy is not just wrong, but fundamentally does not grasp why your preferences lost.
Some folks do have a sense of morality to the decisions they make. There are always trade-offs, but I fundamentally do not agree that the trade-offs for Microsoft GitHub outweigh the issues with using it. Following the crowd is less something I’m interested in than being the change I & others would like to see. Sometimes I have run into maintainers who would like to switch but are afraid folks won’t follow them, & they are then reassured that the project & collaboration will continue. I see a lot of positive collaboration on SourceHut ‘despite’ not having the social features and doing collaboration via email + IRC, & it’s really cool. It’s possible to overthrow the status quo—and if the status quo is controlled by a US megacorp, yeah, let’s see that change.
But this is a misleading statement at best. Suppose that on Platform A there are one million active collaborators, and on Platform B there are ten. Sure, technically “collaboration will continue” if a project moves to Platform B, but it will be massively reduced by doing so.
And many projects simply cannot afford that. So, again, your approach is going to fail to convert people to your preferred platforms.
I don’t see caring about user privacy/freedoms & shunning corporate control as merely a preference like choosing a flavor of jam at the market. And if folks aren’t voicing an opinion, then the status quo would remain.
You seem to see it as a stark binary where you either have it or you don’t. Most people view it as a spectrum on which they make tradeoffs.
Already mentioned it. This case is a clear ‘not worth it’ because the alternatives are sufficient & the social network part is more harmful than good.
I think you underestimate the extent to which social features get and keep people engaged, and that the general refusal of alternatives to embrace the social nature of software development is a major reason why they fail to “convert” people from existing popular options like GitHub.
To clarify, are you saying that social gamification features like stars and colored activity bars are part of the “social nature of software development” which must be embraced?
Would you clarify?
And yet here you are, shaming and scolding.
What alternatives do you have in mind?
Assuming they wanted to move specifically to Git & not a different DVCS, LLVM probably would have the resources to run a self-hosted Forgejo instance (what ‘powers’ Codeberg). Forgejo supports that pull/merge request model, & they are working on the ForgeFed protocol, which as a bonus would allow federation support. That means folks wouldn’t even have to create an account to open issues & participate in merge requests, which is a common criticism of these platforms (i.e. moving from closed, proprietary, megacorp Microsoft GitHub to open-core, publicly-traded, VC-funded GitLab is in many ways a lateral move at present, even if self-hosted, since an account is still required). If pull/merge request + Git isn’t a requirement, there are more options.
How do they manage to require you to make an account for self-hosted GitLab? Is there a fork that removes that requirement?
Self-hosting GitLab does not require any connection to GitLab computers. There is no need to create an account at GitLab to use a self-hosted GitLab instance. I’ve no idea where this assertion comes from.
One does need an account to contribute on a GitLab instance. There is integration with authentication services.
Alternatively, one could wait for the federated protocol.
In my personal, GitHub-avoiding, experience, I’ve found that using mail to contribute usually works.
That’s what I meant… an account is required for the instance. With ForgeFed & mailing lists, no account on the instance is required. But news from 1–2 weeks ago was that GitLab is trying to get some form of federation. It was likely a complaint about needing to create accounts on all of the self-hosted options.
I think the core thing is that projects aren’t in the “maintain a forge” business, but the “develop a software project” business. Self-hosting is not something they want to be doing, as you can see from the maintenance tasks mentioned in the article.
Of course, then the question is, why GitHub instead of some other managed service? It might be network effect, but honestly, it’s probably because it actually works mostly pretty well - that’s how it grew without a network effect in the first place. (Especially on a UX level. I did not like having to deal with Phabricator and Gerrit last time I worked with a project using those.)
I would not be surprised if GitHub actively courted them as hostees. It’s a big feather in GH’s cap and reinforces the idea that GH == open source development.
I think the move started on our side, but GitHub was incredibly supportive. They added a couple of new features that were deal breakers and they waived the repo size limits.
There is Codeberg & others running Forgejo/Gitea, as well as SourceHut & GitLab, which are all Git options without needing Microsoft GitHub or self-hosting. There are others for non-Git DVCSs. The Microsoft GitHub UI is slow, breaks all my browser shortcuts, and has upsell ads all throughout. We aren’t limited to `if not MicrosoftGitHub then SelfHost`.

This is literally what I addressed in the second paragraph of my comment.
Not arguing against you, but agreeing with you by showing examples.
Kagi is awesome. Totally worth the money.
I’ve always wanted to give it a try. But I’m bothered by the logged-in-only search, personally.
I mean it makes sense, how else are they going to prevent people from using it without paying?
It’s just that I search all of my darkest secrets, and I don’t feel comfortable with a company being able to perfectly correlate all of my searches.
Their privacy policy is pretty clear that they don’t log or associate searches with your account.
Call me sceptical, but I actually don’t believe in any privacy policy.
I’ve worked in 30-employee startups and FAANGs, and the privacy policies were always just random text with no meaning.
In small start-ups it’s written by one of the execs with a law degree, for example the CEO or the COO. They’re totally disconnected from the tech team. And one day, the tech team goes “oh… we have this data! We could use it for this feature.” or “oh… we could log this type of activity for this feature.” This is because nobody is checking features against the privacy policy which was written years ago, and the COO is totally disconnected from technical decisions, they’re just trying to grow the company.
In FAANGs, the privacy policy is written by some random lawyer from the legal team. But it was never checked whether it’s actually the case, because there are hundreds of 10-engineer teams working on the overall product. Unless you want to spend the next 5 years trying to figure out whether everything is in order, you just talk to a few managers and team leads and hope for the best. New features and launches need “legal approval”, but the legal review will be done by a totally different lawyer, who will take a look at it, get pressured by the product manager to give their seal of approval, and then the feature will go through with few checks on whether it respected the privacy policy or not.
Maybe there is some way? If they stored the date up to which you have paid as a signed cookie (for users on the unlimited plan), they wouldn’t need to have you logged in just to support a non-customized search service. In fact, some of the configs could probably also be stored client side.
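A minimal sketch of the idea, assuming an HMAC over the expiry date (this is hypothetical, not anything Kagi actually implements):

```python
import hmac
import hashlib

SECRET = b"server-side secret key"  # never sent to the client

def make_cookie(paid_until: str) -> str:
    # Sign a date like "2024-09-30" so the client can't forge it.
    sig = hmac.new(SECRET, paid_until.encode(), hashlib.sha256).hexdigest()
    return f"{paid_until}.{sig}"

def verify_cookie(cookie: str, today: str) -> bool:
    paid_until, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET, paid_until.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison, and the date must not have passed.
    # ISO dates compare correctly as strings.
    return hmac.compare_digest(sig, expected) and today <= paid_until

cookie = make_cookie("2024-09-30")
print(verify_cookie(cookie, today="2024-09-01"))  # True: still paid up
print(verify_cookie(cookie, today="2024-10-15"))  # False: subscription lapsed
```

The caveat is that the signed value would need to be coarse (e.g. an expiry date shared by many subscribers), otherwise the cookie itself becomes a tracking identifier.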
Even customized, I don’t see why a system similar to Mullvad’s billing wouldn’t work. The only information retained is an account number, a “paid up until” date, and billing info for the duration that retention is required (only really needed for credit cards).
Mullvad makes sense, because all they do is take money and route (mostly) HTTPS traffic.
The issue with a search engine is that the search engine needs to know what I’m searching in cleartext.
They can see one is searching for “indian restaurant metropolis” and the next month “japanese restaurant metropolis”, so they can assume this person lives in Metropolis. And then a year later this same person (they know it from the account number) is searching “retirement home smallville” or “geriatric hospital near smallville”, so they are able to deduce this person lives in Metropolis and, most likely, have an old relative in Smallville. They can then pinpoint this person is Clark Kent, even though it’s just account number 01234565.
And that would be keeping logs on queries (which Kagi claims not to do)… just like Mullvad claims not to keep traffic logs. “Just use HTTPS” doesn’t hide everything; access times, inbound and outbound IP addresses, and SNI headers alone are enough to deanonymize all traffic… Obviously there is no way that the public can verify that any VPN or Kagi don’t log either…
Kagi is in the business of selling search, not the business of selling your data. That is what the subscription is for and it is what ensures the alignment of incentives.
You can always use a fake email address (Kagi does not care to verify) and pay with Bitcoin/Lightning if you want to be completely anonymous and still enjoy the benefits of better search.
I don’t think this is a good reason for me to trust them. Businesses pivoting and becoming shit is basically the norm. For now they have money, but this might change down the road, when the cash dries up.
If you know the search terms of an individual you can always pinpoint who they are. I mean, I search stuff about my current large city, and then a few days later, when my mom has an issue and asks for my help, I’m searching stuff related to the small town where I’m from. I would imagine there is only one person in the world currently living in a big eastern German city and searching about a small 1k-inhabitant town in western France.
Forgive the off-topic question, but I couldn’t find an answer on the site itself. Why is the “th” digraph represented as ð in some places and þ in others?
Noted in: https://toast.al/posts/techlore/2023-07-03_weblog-rebuilt
Historically English used both symbols interchangeably, but most words (that aren’t “with”) don’t use the sounds interchangeably. This setup is also how Icelandic uses Ð & Þ. If English were to reintroduce these symbols (personally in favor), I would prefer seeing this setup, as it disambiguates the sounds for ESL speakers/readers and gives English a strong typographic identity (think how ñ makes you immediately think Spanish) for one of its unique characteristics: using dental fricatives.
Noteworthy: English has words like Thailand, Thomas, Thames that have a ‘th’ that isn’t a dental fricative, which this setup helps disambiguate as well, before we get another US president saying “Thighland” based on spelling.
A more historically authentic way to compress “the” into a single column would be to put the “e” atop the “th”-symbol… although I don’t know that that would render legibly on an eth, as opposed to overlying the eth’s ascender.
Yes. 😅 Historically “&” was a part of the alphabet, but throwing even more symbols onto a keyboard makes less sense if it can be helped. I suppose a twin-barred “ð” could work, but at smaller resolutions, good luck. I would still value ð being the base tho, since it is voiced, & I think following the ð/þ distinction has more benefit than choosing þ to carry both the voiced & voiceless sounds.
Very interesting, thank you for the explanation!
If curious, you can try to read that linked post where the whole content uses ð & þ. It doesn’t take long for it to ‘click’ & personally I think it reads smoothly, but for a broad audience (which that post is not), I wouldn’t put such a burden on the copy. But around the periphery & in personal stuff, I don’t mind being the change I would like to see.
I now have a slightly ridiculous desire to build a “shadow page” into my in-progress site generator that rewrites English to use this so that every post has a version available in this mode. It is surprisingly delightful!
It could get tricky to maintain because it’s not as simple as `s/th/þ/g`. I’m actually a bit surprised someone enjoyed, let alone preferred, reading like that. I figured most would be annoyed.

You could do it ðe opposite way, where you write content wiþ þorns/eðs and automatically replace ðem wiþ “th”.
Ðat… is a very not-dumb way to go about it. :)
Yeah, the fun part would be building some actual understanding of the underlying language issues. The dumb-but-probably-actually-works version is just a regex over words, where you can add to it over time. The smarter version would actually use some kind of stronger language-aware approach that actually has the relevant linguistic tokens attached. Fun either way!
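The þ/ð → “th” direction is the easy one, since it needs no linguistic knowledge at all; a minimal sketch:

```python
import re

# Map þ/ð (upper- and lowercase) back to "th"/"Th". This direction
# is unambiguous; going th -> þ/ð would need per-word linguistic
# data to pick the voiced vs. voiceless letter.
SUBS = {"þ": "th", "ð": "th", "Þ": "Th", "Ð": "Th"}

def to_plain_english(text: str) -> str:
    return re.sub(r"[þðÞÐ]", lambda m: SUBS[m.group(0)], text)

print(to_plain_english("Ðis is ðe way, wiþ þorns and eðs."))
# -> This is the way, with thorns and eths.
```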
(I suspect the number of people who appreciate this is indeed nearly zero, but… I and three of my friends are the kind of people to love it. 😂)
ð is voiced (“that”) and þ is unvoiced (“thing”). Feel your throat as you pronounce both and you’ll understand the difference.
In modern English, th has two different sounds (think vs this) but before that we used proper letters to distinguish those two sounds. It would be þink and ðis if we still used them.
Is this really “making illegal states unrepresentable”? Maybe I have a different interpretation of that phrase, but I’ve always understood it to mean that “illegal states” will not even compile, which is why usually it’s said in the context of languages with very strong type systems (Rust included).
However, in this example all of the “illegal states” are checked at runtime. But why is that any different from traditional error checking and validation? I can do everything done in this example in C++:
It’s closer to Parse, don’t validate than MISU. I think he said it was MISU because PDV is much more obscure and he might not have heard of it.
(PDV is a great technique too)
Side note: the author of “Parse, don’t validate” published another text: https://lexi-lambda.github.io/blog/2020/11/01/names-are-not-type-safety/ . There she tells us that the described Rust approach with `Username` is wrong (and I agree with her, i.e. yes, in Haskell one should try to use truly type-level guarantees, but I think that in Rust her “no newtypes” approach is too abstract).

Perhaps the post is a misuse of the phrase, but I understood it as having different utility: letting you write downstream functions that assume things about your data - and have that be guaranteed when they are used.
Here’s an example of some Python code I might’ve written in times past.
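Something like this, as a minimal sketch of the validate-then-use pattern:

```python
def validate_username(username: str) -> bool:
    # The check lives in a separate helper...
    return username.strip() != ""

def cool_username(username: str) -> str:
    # ...and this function just assumes every caller remembered
    # to run the validation first. Nothing enforces it.
    return f"{username} is cool"

raw = "alice"
if validate_username(raw):
    print(cool_username(raw))  # fine on this code path

# But nothing stops another code path from calling
# cool_username("") with an unvalidated string.
```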
But let’s say my application is more complex and I forget to put in the validation in one path. Then I would run into a runtime error if I got an empty username - or worse, the code would successfully use an empty username.
I could solve this for example by taking a boolean in `cool_username` that tells us if the username is valid, or by maybe adding a `validate_username` check in the function itself. But that’s kludgy.

What I interpreted this post as saying is to enforce safety by requiring functions that return untrusted data to instead return a `Result` - and guarantee that the `Ok` case is going to be valid. I don’t know how this looks in Python so this is probably invalid code:
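One way it could look in working Python, with a frozen dataclass standing in for the parsed type and a plain error string standing in for `Err` (a sketch, since Python has no built-in `Result`):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ValidUsername:
    # The only way to get one of these is through parse_username,
    # so any function taking a ValidUsername can trust its contents.
    value: str

def parse_username(raw: str):
    # Returns a ValidUsername on success, or an error string as a
    # poor man's Err.
    if raw.strip() == "":
        return "username must not be empty"
    return ValidUsername(raw)

def cool_username(username: ValidUsername) -> str:
    # Downstream code requires the parsed type, not a raw str.
    return f"{username.value} is cool"

result = parse_username("alice")
if isinstance(result, ValidUsername):
    print(cool_username(result))  # prints "alice is cool"
else:
    print(f"error: {result}")
```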
When you require that any untrusted function returns a `Result`, then you can only use your downstream code on usernames that pass validation. In either case you have to do runtime validation - but the former method statically enforces that you don’t make any mistakes by passing an unvalidated username to code that expects a valid one.

I imagine your interpretation of “make illegal states unrepresentable” is something like a `NonEmptyString`, which would enforce statically that the string you are accessing is not empty. Which I think we would agree is even more useful for downstream functions, because then they can only be called on valid strings no matter where the strings come from. But I would argue that something like `Result` is still useful because it might be cumbersome to encode all validation checks into the type system.

Agreed. This isn’t representation: I can absolutely represent the illegal states, I’ll just get a runtime error when I transition into them.
These examples should fail to compile.
The example is about new users registering, so we can’t make these inputs fail to compile… since they come in during runtime!
Clearly the solution is to bake a user database in at compile time! ;)
You can do it in C too, just with uglier APIs.
Where the implementation of new_string does whatever validation it wants.
I agree, this is not what “unrepresentable” means.
It’s sometimes easy to forget that all of the software that we use is built and maintained by individuals. Even tools that feel like they’ve been around for ages. (Relevant XKCD: https://xkcd.com/2347/)
Does anyone know what this means for the project and community? Are other people able to take over management or was it a one man show? I remember an argument for neovim being that it was community driven whereas vim had a BDFL.
RIP Bram
https://groups.google.com/g/vim_dev/c/6_yWxGhB_8I/m/ibserACYBAAJ
Christian is a long time contributor (and co-maintainer) of Vim and has commit access to the GitHub repo. It looks like he will step up as lead maintainer.
It has been a very long time since Bram was the sole developer on Vim, many of the changes are contributed by the community (with some notable exceptions). Bram would always commit the patches, but I would be very surprised if Bram’s passing significantly impacts the project.
I’m glad that people have access and can continue the work. Very sad that Christian didn’t get a chance to meet him in person though. You never know how much time you have with people.
Right now the EU is slowly starting to acknowledge this problem and pours a bit of cash into critical projects that are underfunded / maintained by a few.
Interestingly, a foundation called NLnet has an important role in selecting projects that get funded. AFAIK, Bram was an employee of NLnet.
Seeing multi-object iteration make it into the language is nice, as well as the other changes that make Zig more ergonomic to use. (Still waiting for a better syntax for anonymous functions, though.)
Async not making it into 0.11 was disappointing, but understandable. My 0.9.1 code can wait another year I suppose :)
All in all, awesome work Zig devs. I’m excited to see Zig slowly work its way to a 1.0 and a stable language.
I, uh, hope this change doesn’t affect fields that contain functions? E.g.:
Because I tend to use that pattern extensively for gamedev purposes (allowing items and weapons to carry references to their own hooks/triggers/callbacks is really useful and much easier than using an enum).
Your example would be replaced by the use of a function pointer.
IIRC this has been the case since 0.10
Ah, true, I forgot 0.10 made that change. It’s quite trivial to fix, though, and wasn’t really the point of my question – I was checking that fields that hold function pointers were still allowed. Good to know that’s the case.
Note that this particular issue (returning a pointer to stack allocated memory) has been much discussed in Zig’s issue tracker, and it looks like this will likely be something that will be caught either by Zig’s compiler or via runtime safety checks at some point:
I wonder if they’ll fix the long-standing bug in Terminal.app when dealing with background colors and combining characters.
Terminal.app still does not even support 24-bit color. I wouldn’t hold your breath.
Any software that lasts long enough eventually accumulates some number of bugs which critics will endlessly post “You haven’t fixed that yet?!??!?! And how long has it been open??!?!!???!!??!??!?!?!” comments about.
At this point I take such comments as a sign of successful mature software projects.
To be fair, this makes several widely-spoken languages unusable in Terminal.app…
I never said these bugs don’t have significant user impact.
Just that any sufficiently long-lived/mature software project will have some number of these, to such an extent that I tend to increase my confidence in a piece of software if I see this type of complaint comment about it.
Besides agreeing with the notion that these things can just be learned, I think there’s another problem.
People are scared of using other tools. If you want to use something centralized, go for SVN; if you want something that might fit your mindset better and has everything built in, use Fossil; if you don’t want to use a relational database, use one of the many alternatives.
Because I think not doing so leads to a pretty bad thing, which is putting huge amounts of confusing abstractions on top of it. The world doesn’t need more ORMs, nor more Git GUIs where merge has a completely different meaning than `git merge`.

I’m sure there’s many good versions, but the concept of layering abstractions to avoid learning what you could and should learn is something that causes a lot of bad software/projects/products to be out there.
This also goes into topics like security, or also annoying discussions. Whenever you have some technical and interesting discussion on containers, deployment, systems, Kubernetes, etc. online, you are bound to have someone creep in and basically argue that their lack of knowledge is proof that a system is bad. I’m usually (not always) on the contra side of these things, I think often because I have enough experience to have seen them fail. But any comment drowns in “but it’s hard to learn” comments. The other side of that is “it’s best practice”.
And here I think lies the problem with Git. There’s a lot of people who feel like Git is somehow best practice, or somehow the best or only serious way to do things, yet they hop to whatever the trendiest tool is and never even understand the distributed part, nor why it’s even called a Pull Request on GitHub. Oh, and conflating Git and GitHub is another topic.
That said, Git (just like PostgreSQL) has amazing official documentation. High quality and up to date. Please make use of it.
Or like I said before. Maybe see if there’s a better solution for you than Git.
For sure, don’t use Git. That’s great advice that people often forget is an option, even when they’re the tech lead. There’s a reason successful large software development businesses still pay for Perforce.
What is that reason, exactly? The main argument I’ve seen is that Perforce or Subversion can act as a system of record, but any centralized repository can do that, including several git-oriented workflows.
(Also, which large firm are you thinking of, exactly? I think of Google, which developed Android using git instead of their monolithic Perforce installation.)
I don’t know the reason, I’ve not used it. Companies that came to mind were Valve and Ubisoft who are explicitly known to use Perforce.
I have read (but don’t have direct experience) that Perforce is more common in some industries (such as game development) because it handles tracking binary files/assets better than git does.
In particular, p4 can ingest large files without choking, and it has a locking mechanism so that an artist can take ownership of a file while editing it - necessary because there is no merge algorithm so optimistic concurrency doesn’t work.
I am not convinced that some of the advantages are actually net wins. For example, no LSP configuration. So how does it know which clangd to start? I have several versions installed on my dev machine and none of them is both called clangd and in my default PATH. The same applies to the ‘config is not code’. For example, one of the things I have in my Vim config tells ALE to use a different clangd from the default one when I am editing files in any of the places where I have clones of CHERIoT-RTOS, because I want a version that knows about the extra attributes and won’t be confused by the command-line flags that control this target. This is pretty common for clang-format configuration, because the output is not stable and so most projects that use it in CI require a specific version. Hopefully the no-code config can be extended with scripting?
The multi-selection feature sounds great. Faking this a little bit is why I use vim in a terminal even locally and turn off mouse mode: so mouse selections and the system clipboard don’t interfere with visual mode selections and the vim clipboard. It sounds as if Helix actually does this properly. The other examples of this looked great.
One of the reasons I stick with vim is that my muscle memory works elsewhere. For example, bash and zsh have vi modes (and even VS Code has a reasonable vim mode now). Is anyone working on a Helix mode for any shell?
There’s a workaround for this specific problem. Helix allows you to specify a language server command in the config, so having a per-project config where needed would work.
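For example, a project-local `.helix/languages.toml` can point Helix at a specific server binary. Roughly like this (the exact schema has changed between Helix releases, and the clangd path is just a hypothetical example, so treat this as a sketch and check the docs for your version):

```toml
# .helix/languages.toml -- project-local override (sketch)
[language-server.clangd]
command = "/opt/cheriot-llvm/bin/clangd"   # hypothetical path

[[language]]
name = "cpp"
language-servers = ["clangd"]
```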
Your point stands though, the more niche your need is, the less likely it is to be supported, and there is no scripting. There are plans for a plugin interface in the future that will allow you to extend the editor without patching the source, but it’s at an early stage, and progress is slow.
I think the lack of a scripting/plugin system in these early stages of helix’s development has actually been a good thing though, as it’s pushed people to contribute to making helix support a wide array of use cases well out of the box, instead of writing a script and being done with it. My personal pet peeve right now is the lack of session history; in neovim I have a hack for getting it to behave the way I want to, but it looks like helix will soon support it in a way that doesn’t require a fragile script.
That sounds like an exciting security hole. What happens if I put a helix config in my git repo that tells it that the LSP command is `rm -rf ~` and you open a file from a git clone of my project in helix? Other editors used to have this functionality and removed it for precisely this reason.

The flip side of this is that everyone has different requirements. You either add a lot of bloat to support features that no one will use (except the people who love the editor for having that feature), or you add an extension interface that allows everyone to implement their own bits and risk the situation where a load of people implement almost the same thing and duplicate work. Neither approach is ideal and starting with a solid core is probably the right approach early on.
I don’t know about helix, but emacs and direnv both solve the issue by prompting the user to allow the config file to be “executed”, and will do so every time it changes.
Neovim does this now as well.
Yes, I’m afraid it might very well be? I can’t find any mention of mitigation at a glance?
https://github.com/helix-editor/helix/pull/1249
This feels fairly straw-man-y.
For example, the law of diminishing returns means that at some point “it’s not worth it” is guaranteed to be true, but it’s highly context dependent exactly when. It could only be “ridiculous” if people are saying “it’s never worth worrying about performance”, but no one is saying that.
The same is true of most of these “excuses” - the weak form of them is always true at some point, and no one is actually saying the strong form.
EDIT: another example
The weak form of this is “most software does not have to care about performance to the same degree as niches like gaming”, and this is true. There are millions of devs writing business apps and web sites who can happily not worry about the cost of virtual methods calls and all the other things that upset Casey.
The strong form is “only gaming has to care about performance at all”, and no one is really saying that - every web developer knows that a web page that loads in 10sec is not acceptable.
And yet it is still a frequent occurrence.
If you pay attention to how non technical people talk about computers (or at least, consumer facing software), you start to notice a trend. Typically it is complaints that the computer is fickle, unreliable, and slow. “The computer is thinking” is a common idiom. And yet I would guess that something like 90% of the software that most people use on a daily basis is not IO or compute intensive; that is, it has no basis for being slow. And it’s slow anyway.
I hear quite frequently colleagues and collaborators say things like “premature optimization is the root of all evil”, “computers are so fast now, we don’t need to worry about how fast this is”, or responding to any form of performance suggestion with “let’s wait until we do some profiling before we make performance decisions”. But rarely do I ever see that profiling take place. These phrases are all used as cover to say, essentially, “we don’t need to care about performance”. So while no one comes out and says it out loud, in practice people’s actions often do say “it’s never worth worrying about performance”.
I appreciate Casey’s takes, even if they are a little hot sometimes, because it’s good to have the lone voice in the wilderness that counterbalances the broader industry’s tendency towards performance apathy.
I would submit that most of such web sites are made by amateur web designers who just cobble together a site from Wordpress plugins and/or businesses who choose to use bottom-of-the-barrel cheap shared hosting. I’ve certainly seen this in practice with friends who can design and decided to build websites for other friends, and there’s little you can do about it besides advising them to ask a proper web development agency to build the site and pay more for hosting (which small businesses might not want to do).
Except Atlassian. Jira and Confluence - big websites backed by a big budget - still manage to frustrate me with how slow they are on a daily basis.
Jira is like Windows, though to a lesser extent. Yeah, it kinda sucks, and it’s full of just plain weird behavior, but it’s also kinda impressive in how it serves a billion use cases that most people never heard about, and is basically essential to a whole bunch of industries, and the first thing is kind of a consequence of the second.
As I bring up every time Casey goes on a rant, and have already brought up elsewhere in this thread, game developers are empirically at least no better than other fields of programming, and often are worse because gamers are on average much more willing to buy top-end hardware and stay on a fast upgrade treadmill. So they can more easily just tell users to buy a faster SSD, buy more RAM, buy the latest video card, etc. rather than actually set and stick to a performance budget. There’s perhaps an argument that console game dev does better with this just because the hardware upgrade treadmill is slower there, but modern console titles have an iffy track record on other measures of quality (like “does the supposed release build actually work at all or does it require a multi-gigabyte patch on launch day”).
The difference for game developers, I suspect, is the binary nature of performance failures. If a game runs at under a certain frame rate and jitter rate, you cannot play it. If another desktop application pauses periodically, you can still use it, it’s just annoying. I use a few desktop apps that regularly pause for no obvious reason (yes, Thunderbird, I’m looking at you doing blocking IO on the main thread), if these things were games then I just couldn’t use them.
This probably gives people a skewed opinion because they never play games that fail to meet the required performance bar for their hardware, whereas they do use other kinds of program that fail to meet the desired performance target. For consoles, this testing is easy and a game that can’t meet the perf target for a particular console is never supported on that console. Or, in quite a few cases I’ve seen recently, is launched on the console a year after the PC version once they’ve made it fast enough.
As to thinking about performance in other domains, I have a couple of anecdotes that I think contradict Casey’s world view:
Many years ago now, I was working on a Smalltalk compiler and writing some GUI apps using it. For debugging, I added a simple AST interpreter. To improve startup times, I moved the JIT to a shared library so that it could be loaded after process start and we could shift over to the JIT’d code later. At some point, I had a version mismatch in the shared library that prevented it from loading. For about two weeks, all of my code was running in the slow (probably two orders of magnitude slower than the JIT) interpreter. I did not notice, performance was fine. This was on a 1 GHz Celeron M.
When I got the ePub version of my first book back, I realised that they’d lost all of the semantic markup on their conversion so I wrote a tool for my second book that would parse the LaTeX subset that I use and generate good HTML. I intentionally wrote this in a clear and easy to debug style, aiming to optimise it later. The first time I ran it, it took around 200ms to process the entire book (typesetting it in LaTeX to generate the PDF took about two minutes). I did find one loop where a load of short-lived objects were created and stuck an autorelease pool around it, which dropped peak memory usage by about 90%, but I never bothered to do anything to improve performance beyond that.
People always say things like this, and then I go look again at Minecraft, which is the best-selling video game of all time, and I scratch my head a bit. It has a whole third-party industry of mods whose sole purpose is to bring the game’s performance up to basic playable levels, because Minecraft’s performance on average hardware (i.e., not “gaming rigs”) is so abysmal.
So I still don’t really buy into the idea that there’s some unique level of caring-about-performance in game dev.
These are good rules. We use all of them in my current job which involves writing flight software.
Re: rule 7: annoyingly, neither Clang nor GCC has a compiler flag to issue a warning for ignoring function return values universally (there is -Wunused-result, but this only issues warnings for functions with an explicit “must use the return value” annotation).
That’s an interesting thought. It would be far too noisy on a lot of C/C++ code, but would be very useful for safety-critical things (I have some places that would probably enable it if it existed). It would be fairly easy to add to clang: you just change the check for `[[nodiscard]]` to also check for the flag and diagnose if that warning is enabled. Probably a total of around ten lines of code to add (most of it plumbing the flag through), and a test that’s about as long as the code. I’d be happy to review this if someone wants to try it - it’s a good first-clang-patch thing.
I’ll take you up on that :) https://reviews.llvm.org/D149380
Awesome, thanks! This machine doesn’t seem to have my phabricator credentials, so I’ll do a proper review later. Other folks will probably want to bikeshed the flag name.
The only thing that’s obviously missing is the negative cases in the test: do we get warnings without the flag passed?
Really interesting post so far.
One question. What does this phrase mean:
I know all the words used but don’t entirely understand the implication of putting them together in that order.
In C++, collection templates in the standard library are parameterized by an allocator. You specify what type of allocator to use at compile time, when the collection template is specialized. This isn’t normally visible in C++ code because the allocator parameter defaults to the standard global allocator, and this default is rarely overridden. The contrast for Zig is that the allocator is specified when you perform an operation on the collection (if needed), and not when you declare the collection variable.
Put another way, the allocator is not part of the type of the collection in Zig, it is simply a struct field. Whereas in C++ (and I guess Rust, though I don’t have experience there), the type of allocator used is part of the type of the collection itself.
That is an interesting choice since it means that the calls to allocator can’t be inlined. That would I expect have non trivial overhead when used with simple/fast allocators like for example bump allocator (would that be FixedBufferAllocator in zig?).
That’s true, although I think there is some move away from allocator template params?
Sorry I can’t find the reference but I remember some CppCon talks about that
Probably somewhere in here - https://www.youtube.com/results?search_query=cppcon+allocator
I think the global `new` override was regarded as a mistake for sure. But also you don’t tend to see allocator params to, say, `std::vector` that often, even though they exist. I think the talk was more about allocators as a runtime param, not compile time, which matches more usage patterns.
`comptime_int` seems very odd; it looks like Rust’s `{number}` pseudo-type, but `{number}` is like the one place that Rust will silently convert a number into a concrete type for you. Having `var x = 0;` infer as `comptime_int` by default then seems like the opposite of what you usually want. Anyone have any explanation for why Zig requires this stricter type?

I am not sure how it could’ve worked differently. `0` is a comptime-known expression; it needs to evaluate to something at compile time, and evaluating to the natural number 0 seems like the most reasonable choice. Then, in `var x = { some natural number which is a result of comptime computation }`, you need to pick a type for the runtime value of x on the stack. You can’t use HM-style “collect constraints, then solve equations” inference to get the intended type from the use of x — the type of x very much affects what type constraints you would get. There’s no static function body — the actual code is always the result of some compile-time evaluation, where types of things are ordinary values. You could default to i32 or something, but that’d be horrible it seems.

Besides the “can’t”, there’s also a “shouldn’t” angle: Zig is all about (say it with me) “Communicating Intent Precisely!”. Regardless of how `var x = 0` could be made to work, that very much does not communicate the intended type of `x` at all.

I think one thing that Zig could’ve done in the current design is adding literals of specific numeric types, like `var x = 0u128`. My guess is that here we hit Zig’s minimalism: why would you want that, when you already have one too many ways to spell this, `var x: u128 = 0` and `var x = @as(u128, 0)`?

I don’t know how Rust does this, but how is the compiler supposed to know which type to use (u8, u16, u32, etc) if it is not told explicitly?
It infers it from the surrounding function, and yells at you with an error if it can’t infer it down to a single concrete type. So in section 9 with `comptime_int` messing up the test:

In Rust this would be:

Rust looks at the `5` in `test_add()`, says “it’s `{number}`”, looks at the return type of `add()`, says “it’s `i64`”, then looks at the arg types for `assert_eq()` and says “are this `{number}` and this `i64` the same type? They are now!” So the type of `5` becomes `i64`.

This is a special case of subtyping, really, and subtyping in general is really good at breaking type inference, but in this one case it seems to work out pretty well. If Rust did C-style implicit promotion of numbers to wider types, it would probably be horrible to code and horrible to use.
I was specifically referring to your example `var i = 0;` where the variable is not passed to another function (e.g. when it is used as a loop counter). There is no information on what the type should be, so the programmer has to specify explicitly.

It’s inferred from the surrounding constraints (mostly the sink). Though IIRC Rust ends up defaulting to i32 if it cannot constrain it to a single concrete type. There are also cases where it refuses to do that, but I’m less clear on those; I know that trait contexts are a common issue (e.g. `sum`
require an annotation of some sort if a concrete type can’t be inferred).is the problem really Alpine, or musl? i mean yea, Alpine uses musl, but it’s even mentioned in the article that DNS over TCP isn’t enabled by design, why not explore that a little more in the article?
It’s a flaw in musl, but using musl outside Alpine is … extremely rare, as far as I can tell.
The real question in my mind is why people continue to use musl and Alpine when it has such a serious flaw in it. Are they unaware of the problem, or do they just not care?
I don’t know that I’d call it a “flaw” rather than a “design choice”.
The DNS APIs in libc (getaddrinfo and gethostbyname) are poorly designed for the task of resolving DNS names (they are blocking and implementation-defined). musl implements these in a simple manner for simple use cases, but for anything more involved the recommendation of the musl maintainers is to use a dedicated DNS resolver library.
This article goes into a bit more depth, but at the end of the day I think it’s a reflection of the different philosophy behind musl more generally (which is why I call it a “design choice” instead of a “flaw”).
“Better is different” means people will get mad at you for trying to make things better. :-)
Better is different doesn’t imply that different is better. The getaddrinfo function is the only moderately good way of mapping names to hosts without embedding knowledge of the lookup mechanism in the application. Perhaps a modern Linux system could have a DBUS service to do this, but that would add a lot more to containers (if containers had a sane security model, this is how it would work, code outside the container would do the lookup, and the container would not be able to create sockets except by asking this service, but I digress).
The suggestion to use a DNS library misses the point: DNS should be an implementation detail. The application should not know if the name is resolved via a hosts file, a DNS, WINS, or something custom for micro service deployments. The decision on Alpine means that you need to encode that as custom logic in every program.
I think that’s a bit dramatic. Most applications won’t do a query that returns a DNS response bigger than 512 bytes, because setting up TCP takes at least three times as long as the UDP response, and that pisses off most users, so most sites try to make sure this isn’t necessary to show a website to sell people things, so very very few people outside of the containerverse will ever see it happen.
Most applications just do a gethostbyname and connect to whatever the first thing is. There’s no reason for that to take more than 512 bytes, and so it’s hard to lament: Yes yes, if you want 200 IP addresses for your service, you’ll need more than 512 byte packets, but 100 IP addresses will fit, and I absolutely wonder about the design of a system that wants to use gethostbyname to get more than 100 IP addresses.
The reason why, is because gethostbyname isn’t parallel, so an application that wants to use it in parallel service will need to use threads. Many NSS providers behave badly when threaded, so desktop applications that want to connect to multiple addresses in parallel (e.g. the happy eyeballs protocol used by chrome, firefox, curl, etc) avoid the NSS api completely and either implement DNS directly or use a non-blocking DNS client library.
Most software that I’ve written that does any kind of name lookup takes address inputs that are not hard coded into the binary. As a library or application developer, I don’t know the maximum size of a host or domain name that users of my code are going to use. I don’t know if they’re going to use DNS at all, or whether they’re going to use host files, WINS via Samba, or something else. And the entire point of NSS is that I don’t have to know or care. If they want to use some exciting Web3 Blockchain nonsense that was invented after I wrote my code for looking up hosts, they can as long as they provide an NSS plugin. If I have to care about how host names provided by the user are mapped to network addresses as a result of using your libc, your libc has a bug.
Hopefully not; anything written in the last 20 years should be using `getaddrinfo`, and then it doesn’t have to care what network protocol it’s using for the connection. It may be IPv6, it may be something legacy like IPX (in which case the lookup definitely won’t be DNS!), it may be something that hasn’t been invented yet.

That is a legitimate concern, and I’d love to see an asynchronous version of `getaddrinfo`.

Yes you do, because we’re talking about Alpine and Alpine use-cases, and in those use-cases it tunnels DNS into the NSS API. RFC 1035 is clear on this. It’s 250 “bytes”.
There’s absolutely nothing you or any of your users who are using Alpine can do on a LAN serving a single A or AAAA record to get over 512 bytes.
No it won’t be IPX because we’re talking about Alpine and Alpine use-cases. Alpine users don’t use IPX.
No it won’t. That’s not how anything works. First you write the code, then you can use it.
I don’t write my code for Alpine, I write it to work on a variety of operating systems and on a variety of use cases. Alpine breaks it. I would hazard a guess that the amount of code written specifically targeting Alpine, rather than targeting Linux/POSIX and being run on Alpine, is a rounding error above zero.
I do not write my code assuming that the network is IPv4 or IPv6. I do not write my code assuming that the name lookup is a hosts file, that it’s DNS, WINS, or any other specific mechanism. I write my code over portable abstractions that let the user select the name resolution mechanism and let the name resolution mechanism select the transport protocol.
That is literally how the entire Berkeley socket API was designed: to allow code to be written without any knowledge of the network protocol and to move between them as required. This is how you wrote code 20-30 years ago that worked over DECNET, IPX, AppleTalk, or IP. The getaddrinfo function was only added about 20 years ago, so is relatively young, but added host resolution to this. Any code that was written using it and the rest of the sockets APIs was able to move to IPv6 with no modification (or recompile), to support mDNS when that was introduced, and so on.
These APIs were specifically designed to be future proof, so that when a new name resolution mechanism came along (e.g. DNS over TCP), or a new transport protocol, it could be transparently supported. If a new name lookup mechanism using a distributed consensus algorithm instead of hierarchical authority comes along, code using these APIs will work on any platform that decides that name resolution mechanism is sensible. If IPv7 comes along, as long as it offers stream and datagram connections, any code written using these APIs will be able to adopt it as soon as the kernel does, without a recompile.
Can you show a single example of a real-world environment that is broken by what Alpine is doing, and that isn’t some idiot trying to put more than one or two addresses in a response?
I don’t know if I agree or disagree with anything else you’re trying to say. I certainly would never say Alpine is “broken” because its telnet can’t reach IPX hosts on my lan, but you can’t be complaining about that because that’d be moronic. Some of the futuristic protocols you mention sound nice, but they can tunnel their responses in DNS too and will work on Alpine just fine. If you don’t want to use Alpine, don’t use Alpine, but switching to it saved me around 60gb of ram, so I was willing to make some changes to support Alpine. This is not one of the changes I had to make.
You have no good options for DNS on Linux. You can’t static link the glibc resolver, so you can either have your binaries break every time anything changes, or use musl and have very slightly broken DNS.
There are some standalone DNS libraries but they’re enormous and have nasty APIs and don’t seem worth using.
There are a great many things I dislike about glibc, but binary compatibility is one thing that they do exceptionally well. I think glibc was the first library to properly adopt ELF symbol versioning and their docs are what everyone else refers to. If they need to make an ABI-breaking change, they add new versioned symbols and keep the old ones. You can easily run a binary that was created with a 10-year-old glibc with the latest glibc shared object. As I recall, the last time glibc broke backwards binary compat was when they introduced symbol versioning.
The glibc resolver is site-specific, which means it’s designed to be customised by the system administrator, and static linking would prevent its primary use-case. It also has nothing to do with DNS, except that it ships with a popular “fallback” configuration that tries looking up hosts on the Internet if they aren’t managed by the local administrator.
Nonsense: I upgrade glibc and my existing binaries still work. You’re doing something else wrong.
`glibc` has a quite stable ABI - it’s a major event when it breaks any sort of backwards compatibility. Sure, it’s not as stable as the Linux userspace ABI, but it’s still extremely rare to encounter breakage.

I suppose I don’t care. I might even think of it as an anti-feature: I don’t want my kubernetes containers asking the Internet where my nodes are. It’s slow, it’s risky, and there’s no point; Kubernetes already has perfect knowledge anyway:
If you bind-mount an /etc/hosts file (or hosts.dbm or hosts.sqlite or whatever) that would be visible instantly to every client. This is a trivial controller that anyone can put in their cluster and it solves this “problem” (if you think it’s a problem) and more:
DNS introduces extra failure-modes I don’t need and don’t want, and having `/etc/resolv.conf` point to trash allows me to effectively prevent containers from doing DNS. DNS can be used to smuggle information in and out of the container, so having a policy of no-DNS-but-Internet-DNS makes audit much easier.

I’ve seen people suggest that installing `bind-tools` with apk will magically solve the problem, but this doesn’t make sense to me… unless there’s some fallback to using `host` or `dig` for DNS lookups… ?

BUT, it’s really odd to me that seemingly so many people use Alpine for smaller containers, but no one has bothered to fix the issue. Have people “moved on”? Is there another workaround people use?
I have a shell script that whips up a hosts file and bind-mounts it into the container. This prevents all DNS lookups (and failure cases), is faster, and allows me to disable DNS on any container that doesn’t need access to the Internet (a cheap way to slow down attackers). It uses `kubectl get -w` to wait for updates so it isn’t spinning all the time.

I can’t think of an advantage to Kubernetes abusing DNS for service discovery - maybe there is one with Windows containers or something else I don’t use - but there are substantial performance and security disadvantages, so I don’t even bother with it.
If you’re interested in PLATO, The Friendly Orange Glow is a great book to read, although I felt like it was a bit more rosy-eyed than technical. It’s definitely an incredibly interesting and capable system.
In my opinion this is a must-read for anyone interested in computer history.
You can explicitly tell tmux not to do this for certain variables. Example:
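A sketch of what that can look like, assuming this refers to tmux’s `update-environment` session option (the variable list below is illustrative, not the original example):

```shell
# update-environment lists the variables tmux copies from the attaching
# client into the session environment. Show the current list:
tmux show-options -g update-environment

# Replace the list, leaving out any variable (e.g. DISPLAY) that tmux
# should stop overwriting on attach:
tmux set-option -g update-environment "SSH_AUTH_SOCK SSH_CONNECTION WINDOWID XAUTHORITY"
```

See `man tmux` under `update-environment` for the default list on your version.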
It’s me! The one who uses Panic’s Nova! https://nova.app
It’s an absolute underdog — a Mac-only GUI IDE - but I love it and have loved it since it launched at the end of 2020, and it’s been receiving regular and substantial updates since then.
And there’s so many tiny great things, like how you can hover over sections of the minimap and see symbols of what you’re hovering over. Details!
I work mostly with JS/TS and use the Nova with (my own) Deno extension. Deno’s rust-based toolchain with Nova’s native performance just feels so light and great.
I was going to chime in that I did use Nova for a couple years for embedded (ARM32/64) development, Python scripting, and shell scripting. I have a fairly complex setup where I need to compile inside Docker and then have a variety of scripts for running or debugging in QEMU or hardware. I am a longtime Mac & iOS developer (20+ years) and really appreciate Panic and their apps, but I was getting random crashes once or twice a week (always reported) and they were killing my productivity when they happened. Combine that with the need to do some STM32 development and debugging, plus VS Code’s built-in debugger support, and I fully switched over to VS Code; in months now I haven’t had a single crash. In addition, I find I am able to script the tasks, launchers, and hotkeys that I want faster and more flexibly, and there is a huge variety of debugger support. VS Code also feels faster. It pains me to leave behind a true native app but I have not looked back since VS Code. I feel like the sweet spot for Nova is web and scripting development, but when it comes to a lot of native development, it can’t really compete the same way VS Code can.
VS Code extensions I particularly like: CodeLLDB, Cortex-Debug, indent-rainbow, PDF Preview, PlatformIO, and Vim (25+ year Vim user as well, depending on task).
Same. I love Nova. It feels like the most “Mac-assed Mac app.” Two other things I’ll add that I’ve enjoyed:
I’ve been starting to dabble with Deno and I’m using your extension. Thanks for making it!
Ohhh yeah the diff view is great! And thanks for trying my extension!
$100 for a text editor? Is it worth it?
For me who makes a web developer’s salary and wants to see investment in tools that are not made with electron, yes!
For me definitely. Programming is what I do for a living, and $100 is a small amount to pay for something which makes my work more pleasant every day.
And ahem, it’s an IDE.
I mean even Panic don’t call it an IDE - I think there’s a certain level of functionality that people expect when the term IDE is used.
I haven’t looked at Panic since it was in pre-release days, but I would be very surprised if it had the functionality an IDE like Jetbrains offers.
That doesn’t mean it’s bad necessarily, it just means it’s a different type of tool.
An electric drill can drive screws, but it isn’t the same thing as an impact driver, which is designed to drive screws.
If having a debugger, symbol browser, and LSP support doesn’t get you into the IDE club I don’t know what does!
Usage detection and refactoring support is the thing that is pretty universally missing from “editors” that people end up calling “IDEs”.
I just downloaded the latest version, opened up a project and it could neither tell me where a symbol was being used, nor gave me any way to intelligently rename or otherwise alter that symbol with automatic changes being applied to the calling locations.
It also doesn’t help IMO that the extension system is apparently based on JavaScript, but from a quick check, at least some of the extensions then require NodeJS and NPM be installed, for extensions that have no specific ties to JS as a language.
I’m not familiar with Jetbrains IDEs. What do they offer beyond what LSP offers? I’ve just recently, over the last couple of years, started using LSP more. I’d be interested to know what I’m still missing compared to a “proper” IDE.
For reference, these are the LSP functions that I most often use:
It’s been a long time since I’ve really used a proper IDE and I don’t think I’ve ever used a Jetbrains one. What else is there?
Well if those things you list actually worked in Nova maybe I’d be ok with that, but I couldn’t get it to work any more than a glorified “find substring in project” (more details here: https://lobste.rs/s/nbr9bl/what_s_your_editor_setup#c_s0ewvx)
Jetbrains does support a lot more though:
Sorry, I was hijacking this thread to ask about Jetbrains - I don’t know anything about Nova! Maybe Nova is buggy. For the things I listed, LSP works really well.
Thank you for the examples of other things that Jetbrains does. You’ve got me interested now: I wonder which IDE functionality is most used? Does Jetbrains collect telemetry? Come to think of it, I wonder which LSP functionality I actually use often. Naïvely, this seems like something that could actually be amenable to empirical study.
Yeah I’m not sure really - and it probably depends a lot on the language involved. I would imagine that things like this are much more heavily used, as you go up the scale in terms of how powerful the language’s type system is.
Yeah, you would think that a powerful static type system would allow more things like this to be automated. I have seen some impressive demonstrations with Idris but most IDE functionality seems to focus on Java-style OO languages rather than anything fancier. Is there anything Jetbrains-like for Haskell?
I don’t believe there’s any first-party support, but there seems to be at least one third party plugin for it - whether it supports full/any refactoring support or not I don’t really know, Haskell isn’t really in my wheelhouse.
I have all of these features, you just haven’t set anything up.
You need to install an extension which supports the type of project you’re opening (e.g. the Rust extension, the TypeScript / Deno extension, Gleam extension, etc.). The extension is responsible for hooking up a LSP with Nova’s built-in LSP client which enables these features.
Because the extension API uses JS, some people build their extensions using the NPM ecosystem. The final extension is run in a non-Node runtime. There’s certainly no requirement to have NodeJS or NPM to be installed for an extension to work, that’s on the extension author.
Ok, so I went back to it, because hey maybe I just missed something. Believe me, I’d love a lighter IDE than Jetbrains. I’d even accept some reduced functionality of esoteric things like vagrant support, or run-on-remote-interpreter support, if necessary.
The project in question happens to be PHP - a language Nova supports “out of the box”. The only extra extension I found that suggests it adds more “language support” for PHP is the one I mentioned above, which says, very clearly in its description:
Anyway. So I tried with the built in language extension, because hey, it should work right?
“Go to definition” works.. it’s janky and kind of slow but it works (or so I thought).
As far as I can tell, that’s it.
I found “Jump to (Previous|Next) Instance” which just highlights the same class name (in a comment as it happens) in the currently open file. Not sure if that’s what it’s supposed to do, but it’s completely unintuitive what this is meant to do, or why I would want that.
I tried Cmd-Click just by reflex from JetBrains muscle memory - and it shows me results when invoked on methods.. but besides being ridiculously slow, it’s woefully inaccurate. It just seems to show any method call that’s the same method name, regardless of the class it’s invoked on. Further investigation shows that’s actually the same behaviour for “go to Symbol” on methods too.
I get it - some language extensions will have more functionality than others - but this is a language they list with built in support, and it has a lot of capability to specify types, right in the language. If it can’t infer the correct symbol reference from php8.x typed code, I can’t even begin to imagine what it’s like with a language like Javascript or Python or Ruby.
Maybe it all works wonderfully if you specify an interpreter path for the project? I don’t know. It doesn’t let me specify a remote interpreter (e.g. in a VM or Container, or an actual remote machine), and I’m not about to go down the rabbit hole of trying to get the same language runtime and extension versions installed on macOS just to test that theory.