I’ve tried leaning on LLMs on a couple of test projects, and ended up disappointed (I’ve tried Zed with Claude 3.7, ChatGPT 4o & o3-mini, and GitHub Copilot).
It’s okay for 100-line programs, but gets cumbersome and counterproductive around 1000 lines. It creates a promising start, but it can’t finish what it started.
There’s no good way to get an LLM to refactor code other than by re-generating it. When a change needs to span more than a small function, this becomes slow and tedious. I can’t even leave it to rewrite a long program for as long as it needs to, because it may randomly stop halfway. It requires babysitting. It never does things perfectly on the first try, so either I have to fix it myself, or wait again, check again, and wait again.
In non-toy projects, the context window size becomes a problem too. Even if I just want to add a new function, the LLM needs to know the structure of my program – the functions available, the database schema, data structures (otherwise it’s forced to hallucinate them or reinvent duplicates). For programs that are more than a couple of files, providing the right context is tedious, and maxing out the context window gets expensive.
I’ve also tried Copilot for navigating large projects like LLVM and the Rust compiler, and it had no clue about them. It clearly just searched the codebase and tried to guess the answer from the results. That’s not smart, that’s just a second-hand retelling of grep.
I find I get the most value out of the Codeium VS Code extension for larger projects. It primarily provides a smarter autocomplete, and you can also select a block of code, and prompt it to refactor it in a specific way.
Absolutely no idea how it would scale to projects the size of LLVM/rustc though. It gets sluggish and produces worse results on larger files, so there are definitely some context limitations.
I’ve tried that as well, but I don’t get much value out of autocomplete. In the autocomplete mode there isn’t room to give precise instructions, so it doesn’t guess that well. The latency is high, so even when it guesses what I want, it doesn’t feel like it’s saving me time.
2025-02-21 07:28 pushcx Story: Linus replies to R4L controversy
Action: deleted story
Reason: Don’t link into projects’ issue trackers and discussion spaces to brigade Lobsters readers into their arguments.
Yes, extremely silly that the direct link was censored, but the exact same content via Phoronix is allowed. There were some good discussions in the deleted thread.
Thanks for the link here. Yeah, it’s not about the content of Linus’s email here, it’s about linking our 100k+ readers into projects’ community spaces. Linux is sort of the worst possible first example here because it’s a huge stable project and the friction of signing up to a single-purpose high-volume mailing list means it’s especially unlikely that we’re going to meaningfully disrupt Linux. The other end of the spectrum is linking into a small project’s GitHub issue, where most of our readers are going to be logged in and looking at that inviting <textarea> on a contentious topic with little context or history.
If the rule is “don’t submit links into projects’ spaces” it’s a clear rule, and I admit that it’s overkill for this specific situation. “Don’t submit links into projects’ spaces unless they’re big and probably fine like Linux or Mozilla (but a big project like Firefox not a small one like NSS)” is an unending series of judgment calls that are often going to be about really contentious issues that feel like they justify an exception to our rules or to common courtesy. It’s an imperfect rule, but there’s value in predictability and legibility.
If this compromise isn’t clear from what I’ve written in that code and the guidelines, I’m very open to suggestions for improving it. Doubly so if it’s the wrong compromise and there’s a path to us having better conversations and being a better neighbor on the web. As a reminder, the next office hours stream is in ~2 hours, and this is the kind of thing I started office hours to talk about, in the hopes that folks find that more convenient or less formal than a meta thread or emailing me.
I feel the tradeoff is that this instead links to blogspam that barely summarizes it, then links to it anyway. They get the ad revenue. Maybe if we waited for a better thing to post about it, i.e. an LKML article or something in that ballpark?
The small benefit is that it’s one more small step that makes bad behavior less likely, but you’re right, it does incentivize lazy sites like this one.
You probably meant to write LWN? I’ve been mentioning them a lot in this running discussion about what our rules should be, I agree they’re a consistently excellent source. I don’t want to take a hard dep on them so I try to write things like “neutral third-party” but yeah, they’re first in my thoughts as well.
One aspect of getting good writeups of these things is false urgency, or maybe that urgency depends on proximity. To people who are involved or affected by the topic, Linus posting a single email is a significant development. They want to know immediately because it could significantly affect their work. So they want to see the primary source, or a repost of it. But anyone outside that narrow circle needs a writeup that explains the topic and puts it into the context of the last few months of news. That takes a lot more time to produce and sometimes it doesn’t happen. So even for obviously topical stories we have two very different kinds of readers. A significant part of the brigading problem is when the second, bigger group hits an update appropriate for the narrow group. They can’t contextualize it, but if it hits a hot button like the morality of licensing or Linus insulting people, it can generate a lot of outrage that makes them feel like they need to do something, and that unacceptable behavior like trolling is justified by the circumstances.
For a long time Lobsters has avoided being a source of brigading by trying to have norms that are kinder than average. That lowers the temperature of every discussion, makes us less appealing to the serious trolls, and makes it less likely that any particular discussion is going to gather enough outrage to hit the critical mass where our readers brigade into a project. But much bigger than our active users, our readership has been growing steadily, so even as our norms reduce the percentage chance of bad behavior, I’m worried that it’s not reducing enough to offset growth. If the percent risk drops by half but the readership grows 10x, we have a higher absolute risk.
To bring it back to a specific example, last summer Nix was having a running governance crisis around the project’s direction, corporate/government involvement, and codes of conduct. There was a series of stories about breaking news and new dimensions to the broader story about who should be running Nix and how, and it was tons of hot-button issues. A lot of their work happens on GitHub and a bunch of the issues tracking different proposals, petitions, and governance actions were submitted here, so all of the ingredients for brigading were present and temperatures were rising. Some of the links were submitted by the people directly involved. To put it charitably they were advocating and organizing for better governance; to put it uncharitably they were trying to brigade our readers into the project to overwhelm it. I did my best to separate the two and I think we discussed very important, hard topics while being a good neighbor on the web, but it’s why I added to the brigading guidelines about preferring not to link into project spaces.
To sum up, the rule against linking into community spaces trades off between a lot of hard topics. I’m trying to reduce judgment calls and our risk of harming projects while maintaining high-quality discussions on important topics. Sacrificing urgency draws a predictable, clear line about what links are acceptable, though I know that’s especially frustrating to people who are most involved with breaking news. So that’s why my last message called the rule a compromise, and I again encourage folks to help the site figure out better ones.
Probably a silly question, but would something like an https://archive.is snapshot of the target be enough of a barrier to brigading? Could even add that functionality internally.
Yeah, I wasn’t sure about this either. I understand the rationale for the no-brigading, but I don’t see much difference in posting this URL vs LKML directly.
(Also hope there’s a way we can still discuss Linus’s statement regardless)
Was really missing this functionality a month ago when trying to figure out what to do with an old PyPI project we didn’t want to delete but weren’t going to update. Thanks to all who worked on it!
Not just that, it’s also the worst pick for such a move. The non-commercial clause in CC is very unspecific, meaning that it’s liberally interpreted in some jurisdictions, and very strict in others. They’d be better off with one of the “fair source” licenses that at least has more clarification.
For all we know, it could be that the investors (I assume they were venture-backed?) declared any source release must prevent any potential reboots or direct competitors. Capital is only a fan of open source when it stands to profit from it, after all.
Source-available is still fine by me. You can run a personal (or non-profit, or whatever) instance of it to keep the spirit of the tool alive, and you can learn from what they did to apply the ideas to your own engineering problems. Not every source dump needs to be OSI-approved or reusable commercially.
Sure, I’d have preferred an AGPLv3 code dump, but startup lawyers are extremely allergic to the letters “GPL” and immediately shut down any conversation that includes them, in my rough experience over the years (almost always a degree or two removed from the lawyers themselves, to be fair).
Clearly intentional. I’m not sure what the point of this was. Potentially a PR move? Clearly it would spark some WTFs… and as someone who never even heard of Campsite until now, it worked.
Description: SecureDrop is an open source whistleblower submission system used by journalists to communicate with sources. Through its hardened architecture and the use of the Tor network, it offers whistleblowers strong security and anonymity protections. Used by more than 70 news organizations worldwide, including The New York Times, The Washington Post, The Guardian, and Al Jazeera, SecureDrop is composed of a variety of components, including SecureDrop Server (original server and submission interface developed by Aaron Swartz), SecureDrop Workstation (Qubes OS-based journalist-facing management system), SecureDrop Protocol (future end-to-end encrypted system).
Tech stack: Debian/Qubes/Fedora/Tails; Python (primarily) and Rust (up and coming), also SaltStack (for Qubes) and misc. bash and Perl
FPF pays 100% of premiums for health, vision, and dental insurance for employees, spouses, and dependents.
I think this work and mission are fantastic, but compensation for the role is on the low end. Am I understanding that the healthcare benefit is free on this point?
Correct, FPF pays for the health plan but you still have to pay any deductible/copay/etc.
And yes, the compensation is on the low end unfortunately; we’re still a relatively small non-profit. I personally took a pay cut to work here because I enjoy the job and the culture, but I understand that not everyone can afford to do so.
We didn’t want to impose additional complexity and costs by including an external managed switch IC. One port is 1GBit/s capable, while the other features a speed up to 2.5GBit/s. This is a limitation of the chosen SoC.
Kind of strange reasoning; a router needs a bunch of Ethernet ports in my opinion. As someone who’s done firmware for a few devices with discrete managed switch ICs, that stuff is cheap and works super well. PHYs are usually integrated. You hook up I2C for management, R(G)MII to the SoC, and Ethernet ports to the outside, and you’re done.
Yeah, I don’t know why they didn’t put at least a jellybean switching IC on the gigabit controller. Seems like a strange omission for an all-in-one compact router.
My container host at home uses Quadlet to do container stuff directly from systemd using Podman. Pretty nice and simple. The only annoyance I have run into is getting it to pull containers from a private registry without TLS. This is not production in the sense that I serve a business from it, but it currently runs about 16 different local services I run at home.
It is exceedingly simple compared to anything having to do with k8s.
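For anyone who hasn’t seen Quadlet, the whole mechanism is a small ini-style unit file that Podman’s systemd generator turns into a service. A minimal sketch (the image, service name, and port are hypothetical placeholders, not from the comments above):

```ini
# ~/.config/containers/systemd/whoami.container  (user unit; name is hypothetical)
[Unit]
Description=Example containerized web service

[Container]
Image=docker.io/traefik/whoami:latest
# Publish only on localhost, so a host-level reverse proxy can front it
PublishPort=127.0.0.1:8080:80
# Opt in to podman-auto-update pulling newer images
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, this shows up as `whoami.service` and is managed like any other systemd unit.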
We do something similar, but we use Docker in systemd. In order to avoid the iptables problem, we only publish ports on localhost, like -p 127.0.0.1:8001:80. We also run HAProxy installed directly on the host, which forwards to the correct containers.
I’ve even made a custom solution for zero-downtime deploys and failovers where, on deploy, we just swap out the backend server via the admin API socket in HAProxy. For soft failover, we pause the traffic, wait for the other server to start the container, and then swap backends and unpause. (Pausing is done by setting maxconn = 0.)
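The pause/swap/unpause dance above can be sketched with HAProxy’s Runtime API. This is a minimal sketch, not the commenter’s actual tooling: the frontend, backend, and server names (`fe_main`, `be_app`, `app1`) and the socket path are all hypothetical.

```shell
#!/bin/sh
# Zero-downtime swap sketch against HAProxy's Runtime API (admin socket).
# All names below are hypothetical assumptions, not taken from a real config.
SOCK=/run/haproxy.sock

# Pause: refuse new connections by setting the frontend's maxconn to 0.
pause_cmd()  { echo "set maxconn frontend fe_main 0"; }

# Swap: repoint the backend server at the freshly started container's port.
swap_cmd()   { echo "set server be_app/app1 addr 127.0.0.1 port $1"; }

# Unpause: restore the frontend's connection limit.
resume_cmd() { echo "set maxconn frontend fe_main 1000"; }

# In production each command string would be piped into the socket, e.g.:
#   pause_cmd | socat stdio "$SOCK"
pause_cmd
swap_cmd 8002
resume_cmd
```

HAProxy queues (rather than rejects) connections while maxconn is 0, which is what makes the pause "soft" from the client’s point of view.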
My home setup is also mostly all containerized through the use of quadlet/systemd, and probably the most important thing for me is that it all just works with incredibly minimal babysitting. I also use podman-auto-update so I don’t need to worry about updates and restarts and whatnot.
I’ve also been doing the podman+systemd thing for 4.5 years. I haven’t moved to quadlet yet, but now that I’m on Podman 5 I should have that option. Even doing it “the hard way” has been very reliable and manageable, but the new way definitely makes getting up and running easier.
On Arch and even Ubuntu-based systems, I have found that Podman regularly breaks after updates. It happened enough to force me to switch to Docker, which seems much more stable.
Am I the only one with this experience? Am I doing it wrong? Or is everyone on Redhat OSes, where Podman probably works more reliably?
I’m using Podman on macOS and FreeBSD. The macOS version is somewhat cheating; it’s really podman-remote plus a little bit of VM management, so it’s actually running a Fedora CoreOS VM that runs the containers. I’ve not had problems with either.
No, but there’s no reason that containers have to have a Linux ABI. OCI has ratified specs for Linux and Windows containers. Solaris uses the Linux ABI in branded zones, but the (not yet final) FreeBSD container spec uses native FreeBSD binaries. I’d like to be able to run Darwin binaries in containers on macOS. Apple had a job ad up recently for kernel engineers to work on OCI support, so hopefully that will come soon. The immutable system image model of macOS and APFS providing lightweight CoW snapshots should mean that a lot of the building blocks are there.
Been doing it on Debian for over 4 years. I think there was one update that cost me a few hours, but no, I haven’t run into any kind of frequent breakage.
On second thought, I was probably using bleeding-edge Podman (even on Ubuntu) because the Debian version was really old and lacking an important feature or bugfix that I needed.
Hmm, my experience was something like that too. Issues with the networking drivers and weird almost but not quite compatibility with Docker Compose (overpromising and under delivering), etc. (can’t remember everything). I tried going back to it twice but finally gave up as each time I ended up wasting hours over something, whereas Docker has always just worked. Some of my issues could have been a result of using Arch, and perhaps Podman is in a much better state these days, but I’ve been burned too many times by it at this point to give it another go (I don’t trust it anymore) and I don’t really feel like I have anything to gain from it now anyway. Rootless Docker works well enough for me.
Wikipedia (through the MediaWiki software) mostly decided to keep outputting <big> for similar reasons (see e.g. this discussion). If browsers ever started seriously discussing dropping support, we’d just have our own extension to HTML; I drafted an example of what that might look like at https://www.mediawiki.org/wiki/User:Legoktm/HTML%2BMediaWiki. I don’t expect this to ever be a practical problem though.
Most of my Quadlet user units have a “sleep 30” before starting because there’s no easy way to wait for networking to be ready, the new functionality will be really nice and allow me to remove that.
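For reference, the usual non-sleep workaround is ordering the unit after the standard systemd target (a sketch; note that for rootless *user* units this target historically wasn’t populated, which is exactly why people fell back to `sleep 30` — newer Podman versions ship a user-level wait service to fix that):

```ini
# In the Quadlet .container file's [Unit] section, replacing the sleep hack
[Unit]
Wants=network-online.target
After=network-online.target
```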
What’s missing from this article is that the string has been misused in DoS attacks, where people made otherwise benign software receive and store this string in a database, only to find its files corrupted by antivirus removing the whole file.
Imagine an attacker covering their traces by getting an HTTP server’s access.log deleted for containing this string.
I printed big vinyl stickers with EICAR in QR code form. I leave them on car bumpers and other high visibility locations in the hope that ALPRs and other public surveillance stacks will self-destruct. 😘
It’s only a little more trouble to achieve this by grabbing a sample of an actual virus. Or even just packing a perfectly benign executable with a packer commonly used by virus distributors, for some anti-malware setups.
So I’m not saying you’re wrong, but the problem isn’t the existence of a well-known string so much as that anti-malware software is often very low quality.
I did not believe you were blaming EICAR. I replied the way I did because I have personally worked with people who would read your comment and conclude that EICAR was a genuine hazard to their systems.
Source: I saw a web server log get deleted in the wild this way, once. The EICAR test was in the query string on the very first request after the log was rotated. When we finally untangled what had happened, the manager’s response was to ask us to write a WAF rule to drop such requests without logging them. No thought was given to opening a ticket with the vendor, and excluding the server log files from AV scans was rejected.
Unfortunately not. But a cursory internet search landed me on https://security.stackexchange.com/questions/66699/eicar-virus-test-maliciously-used-to-delete-logs where the main reply claims that “it should not be possible” to perform any DoS attack using the EICAR test string, because for it to work it needs to be at the start of the file. However, the top-voted comment underneath contradicts this with a specific example where a vendor apparently didn’t read the spec and also matched within files.
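That start-of-file rule is in the EICAR spec itself: scanners are only supposed to flag the 68-byte string when it is the first thing in a file. A quick sketch that reconstructs the string from two halves (so this script doesn’t itself trip a naive mid-file matcher):

```shell
#!/bin/sh
# Build the well-known 68-byte EICAR test string from two halves.
# Single quotes keep the literal $ and \ characters intact.
p1='X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-'
p2='ANTIVIRUS-TEST-FILE!$H+H*'
eicar="$p1$p2"

# Per the spec, detection should only apply when these 68 bytes start the
# file; the log-deletion incidents came from engines matching it anywhere.
echo "${#eicar}"   # prints 68
```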
one of my favorite memories as a teenager (and definitely not into my early 20s) after learning about this was pasting this string in a crowded IRC server and watching the people behind a vigilant firewall (and plaintext IRC) drop out of the channel
Aren’t the detection systems aware of EICAR nowadays? I use it on a day-to-day basis as a test file for my part of testing the system dealing with security. I learned about EICAR from an educational perspective, where it can be used for tests, not from a harmful perspective.
This is the right decision and it has nothing to do with “US law” as some of the lwn people seem to be talking about. Russia is a dictatorship with sophisticated state-powered cyberwarfare capabilities. Regardless of whether a Russian-based maintainer has malicious intent towards the Linux kernel, it’s beyond delusional to think that the Russian government isn’t aware of their status as kernel developers or would hesitate to force them to abuse their position if it was of strategic value to the Russian leadership. Frankly it’s a kindness to remove them from that sort of position and remove that risk to their personal safety.
It may or may not have been the right decision, but it was definitely the wrong way to go about it. At the very least there should have been an announcement and a reason provided. And thanks for their service so far. Not this cloak and dagger crap.
Indeed this was quite the inhumane way to let maintainers with hundreds of contributions go, this reply on the ML phrases it pretty well:
There is the form and there is the content – about the content one
cannot do much, when the state he or his organization resides in gives
an order.
But about the form one can indeed do much. No "Thank you!", no "I hope
we can work together again once the world has become sane(r)"... srsly,
what the hell.
Edit: There is another reply now with more details on which maintainers were removed, i.e. people whose employer is subject to an OFAC sanctions program - with a link to a list of specific companies.
I hope we can work together again once the world has become sane(r)
This would be a completely inappropriate response because it mischaracterizes the situation at hand: if the maintainers want to continue working on Linux, they only have to quit their jobs at companies producing weapons and parts used to kill Ukrainian children. It has nothing to do with the world being (in)sane, and everything to do with sanctions levied against companies complicit in mass murder.
Yes, the decision is reasonable whether or not it is right, but the communication and framing is terrible. “Sorry, but we’re forced to remove you due to US law and/or executive orders. Thanks for your past contributions” would have been the better approach.
This is true of quite a few governments, including those you think are friendly, and it is a huge blind spot to believe otherwise. Dictatorship doesn’t have anything to do with it, it isn’t as though these decisions are made right at the top.
Do you have the same reaction to contributions from US-based companies that have military contracts? While the US isn’t a dictatorship, the security and foreign policy apparatuses are very distant from democratic feedback.
Regardless of whether a Russian-based maintainer has malicious intent towards the Linux kernel, it’s beyond delusional to think that the Russian government isn’t aware of their status as kernel developers or would hesitate to force them to abuse their position if it was of strategic value to the Russian leadership.
It’s hard to single out Russia for this in a post-Snowden world. Not to mention that if maintainers can be forced to do something nefarious, then they can do the same thing of their own will or for their own benefit.
Frankly it’s a kindness to remove them from that sort of position and remove that risk to their personal safety.
The Wikimedia Foundation has taken similar action by removing Wikipedia administrators from e.g. Iran as a protective measure (sorry, don’t have links offhand), but even if that’s the reason, the Linux actions seem to have a major lack of compassion for the people affected.
It wasn’t xenophobia. The maintainers who were removed all worked for companies on a list of companies that US organizations and/or EU organizations are prohibited from “trading” with.
The message could have (and should have) been wrapped in a kinder envelope, but the rationale for the action was beyond the control of Linus & co.
Here’s what Linus has said, and it’s more than just “sanction.”
Moreover, we have to remove any maintainers who come from the following countries or regions, as they are listed in Countries of Particular Concern and are subject to impending sanctions:
Burma, People’s Republic of China, Cuba, Eritrea, Iran, the Democratic People’s Republic of Korea, Nicaragua, Pakistan, Russia, Saudi Arabia, Tajikistan, and Turkmenistan.
Algeria, Azerbaijan, the Central African Republic, Comoros, and Vietnam.
For People’s Republic of China, there are about 500 entities that are on the U.S. OFAC SDN / non-SDN lists, especially HUAWEI, which is one of the most active employers from versions 5.16 through 6.1, according to statistics. This is unacceptable, and we must take immediate action to address it, with the same reason
The same could be said of US contributors to Linux, even moreso considering the existence of National security letters. The US is also a far more powerful dictatorship than the Russian Federation, and is currently aiding at least two genocides.
The Linux Foundation should consider moving its seat to a country with more Free Software friendly legislation, like Iceland.
In other words, refusing to comply with international sanctions. This is in fact an incredibly high bar to clear for Iceland. It would require the country to dissociate itself from the Nordic Council, the EEA, and NATO.
a kernel dev quoted in the Phoronix article wrote:
Again, we’re really sorry it’s come to this, but all of the Linux infrastructure and a lot of its maintainers are in the US and we can’t ignore the requirements of US law. We are hoping that this action alone will be sufficient to satisfy the US Treasury department in charge of sanctions and we won’t also have to remove any existing patches.
that made me think it was due to US (not international) sanctions and that the demand was made by a US body without international jurisdiction. what am I missing?
Without a citation of which sanction they’re referencing it’s really hard to say. I assumed this sanction regime was one shared by the US and the EU, and that Iceland would follow as a member of NATO and the EEA. If it is specific to the US, like their continued boneheaded sanctions against Cuba, then basing the Linux Foundation in another country would prevent this specific instance (a number of email addresses removed from a largely ceremonial text file in an open source project) from happening again.
Note however that Icelandic law might impose other restrictions on the foundation’s work. The status of taxation as a non-profit is probably different.
even if it has to do with international sanctions, their interpretation and enforcement seems to have been particular to the US. it reeks of “national security” with all the jackbootery that comes with it.
Thunderbird has one of the worst user experiences I’ve ever seen. It takes seconds to delete an email from my inbox, the UI thread hangs all the time, if I click on a notification, it makes a black window because the filter moved the email while the notification was up and they didn’t track it by its ID or some basic mistake like that. Every update the UI gets clunkier and slower. Searching has a weird UI and fails to find matches. I could go on and on, there are so many UX issues with this worthless software. I have no idea what’s going on over at Mozilla. I think the org just needs to be burned to the ground.
I use it as my daily driver for now, but I feel like I’m on a sinking ship surrounded by nothing but ocean.
I use K-9 on Android and it’s fine… the idea of transforming it into Thunderbird blows my mind.
Yep, same boat. 100k+ emails, lots of filters, etc., it just works honestly. Thunderbird has only gotten better for me since Mozilla stopped supporting them.
Out of curiosity, are you using POP or IMAP? I imagine the performance characteristics would be very different, given their different network patterns.
I run Dovecot as an IMAP server on a Thinkpad which was first sold in 2010 with an SSD in it. I keep thinking I should change the hardware but it just keeps trucking & draws less than 10W so it never seems worth the effort.
It’s stuck behind a 100Mbit ethernet connection (for power saving reasons) which is roughly equivalent to my Internet connection but the latency is probably lower than it would be to an IMAP server on the wider Internet.
Having exclusive use of all that SSD bandwidth probably helps too of course.
I use K-9 on Android and it’s fine… the idea of transforming it into Thunderbird blows my mind.
Branding is a powerful concept. You think Outlook on iOS or Android shares anything with the desktop app? Nope, it’s also a rebranded acquisition. It is kind of funny the same happened with Thunderbird.
Which leads to funny things where Outlook for mobile gets features before the desktop version (unified inbox and being able to see the sender’s email address as well as their name come to mind).
I don’t have it in front of me to double check but yeah the message UI is weird. It shows their name and if you hover over it then it pops up a little contact card that also doesn’t show the actual email address. IIRC hitting reply helps because it’s visible in the compose email UI.
I use K-9 on Android and it’s fine… the idea of transforming it into Thunderbird blows my mind.
i thought the idea in Mozilla’s head would be more like “okay we have this really good base for an email app (k-9), lets support it, and add our branding to it and ship it as Thunderbird “
Thunderbird does a lot of disk I/O on the main thread. On most platforms, this responds in, at most, one disk seek time (<10ms even for spinning rust, much less for SSDs, even less for things in the disk cache), so it typically doesn’t hurt responsiveness. On Windows, Windows Defender will intercept these reads and scan them, which can add hundreds of milliseconds or even seconds of latency, depending on the size of the file.
This got better when Thunderbird moved from mbox to Maildir by default, but I don’t think it migrates automatically. If you have a 1 GB mbox file, Windows Defender will scan the whole thing before letting Thunderbird read one 1 KiB email from it. This caused pause times of 20-30 seconds in common operations. With Maildir, it will just scan individual emails. This can still be slow for things with big attachments, but it rarely causes stutters of more than a second.
Thunderbird was originally a refactoring of Mozilla Mail and News into a stand-alone app. Mozilla Mail and Newsgroups was the open-source version of Netscape Mail and Newsgroups (both ran in the same process as the browser, so a browser crash took out your mail app, and browser crashes happened a few times a day back then). Netscape Mail and Newsgroups was released in 1995.
It ran on Windows 3.11, Windows 95, Classic MacOS, and a handful of *NIX systems. Threading models were not present on all of them, and did not have the same semantics on the ones that did. Doing I/O on the UI thread wasn’t a thing, doing I/O and UI work on the one thread in the program was.
It’s been refactored a lot since those days, but there’s still a lot of legacy code. Next year, the codebase will be 30 years old.
I really agree. I’m a big fan of K-9 Mail and hearing that Thunderbird was taking it over did not sound like good news at all.
I’ll disagree with one of your points though – deleting an email happens instantly and usually by accident. Hitting undo and waiting for there to be any sign in the UI that it heard you, now that takes forever.
While NSA did not possess the equipment required to access the footage from the media format in which it was preserved, NSA deemed the footage to be of significant public interest and requested assistance from the National Archives and Records Administration (NARA) to retrieve the footage. NARA’s Special Media Department was able to retrieve the footage contained on two 1’ AMPEX tapes and transferred the footage to NSA to be reviewed for public release.
This should be AMPEX tapes, not APEX. Looks like this was a mistake in the original – I’ve seen the mistake quoted elsewhere – and it also looks like the original has been corrected.
I’ve tried leaning on LLMs on a couple of test projects, and ended up disappointed (I’ve tried Zed with Claude 3.7, ChatGPT 4o & o3-mini, and GitHub copilot).
It’s okay for 100-line programs, but gets cumbersome and counterproductive around 1000 lines. It’s creates a promising start, but it can’t finish what it started.
There’s no good way to get an LLM to refactor code other than by re-generating it. When a change needs to span more than a small function, this becomes slow and tedious. I can’t even leave it to rewrite a long program for as long as it needs to, because it may randomly stop half way. It requires babysitting. It never does things perfectly on the first try, so either I have to fix it myself manually, or wait again and check again and wait again.
In non-toy projects, the context window size becomes a problem too. Even if I just want to add a new function, the LLM needs to know the structure of my program – the functions available, the database schema, data structures (otherwise it’s forced to hallucinate them or reinvent duplicates). For programs that are more than a couple of files, providing the right context is tedious, and maxing out the context window gets expensive.
I’ve also tried Copilot for navigating large projects like LLVM and the Rust compiler, and it had no clue about them. It clearly just searched the codebase and tried to guess the answer from the results. That’s not smart, that’s just a second-hand retelling of grep.
I find I get the most value out of the Codeium VS Code extension for larger projects. It primarily provides a smarter autocomplete, and you can also select a block of code and prompt it to refactor it in a specific way.
Absolutely no idea how it would scale to projects the size of LLVM/rustc though. It definitely gets sluggish and gives worse results on larger files, so there are clearly some context limitations.
I’ve tried that as well, but I don’t get much value out of autocomplete. In the autocomplete mode there isn’t room to give precise instructions, so it doesn’t guess that well. The latency is high, so even when it guesses what I want, it doesn’t feel like it’s saving me time.
Dupe of https://lobste.rs/s/dw09hf/ai_where_loop_should_humans_go
This is the one on the guy’s personal blog, so maybe this should be the primary?
Do folks know if the Ryzen AI chips in the Framework 13 are actually suitable for running LLMs locally or is it just marketing buzzwords?
I was under the possibly mistaken impression that you needed like, a dedicated GPU for any reasonable performance.
It’s just a marketing rebrand for laptop Zen 5 chips; it doesn’t really do AI any better.
I’ve passed the link onto ArchiveTeam, who are looking into it.
@pushcx the actual link to the lkml was deleted:
Is this link allowed?
Yes, extremely silly that the direct link was censored, but the exact same content via phoronix is allowed. There was some good discussion in the deleted thread.
@pushcx posted in the modlog: https://github.com/lobsters/lobsters/commit/dca5f5674997d6d39172490d623d48b65996ebab - which explains the rationale regarding brigading.
Thanks for the link here. Yeah, it’s not about the content of Linus’s email here, it’s about linking our 100k+ readers into projects’ community spaces. Linux is sort of the worst possible first example here because it’s a huge stable project and the friction of signing up to a single-purpose high-volume mailing list means it’s especially unlikely that we’re going to meaningfully disrupt Linux. The other end of the spectrum is linking into a small project’s GitHub issue, where most of our readers are going to be logged in and looking at that inviting <textarea> on a contentious topic with little context or history.
If the rule is “don’t submit links into projects’ spaces” it’s a clear rule, and I admit that it’s overkill for this specific situation. “Don’t submit links into projects’ spaces unless they’re big and probably fine like Linux or Mozilla (but a big project like Firefox, not a small one like NSS)” is an unending series of judgment calls that are often going to be about really contentious issues that feel like they justify an exception to our rules or to common courtesy. It’s an imperfect rule, but there’s value in predictability and legibility.
If this compromise isn’t clear from what I’ve written in that code and the guidelines, I’m very open to suggestions for improving it. Doubly so if it’s the wrong compromise and there’s a path to us having better conversations and being a better neighbor on the web. As a reminder, the next office hours stream is in ~2 hours, and this is the kind of thing I started office hours to talk about, in the hopes that folks find it more convenient or less formal than a meta thread or emailing me.
I feel the tradeoff is that this instead basically links to blogspam that barely summarizes it, then links it anyway. They get the ad revenue. Maybe we should have waited for a better thing to post about it, i.e. an LKML article or something in that ballpark?
The small benefit is that it’s one more small step that makes bad behavior less likely but, you’re right, it does incentivize lazy sites like this one.
You probably meant to write LWN? I’ve been mentioning them a lot in this running discussion about what our rules should be, I agree they’re a consistently excellent source. I don’t want to take a hard dep on them so I try to write things like “neutral third-party” but yeah, they’re first in my thoughts as well.
One aspect of getting good writeups of these things is false urgency, or maybe that urgency depends on proximity. To people who are involved or affected by the topic, Linus posting a single email is a significant development. They want to know immediately because it could significantly affect their work. So they want to see the primary source, or a repost of it. But anyone outside that narrow circle needs a writeup that explains the topic and puts it into the context of the last few months of news. That takes a lot more time to produce and sometimes it doesn’t happen. So even for obviously topical stories we have two very different kinds of readers. A significant part of the brigading problem is when the second, bigger group hits an update appropriate for the narrow group. They can’t contextualize it, but if it hits a hot button like the morality of licensing or Linus insulting people, it can generate a lot of outrage that makes them feel like they need to do something, and that unacceptable behavior like trolling is justified by the circumstances.
For a long time Lobsters has avoided being a source of brigading by trying to have norms that are kinder than average. That lowers the temperature of every discussion, makes us less appealing to the serious trolls, and makes it less likely that any particular discussion is going to gather enough outrage to hit the critical mass where our readers brigade into a project. But much bigger than our active users, our readership has been growing steadily, so even as our norms reduce the percentage chance of bad behavior, I’m worried that it’s not reducing enough to offset growth. If the percent risk drops by half but the readership grows 10x, we have a higher absolute risk.
To bring it back to a specific example, last summer Nix was having a running governance crisis around the project’s direction, corporate/government involvement, and codes of conduct. There was a series of stories about breaking news and new dimensions to the broader story about who should be running Nix and how, and it was tons of hot-button issues. A lot of their work happens on GitHub and a bunch of the issues tracking different proposals, petitions, and governance actions were submitted here, so all of the ingredients for brigading were present and temperatures were rising. Some of the links were submitted by the people directly involved. To put it charitably they were advocating and organizing for better governance; to put it uncharitably they were trying to brigade our readers into the project to overwhelm it. I did my best to separate the two and I think we discussed very important, hard topics while being a good neighbor on the web, but it’s why I added to the brigading guidelines about preferring not to link into project spaces.
To sum up, the rule against linking into community spaces trades off between a lot of hard topics. I’m trying to reduce judgment calls and our risk of harming projects while maintaining high-quality discussions on important topics. Sacrificing urgency draws a predictable, clear line about what links are acceptable, though I know that’s especially frustrating to people who are most involved with breaking news. So that’s why my last message called the rule a compromise, and I again encourage folks to help the site figure out better ones.
Probably a silly question, but would something like an https://archive.is snapshot of the target be enough of a barrier to brigading? Could even add that functionality internally..
Yeah, I wasn’t sure about this either. I understand the rationale for the no-brigading, but I don’t see much difference in posting this URL vs LKML directly.
(Also hope there’s a way we can still discuss Linus’s statement regardless)
Here’s the LKML link.
I enjoyed the post, but found it funny that the code examples are all in Rust, but the linked encoder/decoder is implemented in TypeScript :)
Rust doesn’t work in a browser, I guess :)
Except wasm
Was really missing this functionality a month ago when trying to figure out what to do with an old PyPI project we didn’t want to delete but weren’t going to update. Thanks to all who worked on it!
The NonCommercial variant of the Creative Commons license suite is not open source because it restricts what you can do with it.
The codebase is merely source available.
Not just that, it’s also the worst pick for such a move. The non-commercial clause in CC is very unspecific, meaning that it’s liberally interpreted in some jurisdictions, and very strict in others. They’d be better off with one of the “fair source” licenses that at least has more clarification.
For all we know it could be the investors (I assume they were venture backed?) declared any source release must prevent any potential reboots or direct competitors. Capital is only a fan of open-source when they serve to profit from it, after all.
Source-available is still fine by me. You can run a personal (or non-profit, or whatever) instance of it to keep the spirit of the tool alive, and you can learn from what they did to apply the ideas to your own engineering problems. Not every source dump needs to be OSI-approved or reusable commercially.
Sure, I’d have preferred an AGPLv3 code dump, but startup lawyers are extremely allergic to the letters “GPL” and immediately shut down any conversation that includes them, in my rough experience over the years (almost always a degree or two removed from the lawyers themselves, to be fair).
Clearly intentional. I’m not sure what the point of this was. Potentially a PR move? Clearly it would spark some WTFs… and as someone who never even heard of Campsite until now, it worked.
Company: Freedom of the Press Foundation
Company site: https://freedom.press / https://securedrop.org
Position(s): Senior Software Engineer
Location: Remote (US)
Description: SecureDrop is an open source whistleblower submission system used by journalists to communicate with sources. Through its hardened architecture and the use of the Tor network, it offers whistleblowers strong security and anonymity protections. Used by more than 70 news organizations worldwide, including The New York Times, The Washington Post, The Guardian, and Al Jazeera, SecureDrop is composed of a variety of components, including SecureDrop Server (original server and submission interface developed by Aaron Swartz), SecureDrop Workstation (Qubes OS-based journalist-facing management system), SecureDrop Protocol (future end-to-end encrypted system).
Tech stack: Debian/Qubes/Fedora/Tails; Python (primarily) and Rust (up and coming), also SaltStack (for Qubes) and misc. bash and Perl
Compensation: See https://freedom.press/careers/job/?gh_jid=4508975005
Contact: Please apply at https://freedom.press/careers/job/?gh_jid=4508975005, happy to answer questions.
P.S. FPF is also hiring a Senior IT/Infrastructure Engineer based in Brooklyn, NY; I’m not on that team but you will get to sit next to me.
I think this work and mission are fantastic, but compensation for the role is on the low end. Am I understanding correctly that the healthcare benefit is free?
Correct, FPF pays for the health plan but you still have to pay any deductible/copay/etc.
And yes, the compensation is on the low end unfortunately; we’re still a relatively small non-profit. I personally took a pay cut to work here because I enjoy the job and the culture, but I understand that not everyone can afford to do so.
It would also be a pretty substantial paycut for me, but the work sounds meaningful, so I’m considering it.
Cool project, I hope the winners use colorblind-accessible colors, i.e. not red and green :)
LWN received an early version and reviewed it: https://lwn.net/Articles/994961/
Kind of strange reasoning, a router needs a bunch of Ethernet ports in my opinion. As someone who’s done firmware for a few devices with discrete managed switch IC, that stuff is cheap and works super well. PHYs are usually integrated. You hook up I2C for management, R(G)MII to the SoC and Ethernet ports to the outside and you’re done.
Yeah, I don’t know why they didn’t just put a jellybean switching IC at least on the gigabit controller. Seems like a strange omission for an all-in-one compact router.
my container host at home uses quadlet to do container stuff directly from systemd using podman. pretty nice and simple. The only annoyance I have run into is getting it to pull containers from a private registry without TLS. This is not production in the sense that I serve a business from it, but it currently runs about 16 different local services I run at home.
It is exceedingly simple compared to anything having to do with k8s.
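For anyone curious what the quadlet approach looks like in practice, here’s a minimal sketch (the unit name and image are made up for illustration): you drop a .container file under ~/.config/containers/systemd/ and podman’s systemd generator turns it into a regular service.

```ini
# ~/.config/containers/systemd/whoami.container (hypothetical example)
[Unit]
Description=Example web service managed by quadlet

[Container]
Image=docker.io/traefik/whoami:latest
# Publish only on localhost, so nothing is exposed to the network directly
PublishPort=127.0.0.1:8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it starts and stops like any other unit (`systemctl --user start whoami.service`).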
We do something similar, but we use docker in systemd. In order to avoid the iptables problem, we only publish ports on localhost, like -p 127.0.0.1:8001:80. We also run HAProxy installed directly on the host, which forwards to the correct containers.
I’ve even made a custom solution for zero-downtime deploys and failovers, where on deploy we just swap out the backend server via the admin API socket in HAProxy. For soft failover, we pause the traffic, wait for the other server to start the container, and then swap the backend and unpause. (Pausing is done by setting maxconn = 0.)
My home setup is also mostly all containerized through the use of quadlet/systemd, and probably the most important thing for me is that it all just works with incredibly minimal babysitting. I also use podman-auto-update so I don’t need to worry about updates and restarts and whatnot.
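A rough sketch of the HAProxy runtime-socket dance described a couple of comments up (the socket path and the backend/frontend/server names are all made up; the commands themselves are standard HAProxy runtime API ones). Here the `send` helper just prints what it would send; in real use it would pipe to the admin socket via socat.

```shell
SOCK=/var/run/haproxy.sock   # hypothetical; enabled with "stats socket ... level admin"

send() {
  # Real version: printf '%s\n' "$1" | socat stdio "$SOCK"
  # For this sketch, just show the command that would be sent.
  printf '%s\n' "$1"
}

# Zero-downtime deploy: re-point the backend server at the freshly started container.
send "set server app/web1 addr 127.0.0.1 port 8002"

# Soft failover: pause the frontend so new connections queue instead of failing...
send "set maxconn frontend fe_main 0"
# ...swap the backend to the standby host once its container is up...
send "set server app/web1 addr 10.0.0.2 port 8001"
# ...then unpause.
send "set maxconn frontend fe_main 10000"
```

The nice property of `set maxconn frontend ... 0` is that clients sit in the kernel accept queue briefly rather than seeing connection errors.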
this is great, thank you! I’m struggling to find The Docs, but it does look like it’s an official part of podman now, so that definitely bodes well
You saw this article: https://matduggan.com/replace-compose-with-quadlet/ Discussed here: https://lobste.rs/s/ss8oea/replace_docker_compose_with_quadlet_for
I don’t think I did see that, or if I did I don’t remember. I’m sorry, I don’t follow?
It’s a quadlet howto, which seems like what you were interested in.
ah ok cool, thanks!
I’ve also been doing the podman+systemd thing for 4.5 years. I haven’t moved to quadlet yet, but now that I’m on Podman 5 I should have that option. Even doing it “the hard way” has been very reliable and manageable, but the new way definitely makes getting up and running easier.
On Arch and even Ubuntu-based systems, I have found that Podman regularly breaks after updates. It happened enough to force me to switch to Docker, which seems much more stable.
Am I the only one with this experience? Am I doing it wrong? Or is everyone on Redhat OSes, where Podman probably works more reliably?
I’m using Podman on macOS and FreeBSD. The macOS version is somewhat cheating: it’s actually podman-remote plus a little bit of VM management, so it’s really running a Fedora CoreOS VM that hosts the containers. I’ve not had problems with either.
docker on macOS is also using sneaky VMs in the background. There’s not really any other way to get a Linux ABI, is there?
No, but there’s no reason that containers have to have a Linux ABI. OCI has ratified specs for Linux and Windows containers. Solaris uses the Linux ABI in branded zones, but the (not yet final) FreeBSD container spec uses native FreeBSD binaries. I’d like to be able to run Darwin binaries in containers on macOS. Apple had a job ad up recently for kernel engineers to work on OCI support, so hopefully that will come soon. The immutable system image model of macOS and APFS providing lightweight CoW snapshots should mean that a lot of the building blocks are there.
Been doing it on Debian for over 4 years. I think there was one update that cost me a few hours, but no, I haven’t run into any kind of frequent breakage.
On second thought, I was probably using bleeding-edge Podman (even on Ubuntu) because the Debian version was really old and lacking an important feature or bugfix that I needed.
Hmm, my experience was something like that too. Issues with the networking drivers and weird almost but not quite compatibility with Docker Compose (overpromising and under delivering), etc. (can’t remember everything). I tried going back to it twice but finally gave up as each time I ended up wasting hours over something, whereas Docker has always just worked. Some of my issues could have been a result of using Arch, and perhaps Podman is in a much better state these days, but I’ve been burned too many times by it at this point to give it another go (I don’t trust it anymore) and I don’t really feel like I have anything to gain from it now anyway. Rootless Docker works well enough for me.
Do you write the quadlet .service units yourself? Use some generator? Or templates you copy-paste?
Wikipedia (through the MediaWiki software) mostly decided to keep outputting <big> for similar reasons (see e.g. this discussion). If browsers ever started seriously discussing dropping support, we’d just have our own extension to HTML; I drafted an example of what that might look like at https://www.mediawiki.org/wiki/User:Legoktm/HTML%2BMediaWiki. I don’t expect this to ever be a practical problem though.
What problem is this solving?
Most of my Quadlet user units have a “sleep 30” before starting because there’s no easy way to wait for networking to be ready, the new functionality will be really nice and allow me to remove that.
I thought these were generated from systemd unit files – what happens if you set Wants=network-online.target?
This is addressed in the blog post: while ! ip -4 addr show dev br0 | grep -q 192.168 ; do sleep 1 ; done – never a need to sleep for 30.
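For reference, a sketch of the systemd side of this (unit name and device name are illustrative): Wants=network-online.target only helps when something actually implements that target (systemd-networkd-wait-online.service or NetworkManager-wait-online.service on the system bus), and user units typically don’t get it at all, which is why people fall back to polling in an ExecStartPre instead of a blind sleep.

```ini
# mysvc.container (hypothetical quadlet unit)
[Unit]
Wants=network-online.target
After=network-online.target

[Service]
# Fallback when network-online.target isn't meaningful (e.g. user units):
# poll until br0 has its address, as in the loop quoted above
ExecStartPre=/bin/sh -c 'while ! ip -4 addr show dev br0 | grep -q 192.168; do sleep 1; done'
```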
What’s missing from this article is that the string has been misused in DoS attacks where people made otherwise benign software receive and store this string in a database only to find its files corrupted by antivirus removing the whole.
Imagine an attacker covering their traces by getting an http servers access.log deleted for containing this string.
I printed big vinyl stickers with EICAR in QR code form. I leave them on car bumpers and other high visibility locations in the hope that ALPRs and other public surveillance stacks will self-destruct. 😘
It’s only a little more trouble to achieve this by grabbing a sample of an actual virus. Or even just packing a perfectly benign executable with a packer commonly used by virus distributors, for some anti-malware setups.
So I’m not saying you’re wrong, but the problem isn’t the existence of a well-known string so much as that anti-malware software is often very low quality.
Oh yes. If I somehow gave the impression that I was blaming this simple test string, then I am very sorry for the ambiguity.
In almost all cases, it’s the antivirus being shit ☺️
I did not believe you were blaming EICAR. I replied the way I did because I have personally worked with people who would read your comment and conclude that EICAR was a genuine hazard to their systems.
Source: I saw a web server log get deleted in the wild this way, once. The EICAR test was in the query string on the very first request after the log was rotated. When we finally untangled what had happened, the manager’s response was to ask us to write a WAF rule to drop such requests without logging them. No thought was given to opening a ticket with the vendor, and excluding the server log files from AV scans was rejected.
Do you have a source for that? (So it can be added to the article)
Unfortunately not. But a cursory internet search landed me on https://security.stackexchange.com/questions/66699/eicar-virus-test-maliciously-used-to-delete-logs where the main reply claims that “it should not be possible” to perform any DoS attack using the EICAR test string, because for it to work it needs to be at the start of the file. However, the top-voted comment underneath contradicts this with a specific example where someone was apparently dumb enough not to read the spec and also matched within files.
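That spec detail is the whole story, so here’s a small sketch (function names are mine) contrasting a spec-compliant check with the naive substring match that makes the log-deletion attack possible. Per the EICAR spec, the 68-character string must start at byte 0, the file may be at most 128 bytes, and anything after the string must be whitespace.

```python
# The well-known 68-byte EICAR test string.
EICAR = (rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
         rb"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")

def is_eicar_test_file(data: bytes) -> bool:
    """Spec-compliant: string at offset 0, file <= 128 bytes,
    only whitespace allowed after the string."""
    if len(data) > 128 or not data.startswith(EICAR):
        return False
    return all(b in b" \t\r\n\x1a" for b in data[len(EICAR):])

def naive_scan(data: bytes) -> bool:
    """What an over-eager scanner effectively does: match anywhere in the file."""
    return EICAR in data

# A web server log line that merely contains the string in a query parameter:
log_line = b"GET /?q=" + EICAR + b" HTTP/1.1 200\n"

assert is_eicar_test_file(EICAR)         # a real test file is flagged
assert not is_eicar_test_file(log_line)  # a log containing it should not be
assert naive_scan(log_line)              # ...but the naive scanner flags (and deletes) the log
```

A scanner following the spec can never be weaponized against arbitrary files, because attacker-controlled data almost never lands at byte 0 of a short file.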
If you knew a professional security expert who was willing to write about this, it could be added and cited… :)
The fediverse delivers https://mastodon.social/@jandi/113381920809632917
one of my favorite memories as a teenager (and definitely not into my early 20s) after learning about this was pasting this string in a crowded IRC server and watching the people behind a vigilant firewall (and plaintext IRC) drop out of the channel
Aren’t detection systems nowadays aware of EICAR? I use it on a day-to-day basis as a test file in my part of testing systems dealing with security. I learned about EICAR from an educational perspective, where it can be used for tests, not from a harmful perspective.
This is the right decision and it has nothing to do with “US law” as some of the lwn people seem to be talking about. Russia is a dictatorship with sophisticated state-powered cyberwarfare capabilities. Regardless of whether a Russian-based maintainer has malicious intent towards the Linux kernel, it’s beyond delusional to think that the Russian government isn’t aware of their status as kernel developers or would hesitate to force them to abuse their position if it was of strategic value to the Russian leadership. Frankly it’s a kindness to remove them from that sort of position and remove that risk to their personal safety.
It may or may not have been the right decision, but it was definitely the wrong way to go about it. At the very least there should have been an announcement and a reason provided. And thanks for their service so far. Not this cloak and dagger crap.
Indeed this was quite the inhumane way to let maintainers with hundreds of contributions go, this reply on the ML phrases it pretty well:
Edit: There is another reply now with more details on which maintainers were removed, i.e. people whose employer is subject to an OFAC sanctions program - with a link to a list of specific companies.
This would be a completely inappropriate response because it mischaracterizes the situation at hand: if the maintainers want to continue working on Linux, they only have to quit their jobs at companies producing weapons and parts used to kill Ukrainian children. It has nothing to do with the world being (in)sane, and everything to do with sanctions levied against companies complicit in mass murder.
it has everything to do with sanity or lack thereof, when such a standard is applied so unevenly
Yes, the decision is reasonable whether or not it is right, but the communication and framing is terrible. “Sorry, but we’re forced to remove you due to US law and/or executive orders. Thanks for your past contributions” would have been the better approach.
This is true of quite a few governments, including those you think are friendly, and it is a huge blind spot to believe otherwise. Dictatorship doesn’t have anything to do with it, it isn’t as though these decisions are made right at the top.
Dictator, you say? I chuckled. Linus is literally a “BDFL”.
Maybe we’ll eventually see an official BRICS fork of the Linux kernel? Pretty sure China has been working on it.
Do you have the same reaction to contributions from US-based companies that have military contracts? While the US isn’t a dictatorship, the security and foreign policy apparatuses are very distant from democratic feedback.
much more distant than russia’s in fact
It’s hard to single out Russia for this in a post-Snowden world. Not to mention that if maintainers can be forced to do something nefarious, then they can do the same thing of their own will or for their own benefit.
Did you hear this from the affected parties?
The Wikimedia Foundation has taken similar action by removing Wikipedia administrators from e.g. Iran as a protective measure (sorry, don’t have links offhand), but even if that’s the reason, the Linux actions seem to have a major lack of compassion for the people affected.
It wasn’t xenophobia. The maintainers who were removed all worked for companies on a list of companies that US organizations and/or EU organizations are prohibited from “trading” with.
The message could have (and should have) been wrapped in a kinder envelope, but the rationale for the action was beyond the control of Linus & co.
Thank you for the explanation; it makes sense, as this is common and consistent with sanctions against other countries. I was mostly replying to the comment above.
This is what Hangton Chen had to say about this:
Hi James,
Here’s what Linus has said, and it’s more than just “sanction.”
Moreover, we have to remove any maintainers who come from the following countries or regions, as they are listed in Countries of Particular Concern and are subject to impending sanctions:
Burma, People’s Republic of China, Cuba, Eritrea, Iran, the Democratic People’s Republic of Korea, Nicaragua, Pakistan, Russia, Saudi Arabia, Tajikistan, and Turkmenistan. Algeria, Azerbaijan, the Central African Republic, Comoros, and Vietnam. For People’s Republic of China, there are about 500 entities that are on the U.S. OFAC SDN / non-SDN lists, especially HUAWEI, which is one of the most active employers from versions 5.16 through 6.1, according to statistics. This is unacceptable, and we must take immediate action to address it, with the same reason
did you just deliberately ignore the fact that huawei is covered by special exemption in the sanctions?
The same could be said of US contributors to Linux, even moreso considering the existence of National security letters. The US is also a far more powerful dictatorship than the Russian Federation, and is currently aiding at least two genocides.
The Linux Foundation should consider moving its seat to a country with more Free Software friendly legislation, like Iceland.
I’m Icelandic and regret I only have two eyebrows to raise at that.
it’s an incredibly low bar that Iceland has to clear, as this story demonstrates
Please expand on how Iceland would act to be seen as a more FLOSS friendly place, as opposed to for example the United States.
not mandating the removal of maintainers
In other words, refusing to comply with international sanctions. This is in fact an incredibly high bar to clear for Iceland. It would require the country to dissociate itself from the Nordic Council, the EEA, and NATO.
a kernel dev quoted in the Phoronix article wrote:
that made me think it was due to US (not international) sanctions and that the demand was made by a US body without international jurisdiction. what am I missing?
Without a citation of which sanction they’re referencing it’s really hard to say. I assumed this sanction regime was one shared by the US and the EU, and that Iceland would follow as a member of NATO and the EEA. If it is specific to the US, like their continued boneheaded sanctions against Cuba, then basing the Linux Foundation in another country would prevent this specific instance (a number of email addresses removed from a largely ceremonial text file in an open source project) from happening again.
Note however that Icelandic law might impose other restrictions on the foundation’s work. The status of taxation as a non-profit is probably different.
even if it has to do with international sanctions, their interpretation and enforcement seems to have been particular to the US. it reeks of “national security” with all the jackbootery that comes with it.
Thunderbird has one of the worst user experiences I’ve ever seen. It takes seconds to delete an email from my inbox, the UI thread hangs all the time, if I click on a notification, it makes a black window because the filter moved the email while the notification was up and they didn’t track it by its ID or some basic mistake like that. Every update the UI gets clunkier and slower. Searching has a weird UI and fails to find matches. I could go on and on, there are so many UX issues with this worthless software. I have no idea what’s going on over at Mozilla. I think the org just needs to be burned to the ground.
I use it as my daily driver for now, but I feel like I’m on a sinking ship surrounded by nothing but ocean.
I use K-9 on Android and it’s fine… the idea of transforming it into Thunderbird blows my mind.
I have an inbox with > 20k emails in it & deletions happen instantly in Thunderbird.
Likewise dialog boxes appear & disappear instantaneously.
Are you sure there isn’t something up with your system?
Yep, same boat. 100k+ emails, lots of filters, etc., it just works honestly. Thunderbird has only gotten better for me since Mozilla stopped supporting them.
Also for me. 157k mails, everything feels snappy.
I like it significantly better than any web client too, as those are usually pretty laggy.
OP, could it be slow hardware or no compaction?
Out of curiosity, are you using POP or IMAP? I imagine the performance characteristics would be very different, given their different network patterns.
IMAP.
I run Dovecot as an IMAP server on a Thinkpad which was first sold in 2010 with an SSD in it. I keep thinking I should change the hardware but it just keeps trucking & draws less than 10W so it never seems worth the effort.
Ah, IMAP, but on the local network? That’s likely to be much faster than IMAP over the internet, which I think is a very common use case.
It’s stuck behind a 100Mbit ethernet connection (for power saving reasons) which is roughly equivalent to my Internet connection but the latency is probably lower than it would be to an IMAP server on the wider Internet.
Having exclusive use of all that SSD bandwidth probably helps too of course.
Branding is a powerful concept. You think Outlook on iOS or Android shares anything with the desktop app? Nope, it’s also a rebranded M&A. It is kind of funny the same happened with Thunderbird.
Which leads to funny things where Outlook for mobile gets features before the desktop version (unified inbox and being able to see the sender’s email address as well as their name come to mind).
Wait what?
Be thankful if you’ve never had to use desktop Outlook…
I don’t have it in front of me to double check but yeah the message UI is weird. It shows their name and if you hover over it then it pops up a little contact card that also doesn’t show the actual email address. IIRC hitting reply helps because it’s visible in the compose email UI.
i thought the idea in Mozilla’s head would be more like “okay, we have this really good base for an email app (K-9), let’s support it, add our branding to it, and ship it as Thunderbird”
I think it’s more like “K-9, now known as Thunderbird”
Are you on Windows, by any chance?
Thunderbird does a lot of disk I/O on the main thread. On most platforms, this responds in, at most, one disk seek time (<10ms even for spinning rust, much less for SSDs, even less for things in the disk cache) so typically doesn’t hurt responsiveness. On Windows, Windows Defender will intercept these things and scan them, which can add hundreds of milliseconds or even seconds of latency, depending on the size of the file.
This got better when Thunderbird moved from mbox to Maildir by default, but I don’t think it migrates automatically. If you have a 1 GB mbox file, Windows Defender will scan the whole thing before letting Thunderbird read one 1 KiB email from it. This caused pause times of 20-30 seconds in common operations. With Maildir, it will just scan individual emails. This can still be slow for things with big attachments, but it rarely causes stutters of more than a second.
It happens on any platform as long as you have enough mail. IDK why we still have GUI apps in $CURRENTYEAR doing work on the UI thread.
Thunderbird was originally a refactoring of Mozilla Mail and News into a stand-alone app. Mozilla Mail and Newsgroups was the open-source version of Netscape Mail and Newsgroups (both ran in the same process as the browser, so a browser crash took out your mail app, and browser crashes happened a few times a day back then). Netscape Mail and Newsgroups was released in 1995.
It ran on Windows 3.11, Windows 95, Classic MacOS, and a handful of *NIX systems. Threading models were not present on all of them, and did not have the same semantics on the ones that did. Doing I/O on the UI thread wasn’t a thing, doing I/O and UI work on the one thread in the program was.
It’s been refactored a lot since those days, but there’s still a lot of legacy code. Next year, the codebase will be 30 years old.
No, I’m on KDE. I also experienced this behavior on XFCE.
I really agree. I’m a big fan of K-9 Mail and hearing that Thunderbird was taking it over did not sound like good news at all.
I’ll disagree with one of your points though – deleting an email happens instantly and usually by accident. Hitting undo and waiting for there to be any sign in the UI that it heard you, now that takes forever.
https://blog.torproject.org/tor-is-still-safe/ has some more details, that it was apparently an old version of Ricochet that didn’t have the newish Vanguards feature.
This is a follow-up to the story posted last month: Grace Hopper’s Lost Lecture found in an NSA Vault.
Quoting from the NSA press release:
This should be AMPEX tapes, not APEX. Looks like this was a mistake in the original – I’ve seen the mistake quoted elsewhere – and it also looks like the original has been corrected.
Thanks, updated my comment.