One question that comes to mind is how this allocator can be hardened against attacks. Zig makes buffer overflow and use-after-free vulnerabilities less common than in C, but they can still happen, and when they do, allocator exploitation can provide a route to much more damaging attacks.
For example, since SmpAllocator uses a linked free-list, it looks pretty trivial to have it return any address you want, if you can modify a freed slot. Pointer encryption would be pretty easy to add to SmpAllocator, and can help with this, though it’s not foolproof.
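To make the idea concrete, here is a toy sketch of free-list pointer encryption. This is not SmpAllocator's actual code (and Python stands in for Zig); it just shows the general scheme, similar to glibc's pointer mangling: the stored "next" pointer is XORed with a per-process secret, mixed with the slot's own address, so an attacker who overwrites a freed slot without knowing the secret gets a garbage pointer instead of a chosen one.

```python
import secrets

# Per-process secret, generated at startup (hypothetical names throughout).
COOKIE = secrets.randbits(64)

def encrypt_next(slot_addr: int, next_addr: int) -> int:
    # Mixing in the slot's own address means the same next pointer
    # encrypts differently at different locations in memory.
    return next_addr ^ COOKIE ^ slot_addr

def decrypt_next(slot_addr: int, stored: int) -> int:
    return stored ^ COOKIE ^ slot_addr

slot, nxt = 0x7F0000001000, 0x7F0000002000
stored = encrypt_next(slot, nxt)
assert decrypt_next(slot, stored) == nxt
# Overwriting `stored` without COOKIE decrypts to an unpredictable
# address, which mitigations like guard pages can then catch.
```

It's cheap (one XOR on each free-list read/write), which is why it's plausible to add to a fast allocator, though as noted it's not foolproof: a secret leak defeats it.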
One example of an allocator with a lot of hardening is OpenBSD’s malloc: it has canaries, some double-free and use-after-free protections, guard pages, and more.
The two things I like most about the EUPL are: 1. the protection against the SaaS loophole, by including “providing access to its essential functionalities” in its definition of ‘Distribution’ or ‘Communication’; and 2. that neither static nor dynamic linking of EUPL-licensed software automatically makes the combined work a Derivative Work. It’s comparable to, but better than, the LGPL on that front.
While I think it’s fun to build tools and break from general advice, things like this almost always have the same markers that make me go “oh no”. Why use RSA over an ECC algorithm? Why use AES-CBC instead of an authenticated encryption algorithm? A lot of the choices don’t feel well justified in the blog. In the end this just strikes me as a use case for age (which, by the way, is very small: ~4k LoC including tests), rather than potentially becoming reliant on broken things.
Tough crowd, but nice to have links that explain a problem that’s actually relevant. Thanks @quad! The unauthenticated-encryption complaint was convincing: I verified that it is in fact possible to modify the output of the script (via its IV portion) to get the recipient to silently see different data. It’s not an attack mode (MITM) that we actually care about (there are other problems with that anyway), but it does seem silly not to use an authenticated mode when it’s easy.
I also dropped support for encrypting with RSA directly in the same PR. I’m disappointed that neither reason in the linked article applies to this use case, so I can’t verify anything that’s wrong with it, but I can’t justify keeping it anyway: it raises questions and was only there because that’s what the script started with, i.e. historical reasons.
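The IV trick verified above is easy to demonstrate. The sketch below uses a stand-in one-block “cipher” (XOR with a fixed key) purely so it runs without a crypto library; the property it shows is real for AES-CBC too, because the first plaintext block is recovered as D(C1) XOR IV, so flipping a bit in the IV flips the same bit in the recovered plaintext:

```python
# Toy CBC malleability demo. The "block cipher" is XOR with a key --
# NOT a real cipher -- but the IV property holds for real AES-CBC:
# P1 = D(C1) XOR IV, so bit-flips in the IV pass straight into P1.
KEY = bytes(range(16))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(iv: bytes, block: bytes) -> bytes:  # one 16-byte block
    return xor(xor(block, iv), KEY)                 # E(P XOR IV)

def cbc_decrypt(iv: bytes, ct: bytes) -> bytes:
    return xor(xor(ct, KEY), iv)                    # D(C) XOR IV

iv = bytes(16)
pt = b"pay alice $ 100."                            # exactly 16 bytes
ct = cbc_encrypt(iv, pt)

# Attacker flips bits in the transmitted IV only; ciphertext untouched.
evil_iv = xor(iv, xor(b"pay alice $ 100.", b"pay mallory $999"))
assert cbc_decrypt(evil_iv, ct) == b"pay mallory $999"
```

An authenticated mode (AES-GCM, ChaCha20-Poly1305) or encrypt-then-MAC closes this off, since the tag covers the IV and ciphertext.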
It’s not an attack mode (MITM) that we actually care about (there are other problems with that anyway), but it does seem silly not to use an authenticated mode when it’s easy.
Good question, that’s what I start the article with, but don’t say it explicitly. I am assuming I can talk to my colleague freely, no MITM or impersonator involved. I am concerned that putting a secret in plain text would mean it’s present on systems outside of my knowledge or control (like Slack servers). So I am assuming the messages in the channel can leak, and I am concerned about confidentiality.
That strikes me as quite in-the-moment. A future leak of a private key or a spearphish would allow an attacker to conveniently get all your secrets from Slack.
I’m disappointed that neither reason in the linked article applies to this use case, so I can’t verify anything that’s wrong with it, but I can’t justify keeping it anyway: it raises questions and was only there because that’s what the script started with, i.e. historical reasons.
A big reason to use symmetric crypto over asymmetric has always been performance.
I’m an OpenBSD lover like any other, but it’s worth noting that openrsync doesn’t implement all of rsync, and my impression is that work on it is pretty slow.
That’s unnecessarily rude towards rsync maintainers, and also disingenuous.
Everyone eventually messes up in C, including OpenBSD devs. They wrote the code that led to CVE-2023-25136 and CVE-2024-6387, two remote code execution vulns in OpenSSH. Can we stop pretending that the solution is to just find some kind of mystical superhuman C programmers who don’t make mistakes?
That’s unnecessarily rude towards rsync maintainers, and also disingenuous.
It was only a response to the, IMO, short-sighted “Written in C, or…?”, and about openrsync, not about rsync.
/adding:
Everyone eventually messes up in C, including OpenBSD devs. They wrote the code that led to CVE-2023-25136 and CVE-2024-6387, two remote code execution vulns in OpenSSH. Can we stop pretending that the solution is to just find some kind of mystical superhuman C programmers who don’t make mistakes?
I agree that finding superhumans is not the solution, and I encourage new projects not to start with C. OpenBSD is a bit of a different world compared to other OSes. Because of all their mitigations, most memory management mistakes in a program practically result in a DoS at worst, not RCE (as can be seen with the two mentioned CVEs). AFAICT CVE-2023-25136 has not been publicly shown to be an RCE, because of the privilege-separated design of OpenSSH.
Because of all their mitigations, most memory management mistakes in a program practically result in a DoS, not RCE (as can be seen with the two mentioned CVEs).
That is true for the 2024 CVE (“regreSSHion”), but for the 2023 CVE the opposite is true: security researchers initially tried to exploit it on GNU/Linux but were unsuccessful, then pulled it off against OpenBSD due to the way its allocator works: https://seclists.org/oss-sec/2023/q1/92
More generally, I think it’s time to push back against the truism that OpenBSD is singularly secure because of its mitigations. It’s repeated all over the internet but rarely substantiated. At https://isopenbsdsecu.re/ there is a more nuanced in-depth analysis of the various mitigations, how well they work and how other operating systems compare.
Security researchers initially tried to exploit it on GNU/Linux but were unsuccessful, then pulled it off against OpenBSD due to the way its allocator works: https://seclists.org/oss-sec/2023/q1/92
I just read it as well, but it’s not an RCE against sshd, though maybe that was also not what you were trying to say and I misread it. But from that same post:
The next steps, which may or may not be feasible at all, are:
step 2, execute arbitrary code despite the ASLR, NX, and ROP protections (this will probably require an information leak, either through the same bug or through a secondary bug);
step 3, escape from sshd’s sandbox (through a secondary bug, either in the privileged parent process or in the kernel’s reduced attack surface).
Quickly skimming just that mailing list thread, it doesn’t look like Qualys was able to accomplish steps 2 and 3.
More generally, I think it’s time to push back against the truism that OpenBSD is singularly secure because of its mitigations. It’s repeated all over the internet but rarely substantiated.
I agree that there is some “feel-good” risk in knowing that you’re on OpenBSD with all their mitigations. I think the fact that not many exploits are known is partly because Linux and Windows are easier targets, with far more users and thus greater consequences.
At https://isopenbsdsecu.re/ there is a more nuanced in-depth analysis of the various mitigations, how well they work and how other operating systems compare.
I’ve seen that one back in the day, I remember I found it interesting.
I agree that finding superhumans is not the solution and encourage new projects to not start with C.
What alternatives do we have, though? In cases where we have CPU cycles and memory to spare, and portability isn’t too much of an issue (say we’re targeting no more than the 3 big OSes), sure, we have lots of alternatives, including Rust and many garbage-collected languages that have a native compiler.
Me however, I like to write foundational libraries that run everywhere. Which at least means a C API, most probably a C ABI, and if I’m to actually support a gazillion platforms… C code. Though I can see myself generating C code instead of writing it by hand. That would solve many problems.
Now to address the superhuman part, my superhuman skills are rooted in automated tests (property based mostly), and sanitisers. Without those I revert to being a mere mortal who cannot hope to write correct C code.
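The property-based approach mentioned above can be sketched without a framework. Here is a minimal stdlib-only version of the idea (real work would use something like Hypothesis, or fuzzers plus sanitizers for C); the varint codec under test is illustrative, not from any particular project:

```python
import random

# A small function pair to test: LEB128-style varint encode/decode.
def varint_encode(n: int) -> bytes:
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def varint_decode(data: bytes) -> int:
    n = shift = 0
    for b in data:
        n |= (b & 0x7F) << shift
        shift += 7
    return n

# The property: decode(encode(n)) == n for arbitrary generated inputs.
# A seeded RNG keeps failures reproducible.
rng = random.Random(0)
for _ in range(10_000):
    n = rng.randrange(0, 1 << rng.randrange(1, 64))
    assert varint_decode(varint_encode(n)) == n
```

The value is that you state an invariant once and let generated inputs hunt for the edge cases you didn’t think to write by hand.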
It is highly likely this application is made up of a number of libraries. I personally like to isolate the data-processing part from the I/O, and the former can be remarkably independent from the environment, unless we want to take advantage of stuff like vector instructions. That makes those parts very easy to test, but also very tempting to make portable. To name one example: cryptographic protocols. I systematically isolate them in a pure memory API, one that reads & writes buffers.
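The split described above can be shown in a few lines (names here are illustrative, and Python stands in for the C being discussed): the core takes buffers in and gives buffers out, with no file handles, sockets, or clocks, while a thin shell is the only part that touches the environment.

```python
import hashlib
import io

def digest_frames(data: bytes, frame_size: int = 4096) -> list:
    """Pure core: bytes in, list of per-frame digests out. No I/O."""
    return [hashlib.sha256(data[i:i + frame_size]).digest()
            for i in range(0, len(data), frame_size)]

def digest_file(stream) -> list:
    """Thin I/O shell: the only environment-dependent part."""
    return digest_frames(stream.read())

# The core is tested directly, with plain buffers:
assert digest_frames(b"") == []
assert digest_frames(b"abc") == [hashlib.sha256(b"abc").digest()]
# The shell is trivially exercised with an in-memory stream:
assert digest_file(io.BytesIO(b"abc")) == digest_frames(b"abc")
```

Because the core never names a platform facility, it ports anywhere a compiler exists, which is exactly the property one wants for, say, a cryptographic protocol library.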
Thus, for new projects I don’t care much for C as far as I/O is concerned. We can be closer to the system and all that, but that doesn’t really matter, since those bits aren’t portable to begin with. For the portable parts that are worth writing a public API for, however, I do tend to favour C over anything else. It’s a weak, dangerous language, but its API is usable everywhere, and there’s a compiler even for your toaster. Done well, it’s pretty easy to write bindings for any other language and build the application from there.
That said, FFI overhead is no joke. Not for the CPU, but for the poor humans who have to call C from whatever better language they’re using. It is critical that the functionality is significant enough, and the API small enough, to justify the hassle. Otherwise just use the application language.
Another route, if the application layer is small enough, is to just write everything in C. Long term though, I’m actually looking forward to ditching C into the bin of history where it belongs. I just have yet to settle on a replacement. Possible candidates include Rust, Zig, Odin, Jai, and probably as many others I’m not familiar with. Even then, I’m not entirely comfortable with any single language taking over. What we actually need is a protocol for different parts of a program to talk to each other, one that does not force every part to be written in the same language. And the best candidate so far is probably a subset of C: specifically, the most popular C ABI for each platform.
These all sound like post-facto justifications for a decision that’s already been made. Which is fine, let’s just be clear about it. You want to use C, everything else comes second to that.
These all sound like post-facto justifications for a decision that’s already been made.
They’re not. When I started Monocypher 7 years ago I considered using something else, but sadly, to make something portable, there was only one choice. Even now I’m not aware of any other choice. If you have one I would be elated to learn about it.
You want to use C, everything else comes second to that.
That’s unnecessarily rude towards rsync maintainers, and also disingenuous.
Even though I was not talking about rsync, but purely about openrsync, I’d like to note that I did gut rsync back in 2018, and this hardened version is not vulnerable to these new CVEs: https://github.com/timkuijsten/hrsync
/edit: when properly used with the two new options I’ve added, --chroot and --dropsuper (I did it for some backup program I wrote).
Yeh, which Apple replaced /usr/bin/rsync with in a recent update, and now behaviour I use all the time (browsing remote sources) no longer works. Thank pkgsrc I can still install a working version.
In the spirit of hoping to be corrected: it appears that most of these depend on either a running rsyncd (which I haven’t seen in 20 years) or the attacker having access to the source filesystem while an rsync is in progress.
Many vendors, especially those of open source operating systems, use rsync in daemon mode to sync build artifacts to various mirrors. HardenedBSD is one such vendor.
Edit[1]: Clarify how the daemon is run (rsyncd -> rsync in daemon mode.)
There is a similar, currently unanswered, question on the mailing list about whether the code is only active in rsyncd, or maybe also when invoking rsync --server --sender via ssh: https://marc.info/?l=oss-security&m=173688743232255&w=2
So wireguard isn’t an encryption scheme, it’s an entire protocol.
I think your question really is why not use wireguard as the underlying dataplane for a service like this?
Well, because where’s the fun in that? I’ve been hacking on sanctum for > 1 year and it’s very much in a production-ready state. Hacking on things and building cool new stuff shouldn’t be limited to using only existing and well-established projects.
Has it been audited?
I couldn’t find anything about that, and I personally wouldn’t want to call such a project production-ready without an audit. It’s not a question of skill, just of being a mere human.
The tech seems cool (I read the SEC-T slides, pdf). I’m glad to see in depth sandboxing efforts, I need more of that in my life!
Completely understand the reasoning, but I do wonder why not go with some Noise construction? Or maybe I should ask: why not pubkey auth? It would make management less of a hassle.
There are actually several reasons. One is that it’s just easier to implement a one-way key offering when you don’t have to do an interactive key exchange. Another is that asymmetry is just more complicated.
The key-management difficulty is actually alleviated here by using black keys and providing an easy way to distribute these to your devices. The KEK management isn’t tricky either: an offline laptop that you use for ambry generation (the wrapped bundles that are uploaded to the cathedral) is all it takes.
I am not saying it’s EASIER per se, but it’s not as hard as it sounds.
The big thing one does not have here is PFS, in case you accidentally tweet one of your KEKs.
But with Kyber only recently standardized as ML-KEM, the sane requirement of doing a hybrid key exchange when using asymmetry (ECDH+ML-KEM, for example) makes the code base a lot more complex. More so than I am comfortable with.
Interesting post! I’ve been working on a simple database (document store) to support offline-first. What merge strategy is appropriate, and what can be automated, really depends on the business case. In principle, a conflict is something that cannot be resolved in a generic way.
This didn’t make it into the post but I actually do recommend people think about this as a database problem.
If I told you that I was going to have two database write replicas partitioned and accepting completely divergent writes and then I was going to use CRDTs or OT or some other ✨ magic ✨ to merge them together, you’d rightfully balk. But if one of those replicas is a browser with a text document in it or something, somehow a lot of people think that’s ok!
The cases this is fine are generally cases where direct conflicts are unlikely, or you can simply ignore direct conflicts, or the data is restricted enough there is no such thing as a conflict (like a monotonically increasing count).
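The “monotonically increasing count” case is the classic G-Counter CRDT, and it shows why such data genuinely has no conflicts: each replica increments only its own slot, and merge is an element-wise max, which is commutative, associative, and idempotent. A minimal sketch:

```python
# G-Counter: a grow-only counter CRDT. State is a map from
# replica id to that replica's local count.

def increment(state: dict, replica: str) -> dict:
    return {**state, replica: state.get(replica, 0) + 1}

def merge(a: dict, b: dict) -> dict:
    # Element-wise max: order of merging never matters.
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(state: dict) -> int:
    return sum(state.values())

# Two replicas diverge, then merge in either order:
a = increment(increment({}, "a"), "a")   # replica a counted twice
b = increment({}, "b")                   # replica b counted once
assert merge(a, b) == merge(b, a) == {"a": 2, "b": 1}
assert value(merge(a, b)) == 3
```

The moment the data type stops having a merge with these algebraic properties (like free-form text), the “conflict-free” label stops describing reality and starts describing a policy for ignoring conflicts.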
I think this is the right mental model. I also think that it’s unhelpful that we’ve strayed from CRDTs as I learned them.
When I was first introduced to the idea they were called “Commutative Replicated Data Types”. I don’t think that name is any better or worse than conflict-free, but I do think it gives us a tool to talk about what you’re bringing up in the blog post.
For instance, in your above question, if the data type were an integer and the operation were “sum”, I would pretty much be on board with saying that we CAN magically merge them! (with some edge cases around maximums?). Sum on integers, I think we’ll all agree, is at least mostly commutative.
But are document interactions commutative? I think that the answer is probably “NO”.
[edit] Immediately after writing this I got to wondering about when commutative became conflict-free and I wonder if one of the folks working on textual CRDTs had the same pedantic math realization and changed the name.
Anders! How are you? Agree with all of this, and also, your last point is kind of funny. Another thing we took out was a section with a similarly pedantic header: “‘Conflict-free’ as in ‘we pretend conflicts do not exist’”. We pulled it because it felt mean-spirited and we didn’t want to make it a post about making fun of people, and in any event that name is probably not changing.
Saturating 10 GbE links requires serious horsepower from the entire chain (including good-quality RJ45 cables that are actually manufactured to spec), and a fast CPU, which is a problem with most of the “router” type boxes that ship with Celerons and Atoms. Also see https://marc.info/?l=openbsd-misc&m=167665861931266&w=2. With the right hardware and a good amount of tweaking/optimization you may, in theory, reach those speeds, but remember that speed is not a primary goal for OpenBSD, so the correctness and security involve tradeoffs that may sacrifice speed that is taken for granted with other *BSD firewalls.
I have a stock OpenBSD router/firewall with pf enabled, on a Gigabit internet connection, and can only push like 800 Mbit/s or thereabouts. This is with Protectli VP2420. Not optimized, but does the job reliably and I am very happy with it.
Just for comparison’s sake: I have a gigabit symmetrical fiber connection and was using a Protectli device with an Atom CPU to run OpenBSD+pf on the router, and saw similar max throughput. I replaced that box with an older Dell Optiplex SFF PC with an i5-6500 and now have no issues saturating the connection.
Do you use vanilla OpenBSD on your router? What has been your experience so far regarding hardware support and performance?
Yes; you have to make sure you have a device with proper, well-performing NICs. In my case I have an older APU2 with 3x Intel I210. I’ve read about tests that claim to come close to 900 Mbit/sec (but I’m not sure that was on OpenBSD). In my case, though, my ISP uses PPPoE, which means that receive-side scaling and the other forms of TCP/UDP offloading the I210 offers can’t be utilized. So a single one of the four CPUs is the bottleneck, and I don’t get much beyond 480 Mbit/sec. But this is an older, passively cooled, low-power device.
As an outsider (and at the risk of causing a flamewar), I have to wonder if there’s some connection to the zealotry that goes along with Rust.
Over the years I’ve seen a lot of “militant [insert new language] zealots”, but Rust (in my opinion) takes it further than languages like Haskell. It’s not enough for Rust to interact with an existing library - if that library isn’t written in Rust then they need a new one entirely.
This zealotry’s prevalence is overstated, and when it happens, it is often from enthusiastic inexperienced users, not from the folks working on the language itself.
It’s not enough for Rust to interact with an existing library - if that library isn’t written in Rust then they need a new one entirely.
I can’t think of a single person that I know of involved in Rust leadership who thinks like that.
when it happens, it is often from enthusiastic inexperienced users, not from the folks working on the language itself.
Partially disagree here.
Indeed the folks working on the Rust language itself are great.
My past experience with “enthusiastic” users, as you call them, is that they are usually strong domain experts with enough knowledge of Rust to anticipate that a (re)write in Rust is the right thing to do, even when it may not be.
(The indie dev who coded in Rust for 3 years (I can’t find the link) made an excellent point: Rust is a language that forces a developer onto a path of correctness. But correctness may not always be conducive to success.)
This is an idea I’ve had kicking around for a bit but I do think there is a deep connection between the burnout in the blog post, the zealotry you mention, and a handful of other complaints that have surfaced from time to time about the language and its community.
My root observation is that the mindset of the community and the guiding principle of the language and its design is “we’re going to get this one right,” or in other words, the Rust contributors have set a high bar for themselves. This spills out as the burnout that gets brought up from time to time: if the bar is high there are few people, or maybe only a single person, who can “do it right.” That’s not just a feeling a maintainer might have: given the complexity of the compiler or the subtlety of what you’re trying to implement there are just not a lot of people who can do the work! You solve this by forcing people into management positions where they’re required to mentor and delegate but that’s hard to do in practice in a volunteer, open source project. The result is the burnout spiral mentioned in the article: “work doesn’t happen unless you do it personally”, you get tired, you get burnt out.
The “doing it right” also makes the language into fertile grounds for zealotry. Everything’s been built to such a high standard you can look at any use case for Rust and find one place, or several places, where Rust has some advantage over the opposing choice. If Rust’s design nurtures these kinds of arguments you’ll see people who make those arguments come to Rust, find it suitable for their purposes and stay with it.
if the bar is high there are few people, or maybe only a single person, who can “do it right.” That’s not just a feeling a maintainer might have: given the complexity of the compiler or the subtlety of what you’re trying to implement there are just not a lot of people who can do the work!
I think there is a pretty strong disparity between domains where bugs are ~fast to discover and fix, and domains where bugs might not show up for months or years. In the former you can kinda just try things, and if it breaks, oh well, you fix it and it’s fine. In the latter, avoiding bugs requires a lot of domain expertise, because you have a really hard time catching them with tests. https://youtu.be/tgaKAF_eiOg?si=P2QKBbZsGVYFAl4k&t=786
My root observation is that the mindset of the community and the guiding principle of the language and its design is “we’re going to get this one right,” or in other words, the Rust contributors have set a high bar for themselves.
Like the author points out at the beginning of the article, the secret sauce is modal editing.
And there have been super interesting advancements in terminal modal editors fairly recently with editors such as kakoune and helix. Helix’s bindings in particular make up a language that I find more intuitive and expressive than vim’s (after having used vim and neovim for a long time).
These days I recommend Helix over vim or neovim, especially to newcomers: the learning curve is more of a curve and less of a wall.
Ki is similar to vim in that it uses modal editing.
It’s similar to Helix in that:
It selects first, then acts on the selection.
It has first-class multi-cursor support.
It aims to be low-config.
It has built-in LSP support. Adding a new language is a matter of declaring it. (I added 2 on my own yesterday, within 2 hours of using Ki. I could never have dared do this with vim/neovim.)
It’s different from both vim and Helix in that it splits the mental model into:
selection mode;
movement;
action
such that:
A selection mode sets the current unit: column/character, word, line, or syntax node, the latter two of which are semantic units derived from the tree-sitter grammar.
Because the selection unit is already set, movements are reduced to hjkl.
Actions, as in Helix, then act on the current selection.
I think I made the explanation more complicated than the actual execution.
I’ve only been exploring it for a day. Navigation through syntax nodes is impressive, but also heavily reliant on the language’s tree-sitter grammar being decent. Also, I’m not sure how much of a leg up it is on Helix’s LSP jump-to-symbol. But Helix’s operations on syntax nodes surely feel like an afterthought compared to Ki’s.
There are other goodies:
Everything is a buffer, so the same key-bindings are used everywhere.
It has a built-in file-tree explorer (using yaml!), which can be fuzzy-searched too.
Thought-out keybindings. For example, choosing between the editor and system clipboard is a matter of the \ key: y or p copies or pastes to the editor clipboard, while \y or \p copies or pastes to the system clipboard.
I want to like ‘meow’, a modal editing package for Emacs. It clicks with me in a way that vi doesn’t (and I’ve been using vi in a minimal capacity for 30 years), and could be the first thing to really get me using modal editing. I just haven’t figured out a meow layout that works for me on both QWERTY and Colemak.
As a die-hard vi user I’ve been trying Helix for a while, but I had some stability issues with the language server, and since then my optimism has waned a bit over the last couple of months.
Mmh why do you blame it on helix though? Probably depends on the particular LSP backend. LSP integration in vim/neovim is not better than helix’s for sure.
I’ve recently had issues with ESLint (which is actually the LSP used internally by vscode, broken out). The newer versions of the LSP use a different mechanism (pull-based messaging or something like that? I forget the details) that Helix just doesn’t support. Neovim, I believe, does support this new mechanism.
There is a pull request open to fix the issue in vscode, and for now you can always downgrade to an older version of the ESLint plugin, but it cost me a couple of hours the other day trying to figure out how to make all the pieces talk to each other properly.
FWIW, this isn’t just a Helix issue, it’s also partly that ESLint doesn’t have an official LSP outside the one used internally by vscode. And I still really enjoy using Helix, although I think I’ll enjoy it more once it’s easier to configure it more with plugins and more complex integrations than just the LSP system.
Out-of-the-box LSP support was the main reason to try Helix. And while in the beginning everything just worked, after a couple of months it became less stable for an unknown reason. I used the Go language server, and while it might not be true for this specific piece of software, in general all official Go code is of pretty high quality with very few knobs, so I was not looking in that direction much.
I still plan to give Helix another try in a couple of months and hope it’s better.
Wow, this is my conference. Happy to see this submission here!
Fun aside: Steve did not give the talk I approached him for, but the talk I wanted :D.
I would like to add a small ad: OxidizeConf 2025 happens next year in Berlin (and there may even be another one somewhere else), and if you have a cool Rust thing to talk about, I think the lobste.rs clientele is the best speaker material. Our goal is to have talks that a) are about a real, preferably deployed, preferably industrial thing, and b) are not marketing kitsch. It’s fully fine to talk about names, brands, visions, etc. briefly, but your talk will be graded by how well you extract the technical learnings from your work.
MPL 2.0 has a number of agreeable properties; e.g., it’s file-based copyleft rather than project-viral, which makes it easier to share smaller parcels of source with other open-source projects. The patent protections are valuable, as they have also been for the (similar) CDDL, where those grants are the only thing that stands between the children of OpenSolaris and their rapaciously litigious former steward. Flicking through our internal documentation, I also see a mention of the explicit representation of ownership and right to contribute that’s in the licence, thus obviating the need for an explicit CLA (something that arguably harms open-source projects and communities in general).
It depends on what it is, though. For libraries where we don’t anticipate the need for any of the stuff above, we also just try to fit in with the prevailing ecosystem; e.g., many or most of our thin FFI wrapper crates are (or could be, if we are reminded!) something like dual Apache 2.0 and MIT if it helps people adopt them.
The MPL is truly a fantastic license, being copyleft that integrates into the wider FOSS ecosystem rather than isolating itself into its own world.
The FSF consistently misrepresents the GPL and regularly does a rather nasty motte-and-bailey regarding what free software means. For example, at https://www.gnu.org/licenses/why-not-lgpl.html:
Using the ordinary GPL for a library gives free software developers an advantage over proprietary developers: a library that they can use, while proprietary developers cannot use it.
This is, of course, not the whole truth. I write a lot of free software, but it tends to be MIT + Apache licensed. As such, I cannot legally depend on a GPL library. The entire GPL ecosystem is simply closed off to me, a free software developer. (For example, I cannot use GNU readline.)
My understanding was that Durov left Russia because he didn’t want to just open up Telegram and give access to the government there. Specifically, to shut down the chats of the opposition followers. So he went to the West.
Now in the West, he’s actually being hounded in a worse way than in his native country.
That’s half the story. The other half is that, after Durov left and started Telegram, the Russian government tried to block Telegram in 2018, possibly in part over concerns about the Telegram Open Network. Telegram largely managed to evade the ban, and the authorities sort of turned a blind eye to it (parts of the Russian government actually continued to run channels on it).
However, in 2020, Telegram, the General Prosecutor, and the Roskomnadzor reached an agreement about “cooperat[ing] in combating terrorism and extremism on the platform” (quote from official press release here). The details of this agreement, or even whether such an agreement was actually reached or the Russian government just gave up, aren’t known AFAIK, but this is the typical legalese for “we figured out a mutually-beneficial arrangement”.
It’s not like we have much substantial information to go on, so I’m not going to speculate about what prompted his arrest. But I would like to point out that, despite the flurry of material about how he stood up to the Kremlin that surfaced back in 2014, there’s an overall feeling that, if not Durov himself, then at the very least Telegram, is on somewhat cosier terms with the Kremlin than their pre-2020 history would suggest.
I would like to point out that either Durov or Telegram eventually agreed to collaborate with Russian authorities after Durov left Russia, and that while the details of their agreement aren’t public, it was mutually advantageous to a high enough degree that the Russian authorities dropped the case, and both the Russian government and various government-affiliated actors are now happy to run their own channels on Telegram.
Yes, among other things, this has led Telegram users to question how much “at odds” Telegram and the Kremlin are now. It got some coverage in Western media a while back (see e.g. here). Telegram has been pretty happy to take down channels at the request of the Russian government for a while now. Promptly, too: earlier in January, during the protests in Bashkortostan, they started blocking local channels in a matter of hours.
To be fair I believe Telegram took down a number of channels that were designated by the UK government as coordinating the recent riots in that country.
Right, I don’t mean to suggest that he’s working really well with the Kremlin but not at all well with the folks at Élysée, or that he’s in some strange conspiracy, or some other weird neocon thing. Just that the staunch defender of free speech in Russia persona thing is old and quite possibly out of date. It may have been true once, or at least true enough for Western media outlets to work with Durov’s PR and media agents and build something good, but the way Durov’s current company approaches its relations with national governments today is different from Durov-era VK.
Like anyone who’s been in tech for more than like six months I’m very skeptical about what governments and their institutions do and why. It’s just I’m equally skeptical of what rich people do and why.
Durov settled in the UAE (Telegram is based in Dubai) via a purchased citizenship. That’s hardly a bastion of freedom. His French passport was issued later, see my earlier comment in this thread.
Multiple people in Russia are calling for his release, including Maria Butina (deported from the US for being a literal spy), Dmitry Medvedev, and Edward Snowden[1]. There’s a lot of stuff we don’t know about this yet. Maybe he feels safer in French custody than possibly being deported from the UAE to Russia.[2]
[1] source for Butina and Medvedev, article in Swedish Dagens Nyheter. Snowden, personal communication with someone more plugged into news in Russian than I am.
I guess the direction that operating systems need to take is more POLA (principle of least authority), like what Endo does for JavaScript: executing third-party code in an isolated environment.
The bug was introduced in Vixie cron; the patch that introduced it was incorporated into OpenBSD in 2023 but not into FreeBSD. Since that patch, the step value is no longer range checked. From the OP:
In May of 2023, significant changes were made to the range and step handling code of a crontab entry in Vixie Cron. A new function, set_range() was introduced in entry.c. This patch was incorporated into OpenBSD in June of 2023.
/edit
Funny to read that after that patch in 2023, FreeBSD did consider using the OpenBSD version of cron instead of their own fork but didn’t move over because of time constraints:
I like the idea of using OpenBSD as an upstream, but that would take a lot more time than I have right now. (Most or all of my FreeBSD time is on Dell’s clock.)
FreeBSD is among those that use Vixie cron, though alternative implementations can be installed via pkg. I don’t see any associated FreeBSD security advisory though, at least, not yet: https://www.freebsd.org/security/advisories/
One question that comes to mind is how this allocator can be hardened against attacks. Zig makes buffer overflow and use-after-free vulnerabilities less common than in C, but they can still happen, and when they do, allocator exploitation can provide a route to much more damaging attacks.
For example, since SmpAllocator uses a linked free-list, it looks pretty trivial to have it return any address you want, if you can modify a freed slot. Pointer encryption would be pretty easy to add to SmpAllocator, and can help with this, though it’s not foolproof.
One example of an allocator with a lot of hardening is OpenBSD’s malloc. It has canaries, some double-free and use-after-free protections, guard pages, and more.
The two things I like most about the EUPL are: 1. the protection against the SaaS loophole, by including ‘or providing access to its essential functionalities’ in its definition of ‘Distribution’ or ‘Communication’. And 2. that both static and dynamic linking of some EUPL-licensed software doesn’t automatically make the combined work a Derivative Work. It’s comparable to, but better than, the LGPL on that front.
Good enough is the real enemy of good
Another topic for another post!
good enough is the enemy of all change. Terms and conditions apply, do your own risk assessment before changing tech stack, yada yada
then it’s not good enough ;-)
While I think it’s fun to build tools and break from general advice, things like this almost consistently have the same markers that make me go “oh no”. Why use RSA over an ECC algorithm? Why is this not using an authenticated encryption algorithm instead of AES-CBC? I don’t feel like a lot of the choices are very well justified in the blog. In the end this just strikes me as a use case for age (which, btw, is very small: 4k LoC with tests), over potentially becoming reliant on broken things.
Unauthenticated encryption is insecure. Full stop.
@dsagal, if you’re determined to write your own tool, please consider using one of the bindings to libsodium and its sealed box abstraction.
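To illustrate the property a sealed box (or any authenticated mode) buys you, namely that tampering is rejected rather than silently decrypted, here is a stdlib-only encrypt-then-MAC sketch. The XOR keystream is a toy stand-in, not a real cipher, and there is no nonce handling; it exists only to show the MAC check:

```python
import hashlib
import hmac
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy keystream for illustration only; NOT a real cipher.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def seal(enc_key: bytes, mac_key: bytes, msg: bytes) -> bytes:
    # Encrypt-then-MAC: the tag covers the ciphertext.
    ct = _keystream_xor(enc_key, msg)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag

def unseal(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("tampered ciphertext")
    return _keystream_xor(enc_key, ct)

ek, mk = secrets.token_bytes(32), secrets.token_bytes(32)
blob = seal(ek, mk, b"the secret")
assert unseal(ek, mk, blob) == b"the secret"

# Flip a single bit and decryption refuses instead of returning garbage:
tampered = bytes([blob[0] ^ 1]) + blob[1:]
try:
    unseal(ek, mk, tampered)
    raise AssertionError("tampering went undetected")
except ValueError:
    pass  # tampering detected, as it should be
```

A real sealed box additionally handles key exchange and nonces for you, which is exactly why reaching for libsodium beats assembling these pieces by hand.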
Also, please stop encrypting with RSA directly
I couldn’t believe the code went out of its way to have a path where it encrypted the plaintext with the public key.
Ugh. I promised myself that I wouldn’t dunk.
Tough crowd, but nice to have links that actually explain a problem that’s actually relevant. Thanks @quad! The unauthenticated encryption complaint was convincing. I verified that it is in fact possible to modify the output of the script (via its IV portion) to get the recipient to silently see different data. It’s not an attack mode (MITM) that we actually care about (other problems with that anyway), but it does seem silly not to use authenticated mode when it’s easy.
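That IV trick is easy to reproduce. The sketch below swaps AES for a toy XOR block cipher, since only CBC’s chaining structure matters for the attack; the keys and messages are made up:

```python
import hashlib
import secrets

BLOCK = 16

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Stand-in block cipher for illustration (a real demo would use AES):
def _enc_block(key: bytes, block: bytes) -> bytes:
    return _xor(block, hashlib.sha256(key).digest()[:BLOCK])

_dec_block = _enc_block  # the XOR stand-in is its own inverse

def cbc_encrypt(key: bytes, iv: bytes, pt: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(pt), BLOCK):
        c = _enc_block(key, _xor(pt[i:i + BLOCK], prev))
        out, prev = out + c, c
    return out

def cbc_decrypt(key: bytes, iv: bytes, ct: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        block = ct[i:i + BLOCK]
        out, prev = out + _xor(_dec_block(key, block), prev), block
    return out

key, iv = secrets.token_bytes(16), secrets.token_bytes(16)
p1 = b"PAY ALICE  $0001"  # known first plaintext block (16 bytes)
ct = cbc_encrypt(key, iv, p1)

# The attacker (no key!) rewrites only the IV: since the first decrypted
# block is Dec(C1) XOR IV, setting IV' = IV XOR P1 XOR P1' flips it to P1'.
p1_forged = b"PAY MALLORY$9999"
iv_forged = _xor(iv, _xor(p1, p1_forged))
assert cbc_decrypt(key, iv_forged, ct) == p1_forged
```

This is exactly why an authenticated mode matters: the same bit-flip against an AEAD ciphertext fails verification instead of silently decrypting to attacker-chosen data.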
PR here: https://github.com/gristlabs/secrets.js/pull/4
I also dropped support for encrypting with RSA directly in the same PR. Disappointingly, neither reason in the linked article applies to this use case, so I can’t verify anything that’s wrong with it, but I can’t justify keeping it anyway, since it raises questions and is only there because that’s what the script started with, i.e. historical reasons.
Those changes help a lot, things are much simpler and more robust with just that imo.
What attacks do you care about?
Good question, that’s what I start the article with, but don’t say it explicitly. I am assuming I can talk to my colleague freely, no MITM or impersonator involved. I am concerned that putting a secret in plain text would mean it’s present on systems outside of my knowledge or control (like Slack servers). So I am assuming the messages in the channel can leak, and I am concerned about confidentiality.
That strikes me as quite in-the-moment. A future leak of a private key or a spearphish would allow an attacker to get all your secrets quite conveniently from Slack.
Personally, I use https://magic-wormhole.readthedocs.io/ to ship sensitive material between machines.
A big reason to use symmetric crypto over asymmetric has always been performance.
Should have been released together with Snow Leopard, the best desktop OS ever.
As a reminder, openrsync exists.
I’m an openbsd lover like any other, but it’s worth noting that openrsync doesn’t implement all of rsync, and my impression is that work on it is pretty slow.
Written in C, or…?
Yes, by people who know how to write secure C.
That’s unnecessarily rude towards rsync maintainers, and also disingenuous.
Everyone eventually messes up in C, including OpenBSD devs. They wrote the code that led to CVE-2023-25136 and CVE-2024-6387, two remote code execution vulns in OpenSSH. Can we stop pretending that the solution is to just find some kind of mystical superhuman C programmers who don’t make mistakes?
It was only a response to the IMO short sighted “Written in C, or…?” and about openrsync, not about rsync.
/adding:
I agree that finding superhumans is not the solution and encourage new projects to not start with C. OpenBSD is a bit of a different world compared to other OSes. Because of all their mitigations, most memory management mistakes in a program practically result in a DoS at worst, not RCE (as can be seen with the two mentioned CVEs). AFAICT CVE-2023-25136 has not been shown publicly to be an RCE because of the privilege-separated design of OpenSSH.
That is true for the 2024 CVE (“regreSSHion”), but for the 2023 CVE the opposite is true: security researchers initially tried to exploit it on GNU/Linux but were unsuccessful, then pulled it off against OpenBSD due to the way their allocator works: https://seclists.org/oss-sec/2023/q1/92
More generally, I think it’s time to push back against the truism that OpenBSD is singularly secure because of its mitigations. It’s repeated all over the internet but rarely substantiated. At https://isopenbsdsecu.re/ there is a more nuanced in-depth analysis of the various mitigations, how well they work and how other operating systems compare.
I just read it as well, but it’s not an RCE against sshd; then again, maybe that was also not what you were trying to say and I misread it. But from that same post:
Quickly skimming just that mailing list thread, it doesn’t look like Qualys was able to accomplish steps 2 and 3.
I agree that there is some “feel-good”-risk knowing that you’re on OpenBSD with all their mitigations. The fact that not so many exploits are known I think is partly because Linux and Windows are easier targets with far more users and thus consequences.
I’ve seen that one back in the day, I remember I found it interesting.
What alternatives do we have though? In cases where we have CPU cycles and memory to spare, and portability isn’t too much of an issue (say we’re targeting no more than the 3 big OSes), sure, we have lots of alternatives, including Rust and many garbage-collected languages that have a native compiler.
Me however, I like to write foundational libraries that run everywhere. Which at least means a C API, most probably a C ABI, and if I’m to actually support a gazillion platforms… C code. Though I can see myself generating C code instead of writing it by hand. That would solve many problems.
Now to address the superhuman part, my superhuman skills are rooted in automated tests (property based mostly), and sanitisers. Without those I revert to being a mere mortal who cannot hope to write correct C code.
Is rsync a library or an application?
An application, so it doesn’t apply.
But.
It is highly likely this application is made up of a number of libraries. I personally like to isolate the data processing part from the I/O, and the former can be remarkably independent from the environment — unless we want to take advantage of stuff like vector instructions. That makes those parts very easy to test, but also very tempting to make portable. To name one example: cryptographic protocols. I systematically isolate them in a pure memory API, one that reads & writes buffers.
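As a sketch of that split (hypothetical names, Python instead of C): the pure core reads and writes buffers only, which makes it trivially testable and portable, while the thin shell is the only part that touches the OS.

```python
import struct
import zlib

def frame(payload: bytes) -> bytes:
    """Pure core: buffers in, buffers out. No files, sockets, or clocks.
    Made-up wire format: u32 length, payload, u32 CRC32."""
    return (struct.pack(">I", len(payload))
            + payload
            + struct.pack(">I", zlib.crc32(payload)))

def unframe(buf: bytes) -> bytes:
    """Inverse of frame(); still pure, so corruption handling is testable
    without any I/O fixtures."""
    (n,) = struct.unpack_from(">I", buf, 0)
    payload = buf[4:4 + n]
    (crc,) = struct.unpack_from(">I", buf, 4 + n)
    if crc != zlib.crc32(payload):
        raise ValueError("corrupt frame")
    return payload

def send_file(path: str, sock) -> None:
    """Thin, non-portable shell: all the I/O lives here."""
    with open(path, "rb") as f:
        sock.sendall(frame(f.read()))

# The core round-trips with no I/O in sight:
assert unframe(frame(b"hello")) == b"hello"
```

The same core compiles to any target; only send_file needs porting, which is the point of the split.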
Thus, for new projects I don’t care much for C as far as I/O is concerned. We can be closer to the system and all that, but that doesn’t really matter, since those bits aren’t portable to begin with. For the portable parts that are worth writing a public API for, however, I do tend to favour C over anything else. It’s a weak, dangerous language, but its API is used everywhere, and there’s a compiler even for your toaster. Done well, it’s pretty easy to write bindings for any other language and build the application from there.
That said, FFI overhead is no joke. Not for the CPU, but for the poor humans who have to call C from whatever better language they’re using. It is critical that the functionality is significant enough, and the API small enough, to justify the hassle. Otherwise just use the application language.
Another route, if the application layer is small enough, is to just write everything in C. Long term though, I’m actually looking forward to ditching C into the bin of history where it belongs. I just have yet to settle on a replacement. Possible candidates include Rust, Zig, Odin, Jai, and probably as many others I’m not familiar with. Even then though, I’m not entirely comfortable with any language taking over. What we actually need is a protocol for different parts of a program to talk to each other, one that does not force every part to be written in the same language. And so far, well… the best candidate is probably a subset of C — specifically, the most popular C ABI for each platform.
These all sound like post-facto justifications for a decision that’s already been made. Which is fine, let’s just be clear about it. You want to use C, everything else comes second to that.
They’re not. When I started Monocypher 7 years ago I considered using something else, but sadly, to make something portable, there was only one choice. Even now I’m not aware of any other choice. If you have one I would be elated to learn about it.
I don’t.
We are in the context of an application here, not a library. The Monocypher example is not relevant here.
Even though I was not talking about rsync, but purely about openrsync, I’d like to note that I did gut rsync back in 2018, and this hardened version is not vulnerable to these new CVEs: https://github.com/timkuijsten/hrsync
/edit when properly used with the two new options I’ve added: --chroot and --dropsuper (I did it for some backup program I wrote).
By people whose mouths write checks their record can’t cash.
hmmmmm
Yeh, which Apple replaced /usr/bin/rsync with in a recent update, and now behaviour I use all the time (browsing remote sources) no longer works. Thank pkgsrc I can still install a working version.

In the spirit of hoping to be corrected: it appears that most of these depend on either a running rsyncd (which I haven’t seen in 20 years) or the attacker having access to the source filesystem while an rsync is in progress.
Many vendors, especially those of open source operating systems, use rsync in daemon mode to sync build artifacts to various mirrors. HardenedBSD is one such vendor.

Edit[1]: Clarify how the daemon is run (rsyncd -> rsync in daemon mode).

Anecdotally, rsyncd is used by the Tier 0 and some Tier 1 package mirrors in Arch Linux. Infrastructure source: https://gitlab.archlinux.org/archlinux/infrastructure/-/tree/master/roles/dbscripts?ref_type=heads

There is a similar question open on the mailing list about whether the code is only active in rsyncd or maybe also when invoking rsync --server --sender via ssh (currently unanswered): https://marc.info/?l=oss-security&m=173688743232255&w=2

Synology uses rsync (and I think rsyncd, but maybe not?) to migrate/sync data between their NAS boxes.
Why a custom encryption scheme instead of something existing and audited like wireguard?
So wireguard isn’t an encryption scheme, it’s an entire protocol.
I think your question really is why not use wireguard as the underlying dataplane for a service like this?
Well, because where’s the fun in that? I’ve been hacking on sanctum for > 1 year and it’s very much in a production-ready state. Hacking on things and building new cool stuff shouldn’t be limited to using only existing and well-established projects.
Has it been audited? I couldn’t find anything about that, and I personally wouldn’t want to call such a project production ready without an audit. It’s not a question of skill, just of being a mere human.
The tech seems cool (I read the SEC-T slides, pdf). I’m glad to see in depth sandboxing efforts, I need more of that in my life!
Completely understand the reasoning, but I do wonder why not go with some Noise construction? Or maybe I should ask: why not pubkey auth? It would make management less of a hassle.
That’s a fair question.
There are actually several reasons. One is that it’s just easier to implement a one-way key offering when you don’t have to do an interactive key exchange. Another is that asymmetry is just more complicated.
The key management difficulty is actually alleviated here by using black keys and providing an easy way to distribute these to your devices. The KEK management isn’t tricky either: an offline laptop that you use for ambry generation (the wrapped bundles that are uploaded to the cathedral) is all it takes.
I am not saying it’s EASIER per se, but it’s not as hard as it sounds.
The big thing one does not have here is PFS, in case you accidentally tweet one of your KEKs.
But with Kyber only recently being standardized as ML-KEM, the sane requirement of doing a hybrid key exchange when using asymmetry (ECDH+ML-KEM, for example) makes the code base a lot more complex. More so than I am comfortable with.
When can we expect part 3? :)
It is mostly written but still needs a lot of work. Not sure I will manage to do that during the holidays, so probably around January.
I don’t plan my blog, else it would become a chore ;-)
awesome! can’t wait. ;)
it’s here : https://lobste.rs/s/okqjn5/20_years_linux_on_desktop_part_3
thanks for the heads-up! :)
Interesting post! I’ve been working on a simple database (document store) to support offline-first. It really depends on the business case what merge strategy is appropriate and what can be automated. In principle, a conflict is something that cannot be resolved in a generic way.
This didn’t make it into the post but I actually do recommend people think about this as a database problem.
If I told you that I was going to have two database write replicas partitioned and accepting completely divergent writes and then I was going to use CRDTs or OT or some other ✨ magic ✨ to merge them together, you’d rightfully balk. But if one of those replicas is a browser with a text document in it or something, somehow a lot of people think that’s ok!
The cases this is fine are generally cases where direct conflicts are unlikely, or you can simply ignore direct conflicts, or the data is restricted enough there is no such thing as a conflict (like a monotonically increasing count).
I think this is the right mental model. I also think that it’s unhelpful that we’ve strayed from CRDTs as I learned them.
When I was first introduced to the idea they were called “Commutative Replicated Data Types”. And I don’t think that name is any better or worse than conflict-free, but I do think it gives us a tool to talk about what you’re bringing up in the blog post.
For instance in your above question, if the data type was an integer, and the operation was “sum”. I would pretty much be on board with saying that we CAN magically merge them! (with some edge cases around maximums?). Sum on integers I think we’ll all agree, is at least mostly, commutative.
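To make the sum example concrete, here is a minimal grow-only counter sketch (no particular CRDT library assumed): each replica keeps its own count, merge is element-wise max, and because merge is commutative and idempotent, divergent replicas reconcile to the same value regardless of order.

```python
def merge(a: dict, b: dict) -> dict:
    """Element-wise max over per-replica counts (a G-Counter merge)."""
    return {k: max(a.get(k, 0), b.get(k, 0)) for k in a.keys() | b.keys()}

def value(counter: dict) -> int:
    """The counter's value is the sum over all replicas."""
    return sum(counter.values())

# Replicas A and B increment independently while partitioned:
a = {"A": 3, "B": 1}
b = {"A": 2, "B": 4}

# Merge order doesn't matter, so there is nothing to "resolve":
assert merge(a, b) == merge(b, a) == {"A": 3, "B": 4}
assert value(merge(a, b)) == 7
```

Note the restriction doing the work: each replica only ever increments its own slot. The moment both replicas can write the same field (a document edit, say), max stops being a meaningful resolution and the “conflict-free” framing breaks down.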
But are document interactions commutative? I think that the answer is probably “NO”.
[edit] Immediately after writing this I got to wondering about when commutative became conflict-free and I wonder if one of the folks working on textual CRDTs had the same pedantic math realization and changed the name.
Anders! How are you? Agree with all of this, and also, your last point is kind of funny. Another thing we took out was a section with a similarly pedantic header: “‘Conflict-free’ as in ‘we pretend conflicts do not exist’”. We pulled it because it felt mean-spirited and we didn’t want to make it a post about making fun of people, and in any event that name is probably not changing.
I’m great! Sent an email so we don’t have to catch up via lobste.rs :-D
My personal favorite: “Introduced dhcp6leased(8), a daemon to acquire IPv6 prefix delegations from DHCPv6 servers.”
It allowed me to delete dhcpcd, the one and only third-party package on my router.
Do you use vanilla OpenBSD on your router? What has been your experience so far regarding hardware support and performance?
I am wondering if it is already possible to set up an OpenBSD router for a 10 GbE home network.
Saturating 10 GbE links requires some serious horsepower from the entire chain (including good-quality RJ45 cables that are actually manufactured to spec), and a fast CPU, which is a problem with most of the “router” type boxes that ship with Celerons and Atoms. Also see https://marc.info/?l=openbsd-misc&m=167665861931266&w=2. With the right hardware and a good amount of tweaking/optimization you may, in theory, reach those speeds, but remember that speed is not a primary goal for OpenBSD, so the correctness and security involve tradeoffs that may sacrifice speed that is taken for granted with other *BSD firewalls.
I have a stock OpenBSD router/firewall with pf enabled, on a Gigabit internet connection, and can only push like 800 Mbit/s or thereabouts. This is with Protectli VP2420. Not optimized, but does the job reliably and I am very happy with it.
Thank you very much for taking your time to write this down. This all goes on my reading list! :)
just for comparison’s sake, I have a gigabit symmetrical fiber connection and was using a protectli device with an Atom CPU to run OpenBSD+pf on the router and saw similar max throughput. I replaced that box with an older Dell Optiplex SFF PC with an i5-6500 and now have no issues saturating the connection.
Yes, you have to make sure you have a device with proper, well-performing NICs. In my case I have an older APU2 with 3x Intel I210. I’ve read about tests that claim to come close to 900 Mbit/sec (but I’m not sure if it was on OpenBSD), but my ISP uses PPPoE, which means that receive-side scaling and the other forms of TCP/UDP offloading that the I210 offers can’t be utilized. So one of the four CPUs is the bottleneck and I don’t get much further than 480 Mbit/sec. But this is an older and passively cooled/low-power device.
Ah, that’s interesting! Thank you very much! I will definitely pick OpenBSD up then for some tests.
As an outsider (and at the risk of causing a flamewar), I have to wonder if there’s some connection to the zealotry that goes along with Rust.
Over the years I’ve seen a lot of “militant [insert new language] zealots”, but Rust (in my opinion) takes it further than languages like Haskell. It’s not enough for Rust to interact with an existing library: if that library isn’t written in Rust, then they need a new one entirely.
Seems like a lot of pressure.
This zealotry’s prevalence is overstated, and when it happens, is often from enthusiastic inexperienced users, not from the folks working on the language itself.
I can’t think of a single person that I know of involved in Rust leadership who thinks like that.
Partially disagree here.
Indeed the folks working on the Rust language itself are great.
My past experience with “enthusiastic” users, as you call them, is that they are usually strong domain experts with enough knowledge of Rust to anticipate that a (re)write in Rust is the right thing to do, even when it may not be.
(The indie dev guy that coded Rust for 3 years - I can’t find the link - made an excellent point; Rust is a language that forces a developer into a path of correctness. But correctness may not always be conducive to success).
You mean this one? https://lobste.rs/s/nyikhk/lessons_learned_after_3_years_fulltime
This is an idea I’ve had kicking around for a bit but I do think there is a deep connection between the burnout in the blog post, the zealotry you mention, and a handful of other complaints that have surfaced from time to time about the language and its community.
My root observation is that the mindset of the community and the guiding principle of the language and its design is “we’re going to get this one right,” or in other words, the Rust contributors have set a high bar for themselves. This spills out as the burnout that gets brought up from time to time: if the bar is high there are few people, or maybe only a single person, who can “do it right.” That’s not just a feeling a maintainer might have: given the complexity of the compiler or the subtlety of what you’re trying to implement there are just not a lot of people who can do the work! You solve this by forcing people into management positions where they’re required to mentor and delegate but that’s hard to do in practice in a volunteer, open source project. The result is the burnout spiral mentioned in the article: “work doesn’t happen unless you do it personally”, you get tired, you get burnt out.
The “doing it right” also makes the language into fertile grounds for zealotry. Everything’s been built to such a high standard you can look at any use case for Rust and find one place, or several places, where Rust has some advantage over the opposing choice. If Rust’s design nurtures these kinds of arguments you’ll see people who make those arguments come to Rust, find it suitable for their purposes and stay with it.
i think this is very true, along with the rest of the pattern you’ve identified, but it’s not specific to rust. http://rhaas.blogspot.com/2024/05/hacking-on-postgresql-is-really-hard.html
i think there is a pretty strong disparity between domains where bugs are ~fast to discover and fix, and domains where bugs might not show up for months or years. in the former you can kinda just try things and if it breaks oh well you fix it and it’s fine. in the latter, avoiding bugs requires a lot of domain expertise because you have a really hard time catching them with tests. https://youtu.be/tgaKAF_eiOg?si=P2QKBbZsGVYFAl4k&t=786
This also explains the perpetual 0.x versioning.
I’ve been busy modernizing symon this summer and I’m very interested in this, but also RFD 161 and 442. Will these be opened to the public as well?
For whatever it’s worth, we have opened RFD 161 Metrics data model.
awesome!
Like the author points out at the beginning of the article, the secret sauce is modal editing.
And there have been super interesting advancements in terminal modal editors fairly recently with editors such as kakoune and helix. Helix’s bindings in particular make up a language that I find more intuitive and expressive than vim’s (after having used vim and neovim for a long time).
These days I recommend helix over vim or neovim especially to newcomers, the learning curve is more of a curve and less of a wall.
I will put another one in the ring: Ki.
Ki is similar to vim in that it uses modal editing.
It’s similar to helix in that:
It’s different from both vim and Helix in that it splits the mental model into:
such that:
I think I made the explanation more complicated than the actual execution.
I’ve only been exploring it for a day. Navigation through syntax nodes is impressive, but also heavily reliant on the language’s tree-sitter grammar being decent. Also, I’m not sure how much of a leg up it is against helix’s LSP jump to symbol. But helix’s operations on syntax nodes surely feels like a second thought, when compared to Ki’s.
There are other goodies:
y or p copies or pastes to the editor clipboard, while \y or \p copies or pastes to the system clipboard.

I want to like ‘meow’, a modal editing package for Emacs. It clicks with me in a way that vi doesn’t (and I’ve been using vi in a minimal capacity for 30 years), and could be the first thing to really get me using modal editing. I just haven’t figured out a meow layout that works for me on both QWERTY and Colemak.
As a die-hard vi user I’ve been trying Helix for a while, but I had some stability issues with the language server, and my optimism has waned a bit over the last couple of months.
Mmh why do you blame it on helix though? Probably depends on the particular LSP backend. LSP integration in vim/neovim is not better than helix’s for sure.
I’ve recently had issues with ESLint (which is actually the LSP used internally by vscode, broken out). The newer versions of the LSP use a different mechanism (pull-based messaging or something like that? I forget the details) that Helix just doesn’t support. Neovim, I believe, does support this new mechanism.
There is a pull request open to fix the issue in vscode, and for now you can always downgrade to an older version of the ESLint plugin, but it cost me a couple of hours the other day trying to figure out how to make the pieces talk to each other properly.
FWIW, this isn’t just a Helix issue, it’s also partly that ESLint doesn’t have an official LSP outside the one used internally by vscode. And I still really enjoy using Helix, although I think I’ll enjoy it more once it’s easier to configure it more with plugins and more complex integrations than just the LSP system.
Out of the box LSP support was the main reason to try out Helix. And while in the beginning everything just worked, after a couple of months it became less stable for an unknown reason. I used the Go language server, and while it might not be true for this specific piece of software, in general all official Go code is of pretty high quality with very few knobs, so I was not looking in that direction much.
I still plan to give Helix another try in a couple of months and hope it’s better.
Wow, this is my conference. Happy to see this submission here!
Fun aside: Steve did not give the talk I approached him for, but the talk I wanted :D.
I would like to add a small ad block: OxidizeConf 2025 happens next year in Berlin (and, there may even be another one somewhere else) and if you have a cool Rust thing to talk about, I think lobste.rs clientele is the best speaker material. Our goal is to have talks that a) talk about a real, preferably deployed, preferably industrial thing and b) are no marketing kitsch. It’s fully fine to talk about names, brands, visions, etc. shortly, but your talk will be graded by how well you extract the technical learnings from your work.
https://oxidizeconf.com
Cool! Happy to hear there will be another one in Berlin. I was already feeling bad I missed this one. :)
Great talk!
A bit tangential, but can anyone elaborate on the choice for the MPL license? Maybe @steveklabnik or @jclulow?
MPL 2.0 has a number of agreeable properties; e.g., it’s file-based copyleft rather than project-viral, which makes it easier to share smaller parcels of source with other open source projects. The patent protections are valuable, as they also have been for the (similar) CDDL, where those grants are the only thing that stands between the children of OpenSolaris and their rapaciously litigious former steward. Flicking through our internal documentation I see also a mention of the explicit representation of ownership and right to contribute that’s in the licence, thus obviating the need for an explicit CLA (something that arguably harms open source projects and communities in general.)
It depends on what it is, though. For libraries where we don’t anticipate the need for any of the stuff above, we also just try to fit in with the prevailing ecosystem; e.g., many or most of our thin FFI wrapper crates are (or could be, if we are reminded!) something like dual Apache 2.0 and MIT if it helps people adopt them.
The MPL is truly a fantastic license, being copyleft that integrates into the wider FOSS ecosystem rather than isolating itself into its own world.
The FSF consistently misrepresents the GPL and regularly does a rather nasty motte-and-bailey regarding what free software means. For example, at https://www.gnu.org/licenses/why-not-lgpl.html:
This is, of course, not the whole truth. I write a lot of free software but it tends to be MIT + Apache. As such, I cannot legally depend on a GPL library. The entire GPL ecosystem is simply closed off to me, a free software developer. (For example, I cannot use GNU readline.)
The MPL does not have this issue.
Promising, this paper was also recently referenced by Ben Hawkes: https://lobste.rs/s/qar0gh/openssh_backdoors.
(Not my phrase, I borrowed it from somebody.)
My understanding was that Durov left Russia because he didn’t want to just open up Telegram and give access to the government there. Specifically, to shut down the chats of the opposition followers. So he went to the West.
Now in the West, he’s actually being hounded in a worse way than in his native country.
That’s half the story. The other half is that, after Durov left and started Telegram, the Russian government tried to block Telegram in 2018, possibly, in part, over concerns about Telegram Open Network. Telegram largely managed to evade the ban, and authorities sort of turned a blind eye to it (parts of the Russian government actually continued to run some channels on it).
However, in 2020, Telegram, the General Prosecutor, and the Roskomnadzor reached an agreement about “cooperat[ing] in combating terrorism and extremism on the platform” (quote from official press release here). The details of this agreement, or even whether such an agreement was actually reached or the Russian government just gave up, aren’t known AFAIK, but this is the typical legalese for “we figured out a mutually-beneficial arrangement”.
It’s not like we have much substantial information to go on, so I’m not going to speculate over what prompted his arrest. But I would like to point out that, despite the flurry of materials about how he stood up to the Kremlin that surfaced back in 2014, there’s an overall feeling that, if not Durov himself, then at the very least Telegram, is on somewhat cosier terms with the Kremlin than their pre-2020 history would suggest.
You would like to point out that there’s an overall feeling?
I would like to point out that Durov, or at least Telegram, eventually agreed to cooperate with Russian authorities after Durov left Russia, and that while the details of their agreement aren’t public, it was mutually advantageous enough that Russian authorities dropped the case, and both the Russian government and various government-affiliated actors are fine running their own channels on Telegram now.
Yes, among other things, this has led Telegram users to question how “at odds” Telegram and the Kremlin really are now. It got some coverage in Western media a while back (see e.g. here). Telegram has been pretty happy to take down channels at the request of the Russian government for a while now, and promptly, too: earlier in January, during the protests in Bashkortostan, they started blocking local channels in a matter of hours.
To be fair I believe Telegram took down a number of channels that were designated by the UK government as coordinating the recent riots in that country.
Right, I don’t mean to suggest that he’s working really well with the Kremlin but not at all well with the folks at Élysée, or that he’s in some strange conspiracy, or some other weird neocon thing. Just that the staunch defender of free speech in Russia persona thing is old and quite possibly out of date. It may have been true once, or at least true enough for Western media outlets to work with Durov’s PR and media agents and build something good, but the way Durov’s current company approaches its relations with national governments today is different from Durov-era VK.
Like anyone who’s been in tech for more than like six months I’m very skeptical about what governments and their institutions do and why. It’s just I’m equally skeptical of what rich people do and why.
The fact that it was possible at all to do this is an argument against using Telegram if you are any kind of political dissident.
Durov settled in the UAE (Telegram is based in Dubai) via a purchased citizenship. That’s hardly a bastion of freedom. His French passport was issued later, see my earlier comment in this thread.
Multiple people in Russia are calling for his release, including Maria Butina (deported from the US for being a literal spy), Dmitry Medvedev, and ~~Richard~~ Edward Snowden. [1] There’s a lot of stuff we don’t know about this yet. Maybe he feels safer in French custody than possibly being deported from the UAE to Russia. [2]
[1] Source for Butina and Medvedev: article in the Swedish Dagens Nyheter. For Snowden: personal communication with someone more plugged into news in Russian than I am.
[2] wild speculation: https://threadreaderapp.com/thread/1827622301363232885.html
I assume you mean Edward Snowden?
Correct, updated!
Edouard Neigeden
I guess the direction that operating systems need to take is more POLA (principle of least authority), like what Endo does for JavaScript: executing third-party code in an isolated environment.
I don’t know why this is marked as an OpenBSD vulnerability specifically. Vixie Cron is used in a lot of places.
The bug was introduced in a Vixie cron patch; OpenBSD incorporated that patch in 2023, but FreeBSD did not. Since that patch, the step value is no longer range checked. From the OP:
/edit Funny to read that, after that patch in 2023, FreeBSD did consider switching to the OpenBSD version of cron instead of maintaining their own fork, but didn’t move over because of time constraints:
FreeBSD is among those that use Vixie cron, though alternative implementations can be installed via pkg. I don’t see any associated FreeBSD security advisory though, at least, not yet: https://www.freebsd.org/security/advisories/
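To illustrate why the missing step range check discussed in this thread matters, here is a minimal, hypothetical Python sketch of cron-style field expansion — not the actual Vixie cron C code, just an assumed simplified model of parsing a field like `*/15`:

```python
def expand_field(spec: str, low: int, high: int) -> list[int]:
    """Expand a cron field like '*/15' or '5' into concrete values.

    Hypothetical simplified parser for illustration; field names and
    behavior are assumptions, not Vixie cron's real implementation.
    """
    if "/" in spec:
        base, step_str = spec.split("/", 1)
        step = int(step_str)
    else:
        base, step = spec, 1

    if base == "*":
        start, stop = low, high
    else:
        start = stop = int(base)

    # This is the kind of check the thread describes as missing after
    # the 2023 patch: without it, a step of 0 or an absurdly large step
    # is accepted and silently produces a broken or empty schedule.
    if step <= 0 or step > high - low + 1:
        raise ValueError(f"step {step} out of range for field {low}-{high}")

    return list(range(start, stop + 1, step))
```

For the minutes field (0–59), `expand_field("*/15", 0, 59)` yields `[0, 15, 30, 45]`, while `*/0` or `*/100` is rejected instead of being accepted into the crontab.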