You know why I like Rust? Because my entire goddamn life I’ve wanted to have something for writing video game and operating system code that is better than fucking C, and it didn’t exist. D wasn’t it, C# wasn’t it, C++ sure as hell wasn’t it, Pascal wasn’t it (though it might have been if I’d started a decade earlier), OCaml wasn’t it, Erlang wasn’t it, Forth wasn’t it, Scala wasn’t it, Eiffel wasn’t it… It wasn’t because such a language was impossible to make; even in 2005 it was very easy to see lots and lots of small, horrible things that C did wrong that could be gotten rid of, and the world would be a better place. It wasn’t difficult, it wasn’t a major research breakthrough, it was just that nobody had gotten around to it!
Rust, and now Zig, finally fucking got around to it. Then Odin and Hare showed up and lookie lookie, they could have been made in 2005 and looked almost the same! So I’m sorry for being overjoyed that after a lifetime of having nothing but blunt, beat-up chisels and some repurposed screwdrivers, desperately trying to figure out how to make something better or use them in a better way, someone has finally made a real nice, good, sharp chisel for me to use. If you have a job that requires a screwdriver, or a saw, or a wrench, that is just fine. I’m just happy to finally be able to chisel things without gouging my fingers off as often.
I’ve wanted to have something for writing [X] that is better than fucking C
it was very easy to see lots and lots of small, horrible things that C did wrong that could be gotten rid of and the world would be a better place
Rust, and now Zig, finally fucking got around to it.
This is exactly what the article is trying to criticise. “X is bad, use Y instead”. There’s nothing wrong with having an opinion about certain programming languages, but if one language becomes the answer to every problem out there, you may become oblivious to potentially better solutions.
To go back to your tool-analogy: you wouldn’t use a wrench to drive a screw, would you?
That… is exactly why I specified “for writing [X]”. I can tighten “video games” to “video game engines” if you want.
Not in 2015 when Rust 1.0 was released, really. Like D, Nim kinda starts off by assuming a GC and then tries to work down to manual memory management, which is quite a bit harder to do well than starting with absolute control over all memory and then building up conveniences on top of it. Having automatic memory management makes it a lot easier to implement a language ‘cause you can make all sorts of simplifying assumptions, things like “most things are accessed through pointers”, “objects can be moved around safely”, “pointers always point to valid memory”, “pointers can’t be fabricated out of integers”, stuff like that. But those assumptions stop working so well when you have manual memory management, and now your language is built around them.
That said, Nim does appear to be tackling those issues fairly well, though it also appears to be still something of a work-in-progress. I check in on it from time to time, it is a nice language, though some of its more Pascal-heritage design decisions seem kinda suspicious. Rust is very comfy for me ‘cause it’s basically OCaml on many levels. I don’t have many great use cases personally for the “higher than system lang, lower than scripting lang” niche that Nim and Go fill, but it’s a pretty good niche that deserves filling.
I don’t entirely agree. The language a program is written in tells me a few things:
These may not be the things that I care about the most, but they are things that I care about.
Rust is a weird case here because there are different styles of Rust that range from completely type- and concurrency-safe to worse than C in terms of their baseline safety properties (if you think UB is bad in C, wait until you see the things unsafe code can make the compiler do in safe Rust).
All but the last bullet are questions to ask yourself to pin down the language requirements for a problem domain, as suggested by the article.
The only valid question (IMO) to ask where the answer might deviate from the norm is your last bullet, “Will I need to learn a new language to fix bugs in it”. When in doubt, go with experience. Otherwise, always choose the norm.
That’s the question to ask when starting a project, but all of my points are things I care about when using a project. For anything except single-author projects, there are more people that went down the path of non-user to user to contributor than there are people who make the original decision about language.
I care about a bunch of these even if I have no interest let alone plans or ambition to take the step from user to contributor. If I notice that a project is (say) written in Python, I have reasonable grounds to suspect I might have to deal with some kind of version and dependency hell if I choose to attempt to deploy this software. (And as I don’t work with Python myself, solving even problems considered “easy” or non-issues by people who know their pips from their venvs can be very time consuming.) While that won’t stop me from using it outright, I’ll probably pick a different option, all else being equal. (All else is of course almost never equal, and sometimes there aren’t any other choices short of building my own, but it certainly influences my decision.)
Same here. I think I have contributed at one point or another to most things I use daily.
I don’t even use Rust for my own projects, and I’m not good at it, and find it quite annoying to read, but it’s been easy to contribute to projects that use it. I can’t think of something in a different language that has been less effort.
My server setup is built around docker, and traefik makes it very easy to expose your docker containers right in the docker-compose files, without needing much configuration.
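For anyone who hasn’t seen this pattern: a hedged sketch of what exposing a container through Traefik labels in a docker-compose file can look like (the service name, image, and domain are made up, not from the comment):

```yaml
# Hypothetical docker-compose.yml fragment. Assumes Traefik is already
# running with its docker provider enabled and a "websecure" entrypoint.
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
```

The appeal is that the routing config lives next to the service definition, so there’s no separate reverse-proxy config file to keep in sync.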
Appreciated that you’re trying to help here, but open source projects don’t just get maintainers by looking for random people on the internet. A new maintainer needs to be invested in the project, have an existing history of high-quality contributions, and be trusted to maintain the project’s vision, i.e. any potential new maintainers will already be known to the existing maintainers.
You’re correct, but I don’t think it defeats OP’s purpose.
There are many projects that I use (but have never contributed to), that I’d be willing to find the time to inherit if no one else would. I’m not saying the outgoing maintainer should hand it off to me straight away, but this serves as a plaza to put outgoing and incoming maintainers in touch with each other.
There’s also https://adoptoposs.org for this specific purpose
Right. I find it hard to work on or understand things I’m not interested in (or motivated to be), so I’m not sure how well this drive-by maintainer search could work.
The idea for this list came from a Mastodon thread. Someone I follow was feeling burnt out about his project and was looking for someone to help him review submissions for the 512kb.club, to which I offered my help. I’m now reviewing multiple PRs a day for the project.
seeking-maintainers.net is an experiment to see if there is demand for such a platform in other parts of the internet.
There are some benchmarks that I don’t remember. Essentially, QBE aims to get you 70% of the way there with 0.01% of the complexity. The other 30% are optimizations that would require huge complexity. This means that code compiled with LLVM is faster in most situations, since LLVM does a lot of optimization, but QBE itself is far more lightweight.
There is a great lightning talk by Drew DeVault you might be interested in:
This is because BTRFS does not know about the drives yet when the filesystems are mounted
Does anyone know why this happens? I haven’t experienced it on Ubuntu / CentOS / Fedora. Even mounting btrfs from a file just works without extra steps there. Why does Alpine need an explicit scan?
The btrfs-packages for distros like Ubuntu/Fedora install an init-script that does this. Alpine isn’t “there” yet, so you have to manually add it.
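One way to add it manually is a boot script that runs the scan before multi-device filesystems get mounted; a hedged sketch using Alpine’s local service (the file path and ordering are assumptions and may need adjusting for your setup):

```shell
#!/bin/sh
# Hypothetical /etc/local.d/btrfs-scan.start
# Register all btrfs member devices with the kernel so that mounting a
# multi-device filesystem can find its other members.
/sbin/btrfs device scan
```

On distros with the systemd-udevd rules installed, this step happens automatically as devices appear, which is why the mount “just works” there.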
Do you know where specifically? In Ubuntu for example I can only find 64-btrfs-dm.rules, which I don’t believe will kick in after running mkfs. And I’m sure you can mkfs, then mount without scanning.
I imagine it is just that. The systemd-udevd vendored rules and the btrfs built-in in udevd remove the need to use the btrfs-progs btrfs device scan subcommand.
I only dug about this deep: https://github.com/systemd/systemd/blob/main/src/udev/udev-builtin-btrfs.c#L37 https://github.com/kdave/btrfs-progs/blob/master/common/device-scan.c#L233
I’m assuming the device scan ioctl probably does most of what udevd would know from running its own filesystem detection built-ins or utilities. I checked /usr/include/linux/btrfs.h but the documentation in the header is very sparse. I guess the next step would be poking through the kernel to confirm the ioctl behavior is what I think it is. Maybe I’ll update this post if I get to it.
Ok so this makes sense - mkfs.btrfs does the scan ioctl call itself on the device when it’s finished: https://github.com/kdave/btrfs-progs/blob/c0ad9bde429196db7e8710ea1abfab7a2bca2e43/mkfs/main.c#L1657
Umm, it seems it would require different signups and logins for different services? It would be great if you could add SSO.
I recently built a simple compiler in Rust. I’m documenting what I did, and plan to release this resource as a book.
I think this idea needs to go back to the drawing board.
Thanks for your feedback. Some details and suggestions would be helpful.
No it’s not, it’s a complete disaster. You encourage people to enter sensitive data (i.e. passwords) into a random website. You might have implemented this completely client-side, but there is no sane way to enforce that it is always done this way. It also leads people to believe they can put their secrets into a (maybe phishing) website.
I’m too lazy to check this right now, but I suspect this can’t be changed for one specific user after, for example, a compromised host.
I recommend using messengers like Signal or Matrix for such things. They make targeting individual users hard, and they have applications for every platform with a good track record.