Data point for you… the machine you linked has CPUs that get a geekbench of 507/3069 (single/multi). The Ryzen 7 4800U gets a geekbench of 1030/5885.
I have a SimplyNUC Ruby R7 with that Ryzen CPU, 64GB of ram, and two 2TB SSDs. It’s substantially more power efficient than that server. The only downside is fewer cores overall (8x2 vs 6x2x2).
That said, the NUC is the VM host for my k8s cluster, a Prometheus/Grafana VM, 2x local DNS resolvers, a backup WireGuard server, and a smattering of test VMs.
A year ago I went through the same thought process you’re describing now. I decided on the NUC for power, noise, and space - but especially power.
edit: I should clarify - while that server linked is substantially cheaper than the NUC, the power, noise, and “accoutrement” of rack-mounted servers are going to add up. So I balanced that cost against my available disposable income at the time and was fine with paying more for something tiny.
There are many similarities between the two, though I don’t have any experience with Nix to comment further here.
I don’t understand why one would use Guix without trying Nix first.
What I gathered is that the author has a preference for Scheme and thus also for Guix, which I think is a sufficient argument.
However, I’d be very interested in reading an in-depth comparison between the two projects, as they target the same niche and are quite related.
Early on Nix had the opposite problem. You would ask it to install firefox, and completely unprompted it would install the adobe flash plugin to go with it. They told me if I wanted firefox without that awful shit to install a separate “firefox-no-plugins” package or something ridiculous like that.
It hasn’t done that for a while, but it’s taken like a decade to recover from the lost trust. I couldn’t handle the idea of running an OS managed by people capable of making such a spectacularly bad decision.
Doesn’t that depend on your affinities? If you like lisp languages then Guix seems like a logical choice. I guess it also depends on what software you depend on in your day-to-day activities.
There is more to consider here:
Guix is a GNU project with the unique lenses and preferences that come along with that. It is also a much smaller community than Nix, which means fewer packages, fewer eyes on the software, and I would argue also less diversity of thought.
I personally prefer Scheme to Nix’s DSL-ish language, but as a project I think Nix is in a much better position to deliver a reasonable system of such levels of ambition.
It also frustrates me that Guix tries to act like it’s not just a fork of Nix, when in reality it is, and it would be better to embrace this and try to collaborate and follow Nix more closely.
Unfortunately the GNU dogma probably plays a role in preventing that.
It is also a much smaller community than Nix, which means fewer packages
If you convert Nix packages to Guix packages, you can get the best of both worlds in Guix. But that’s admittedly not a very straightforward process, and guix-import is being/has been removed due to bugginess.
I’m aware of both, but I tried Guix first because people I know use it, I like the idea of using a complete programming language (even if I dislike parens), and the importers made getting started easy. Also guix is just an apt install away for me.
I know this is a Clojure post, but out of curiosity I wrote it in Go (what I’ve been working in lately) just to see if I could solve it quickly. Took about 10 minutes, subtract 3-4 for fighting with runes:
package main

import "fmt"

type result struct {
    letter string
    count  int
}

func main() {
    const input = "aaaabbbcca"
    var ret []result
    currentLetter := string(input[0])
    countCurrentLetter := 1
    for _, elem := range input[1:] {
        elemAsString := string(elem)
        if currentLetter == elemAsString {
            countCurrentLetter++
        } else {
            ret = append(ret, result{currentLetter, countCurrentLetter})
            currentLetter = elemAsString
            countCurrentLetter = 1
        }
    }
    ret = append(ret, result{currentLetter, countCurrentLetter})
    fmt.Printf("%+v", ret)
}
It’s not particularly elegant, but it works.
It’s not particularly elegant, but it works.
That’s my problem with many other (non-Lispy) languages. The programs are not elegant, even though they do work. What works for a computer doesn’t always work for me.
Okay, I am 5 months late, but this code is terrible and I must object to it because there’s no good Go code in this thread. You are mixing up two problems: lexing a token, and figuring out the next token. Apart from that, the code is very nonIdiomaticWithTheseVariableNames, but more importantly it blows up on non-ASCII strings.
Here’s two solutions, one imperative: https://play.golang.org/p/-zdWZAnmBip, and one recursive: https://play.golang.org/p/TBudEZBphv7.
The proposed solutions: no else in sight, the ifs are early returns, no [1:] boundary conditions.
Here’s my take on it. It took me (roughly) the same 8-10 minutes to type it in the web-ui. In Emacs I could shave some time off of it.
type tuple struct {
    s string
    i int
}

func splitStringReturnTuples(str string) []tuple {
    str = " " + str
    res := []tuple{}
    for i := 1; i < len(str); i++ {
        if str[i] != str[i-1] {
            res = append(res, tuple{string(str[i]), 1})
        } else {
            res[len(res)-1].i++
        }
    }
    return res
}
Runnable code at the go playground
This loops over the bytes in the string instead of the runes in the string. Try inserting a multi-byte rune such as 本 in the string, and see what happens.
The problem statement clearly stated the data set; there were no multi-byte symbols. But the tweet gave a clear understanding that the interviewer expects a solution within a limited time frame. Therefore the solution was provided in terse notation with abbreviated variables, taking the provided test input and returning the expected output. No more, no less.
But point is taken. Here’s the code correctly handling multi-byte encodings. The logic is the same, apart from casting the passed string into a slice of runes.
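For reference, a sketch of what that rune-based variant can look like, reusing the tuple type and adapting the splitStringReturnTuples function from the snippet above (the function name here is made up; the linked playground code itself isn’t reproduced):

func splitRunesReturnTuples(str string) []tuple {
    // Convert up front so comparisons operate on whole letters (runes),
    // not on individual bytes of the UTF-8 encoding.
    runes := []rune(" " + str) // the leading space guarantees a mismatch on the first letter
    res := []tuple{}
    for i := 1; i < len(runes); i++ {
        if runes[i] != runes[i-1] {
            res = append(res, tuple{string(runes[i]), 1})
        } else {
            res[len(res)-1].i++
        }
    }
    return res
}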
When I interview people I don’t expect them to write code perfectly handling every possible input in the limited time. What I’m interested in, first, is whether they are able to come up with a straightforward solution, leveraging data structures and algorithms that help them solve the problem with optimal complexity. Second, whether they can clearly communicate their approach. And coding comes third.
That makes sense. I did not mean to criticize your solution in particular, just highlight that this is a common “gotcha” in Go. Casting strings to []rune or looping with for _, r := range str is, as far as I know, the only built-in way to access the letters in strings correctly. I’ve seen many problems arise from assuming that str[x] returns a rune instead of a byte. I think it would be more useful and intuitive if []byte(str)[x] was needed to return a byte, while just str[x] could be used to return a rune.
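A tiny self-contained example of the gotcha (the example string is arbitrary):

package main

import "fmt"

func main() {
    s := "a本b"
    fmt.Println(len(s))         // 5: len and indexing work in bytes
    fmt.Println(s[1])           // 230: one byte from the middle of 本's UTF-8 encoding
    for i, r := range s {       // range decodes runes and yields their byte offsets
        fmt.Printf("%d: %c\n", i, r)
    }
    fmt.Println(len([]rune(s))) // 3: converting to []rune counts letters
}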
Unfortunately, OpenRC maintenance has stagnated: the last release was over a year ago.
I don’t really see this as a bad thing.
Also, wouldn’t the obvious choice be to pick up maintenance of OpenRC rather than writing something brand new that will need to be maintained?
There is nothing really desirable about openrc and it simply does not support the required features like supervision. Sometimes it’s better to start fresh, or in this case with the already existing s6/s6-rc, which is built on a better design.
There is nothing really desirable about openrc
I’d say this is a matter of opinion, because there’s inherent value in simplicity and systemd isn’t simple.
But why compare the “simplicity” to systemd instead of something actually simple? Next to openrc’s design choices, with their shell wrapping, a simple supervision design plus a way to express dependencies outside of the shell script is a lot simpler. The daemontools-like supervision systems simply have no boilerplate in shell scripts and provide good features: they track PIDs without pid files and therefore reliably signal the right processes, they are able to restart services if they go down, and they provide a nice and reliable way to collect the stdout/stderr logs of those services.
Edit: this is really what the post is about: taking the better design, making it more user friendly, and implementing the missing parts.
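To make that concrete, here’s a toy Go sketch of the pattern being described (restart on exit, direct knowledge of the PID, stdout/stderr captured by the parent); it’s purely illustrative, not how s6 or daemontools are implemented, and the daemon path is made up:

package main

import (
    "log"
    "os/exec"
    "time"
)

func main() {
    for {
        // The supervisor is the direct parent, so it always knows the real PID
        // of the service (no pid files) and can signal exactly the right process.
        cmd := exec.Command("/usr/local/bin/some-daemon") // hypothetical service binary
        cmd.Stdout = log.Writer()                         // collect stdout/stderr instead of losing them
        cmd.Stderr = log.Writer()
        if err := cmd.Start(); err != nil {
            log.Printf("start failed: %v", err)
        } else {
            log.Printf("supervising pid %d", cmd.Process.Pid)
            _ = cmd.Wait() // returns as soon as the service dies...
        }
        time.Sleep(time.Second) // ...so it can be restarted after a short pause
    }
}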
the 4th paragraph
This work will also build on the work we’ve done with ifupdown-ng, as ifupdown-ng will be able to reflect its own state into the service manager allowing it to start services or stop them as the network state changes. OpenRC does not support reacting to arbitrary events, which is why this functionality is not yet available.
also, the second to last graf
Alpine has gotten a lot of mileage out of OpenRC, and we are open to contributing to its future maintenance while Alpine releases still include it as part of the base system, but our long-term goal is to adopt the s6-based solution.
so, they are continuing to maintain OpenRC while Alpine still requires it, but it doesn’t meet their needs, hence they are designing something new
I was thinking the same thing.
I have no sources, but when was the last time OpenBSD or FreeBSD had a substantial change to their init systems?
I don’t know enough to know why there’s a need to iterate so I won’t comment on the quality of the changes or existing system.
To my knowledge, there’s serious discussion in the FreeBSD community about replacing their init system (for example, see this talk from FreeBSD contributor and previous Core Team member Benno Rice: The Tragedy of systemd).
And then there’s the FreeBSD-based Darwin, whose launchd is much more similar to systemd than to either BSD init or SysVinit to my knowledge.
this talk from FreeBSD Core Team member Benno Rice: The Tragedy of systemd).
This was well worth the watch/listen. Thanks for the link.
I believe the last major change on FreeBSD was adding the rc-order stuff (from NetBSD?) that allowed expressing dependencies between services and sorting their launch order so that dependencies were fulfilled.
That said, writing a replacement for the FreeBSD service manager infrastructure is something I’d really, really like to do. Currently devd, inetd, and cron are completely separate things and so you have different (but similar) infrastructure for running a service:
I really like the way that Launchd unifies these (though I hate the fact that it uses XML property lists, which are fine as a human-readable serialisation of a machine format, but are not very human-writeable). I’d love to have something that uses libucl to provide a nice composable configuration for all of these. I’d also like an init system that plays nicely with the sandboxing infrastructure on FreeBSD. In particular, I’d like to be able to manage services that run inside a jail, without needing to run a service manager inside the jail. I’d also like something that can set up services in Capsicum sandboxes with libpreopen-style behaviour.
I believe the last major change on FreeBSD was adding the rc-order stuff (from NetBSD?) that allowed expressing dependencies between services and sorting their launch order so that dependencies were fulfilled.
Yep, The Design and Implementation of the NetBSD rc.d system, Luke Mewburn, 2000. One of the earlier designs of a post-sysvinit dependency based init for Unix.
I’ve been able to manage standalone services to run inside a jail, but it’s more than a little hacky. For fun a while back, I wrote a finger daemon in Go, so I could keep my PGP keys available without needing to run something written in C. This runs inside a bare-jail with a RO mount of the homedirs and not much else and lots of FS restrictions. So jail.conf ended up with this in the stanza:
finger {
    # ip4.addr, ip6.addr go here; also mount and allow overrides
    exec.start = "";
    exec.stop = "";
    persist;
    exec.poststart = "service fingerd start";
    exec.prestop = "service fingerd stop";
}
and then the service file does daemon -c jexec -u ${runtime_user_nonjail} ${jail_name} ${jail_fingerd} ...; the tricky bit was messing inside the internals of rc.subr to make sure that pidfile management worked correctly, with the process finding handling that the jail is not “our” jail:
jail_name="finger"
jail_root="$(jls -j "${jail_name}" path)"
JID=$(jls -j ${jail_name} jid)
jailed_pidfile="/log/pids/fingerd.pid"
pidfile="${jail_root}${jailed_pidfile}"
It works, but I suspect that stuff like $JID can change without notice to me as an implementation detail of rc.subr. Something properly supported would be nice.
I think the core issue is that desktops have very different requirements than servers. Servers generally have fixed hardware, and thus a hard-coded boot order can be sufficient.
Modern desktops have to deal with many changes like: USB disks being plugged in (mounting and unmounting), Wi-Fi going in and out, changing networks, multiple networks, Bluetooth audio, etc. It’s a very different problem.
I do think there should be some “server only” init systems, and I think there are a few meant for containers but I haven’t looked into them. If anyone has pointers I’d be interested. Desktop is a complex space but I don’t think that it needs to infect the design for servers (or maybe I’m wrong).
Alpine has a mix of requirements I imagine. I would only use it for servers, and its original use case was routers, but I’m guessing the core devs also use it as their desktops.
I recently got a SimplyNUC Ruby R8 - https://simplynuc.com/ruby/. I paid about $1200 USD for it and then threw a 2TB Samsung 980 Pro NVMe drive in it (another ~$300). So total outlay of around $1550-1600.
It’s got a Zen 2 in it and it’s obscenely powerful. I have an Alienware desktop that I use as VM host and the Ruby R8’s mobile CPU scores higher than the Alienware desktop CPU on PassMark and friends.
Right now it’s sitting mostly idle, but I’ve got FreeBSD 13 on it and have been doing some App Dev with bhyve and/or jails.
The Ruby R8 sounds really interesting. How’s the fan noise, both at idle and when maxing out the CPU?
Pretty quiet even when I am compiling something and pegging all the cores. Typical high speed laptop fan sound if you’re doing something intensive.
That said, it gets good airflow where it sits so I don’t often notice an issue.
I wish there was something better there than freebsd-update(8), which takes hours (!!!!!) to do a major upgrade. It’s so bad that the FreeBSD infrastructure people rolled their own custom solution.
Sadly (?), PkgBase seems dead.
I just updated 2 systems from 12.2 to 13.0 and it took maybe 20 or 30 minutes to do both systems.
I don’t disbelieve you, but how is it taking hours for you? Slow internet (I have pretty fast internet)? Slow disks (I have either SSDs or large zpools that are pretty zippy)?
(note: I did have to wait about an hour for my local poudriere to rebuild all packages for 13.0, but that gets done anyway, so I didn’t include it in the time it took).
I agree though, freebsd-update is in general pretty slow, and PkgBase seems like it may never arrive, or at the very least won’t happen for quite a while yet.
I hate freebsd-update with a passion. It forks a child process for every single file that it compares and it does so sequentially, so it can be very slow on systems with slow single-threaded performance or on platforms where fork is slower than normal (including some virtualised environments). It’s also heavily limited by random disk read latency because everything it does is sequential: if it tried to do the comparisons in parallel then it could at least take advantage of things that are in the buffer cache, prefetched, or just cheaper to read out of order with NCQ in one thread while another is blocked on I/O, but it doesn’t.
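To make the parallel idea concrete, here’s a toy Go sketch of comparing files with a small worker pool so reads can overlap; it’s purely illustrative and not freebsd-update’s actual approach, and the file list and worker count are made up:

package main

import (
    "crypto/sha256"
    "fmt"
    "io"
    "os"
    "sync"
)

// hashFile returns the SHA-256 of a file's contents.
func hashFile(path string) (string, error) {
    f, err := os.Open(path)
    if err != nil {
        return "", err
    }
    defer f.Close()
    h := sha256.New()
    if _, err := io.Copy(h, f); err != nil {
        return "", err
    }
    return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
    paths := []string{"/bin/ls", "/bin/cat", "/bin/cp"} // stand-ins for the files being compared
    jobs := make(chan string)
    var wg sync.WaitGroup
    for w := 0; w < 8; w++ { // a small pool of readers instead of one child process per file
        wg.Add(1)
        go func() {
            defer wg.Done()
            for p := range jobs {
                sum, err := hashFile(p)
                if err != nil {
                    fmt.Println(p, "error:", err)
                    continue
                }
                fmt.Println(p, sum) // compare against the expected hash here
            }
        }()
    }
    for _, p := range paths {
        jobs <- p
    }
    close(jobs)
    wg.Wait()
}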
I’ve often seen it take an hour to upgrade a few hundred MiBs of the base system and then had pkg upgrade multiple GiBs of packages in under a minute on exactly the same machine.
The logic for detecting changes is also really unreliable. I’ve had freebsd-update leave me with an unbootable system three times. As far as I can tell, it scans the files, detects that they aren’t what it expects, and then patches them anyway.
I really wish the FreeBSD Foundation would invest in pkg-base. It’s 80% done (it works great for me, but there are some configurations that it doesn’t work well for and there’s some usability that could be improved) but the rest of the work is tedious and involves the incredible pain of touching the FreeBSD build system, so it’s hard to get volunteers to do it.
On my 24 core Xeon with ~250Mbps Internet and NVMe disks, upgrading from 12.2 to 13.0 took over two hours.
I also upgraded some more modest 11.4 machines, and those “only” took about half an hour. Unsure about the discrepancy.
Another machine of mine got stuck, and I had to restart freebsd-update(8). The second time it worked.
freebsd-update(8) on FreeBSD, and syspatch(8) and sysupgrade(8) on OpenBSD are relatively new. Most of my life I have been doing upgrade by building from source, so I guess I shouldn’t complain too much, it’s still progress.
freebsd-update(8) on FreeBSD, and syspatch(8) and sysupgrade(8) on OpenBSD are relatively new. Most of my life I have been doing upgrade by building from source, so I guess I shouldn’t complain too much, it’s still progress.
I already used freebsd-update back in the day and I stopped using FreeBSD around 2014. It’s not that new.
Two hours on a Xeon machine seems ridiculous; if you have the time, you should really write a bug report or something for that.
I’ve certainly had some systems inexplicably take longer than others, but sounds like you have run into some real bad ones. Yikes!
I’ve got a little Celeron J4125 NUC-style machine that I did the 12.2->13 upgrade on. Took about 10 minutes.
I’ll be curious to learn what impacted a beefy Xeon or similar reports I’ve seen crop up occasionally.
20/30 minutes still seems pretty long to me for a base system update. freebsd-update seems optimized to reduce network usage with all its binary diff cleverness, which is still useful in various cases, but for many people it’s a lot less useful than it used to be. Even with my regular ADSL connection just doing a download of the ~220M of the full base.tar.xz + kernel.tar.xz will be faster (at 1M/s it’s less than 4 minutes), and the extract/removal of old tools shouldn’t take more than a minute or so.
PkgBase is not dead. People are using it as we speak. It just hasn’t been enabled by default as there are still some rough edges to be addressed. Here’s a community PkgBase server you may consider using: https://alpha.pkgbase.live/
Compiler backends are about to get a lot more interesting.
I think LLVM is probably set up well to take advantage of the various new features and the codegen issues they introduce.
I’m less confident that GCC or legacy MSVC will adapt well. It’s a tractable problem but will probably force some substantial investment if it hasn’t already occurred.
Caveat: I haven’t scrutinized any of this in depth, so take it as a conjecture and with a boulder of salt.
Wow, good god! Okay, I hope I don’t need to use Rust for… at least a couple of years. It’s funny how many people say that “rust is not changing”, but lib devs require the most fresh rust…
It’s funny how many people say that “rust is not changing”, but lib devs require the most fresh rust…
I’m a “lib dev” (and a member of the library team), and several of my libraries still compile with Rust 1.28, which was released almost 2.5 years ago. That includes the regex crate. If I pushed its MSRV up to the latest stable release, not a lot would change. I’d get some new niceties from an updated std and maybe a small convenience in some syntax. But that’s it. And there is absolutely nothing released in the last 2.5 years that has compelled me to upgrade its MSRV. (Nothing in particular is making me stay on Rust 1.28 either. I’ll eventually upgrade, but I do it at my leisure.) The last major feature (platform specific vector functions and runtime CPU feature detection) relevant to the regex crate was released in Rust 1.27.
What’s more, aside from my role as a member of the library team, I’ve never needed to worry about, care about, or use Pin. Likely because I don’t use async Rust, because I don’t have a need to.
There is a really simple explanation for this: different parts of the Rust ecosystem move at different speeds. Async Rust is still pretty young and there’s a lot of rough edges that need to be smoothed out.
Just say no to async (I do) and you will be fine. Rust is a stable language that is joy to use (at least for me). Just avoid async.
Yeah that’s basically my strategy with Python. Similar thoughts from a prominent Python developer:
https://lucumr.pocoo.org/2016/10/30/i-dont-understand-asyncio/
The number of problems that require async/await is very small. You generally need it for huge scalable cloud services, which is a problem most people don’t have. And even then most of the code can dispatch to threads; in fact it’s almost required to dispatch to threads in such settings for utilization (to use all your cores).
The existing cloud services are already written with non-Rust technologies (C++, Go, Erlang, etc.).
Or maybe if you’re writing a BitTorrent client. You can do that in a bunch of different languages or with a manual event loop.
Honestly there have been so many BitTorrent clients written that I wonder if any of the authors actually thinks async/await is an improvement for that problem (I have not written one). My guess is that 90%+ of them are written without any such language features, given their age.
I mean, you don’t have to use async at all.
I actually don’t use it in most of my code bases due to the poor patterns/ergonomics around async/await.
I’ve worked on async/await patterns in other languages and Rust’s definitely has the leakiest abstraction. That may or may not be a good thing depending on how much you want that stuff to be transparent.
As I noted in another comment, I find channels and threads easier to reason about.
This is the really weird thing to me; it seems like there was this big push in the last decade towards async that was largely driven by the rise of a runtime (Node and browsers) in which async was literally unavoidable. On a runtime that has access to real threads, there are a handful of cases where the overhead of threads causes bottlenecks, but in my experience these are exceedingly rare and easy to avoid. On top of that, they tend to be domains in which the BEAM runtime dominates so conclusively that it’s difficult to justify building these systems without OTP.
How is it happening that “the async question” is such a dominant factor in the discussions around Rust? Is it just due to people coming from other runtimes and assuming that “you can’t scale without async” or is there more to it?
I’m not entirely sure. Most of my career was spent in C/C++, and so I got comfortable with multithreading in those languages early on. I wouldn’t argue that this is the best way, but it’s a way that has familiar patterns. In this model, Rust actually shines due to blocking a lot of the bad behaviors - sharing objects between threads and not properly locking, etc.
However, async/await in Rust has felt awkward from the start to me. I don’t care enough about the async/await pattern to be too broken up about it though. If the community manages to iron it out and make it less rough around the edges, then I’ll invest more of my time into it.
I believe early on two big use-cases without threads influenced the need for async. That’s not first-hand knowledge, so it might be wrong. First, Fuchsia uses async Rust for its network stack, including the user-space drivers. Fuchsia devs were a major force shaping async. Second, you need async for wasm to interop with JS promises, and wasm was early on recognized as an important thing for Rust.
I don’t know what explains today’s discussions; I suspect it’s a combination of many less-technical factors. Async vs threads is important for web, and a lot of people do web. Async vs threads is controversial, so you can argue a lot about which is better. Async in Rust specifically is young and complicated, so there’s a lot to explain.
That makes sense - I’ve definitely encountered the wasm scenario. You need async there simply because you’ve only got one thread, period. Instead of managing the cooperative multithreading aspect on a single thread, it’s easier to just use async.
I think these are good uses of async.
I still maintain that async/await is probably one of the best “bang-for-your-buck” concurrency patterns out there, up there with Actor models (popularized by BEAM and Akka), balancing developer simplicity with performance. I think the question of whether async is more effective than modern threads (with smaller thread stack sizes, and memory that’s only virtually allocated by the kernel until used) at solving C10K is different altogether. For the average developer, being able to do await func() to have a function run “asynchronously” is a lot simpler than thinking about thread pool sizes, sharding, and other things dealing with threads. I do think for the average concurrent application (so I’m talking about low-to-medium scale), threads are just as effective as async and perhaps even more so, but the developer experience of working with async is compelling enough that users are interested in async.
Julia has also spent time baking async into the language. It’s increasingly a higher level pattern that I think many developers enjoy using. I know a lot of folks here tend to prefer threads, but I think that’s not very applicable to the average application developer looking to use concurrency in their application without much work. Libraries like Rust’s rayon do really offer a compelling way to use threads without thinking too hard about the underlying complexities of spawning and retiring threads, but rayon-style threading is only applicable to certain types of workloads.
Although I like this article because it shows the details of writing your own implementations of Future, it would be unfair to assume that one encounters all these issues in the wild when writing async rust. You may encounter one or two depending on the types involved but this post is contrived to show you them all in one go.
In my experience writing my own futures has been great and not at all as tricky as the article suggests. I think that is because (at least in my case) a Future is a way to provide an async interface to a long process in a different context. Usually you have that context to work with - ie the browser’s DOM or some other callback scenario.
tldr; I do enjoy these articles. They’re more about grokking Rust deeply, as you won’t run into most of these problems in practice.
I get your point. Having written a couple of Futures used in production code, I have questioned the wisdom of doing so.
I’m not anti async in rust, it just doesn’t feel like the documentation and patterns have caught up enough in a “canonical” way. That is, there’s a lot of smart and well written notes on doing it but at least the last time I did it (~4 months ago), there’s still a lot of rough edges.
For now, I’ve decided to wait for the dust to settle a bit more before investing anymore time with async/await and Rust.
I’ve had good luck with channels and threads the last 3.5 years of writing Rust so I’ll probably just stick to that until the dust settles.
I feel like the article made it fairly clear that this is not something you normally do as a matter of course of developing software.
This post is a good illustration of why I keep an eye on the evolution of Zig. Rust’s core values are performance, safety, and ergonomics. Simplicity is not a core value of Rust and it’s becoming an increasingly complex language. (That’s not to say that Rust doesn’t care about simplicity: only that when simplicity and another core value are in conflict, simplicity will always lose.) If I were still in my twenties, it might not bother me as much, but as I’m inching to 40, I find it hard to keep up with the evolution of the language. I worry that there will come a point where I’m locked out of the ecosystem, because I haven’t been able to keep up. I think that as Rust releases go by, my Rust style will become less and less “modern”. In a sense, Zig is a possible escape hatch.
And it’s a shame, because I really like the first two core values of Rust, performance and safety. Zig has the same focus on performance as Rust, but since Rust’s memory safety often comes at the expense of simplicity, which is a Zig core value, I expect that Zig will not be able to match Rust’s guarantees against use-after-free, double-free, iterator invalidation, data races, etc.
My concern with Rust is that not knowing it in 2029 will be like not knowing C or C++ today; it’s going to be ubiquitous in systems programming and a lack of knowledge will be a really career-limiting state.
My other concern with Rust is that it just collapses under its weight, complexity, feature creep, and speed of change (and this is said by someone who is growing to really like Rust, at least in its synchronous form).
My other concern with Rust is that it just collapses under its weight, complexity, feature creep, and speed of change
C++ has enormous complexity, both intentional and accidental, yet it remains ubiquitous. Language complexity is sometimes justifiable.
Besides, I’d much rather learn numerous obscure details of how Rust keeps me safe, instead of numerous obscure details of how C++ is screwing me over. And I say this as someone who genuinely likes C++.
This is part of what’s pulling me to brush up on Ada again.
Mostly because it is still used to build complex systems but has stayed relatively stable. Granted, its user base is still pretty small, but it’s there, and enough to sustain a company to back some development of tooling.
My concern with Rust is that not knowing it in 2029 will be like not knowing C or C++ today; it’s going to be ubiquitous in systems programming and a lack of knowledge will be a really career-limiting state.
Good point. But can you imagine many devs knowing all this crazy syntax? I doubt it!
I share your feelings and have also been curious about Zig. I’m not excited about going back to the land of memory leaks, use-after-free, and so on, though… I’d be curious to hear about others’ experience here; maybe zig’s additional creature comforts mitigate those issues somewhat.
I have been using Zig in earnest for about 8 months now. At that time my default choice for new projects was Rust, so I picked up the biggest Rust project I had (comrak, perhaps the 2nd most popular CommonMark/GFM formatter in the ecosystem) and converted it to Zig. It was quite refreshing.
Since then I’ve built a number of tools with Zig, and I have honestly not had to deal with memory leaks, use-after-free, double-free, etc. almost ever. The general purpose allocator does what you would otherwise have to reach for Valgrind for, but better. The careful design of defer, errdefer and control flow around error handling makes it quite a joy to use.
If you already grok manual memory management as a concept (say, you’ve used C quite a bit before), then Zig feels like it gives you all the tools you need to do it sanely without much overhead, like what you always wanted. A lot of people are repelled by the idea of doing it ‘manually’, but my experience is that it is the poor affordances C gives that generates that repulsion. I do recommend it.
Yes, people who want simplicity to win in a conflict at least once should look elsewhere than Rust. This has been the case even before Rust 1.0. It’s just not what Rust is for. Simplicity is the lowest priority item in Rust.
And it’s a shame, because I really like the first two core values of Rust, performance and safety. Zig has the same focus on performance as Rust, but since Rust’s memory safety often comes at the expense of simplicity, which is a Zig core value, I expect that Zig will not be able to match Rust’s guarantees against use-after-free, double-free, iterator invalidation, data races, etc.
…which is kind of the point where I fall off the bandwagon with Zig. Ok, so it’s simple, and in return for that simplicity it will merrily let you shoot entire limbs off in familiar ways that have had familiarly terrible consequences for users for as long as I’ve been alive.
What’s the value proposition here again? C but not better, just different?
What’s the value proposition here again? C but not better, just different?
No macros and no void pointers thanks to comptime, slices (fat pointers) by default and a type system that can encode when a pointer-to-many expects a sentinel value at the end, optionals so no null ptr derefs, runtime checks for array boundaries, sentinels, non-tagged unions, and arithmetic programming errors, error unions to make it harder to mistakenly ignore an error, defer and errdefer for ergonomic cleanup of resources, etc.
You might want to take a look at the language reference and report your impressions on how better or not Zig is, compared to C.
merrily
Not quite… Zig purposefully makes the ergonomics of major footguns poor and the resulting code ugly. For example, converting an array of u32 to u8 is 3 lines of ugly code that screams out “please CR this”.
Also Zig async is amazing. It’s modestly hard, but the way it’s structured forces you to think about what the hardware is actually doing instead of abstracting it away. Once you get it, it’s very easy, there is no colored async, and you will probably have a correct mental model of what your code is doing.
“I don’t know” and “I don’t care” would be honest answers. The former is not a valid justification for closing the issue. The latter would backfire.
Would it backfire? I find these posts on the “people want too much from free software” theme to be kind of confusing. You can just say “sorry, out of scope”, or even not reply and lock the issue. What are they gonna do, fire you?
I tend to agree with you though some folks are just wired that way. If my wife worked on open source in her spare time, she’d probably react the same way as the author.
I tend to be quite picky about my time and I generally don’t allow others to dictate how I spend it. I don’t have much open source but what I do have has, at best, a weekly turnaround on responses about issues.
In other cases, I have software that I’ve open sourced that I simply don’t take issues/requests for. I open sourced it in case someone finds it useful but the core repo is for my purposes. I don’t care about anyone else’s use case - if they have one they can fork it and propose a PR.
I’m writing a lot of Rust, and have been for at least a year or two now. I find articles like this curious, because I also place a high value on stability and I feel like I have it in Rust. I’ve spent a lot of time in other ecosystems like Node, and a bit in Python and Scala; I don’t expect any of the software I wrote in those languages will still work today without serious refactoring or even just rewriting. In contrast, I feel the software I’ve written in Rust will continue to compile and operate for years.
I don’t follow every new feature that comes out, and I know there’s a lot of surface area in libraries and language concepts that I don’t have stored in my head. It doesn’t really feel like that’s bad, though. The async/await stuff is a huge positive change, because raw Futures were, charitably, inscrutable. Many other changes have just been small improvements to toolchain performance, or quality of life features, or improved orthogonality, or whatever else. It’s been rare that I’ve needed to ingest and retain every little new detail in order to keep making working software.
My position is kind of the negative space, though, that surrounds whatever drives people to write these articles – it’s hard to think of a way to turn it into an explicit, positive article myself. I’d basically just be saying: It all seems like it’s going pretty well to me.
The thing that struck me the most was “when will it end?” And that’s true. I feel like I’m always playing catch up.
Sure the Edition mechanism helps but, like TFA said, documentation rapidly gets out of sync. No Rust book in print, to my knowledge, discusses async/await. Blog posts from six months ago will show ways of doing things that might no longer be idiomatic.
Rust needs to slow down, and stop adding features for a bit. I love the language and I’m doing work in it, but there is truth to this article, IMHO.
(And it’s important to note that, even if the Rust community doesn’t think that’s the case, this is the view of a seemingly large number of people new to Rust. It’s a complaint that I’ve seen made, and made myself, in several places. True or not, it’s definitely the perception for some people.)
I honestly don’t know why they print paper books about most digital tech. Unless it’s philosophy or history it will be out of date by the time it prints. We invented web pages and blogging for a reason…
documentation rapidly gets out of sync
In the 6 years since 1.0 you’d need to reprint books twice: for the 2018 edition (simplified modules, deprecated try!()) and for async. The rest was minor, and didn’t change what average Rust looks like.
I don’t think that’s worse than PHP or C#. Technical books have short shelf-life.
Rust was probably in flux also when you started to learn it. Why weren’t you scared off back then?
I think languages like Standard ML, Go, Ada or Scheme have an advantage over Rust when it comes to standardization, book availability and language stability.
I think languages like Standard ML, Go, Ada or Scheme have an advantage over Rust when it comes to standardization, book availability and language stability.
Notably, all of those languages (except Go, and I’m not sure it belongs on this list – Kernighan’s Go book is already wildly out of date on a number of things, and has been for a couple of years at this point) have a reputation as being somewhat lacking significant industry traction, and (deserving or not) represent something of a career niche and risk of a dead-end.
To a certain extent, I think there’s a cultural (generational?) divide here – if you’re coming from C, where things have barely changed in 30 years and have left known, deprecated footguns in place for longer than I’ve been alive, the pace of Rust no doubt seems psychotic.
Coming from Ruby, or JS, or C#, (or Clojure, or Scala or Swift or…) the pace of change in Rust seems absolutely sedate (and in C, downright sclerotic).
I’m not claiming that either camp is “right”; there’s something to be said for stability, but there’s also something to be said for cleaning up the footguns and not facing an endless influx of new developers reading 30 year old texts and freshly committing the exact same mistakes as their ancestors. Languages that change too fast bleed users who can’t keep up with the ride. Languages that ossify equally seem to face a loss of users and industry attention when the problems become evident and there’s no coordinated response to solving them.
Ultimately, for better or for worse I suspect it’s not realistic to expect that a modern language can “stop”. I think the expectation set by the post-1990s wave of languages for the post-1990s set of developers is that languages and ecosystems will continually evolve to meet new needs. I’m skeptical that we’ll ever see another language put out an ISO or ANSI standard and then simply remain static for the following decade. Not even C++ is governed that way anymore.
We’ve all, I think, underestimated the extent to which the perceived stability of the major 70s-early 80s languages (C, Ada, Standard ML et al) was as much an artifact of the technological limitations that made communicating and distributing changes to languages a slow, high-friction process, as it was any kind of response to a wide-spread demand for languages to “stay put” on the part of users. Internet-born languages never had those limitations baked into their assumptions, and their users seem to universally expect a level of responsiveness on par with what the internet makes possible.
Clojure
Clojure adds features at a very slow rate, currently maybe slower than C on a year by year basis.
Clojure was created in 2007. Clojure has certainly added more features and changed more radically, 2007-2021, than C has during that time period (C has had only one feature-adding standard released in that period, C11, and that was overall fairly minor. C17 added no new features).
I suppose you could be arguing that all change in C’s 40+ year life, amortized over 2007-2021 is greater than Clojure’s to date, but that doesn’t strike me as an interesting or useful analysis and really misses the point of my remarks.
How about you do a fair comparison and compare the current rate of clojure changes with the rate of C changes 14 years after it was created.
By all means, feel free to do whatever feels “fair” to you. But this is so pedantically irrelevant to my point that I have no interest in continuing the discussion.
I’m learning it and I’m still bothered by the rate of change. Just because I don’t like how often it changes (regardless of the size of the changes) doesn’t mean I don’t want to learn it. :)
Compared to 15 years ago it does feel like we are undergoing a “Cambrian explosion” in both new languages and new features in existing languages entering the mainstream. Back in the day there didn’t seem to be much room for more than a few languages in commercial spaces; everything else was academic, niche, or hobby-only.
I do think there’s significantly more pressure on software teams today to deliver working software faster and at a larger scale and more securely. So everyone’s cramming in features to address any real or perceived scalability and expressibility shortcomings.
Ya - I’ve built a sizable released product using Rust and have generally been slow to adopt new features. I’m not opposed to them but I just don’t pursue most of them.
Basically the ones we do integrate are the result of lints or compiler warnings.
In contrast, I feel the software I’ve written in Rust will continue to compile and operate for years.
A minor point, but there is a difference between code compiling and code being considered idiomatic. C++ has always tried to maintain backwards compatibility while making significant changes in C++11,14 & 17. As a result, you can almost certainly still compile C++ code from decades ago with a current compiler. However, C++ developers talk of “modern C++” and “legacy C++” as though they are two different languages. If you want people to work on your “legacy” C++ code base, you may find that they are unwilling because they prefer reading and writing “modern” C++, or you might end up modernising your code base, in which case the end result isn’t that different to what it would be if the more recent compiler had forced you to update it.
There is also a slight burden placed on C++ developers. If you worked with C++ in the nineties and haven’t been eagerly keeping up with the recent developments, you probably can’t just walk into a job working on modern C++ (although you can probably find a job maintaining some legacy code that no-one else wants to touch any more).
It’s possible that rust could see a similar effect if it keeps evolving rapidly while maintaining backward compatibility.
I think Rust’s biggest problem is that it doesn’t have a story for deprecations and removals, which is in my experience a difference between mature languages and ones like Rust.
async/await is certainly another item, where it’s not clear whether the complexity was worth the cost.
I admire Jason’s commitment and approach to doing things. Given this experience, this makes me wonder about the quality of code in FreeBSD for other protocols.
I’d just like to understand how the code got committed to the mainline branch/trunk if it was of such low quality?
I’m certain I’m missing something here but I’m also too busy to go digging around in the repo history + mailing lists to piece together the full story.
In BSDs the main branch is the development branch. If we’re not sure something is ready to be used in production, but is under active development, it will be committed to the main branch, but is either not hooked up to build or otherwise disabled by default. This is to facilitate testing and review and contributions from others who want to test the system as it evolves (“those who run -CURRENT”).
I’m a NetBSD developer, not a FreeBSD developer, so I’m not an authoritative source on their development process. Jason has also objected to NetBSD independently developing a “WireGuard compatible VPN interface” (note the name of the specification is a registered trademark) without his oversight, and then not responded when asked to state his exact problem with the code or identify exact bugs for more than 6 months. The implication that the specification is complex or incomplete enough that his involvement in any implementation is required is worrying at best, but definitely good for job security (though it’s also worth noting serious bugs have been found in implementations he did spend a year visiting developers’ homes for).
For the audience at home, here is the thread. I think Jason looks somewhat suspicious in it, with how he makes vague threats and claims about NetBSD’s implementation, but can’t point out anything concrete.
If you read the linked thread, as well as other *BSDs’ mailing lists, you can clearly see that he doesn’t consider anything he hasn’t personally touched to be worthy. WireGuard is clearly his baby - he’s both the author of the specification and of the reference implementation… the problem is that he doesn’t like any other (competing?) implementations of any sort. He actually makes it quite clear that WireGuard isn’t an RFC-style protocol - and every single implementation seems to need his personal blessing.
In that particular thread, it very much looks like he barges in both barrels blazing and makes demands such as Revert this code at once, sir! (I’m paraphrasing here) without actually spending some time reading how development is done in NetBSD - even after they, repeatedly(sic!), try to tell him this is the development branch.
For the record, the right thing to do would be to have sought input from the WireGuard project during those two years, which we would have enthusiastically provided, and maybe NetBSD would have this ready to go years ago. It strikes the project as rude that you’d write some halfbaked code and try to pass it off as “wireguard”, ruining what to date has been a uniform experience for users in using WireGuard across platforms. The fact is, you jumped the gun and didn’t reach out beyond your community.
[…]
Again, while I’m not happy with this situation and the inflexibility here, […]
Who’s being rude and inflexible here?
Beyond the offers of help, time, energy, enthusiasm(!?), etc. all I can read between the lines are ego, grandeur, and the need for benediction.
That is really bad behavior – it’s full-on micro-management that is really destroying any kind of relationships in the long run. I hope Jason will learn to let go, as this will just erode the trust and fail to create a progressive community around WireGuard.
I’ve been talking with him (Jason) in private and can vouch for him. There’s more going on than meets the eye.
A blog post by Netgate now links to the original change request: https://reviews.freebsd.org/D26137 – I think this just proves the maxim: “Make a ten line change and you’ll get 11 comments, make a 1000+ lines change and you’ll get LGTM”. The blog post author boasted that there were 92 comments, which I still think is way way too little for a 40k+ lines change that touches security and networking.
Of all the (main? net, open, free) BSDs, I think FreeBSD is most willing to include all sorts of code.
Continuing retooling of my network.
I rebuilt my two local DNS resolver machines with OpenBSD and FreeBSD. The one with FreeBSD will run a couple Jails as well. The one with OpenBSD hosts my wireguard server. Both are unbound forwarders to NextDNS.io.
Next is provisioning and setting up a VPS - either on vultr or prgmr. It’ll run OpenBSD and have a wireguard peer connection to my local network for easy access.
Last is migrating my site over to the VPS from its current Digital Ocean droplet.
All of this is in an effort to transition from Ubuntu LTS to either FreeBSD or OpenBSD. I have tended to use BSD for my laptops but not my servers and I’m getting annoyed with the differences.
PulseAudio turned out unusable wherever low latency is involved.
I am feeling optimistic about PipeWire, as it is meant to be able to replace JACK for pro audio. Maybe there is hope for Linux audio.
May I ask where your optimism comes from?
Personally, I’m jaded: for every task a computer could have, we seem to be in a cycle where barely anything works for 5 years, interspersed with 2 years of almost-stable, before it gets ripped out and the cycle repeats¹.
It also boggles my mind that people keep writing new software in memory-unsafe languages. I’m super uninterested in that.
¹ Usually with additional limitations built upon some desired fictional, perfectly spherical user.
Working audio was the thing that made me switch from Linux to FreeBSD back around 2002. On FreeBSD 4.x, I had a sysctl that I could use to set the number of vchans. I then got a /dev/dsp.1, /dev/dsp.2 and so on up to the number of vchans and could point applications at them (generally, I pointed the KDE and GNOME sound daemons at 1 and 2, xmms at 3, and left the raw /dev/dsp for whatever I ran in the foreground that didn’t use a userspace sound daemon). It was a bit clunky to configure, but in FreeBSD 5 that all became transparent and so multiple things could open /dev/dsp and sound Just Worked with low-latency in-kernel mixing.
At the same time, Linux was going through a painful transition from OSS to ALSA. Upstream OSS went proprietary, so FreeBSD forked the last BSDL version and maintained feature parity with the proprietary ones, Linux implemented an incompatible thing and told everyone to switch to it. ALSA could do sound mixing, but only if your hardware supported it (though not through the OSS compatibility layer). I had a SoundBlaster Live! but the drivers were too flaky to use it and my motherboard on-board audio controller didn’t do hardware mixing, so only one thing could play audio. KDE and GNOME both came with their own sound daemons, but I had a choice of KDE or GNOME things being able to play sound. XMMS learned to speak ALSA, but didn’t learn to speak the protocols for either of the GNOME or KDE sound daemons, so I had a choice of music, notifications from my (KDE) chat app, or from my (GNOME) mail client, but not more than one of them on Linux.
Fast-forward a decade or so and everyone I knew on Linux was complaining about PulseAudio. Meanwhile, my FreeBSD media center box was happily driving my 5.1 speaker system from anything that could produce 5.1-channel audio (e.g. VLC playing DVDs).
Eventually, PulseAudio got to the state where it mostly worked for most users and so now Linux distros are justifying their existence by replacing it.
I honestly have no idea why people choose to use Linux at this point.
Eventually, PulseAudio got to the state where it mostly worked for most users
The most annoying thing is while pulseaudio mostly works (which explicitly means it doesn’t actually work - I still despise it since it messed up my mic randomly and can’t play sound from two users at once, like come on)… ALSA is actually pretty decent now and its dmix plugin solves the problem you (and I) used to have.
As soon as something starts mostly working, Linux wants to replace it. And by the time the replacement is mostly working, the last generation is actually pretty good… but since it is two generations ago now the culture is you’re some kind of tech-hating dinosaur who refuses to embrace new things in the eyes of half the people.
Drives me nuts. And it happens all over the linux ecosystem.
I had a similar experience. I was getting frustrated with the issues that Linux had with only being able to play sound from one source.
I tried out FreeBSD and for a long time, that was my driver up until I got a MacBook for college. Still use FreeBSD and OpenBSD in a few places to this day. I haven’t tried a lot of advanced scenarios but I’ve got a new laptop on order that I’m going to throw FreeBSD or OpenBSD on as the primary driver.
It also boggles my mind that people keep writing new software in memory-unsafe languages.
The primary driver behind multimedia for Linux is in embedded systems, specifically in the consumer and automotive infotainment space. Especially back in 2016 when PipeWire was started, the intersection of “languages that are adequate for real-time processing” and “languages with reliable ports for relevant architectures” was practically zero.
Even today, getting internal buy-in for e.g. Rust in applications like these is extremely difficult. Not just because of current architecture support, but also because of the shape of future applications. When the next i.MX-series application processor shows up, it’s bound to have a GCC port available – NXP will sponsor it, and if you really need that and have the money, they’ll offer support for it, too. Whether that’ll be true for Rust is anyone’s guess – it’s anyone’s guess if NXP will even publish enough documentation to make a good quality port feasible in the first place. A multimedia daemon that may or may not run on the hardware you want, and where support hinges on whether someone will take a chunk of their spare time to port a pretty massive toolchain, is dead in the water as far as this part of the embedded field is concerned.
I’m not saying it’s right, it’s just how it is. A sizeable percentage of the vendors in this space will gladly sell you devices that break just about every bit of security-related best practice out there. Memory safety is so low on their priority list that they probably run out of paper before they get to putting it in writing. What matters to them is reliable support for commodity hardware via commoditised development teams (because they outsource development to the cheapest available source, almost nobody develops things in house in these fields today). Rust’s story for these things is hardly the best.
Edit: also, not parent poster but I share their optimism:
May I ask where your optimism comes from?
You want to put this in the context of Linux. After PulseAudio anything in this space is bound to generate some optimism. It’s hard to make it worse :-D.
i.MX-series application processor
uh, it’s just Arm? The Librem 5 famously uses an i.MX8, which is just aarch64, of course Rust just works there. Even more car-specific SoCs these days are aarch64.
With “deeply” embedded stuff you could argue that C lets you support any weird custom microcontroller, sure. (But, like, avoid those as much as possible anyway?)
But for “Unix class” application processors – where Linux+PipeWire runs – there are basically no options that aren’t supported by Rust/LLVM. No one is building infotainment systems on Itanium, Alpha, or m68k! :D
Ugh, sorry, I either didn’t get a notification 34 hours ago or I missed it…
i.MX was definitely not the best example because, indeed, it’s “just” ARM, although the story is more complicated than that (see below). That being said, virtually all the platforms that were relevant in these spaces back when PipeWire was started, and even today, are (at best) Tier 2 (only aarch64-unknown-linux-gnu is Tier 1) and nobody is going to put that in a box and sell it in a field where product recalls are a thing. And, of course, there is always the question of compiler support for various extensions, and sometimes even for new architectures, albeit certainly not at the 1990s level.
But please keep in mind the rest of my comment as well. It’s not just about whether there exists a compiler that treats a platform as a first-class citizen.
Least cynically, getting commercial support for a C toolchain is not a problem. The “golden” (quotes because the actual material is somewhat, uh, browner than gold…) standard you work against is a vendor-supplied BSP. (Edit: “BSP” is used somewhat more loosely nowadays – you generally get it in the form of a Yocto-based distro, which – among other things – will build its own toolchain). The hardware vendor will usually give you a vanilla GCC with some patches. Major Linux consultancies (that’s very common in the automotive and consumer space) will usually give you a BSP based on the one that the vendor supplies, and some of them do give you a toolchain with all sorts of bells and whistles (some even give you a full IDE, with some useful goodies, like a compiler that won’t make you hate your life), in the form of some $MajorVendor Embedded Linux. Either way you get commercial support.
More cynically, but equally important: there are Android-enabled platforms out there that are horrifying to use – think you need a Ubuntu 12.04 machine to build the toolchain, the application-packaging script has to run as root out of /opt/VendorToolchain/user_app/, and the driver code does synchronisation by disabling all IRQs and sleeping for hundreds of milliseconds. They’re popular not on technical grounds – indeed, if that were all that mattered, you’d have to pour gasoline on it and set it on fire – but because they’re Android-based, and Android has Google behind it. That means a big vendor will be behind it ten years from now, that there are hundreds of outsourcing shops out there willing to jump in if the outsourcing shop you’re working with today goes under or wants to charge you 10% more for your next project and so on.
It’s not a pleasant thing to admit but I think you’ll only see Rust being used in these fields when a) a few major vendors will start using it, in a very public manner, in a well-hyped function (that’s sort of starting to happen) and b) it’ll be very easy to find Rust development services in one of the “traditional” outsourcing destinations, like SE Asia or Eastern Europe, because nobody is going to pay US/Western Europe salaries for infotainment systems.
(Note: before yelling “racism” at me, please consider that I’m speaking from my experience of working in one of these traditional outsourcing destinations ;-). )
It also boggles my mind that people keep writing new software in memory-unsafe languages. I’m super uninterested in that.
There isn’t currently a good alternative. Software like Pipewire needs to work in very low-latency space- and memory-constrained settings (garbage-collected languages are out) and needs to be extremely portable (Rust is out).
(I originally deleted this comment because it ended up a bit too resentful and I don’t think it’s fair. “This comment was deleted by its original author” looked equally unfair though :-).
Let me try again, focusing only on the good parts: I have pretty high hopes for PipeWire because it’s written by people with a great deal of understanding and expertise in terms of multimedia and real-life applications. Wim Taymans, the original developer behind the project, is one of the people who started gstreamer back in the day. I also have high hopes about its early roll-out: PipeWire is already pretty old now, and works remarkably well out of the box, but not everybody is rushing to roll it out by default – a far more responsible approach than what happened, uh, the last time).
These days on my local network…
Elsewhere, my website on DigitalOcean.
For everything else - I pay someone else to deal with it (e.g. Fastmail).
I don’t think this is a bad thing; the whole GOPATH/modules schism is confusing, especially for people new to the language; I’ve seen many people mix the workflows and run in to troubles. For those of us around since before modules were a thing it’s all pretty clear, but for everyone else it’s like joining in The Expanse in the middle of season 3, sasa ke?
The big problem now is that a lot of resources are outdated; for example I typically recommend The Go Programming Language to people new to Go, and while the language itself is very compatible and all the code still works, the surrounding tooling changed so much people will have a hard time getting started. At least in the current schism solution the book still works (even though it’s not the recommended approach). I hope they release a 2nd edition at some point.
My impression last time I was trying to figure this out is also that all the Go official documentation is specified in the form of diffs, “here’s what we changed from the old system”, leading to lots of difficulties figuring out what I was supposed to do now.
Yeah, this sounds about right. Maybe I should write a “Getting started with Go in 2021” post or some such, which gives a concise description in a way that doesn’t assume any prior knowledge. Perhaps something like this already exists(?)
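Something like the following would probably cover most of the happy path in such a post; a minimal sketch, with a made-up module path:

// main.go — with modules there is no GOPATH ceremony: start in an empty directory,
// run `go mod init example.com/hello` (the module path is made up) to create go.mod,
// then `go run .`; dependencies get recorded in go.mod automatically as they are used.
package main

import "fmt"

func main() {
    fmt.Println("hello, modules")
}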
The $GOPATH stuff was one of the things that really irked me when I first learned it 8-10 years ago. It was bothersome enough that I didn’t prioritize doing much real work with it.
The first real production code I shipped uses the go modules work. Definitely glad to see it improve over time and reduce the quirkiness of the ecosystem.
I think TGoPL will still be highly recommendable, even without a second edition. Almost all of it is related to the programming language (and libraries), not how to build and manage dependencies. There’s one chapter on Go tooling, which will presumably require an update, and a couple of other mentions of go get and GOPATH, but I think overall it’s pretty “safe”. Don’t get me wrong; an update would be nice, but also wouldn’t require a rewrite by any means.
A rewrite wouldn’t be needed as the language and stdlib mostly stayed the same, just an update which updates the tooling, and perhaps a few other things like context or the (upcoming) io/fs packages, but those things aren’t that hard to learn later on and don’t have too much potential for confusion (certainly not when starting out).
I think the barrier to entry for these kind of things shouldn’t be underestimated; people read the gopl book, but they also read 3 blog posts, read something on HN, and they don’t know what is what. They try things, get errors they don’t understand and can’t solve, get confused/frustrated, and go do something else. At least, that’s what I usually do unless I really need to use something, but perhaps I just have a low tolerance for this kind of stuff.
An interesting list, though I’m not sure I agree with not using panic. It should absolutely be used judiciously but I’m a fan of using it in “impossible state” situations. Failing fast in a known bad state is a really handy tool for ensuring your test coverage and app’s internal models are consistent.
I mostly bring that opinion from years of OS development. When I worked on Windows, it was common for OS APIs not to check for null and to fail immediately on bad pointers. It was really helpful for tracking down bad code because you got a relatively useful stack.
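A minimal Go sketch of that “impossible state” use of panic (the order-state names are made up):

package main

import "fmt"

type orderState int

const (
    orderPending orderState = iota
    orderPaid
    orderShipped
)

func describe(s orderState) string {
    switch s {
    case orderPending:
        return "pending"
    case orderPaid:
        return "paid"
    case orderShipped:
        return "shipped"
    default:
        // Inputs are validated at the boundaries, so reaching this branch means the
        // program's internal model is broken: fail fast with a useful stack trace.
        panic(fmt.Sprintf("impossible order state: %d", s))
    }
}

func main() {
    fmt.Println(describe(orderPaid))
}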
Htmx, to me, seems like a simpler version of hotwire/turbolinks/etc, so I am interested in giving it a try soon.
Ya - it’s the successor to intercooler.js which predates Hotwire/Turbolinks/etc.
I loathe doing most frontend work but when something I’m doing calls for dynamism, htmx (and formerly intercooler) are what I reach for for simple stuff.
it’s very nice, you should try it if you can