On the Lobste.rs front page there's yet another article commenting on the sad state of something. This is a quite common way of thinking about ecosystems, communities, tools and entire sectors as utterly dysfunctional. IT is ripe for this kind of consideration because, indeed, most software is written in contexts that prioritize profit, success or social credit over reliability, accessibility, sustainability, portability, inclusivity, resilience and the ability of your software to age well and grow.
We get it: IT is shit, software is broken. But there must be some success story we can take inspiration from.
So I ask you: what is, in your view, a field, a context, an environment, an ecosystem where quality and good engineering manage to produce meaningful software with an impact on the world, and why do you think it differs from the rest of IT?
Some rules to avoid boring answers:
The happy state of open source 2D painting and 3D modelling software.
I would add that even before 2.8, Blender was already an impressive example of software done right. Quite a few people were complaining about UX choices, but if you took the time to learn it (which is not an unreasonable expectation for a professional 3D suite), everything clicked and it made you immensely productive and even… happy.
I assume this is similar to how other people feel about vim or emacs (I have to admit that even though I use emacs daily, I was never blown away by it), but at least for me, Blender felt so right after the initial learning curve that I just wished (and still do) my whole system could work like Blender.
And the work the Blender Foundation is doing by bringing a cohesive vision to the table is extremely impressive.
C compilers.
Ten years ago, there was only one C compiler that anybody used. Spurred by pressure (and ideas!) from clang, gcc has improved enormously since then. We now have two excellent-quality and nearly-compatible C implementations with better-than-ever performance, optimizations, and diagnostics. Tools like ASan have worked miracles for writing correct code.
Since nobody else has written about Rust yet, and you mentioned it, I’ll bite.
I guess one of the problems of waxing poetic about Rust is that it has kind of already been beaten to death. Oodles of people are excited about Rust. Since it’s still growing, and since there is no zeal like the zeal of the newly converted, there are a lot of people singing its praises. To such an extent that an entire culture war has evolved around it, including right here on Lobsters. I’m a biased observer, and I get my hands dirty in it, but if I try to pop up a level, it really is an interesting phenomenon. But that’s a topic for a different day…
I am not new to Rust, so I think if I was ever touched by the zeal of the newly converted, it has assuredly worn off by now. With that in mind, I’ll try to avoid just regurgitating the stuff other people say about Rust, but that’s hard.
I think the Happy State of Rust is its entire package. I've written code in a lot of programming languages and ecosystems (with Windows and the APL lineage being at least two of my major blind spots), and I feel reasonably safe in the conclusion that Rust's complete package is fairly unique. My main shtick is being relentlessly upfront about trade offs, and there are few ecosystems out there that let you express the range of trade offs offered by Rust. On the one hand, you can dip down deep into high-performing generic data structures that benefit from the amazing work of optimizing compilers like LLVM, but on the other hand, you can also stay in the relative comfort of a high level application oriented language when you want to. The former might require arcane knowledge of what exactly is and isn't undefined behavior while the latter might cause you to say, "UB? What's that?"
If you pop up a level, that's a really impressive range to encapsulate in one ecosystem. It's not like that range hasn't been done before. Look at Python, for example. The typical way you might see that range expressed is with modules written in C and a nice, comfy and safe Pythonic API layered on top. But there are all sorts of trade offs there. First, there's the friction of dealing with Python's C module boundary. Second, there's the inherent cost of providing that boundary in the first place. And third, there's the cost of Python itself.
I am perhaps running the risk of putting up a straw man, but that's OK. The fact that "write low level code in C and wrap it up in a Python module" is such a popular approach to expressing the range of trade offs I'm referring to is, I think, enough to make the point. An example might help.
Consider something as simple as counting the number of fields in a CSV file. The Python code is deliciously simple: a few lines that loop over csv.reader(sys.stdin) and add up len(row) for each record.
The Rust code is a smidge bigger, but not materially so:
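Something along these lines, using the csv crate (a sketch, with the error handling kept deliberately minimal):

```rust
use std::{error::Error, io, process};

fn run() -> Result<u64, Box<dyn Error>> {
    // Read CSV records from stdin and sum up the number of fields.
    let mut rdr = csv::Reader::from_reader(io::stdin());
    let mut count = 0u64;
    for result in rdr.records() {
        count += result?.len() as u64;
    }
    Ok(count)
}

fn main() {
    match run() {
        Ok(count) => println!("{}", count),
        Err(err) => {
            eprintln!("{}", err);
            process::exit(1);
        }
    }
}
```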
Running the two programs on a 445MB CSV file from the Open Policing project, first the Python version and then the Rust one, the difference is immediately obvious.
Despite the fact that Python’s CSV parser is written in C, it just can’t compete. Rust’s CSV parser (disclaimer: I wrote it, I’m using an example I know well) is itself a very complex beast. But as in the Python ecosystem, all of this is well hidden from typical users of the library. It’s got some gnarly internals, but users are presented with an API that is at least close to the simplicity of Python’s API, except it’s also 6 times faster.
And this is just the beginning. Since Rust’s range of trade offs can really be cashed in almost everywhere, you can make small tweaks to your program to make it run even faster, if you’re willing to put up with a bit more complexity:
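For example (again, a sketch): instead of letting the reader allocate a fresh StringRecord for every row, you can allocate a single ByteRecord up front and reuse it across the whole file, which also skips UTF-8 validation:

```rust
use std::{error::Error, io, process};

use csv::ByteRecord;

fn run() -> Result<u64, Box<dyn Error>> {
    let mut rdr = csv::Reader::from_reader(io::stdin());
    // One record, allocated once and reused for every row: the allocation
    // is amortized across the entire file.
    let mut record = ByteRecord::new();
    let mut count = 0u64;
    while rdr.read_byte_record(&mut record)? {
        count += record.len() as u64;
    }
    Ok(count)
}

fn main() {
    match run() {
        Ok(count) => println!("{}", count),
        Err(err) => {
            eprintln!("{}", err);
            process::exit(1);
        }
    }
}
```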
Running it shows a modest but measurable improvement over the first Rust program.
The speed boost is small, but things like this can add up as your data set grows. And keep in mind that this is just an example. The point is that you just can't do stuff like this with Python, because Python doesn't really let you amortize allocation easily. You'd likely need to drop all the way back down into C to get these kinds of optimizations, and that's a lot of friction to pay.
OK, so why am I picking on Python? Surely, my point is not to say, “See, look, Rust is faster than Python!” That’s silly. My point is more holistic than that. It’s more like, “See, look, you can be in a comfy, safe high level language, but are also free to trade code complexity for faster programs without a lot of friction.”
Other ecosystems like modern C++ start to approach this. But obviously there are some critical distinctions there. The main one being that C++ is “unsafe by default everywhere.” You don’t get that same assurance of a safe and comfy high level language that you do in Rust. IMO, anyway.
And then there are all the other little things that Rust does well that makes me as a pragmatic programmer very happy. The built in unit testing and API documentation tooling is just lovely. It’s one of those things that Rust didn’t invent, of course, but it feels like I’m constantly missing it (in some form, to varying degrees) from almost every other language or ecosystem I use. There’s always some detail that’s missing or not done well enough or not ubiquitous.
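To make that concrete (a tiny sketch; the crate name mycrate and the function are made up), here's a doc comment whose example is compiled and run by cargo test, right next to an ordinary unit test:

```rust
/// Returns the number of comma-separated fields in `record`.
///
/// The example below isn't just documentation: `cargo test` extracts,
/// compiles, and runs it.
///
/// ```
/// assert_eq!(mycrate::field_count("a,b,c"), 3);
/// ```
pub fn field_count(record: &str) -> usize {
    record.split(',').count()
}

#[cfg(test)]
mod tests {
    // Ordinary unit tests live right next to the code they exercise,
    // with no third-party test framework needed.
    #[test]
    fn counts_fields() {
        assert_eq!(super::field_count("a,b,c"), 3);
    }
}
```

And cargo doc renders that same comment, example included, as the API documentation.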
I think I'll stop it here. I wrote this comment with rose tinted glasses. Rust is not a panacea and there are lots of things about it that are inferior to other languages. For example, I presented one particular example where Rust and Python code have similar levels of complexity, but obviously, that does not necessarily generalize. There are other things that make Rust inferior in certain cases, such as compile times, platform support and certification. Language complexity is hard to fix, but I largely see the other things as at least having the potential to improve significantly. And if they don't—or don't improve as much as folks hope—then that's OK. Rust doesn't have to be All Things to All People at All Times. The rosy picture I painted above can still be true for men (and hopefully many others) while not being true for others. Trade offs aren't just purely technical endeavors; costs that may not matter as much to me might matter a lot more to others.
Typo that gives the completely wrong idea: “ can still be true for men”
I’m sure you meant “me” instead of “men”
Hah, yes! Thanks for catching that. (The edit timeout has passed, otherwise I’d fix it.)
Nix and NixOS is what comes to mind, because they kind of rescued my whole interest in computers back in 2015, not by being perfect, indeed being quite rough, but by showing that it’s at least possible to have a packaging and deployment mechanism that makes sense, works reliably by design, and makes previously hard things quite nice.
In a sense it’s a single project (a “pearl in the mud”) but at the same time it’s a big octopus that embraces everything else, almost kind of like an Internet Archive for the whole interconnected free software universe. And it makes it possible, indeed fun, to make projects that span multiple ecosystems.
I recently made (for a client) a small web frontend app whose Nix-based build can run—reliably, on any CI server—a smoke test against a headless Chromium via WebDriver, hitting a fresh local Ethereum testnet with several contracts deployed, all recorded with ffmpeg, etc. Nix easily ties all this together in such a way that I'm confident it will work the same way years from now.
NixOS seems to me also like an optimal base for a potential flourishing ecosystem of custom distributions. You can generate your own virtual machine images, AWS images, live USB ISOs, etc. I’m looking forward to doing fun stuff with this. A couple of years ago just for fun I made a custom NixOS ISO that boots into a configured X system optimized for airgapped blockchain transactions, and I have a few ideas for kiosk-type projects and specialized systems for which NixOS would be perfect.
Does the project you mentioned happen to be open source?
In concept I've found Nix interesting, but in practice it has a lot of detail to ramp up on that I've found off-putting. It's not that I want it dumbed down; I would just like to see small bite-sized examples of it built into larger ones.
Computer data interchange in general. It's far from perfect, but we have good, universal solutions for many very fundamental things that various companies, governments and research institutions fought over basically up until the 1980s:
These are all problems that I look forward to not having to re-solve in the next 50 or 100 years.
The wonderful world of ShaderToy! A community of graphics programming enthusiasts from beginners to seasoned pros. All code is open, the results are immediate, and everyone is in it for the fun.
The happy state of Haiku OS. It’s a completely open source operating system that is a joy to use, fast, simple, and elegant.
More importantly, the devs have managed to push the state of the art forward (packagefs is the most interesting package management strategy in years); and they’ve managed to build some of the best open source desktop hardware support anywhere.
This is what I’m happy about too - it’s not just a cute nostalgic environment that you’d think would be stuck in the mud, but it’s actually surprisingly innovative.
Is Haiku your daily driver? I used BeOS 5 for a few weeks in 2002 and loved it, but it didn't support my crappy Winmodem, so I had to dual boot in order to do much and eventually lost interest. I've tried out Haiku every now and then on older laptops, etc., but never stuck with it for more than a few hours because I feared the productivity loss of moving from macOS.
Sadly no. Many years ago, BeOS was my daily driver (I built several machines over the years using only compatible hardware). I loved BeOS. Might even still have the Professional Edition box lying around somewhere. Such a great system.
Haiku, sadly, can’t be my daily driver. For work I need to be able to run large numbers of VMs at full speed, video chat using Slack and Google Meet, and have encrypted disk and external monitor support. I know the latter two are being worked on, but I don’t know if the other requirements will ever be there, sadly.
(Speaking of Winmodems, before I was a BeOS user, I was an Amiga user. The Amiga models I used back then couldn’t even use internal non-Winmodems. External modems were often twice the price of internal modems, and three times the price of Winmodems…it hurt.)
3D printing. It's a glorious golden age for 3D printing, and it's all even open source. I have an open source printer that I control with open source software, on a libre operating system, printing designs created entirely in open source software. And the whole thing has gotten so reliable it's almost unbelievable.
I'm not sure what to attribute all that success to exactly, but I think it's probably some combination of: the basic tech being fairly simple and very well understood (the main thing stopping it from taking off earlier was patents, which have since expired); its overlap with the general maker community, who are used to rolling up their sleeves, getting things done, and giving that knowledge away for free; and the fact that there's bespoke hardware to be sold, and therefore companies that benefit from the surrounding software ecosystem being good so they can sell more hardware.
Disclaimer: opinions are my own, and I no longer do IT or software development for a living.
Anyways, I work for an ISP (cable TV) doing installation and troubleshooting. One of our internal tools for telemetry (mostly modem signal level/quality statistics over time, used for provisioning service and diagnosing problems with someone's service) is wonderful and checks all the boxes.
It makes our jobs in the field infinitely easier. A customer with chronic issues with their internet or telephone service will call in, and have a technician out. When one of us is out at their home or business, we find nothing wrong with any of their cabling from the telephone pole to their modem. At this point, most technicians can only throw their hands up and say that it isn’t a problem here, have a nice day.
Our telemetry allows us to conveniently (from a smartphone) change orders for obscure provisioning issues, and most importantly, view statistics for an entire network (generally bound geographically) and say that yes, we are having issues in the area and that you, dear customer, are not the only person having issues with your service, look and let me explain. Now, I can use the proof I have from our telemetry to present this to our “outside plant” maintenance crew or my boss, and hopefully get things fixed.
Whether or not anything ever gets fixed in a timely manner is a different topic. :)
The development team is always looking for input and for ways to make our telemetry more useful. The user interface is intuitive and forthright with the important information. From the outside looking in, that team is having one hell of a fun time developing a useful product, mostly because their objective is helping out our other departments, not making a profit or gaining internet cred.
The happy state of hacker-friendly hardware! Some examples:
It is possible to program FPGAs using entirely open source tools!
A project called SymbiFlow aims to combine a synthesis tool with multiple bitstream backends, amounting to what it bills itself as: the "GCC of FPGAs".
Of course, open source FPGA work isn't really taken seriously by the FPGA industry yet. There are big support contracts involved and so on, but given how prevalent open source software is in industry these days compared to ten or twenty years ago, I suspect it's only a matter of time before this mental model catches on in the FPGA world too.
Here is a nice presentation about what you can use FPGAs for in your own projects.
WebAssembly. While it’s still a nascent technology that has yet to win major mindshare, the work that is being done in this area will help bring us faster, safer and more reliable web applications, as well as making a bevy of existing software more useful and portable.
In my opinion, Berkeley Packet Filter (BPF) was a very elegant and successful project. It introduced a safe way of "scripting" the in-kernel packet filter and formalized an approach to doing such things in general, something that would previously have been unthinkable: uploading user code to run in the kernel? You must be mad!
The BPF approach was adopted by Linux, which later added a JIT-compiler to make it run even faster and extended it to be used in the context of defining security policies for applications with seccomp-bpf.
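For flavor, here's roughly what the seccomp-bpf side looks like (a minimal sketch, assuming Linux and the libc crate; the opcodes and constants are the standard kernel ones): a four-instruction classic-BPF program that kills the process if it ever calls getpid, and allows everything else.

```rust
use libc::{prctl, PR_SET_NO_NEW_PRIVS, PR_SET_SECCOMP, SECCOMP_MODE_FILTER};

// repr(C) mirrors of the kernel's struct sock_filter / sock_fprog.
#[repr(C)]
struct SockFilter { code: u16, jt: u8, jf: u8, k: u32 }
#[repr(C)]
struct SockFprog { len: u16, filter: *const SockFilter }

fn main() {
    // Classic BPF opcodes: BPF_LD|BPF_W|BPF_ABS = 0x20,
    // BPF_JMP|BPF_JEQ|BPF_K = 0x15, BPF_RET|BPF_K = 0x06.
    // Offset 0 of struct seccomp_data is the syscall number.
    let filter = [
        SockFilter { code: 0x20, jt: 0, jf: 0, k: 0 }, // A = syscall number
        SockFilter { code: 0x15, jt: 0, jf: 1, k: libc::SYS_getpid as u32 },
        SockFilter { code: 0x06, jt: 0, jf: 0, k: 0 },           // equal: SECCOMP_RET_KILL
        SockFilter { code: 0x06, jt: 0, jf: 0, k: 0x7fff_0000 }, // else:  SECCOMP_RET_ALLOW
    ];
    let prog = SockFprog { len: filter.len() as u16, filter: filter.as_ptr() };
    unsafe {
        // Required so that an unprivileged process may install a filter.
        prctl(PR_SET_NO_NEW_PRIVS, 1u64, 0u64, 0u64, 0u64);
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER as u64, &prog as *const SockFprog);
    }
    // From here on, any call to getpid() terminates the process with SIGSYS.
    println!("filter installed");
}
```

The kernel verifies the program before accepting it, which is exactly what makes "uploading user code into the kernel" safe.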
UPDATE: Also, thank you for this. We need more positive notes, especially in times like these.
I want to digress (but see the last two paragraphs).
These “sad state of” things do not, in my opinion, reflect the state of the subject under discussion so much as the expectations of the writer.
In the most recent case, the writer is sad about GUI things, and writes stuff about Qt and others that’s astonishingly similar to what similar people wrote twenty years ago, when I worked at Trolltech, with hardly a change. He’s sad (why do I feel sure it’s a he?) that… something. The something could have been more specific, but I did get the gist of it.
The software he writes about either has a business model that produces income, or doesn’t. As long as I paid attention to this, both kinds behaved as one might expect. The maintainers with a business model paid attention to whatever was compatible with their income, and were reluctant to take on responsibility for anything incompatible. The ones without a business model did whatever interested them, and were nice people who’d spend a bit of time helping people, too.
This is what you should expect. Neither class tended to merge patches that had nothing to do with their business model or interests (for separate, different reasons, and in both cases the reasons make sense). Neither class tended to implement stuff that interests you but not them, or to do it in the way you want but they/their customers don't.
IMO this is fine. It's what should be expected. People with customers should serve their customers; people without should write some interesting code and then go for a lovely picnic with their beloved. IMO that's fine, and people who consider it a sad state should adjust their expectations…
Therefore, I'd like to mention the happy state of Qt, which has had better documentation than whatever you work on, and has kept that up for more than twenty years. That's praiseworthy longevity. The team has also always been careful about maintaining compatibility — the changes necessary in order to go to a two-years-later version of Qt have always been admirably small.
I could say something similarly positive about most of the items on that list. I choose Qt because it was first on the naysayer’s list, and because I know it better than most other GUI libraries.
While what you say is true, it's also a false dichotomy. Software is generally expected to serve people and produce value. Profit and individual values are a partial proxy for that, but they certainly don't represent the whole spectrum of useful things (useful for somebody) that can be made with software. You can say you're selfish, that you don't care and you just want to pursue your own good against the interests of humanity (either in the form of profit or in the form of personal satisfaction), but for a human being that participates in society, the expected behavior is that you do care about having a positive impact on other people. So the problem is framed as a gap between this broad goal and your narrow, specific action.

The implicit expectation that users, contributors and commentators have is that your software should have a positive impact on something, unless the author explicitly states that they don't intend it to (like a transpiler from Brainfuck to K). Because if your project is out there just for your benefit, then people will frame it as such and probably distance themselves from it, not contribute and not promote it. If you instead do not take this position, you're expected, as a software developer, to care about the multitude of interests, value systems and needs that your users might have. Publishing software is not a morally neutral act. With great power comes great responsibility.
Could you describe how this dichotomy is false more concretely? Perhaps using the same example — a GUI library which is available as open source and has the language bindings the developer's customers pay for, but does not have bindings for other programming languages? It has a positive impact on the people who pay, and it has a positive impact on the people who don't pay but do want roughly the same as the paying customers.
That's just one of the possible models. It's common in the open source community, but it's one among many.
A GUI library is somewhat neutral as a tool, so it's not a good example. Most software is not written for other programmers (like a GUI library is) but for real-world people, who have complex needs, values and desires. Think of software written for communities, for political goals, to support people in need, to push the economy in new directions, to create new cultural, social, artistic spaces. All of this software is written neither for profit nor for (directly) appeasing a need or desire of the developer, but to address problems you see in the world (that are not necessarily yours) and to create change.
I intended it as two models:
Can you describe some examples of those other models you have in mind? And perhaps explain why it’s reasonable to expect more of those others than of the two I described?
EDIT: Just for clarity, I regard addressing problems you see as a special case of following your interests. (And I regard living by grants or donations as a kind of business model, with the grantor or donors filling the role of customers.) But even if you disregard that taxonomy, my argument applies mutatis mutandis. If software is written because the developer(s) want to address one problem, expecting the software to do something else is unreasonable. If software is written under a grant, expecting it to do something not mentioned in the grant application is unreasonable, etc.
You can model all human behavior as driven by individual interest if you want, but it won't get you far. It's a narrow, ideological way to model humans, and it fails to explain many things, for example the expectations people have about software. So if you stick to a narrow perspective, it's clear you won't be able to understand complex social behaviors that exist outside that mindset.
Well, for example, the whole Free Software movement has very clear political propositions, even though its own political positioning was individualistic to begin with, which might create a lot of confusion. Going to more modern things, I would pick as examples the whole federated software ecosystem, dyne.org, FairBnB, Coopcycle and all the other co-op oriented platforms.
Okay, so let me cook up an example using a well-known free software movement personality with clear political propositions: rms. Suppose developer X is working for an arms manufacturer on a classified-source military system, using an LGPL’d library, and has some problems involving code rms wrote. Do you think X can expect help from rms? Or should be able to?
I think that’s fine. rms gets to decide what he wants to spend time on, and if that’s not what X wants, it’s not a “sad state of” either that library, or of rms’ work in general, or of that kind of library in general, it’s a sad state of X’s miscalibrated expectations.
That's a whole problem with how software is understood right now, and yes, if you frame the situation in these terms, RMS doesn't have to work on arms manufacturing software, just as he doesn't have to work on anything else. The problem is that in such a system, regulated only by individual interest, an arms manufacturer is the same as a community of African children who need a patch to keep the software of their well pumping water. RMS is the only judge of his own time, and the fact that he's the only person who can write that patch (absurd example) has no weight on the expectations we have about his behavior.
Anyway there’s a nice article that touches some of these points: https://lipu.dgold.eu/original-sin.html
The scientific Python community is amazing. Astronomy, image processing (usually biology), earth sciences (I just learned about magnetoencephalography and electroencephalography), bioinformatics, chemistry… the list goes on and on.
The Go tooling (compiler, modules, fmt, guru, etc.)
So, I wanted to get back into programming. I was re-learning Python by just using it for several personal projects. Where I work, we were getting slammed in all areas with horrible technology and processes holding us back. This was causing large numbers of people misery, both inside and outside the company. I figured out a way to speed things up several times over with an out-of-band system that was simpler. I got the prototype together, manually entered all the data since I can't connect it to the network, and our productivity improved severalfold. After some practice iterations, I wrote that quick and dirty prototype in about 20 minutes of a lunch break. It may or may not get adoption from the top, but coworkers thought it was the best day we had. Yay!
The reason for this comment isn't my program, though. Its interface was horrible, even if efficient. My memory is too broken to even program right. So, I was mostly doing Print-Oriented Programming & Debugging on the side projects until they worked. I had to DuckDuckGo all kinds of problems, since I was coding before really learning the language, in whatever subset I used in each project. I eventually got Visual Studio Code, pylint, etc. I built up a set of functions for things like I/O and requests that I kept having to deal with. Eventually, it was just easy to write that one program within the IDE. All the problems it immediately found in the bigger, earlier project I did in Gedit made me wish I had used it then, too.
What I saw in all this reminds me of a Mr Rogers quote I saw recently. At each point, there were people there helping me. They built the interpreter, made a better IDE, made its error-finding extensions, made pretty good docs, answered folks' questions on forums (not just SO), wrote prototypes of code to give me head starts (a big help with Flask), wrote great error messages, made me libraries, etc. Unlike other ecosystems I tried, the Python ecosystem had all of this in such a way that I could keep making progress while barely knowing what I was doing. As I learned for real, I made progress even faster with more code I could keep.
Although I learned its weaknesses, the empowering effect of this huge ecosystem keeps me doing Python, with less time spent on the better language. It gave me a good idea of what other language I need to be using to address those deficiencies. Might learn it, might not. Meanwhile, I cranked out some more code today for my custom bookmarking system. I used Python knowing others have got my code's failings covered with their tools, docs, and helpful write-ups. It's pretty awesome compared to when I first started programming.
I'm a fan of Prometheus. It's operationally simple and rock solid. It's equally easy to set up Prometheus for a traditional bare metal server farm and for containerized microservices. Compare that to Graphite/Carbon, which, despite its merits, requires the operator to learn how to run 38 different Python daemons.
We have some big Epyc 512GB RAM regional primary pollers scraping tons of metrics, as well as customized/mini containerized Prometheus + Thanos shipping to Ceph. Versatile.
Silverblue has been a really interesting ride for someone looking for a home between conventional Linux installs and Nix. The benefits of the mature Fedora+rpmfusion repositories with ostree have made me feel like I've finally found a combination of stable/rolling that matches the lifestyle I want to have with my tooling.
Tell us more… are you using Silverblue in combination with nix?
No. I was examining Nix, and the issue for me was the "trust" in the packaging (Nix packages are largely what I'd say are "hobbyist" right now; there is not as much rigor in the packaging pipeline as in, say, {debian, fedora}, and specifically a lot of them are lagging behind upstream). So Silverblue is a weird mix of being able to "compose" your system while utilizing existing packaging ecosystems (Fedora).
I wondered about that too, but I think they are referring to shared capabilities (immutable /, atomic updates/rollbacks). As far as I know, it’s not yet possible to create top-level directories on Silverblue such as Nix needs (unless you use a wrapper that uses user namespaces).
That said, it’s of course possible to create a container with Fedora Toolbox and run Nix in that.
I am a Nix(OS) user. But I also find Silverblue very interesting. For people who do not want to go over the steep learning curve of Nix, it offers some of the same benefits, while the whole Fedora experience is very smooth.
Installing stuff. Ok, when I started using computers, there was no “installing stuff”, but then if your stuff wasn’t on a ROM cartridge you had to know at least one different command for each model of computer to get it to listen to the tape.
Then when home computers started to get hard disks, you had to know whether to copy the stuff from the source media into the hard disk, or whether to run a thing, and what virtual drives you had to map where to get it to work. If your thing came from the internet or a BBS, you had to have extractors for whichever crazy lha/hqx/sit/lhz algorithm the person on the other end had used. Of course your program might not work if it didn’t have the right file in L: or /lib or wherever. Of course, another program might not work if it did.
And that was after you’d got your operating system into the disk, having had to know things like which IDE device the disk was on and what partition layout you needed. Some of the bad ones asked you how big a partition you wanted for swap, or whether to “bless” a certain file.
These days, almost all operating systems have reasonable installers, and almost all have simple systems for adding software. The good ones give each additional application its own environment so it doesn’t mess up the libraries for everyone else.
The happy state of the developer spectrum. I like how, as a developer, you can choose to be a preacher of a new shiny thing (see the neighboring comment about Rust, for example) or a dark matter developer (https://www.hanselman.com/blog/DarkMatterDevelopersTheUnseen99.aspx). You can also easily switch from preacher to dark matter and back. Freedom and diversity are everything!
For me in recent years:
The Rust ecosystem. The honeymoon is over (I have been using Rust for a few years), but I still like the language, the package manager, and several high-profile crates a lot (ndarray, petgraph, etc.). For me Rust has changed programming a lot in that it's made it possible to build high-level abstractions that are still performant (see the small sketch at the end of this comment).
Nix. I am probably still in the overzealous honeymoon phase. But for me there is clearly a tech life before and after Nix. It's certainly not perfect, but it has made many aspects of software engineering easier for me: defining development environments (with direnv/nix-shell), making CI builds more robust (since you can trivially pull in all kinds of dependencies), defining systems declaratively, and having access to a huge package set. The community is very friendly and makes it easy to start contributing. My only qualm with Nix is that it probably has too much of a learning curve and is too different to see wide adoption.
The Linux ecosystem. For a long time, I have been very critical of Linux, despite loving it and having used it in some form since 1994. In recent years, things have really started to come together. System management has become easier, better documented, and more uniform thanks to systemd. The graphics stack has really improved with Wayland, Mesa, and desktops that are moving to Wayland. Flatpaks + portals + PipeWire are slowly bringing truly sandboxed applications.
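To illustrate the Rust point above, here's a tiny sketch with the ndarray crate (the numbers are made up): a high-level matrix expression with no interpreter or boxing in between.

```rust
use ndarray::array;

fn main() {
    // High-level array expressions that compile down to plain loops
    // over contiguous memory.
    let a = array![[1.0, 2.0], [3.0, 4.0]];
    let b = array![[5.0, 6.0], [7.0, 8.0]];
    println!("{}", a.dot(&b)); // matrix product
}
```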