1. 9

    Except that, sadly, the tooling is pretty weak.

    I like the language but I don’t like the experience of developing with it. I’ve grown soft after a couple of years of Rust development.

    1. 6

      I don’t know where Rust thinks it’s going, though. I can’t even update to the latest version of Bitwarden_rs because it requires a rust-nightly compiler which won’t build on FreeBSD because it dies with an invalid memory reference

      error: process didn’t exit successfully: /wrkdirs/usr/ports/lang/rust-nightly/work/rustc-nightly-src/build/bootstrap/debug/rustc -vV (signal: 11, SIGSEGV: invalid memory reference)

      1. 5

        That’s Bitwarden_rs’s fault for using nightly, imo.

        Looks like this bug has already been reported with bitwarden_rs though: https://github.com/dani-garcia/bitwarden_rs/issues/593

        1. 3

          Every non-trivial rust program I’ve tried to use so far requires a nightly compiler. This ecosystem is just a trash fire.

          1. 8

            I’ve got an 80k+ LOC Rust codebase I work with at work that doesn’t use nightly. In fact, we’ve never needed nightly… The program runs on production workloads just fine.

            1. 12

              I’m using Rust professionally and I don’t even have a nightly compiler installed on my computer. Almost all large Rust programs I see don’t require nightly compilers. Any that do tend to be OS kernels, with the exception of a few web apps like this project that use Rocket, a web framework (with many good alternatives, I might add, not to disparage Rocket) that requires syntax extensions and loudly states it requires nightly Rust (and is apparently planning to target stable Rust in its next release). People who use nightly are generally already writing something experimental which is explicitly not production-quality, or they’re writing something that’s working towards being ready for an upcoming feature (which lets the ecosystem adapt quickly to language changes, instead of waiting months or years for trickle-down as is common in other languages), and they’re targeting what is explicitly an alpha-quality compiler to do so.

              1. 3

                People who use nightly are […] and they’re targeting what is explicitly an alpha-quality compiler to do so.

                Or they just want to write benchmarks ;)

                1. 7

                  criterion.rs is a better harness and works on stable Rust. I’ve been slowly migrating all my crate benchmarks to it. The only advantage of the built-in harness (other than compile times) is the convenient #[bench] annotation. But criterion will get that too, once custom test framework harnesses are stabilized. See: https://bheisler.github.io/post/criterion-rs-0-3/
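
                  For anyone curious what the stable-Rust side looks like: criterion (and any hand-rolled stable harness) drives the timing loop itself rather than relying on the nightly-only `#[bench]` attribute. Below is a minimal, illustrative sketch of that core idea using only the standard library — the `fib` function is an invented workload, and real criterion layers warm-up, statistical analysis, and outlier detection on top of this:

```rust
use std::time::Instant;

// Illustrative workload; in criterion this would run inside b.iter(...).
fn fib(n: u64) -> u64 {
    (1..=n).fold((0u64, 1u64), |(a, b), _| (b, a + b)).0
}

fn main() {
    // Naive timing loop on stable Rust; criterion wraps this same idea
    // with warm-up, statistics, and outlier detection.
    let iters = 1_000u32;
    let start = Instant::now();
    let mut sink = 0u64; // accumulate so the call isn't optimized away
    for _ in 0..iters {
        sink = sink.wrapping_add(fib(30));
    }
    let elapsed = start.elapsed();
    println!("fib(30) = {}, avg {:?}/iter", sink / iters as u64, elapsed / iters);
}
```

                  This sketch is not the criterion API itself (criterion uses `criterion_group!`/`criterion_main!` and a closure-based `Bencher`); it only shows why no nightly feature is needed for the core measurement loop.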

                  1. 6

                    …and don’t want to use excellent third party tools that function on stable, like Criterion. ;)

                    I admit, the fact that Criterion works great on stable and the built-in cargo bench doesn’t IS pretty dismal.

        1. 2

          If you can find it, Scientific Forth by JV Noble (same author as the article) is a great book on doing practical work in Forth.

            1. 1

              Looks like it. I have a nearly new physical copy that I bought from a friend when he retired.

          1. 3

            The couple of times I’ve written Scala for any sizable chunk of code, I came away underwhelmed. When I was in college, it was interesting because I knew OOP and wanted to learn FP. After using it for actual work, I’ve decided I’d instead write Java or Clojure.

            1. 2

              I’ve been picking up Common Lisp again. I have some work I’d like to do around web services and parsing logs that I feel CL would suit nicely.

              Slowly learning that when I tried to learn it at 19, I definitely did not grok its power the way I do now. My day-to-day work is in Rust (>70k LOC codebase) and that’s helped open up my thinking. CL has mapped nicely to that work.

              1. 2

                I’ve been countering the myth, too. I just think this person did a better job by using a lot of market-grabbing examples that won in this way.

                1. 3

                  Agree vehemently. Most of my experience with people arguing about perf is them misquoting Knuth (”Optimization is the root of all evil in programming.”).

                  In reality, the process for building software should be:

                  • make it correct
                  • make it simple
                  • make it fast

                  Preferably follow that order, but you should perform every step.

                  1. 3

                    “Premature optimization is the root of all evil” is such a misunderstood quote. The full context really highlights what it’s about:

                    Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

                    The noncritical parts section is important: we shouldn’t spend time (initially) making fast the parts of the program that are seldom called, but we should spend the time necessary to make the core functionality fast. The small efficiencies bit is also important: we can let small efficiencies go by, but we probably shouldn’t let big inefficiencies go by.

                    1. 2

                      I get a lot of goodwill at my current job by doing that third step for our user-facing software. Most of what I do is trivial profiling and optimizations. A lot of goodwill.

                      1. 1

                        I’ll add the Richard Gabriel principle to get it 90% of the way to correct… works well enough… before correct. Maybe that, simple, and fast before correct.

                        1. 2

                          Ya - ”correct” is a pretty overloaded word. Sometimes you can simplify, removing unnecessary bits which allows you to get to ”correct” faster.

                    1. 71

                      Here’s a script to migrate your repos to hg.sr.ht:


                      1. 4

                        I like competition in general. Sometimes I love watching it happen, though. :)

                        1. 6

                          hey @SirCmpwn thanks a ton for that script. Just imported 19 repos (both git and hg) into sr.ht. Some of those repos were 9 years old. : )

                          1. 3

                            Great :)

                          2. 1

                            I went ahead and got a subscription for Source Hut even if I may only use it as a mirror for now.

                            I think diversity is good and would like to play more with it in the future :)

                            1. 0

                              I’m trying to do this migration and I’m getting some errors related to jq: “parse error: Invalid string: control characters from U+0000 through U+001F must be escaped at line 2, column 34”

                              Where’s the bug tracker (or whatever) I can report this on for you? :)

                              1. 2

                                Can you pull down the latest script and try again? I haven’t set up a bug tracker for this script.

                            1. 56

                              Fortunately, it’s also the best of currently available major browsers, so it’s not exactly a hardship.

                              1. 22

                                Not on macOS. Sure, it has a whole lot of great features, but it’s just slow. It feels slow, looks slow, and macOS keeps telling me that Firefox is using an excessive amount of power compared to other browsers.

                                I guess it’s too much to ask for, for Firefox to feel like a good, native macOS app, like Safari, but the fact of the matter is that that is why I don’t use it as my main browser.

                                1. 19

                                  I use it on Mac OS X and it doesn’t feel slow to me at all. And it’s not using an excessive amount of power that I can tell. Perhaps it’s the version of Firefox being used?

                                  1. 14

                                    I’ve been sticking to Safari on MacOS because I’ve read that it really does make a difference to battery life (and I’m on a tiny Macbook so, you know, CPU cycles aren’t exactly plentiful). This thread just prompted me to check this for myself.

                                    I opened a typical work mix of 10 tabs in both Safari 12.1 and Firefox 66.0.3 on MacOS 10.14.4: google calendar + drive, an open gdocs file, two jira tabs, this lobsters thread (well, it is lunchtime…) and the rest github. Time for some anec-data! :-)

                                    After leaving both browsers to sit there for 10 mins while I made lunch (neither in the foreground, but both visible and showing a github page as the active tab), these are the numbers I eyeballed from Activity Monitor over about a 30 second period:


                                    Firefox:

                                    • Energy Impact: moving between 3.3 and 15.6, mostly about 4
                                    • CPU: various processes using 0.3, 0.4, 0.5 up to one process using 1.4% CPU

                                    Safari:

                                    • Energy Impact: moving between 0.1 and 1.3, mostly around 0.5
                                    • CPU: more processes than Firefox, but most using consistently 0.0 or 0.1% CPU

                                    Firefox isn’t terrible but Safari seems really good at frequently getting itself down to a near-zero CPU usage state. I’ll be sticking with Safari, but if I was on a desktop mac instead I think I’d choose differently.

                                    As an aside, Activity Monitor’s docs just say “a relative measure of the current energy consumption of the app (lower is better)”. Does anyone know what the “Energy Impact” column is actually measuring?

                                    1. 5

                                      I have had the same experience with Firefox/Chrome vs Safari.

                                      I use Chrome for work because we’re a google shop and I tend to use Firefox any time my MacBook is docked.

                                      But I’m traveling so much, I generally just use Safari these days.

                                    2. 9

                                      I use it on Mac OS X and it doesn’t feel slow to me at all.

                                      If you can’t feel and see the difference in the experience between, say, Firefox and Safari, I don’t know what to tell you.

                                      And it’s not using an excessive amount of power that I can tell. Perhaps it’s the version of Firefox being used?

                                      Have you tried checking in the battery menubar-thing? There’s a “Using Significant Energy” list, and Firefox is always on it on my machine if it’s running. That goes for both Firefox and Firefox Nightly, and it has been the case for all versions for a long time. My two installs are updated as of today, and it’s the same experience.

                                      1. 1

                                        If you can’t feel and see the difference in the experience between, say, Firefox and Safari, I don’t know what to tell you.

                                        There are plenty of people who can’t hear the difference between $300 and $2000 headphones. Yes, there are audiophile snobs who’re affronted by the mere idea of using anything but the most exquisitely constructed cans. But those people are a vanishingly small minority of headphone users. The rest of us are perfectly happy with bog standard headphones.

                                        Apple likely had to descend through numerous circles of hell while hand-optimizing Safari for the single platform that it needs to run on. Will Firefox get there? Unlikely. Will most users even notice the difference? Most certainly not.

                                        1. 6

                                          They will when their battery life is abysmal and they start hearing that it’s because of Firefox.

                                          I really want to see Firefox get more adoption, but there are a lot of techies with influence who will keep away because of this, myself included. It’s not a convenience thing - I just can’t get to mains power enough as it is in my job, so more drain is a major problem.

                                          1. 1

                                            They will when their battery life is abysmal and they start hearing that it’s because of Firefox.

                                            The problem is that the feedback cycle isn’t even long enough for them to hear about this. The cause and effect are almost immediate depending on your display resolution settings with bug 1404042.

                                            1. 3

                                              This is what happens when you fight the platform.

                                              1. 2

                                                This is what happens when the platform is hostile to outsiders.

                                                1. 8

                                                  See, I don’t see it that way. I see it as Mozilla deciding on an architecture for their software that renders that software definitely suboptimal on the Mac. It’s just a bad fit. I’m not claiming that Mozilla should have done things differently – they are welcome to allocate their resources as they see fit, and the Mac is most definitely a minority platform. There are many applications that run on the Macintosh that are not produced by Apple that don’t have these problems.

                                                  iOS is a different story, one where hostility to outsiders is a more reasonable reading of Apple’s stance.

                                          2. 2

                                            Now that I’m at work, I’m seeing what hjst is showing. This doesn’t bother me that much because I use the laptop at work more like a desktop (I keep it plugged in). But yes, I can see how Firefox might be a bit problematic to use on the Mac.

                                          3. 1

                                            I’ll have to check the laptop at work. At home I have a desktop Mac (okay, a Mac mini).

                                          4. 4

                                            There are known issues which are taking a long time to fix. Best example is if you change the display resolution on a retina Mac. You can almost see the battery icon drain away on my machine.

                                            1. 3

                                              I find it depends a lot on what FF is doing - usual browsing is fine, but certain apps like Google Docs or anything involving the webcam make it go crazy.

                                              1. 20

                                                Google sites, unsurprisingly if disappointingly, don’t work as well in Firefox as they do in Chrome. But that’s really on Google, not Mozilla.

                                                1. 15

                                                  They used to actively break them - e.g. GMail would deliberately feed Firefox Android a barely-functional version of the site. https://bugzilla.mozilla.org/show_bug.cgi?id=668275 (The excuse was that Firefox didn’t implement some Google-specific CSS property, that had a version in the spec anyway.) They’ve stopped doing that - but Google’s actions go well beyond passively not-supporting Firefox.

                                            2. 5

                                              For me, it feels faster than Chrome on MacOS, but the reason I don’t use it is weird mouse scroll behavior (with an Apple mouse). It differs too much from Chrome’s behavior. I don’t know how to debug it, how to compare, or what the right behavior is (I suspect Chrome’s scrolling is non-standard and dampens acceleration, while Firefox uses standard system scrolling). It just feels very frustrating, but in a subtle way: I become nervous after reading lots of pages (not right after the first page). I tried various mouse-related about:config settings but none of them had any effect (and it’s hard to evaluate results because the differences are very subtle).

                                              Maybe the answer is to use a standard mouse with a clicky scroll wheel, but I hate clicky scroll wheels. “Continuous” scrolling is one of the best input device improvements of recent times (though it would be better if it were a real wheel/trackball instead of a touch surface).

                                              1. 1

                                                Have you tried Nightly yet? I believe there are some great improvements made recently for this. It isn’t all fixed, but it has improved.

                                                1. 3

                                                  I’m on Nightly right now, and it hasn’t improved for me at least.

                                                2. -1

                                                  I think macOS disadvantages apps that compete with Apple products. That’s unfortunate though.

                                                  1. 7

                                                    Any evidence for this statement?

                                                    1. 9

                                                      Do you have any proof?

                                                      Anecdotally, I use a lot of third-party apps that are a lot better than Apple’s counterparts.

                                                      I just think the truth is that Firefox hasn’t spent enough time optimizing for each platform, and on macOS, where look and feel is a huge deal, it simply falls short.

                                                      1. 1

                                                        The reports that Firefox has issues on macOS and Apple’s behaviour with iOS, for starters.

                                                        1. 7

                                                          Often the simplest explanation is the correct one, meaning it’s more likely that Firefox just hasn’t been optimized for macOS properly. If you look at the reports on the bug tracker, this seems to be the case.

                                                          Also, if your theory were correct, why are other non-Apple browsers like Chromium not having these issues? Could it perhaps be that they have in fact optimized for macOS, or do you propose that Apple is artificially advantaging them?

                                                          1. 13

                                                            pcwalton hints on Twitter that the gains that e.g. Safari and WebKit have come through the use of private APIs in macOS. You could probably use those APIs from Firefox as well, at the cost of doing tons of research on your own, while WebKit can just use them. (Further down the thread, he hints at actually trying to bind to them.)


                                                            1. 3

                                                              That’s very interesting, and it’s probably a factor. However, these are problems that Firefox has, not all third-party browsers. No Chromium-based browser has these issues, at least in my experience. Maybe it’s through private APIs that you can optimise a browser the most on macOS, but it doesn’t change the fact that Firefox is under-optimised on macOS, which is why it performs as it does.

                                                              1. 8

                                                                Point being: Chromium inherits optimisations from Apple’s work, which Mozilla has to work hard to develop in a fashion that works with their architecture. Yes, there’s something to be said about organisational priorities, but also about not being able to throw everyone at that problem.

                                                                I’m really looking forward to webrender fixing a lot of those problems.

                                                                1. 1

                                                                  And it’s a sad fact, because I’d love to use Firefox instead of Safari.

                                                                  1. 7

                                                                    Sure, from a user’s perspective, none of that matters.

                                                                    Just wanted to say that this is hard and an uphill battle, not that people don’t care.

                                                                    The Firefox team is well aware of those two contexts.

                                                            2. 0

                                                              It’s certainly possible. But at the very least, Apple has little incentive to make Firefox work well on macOS. Chrom{e|ium} is so widely used that Apple would hurt themselves if it didn’t work well on macOS.

                                                              I’d be a bit surprised if Mozilla is really falling down on optimising Firefox on macOS. It’s not as if Mozilla is a one man operation with little money. But perhaps they decided to invest resources elsewhere.

                                                        2. 1

                                                          That’s true in cases where apps want you to pay for features (like YouTube not offering Picture-in-Picture since it’s a paid feature and Apple wants money for it to happen) but not true in the case of Firefox. Unfortunately, Firefox’s JavaScript engine is just slower and sucks up more CPU when compared to others.

                                                      2. 7

                                                        Yeah, I’ve switched between Firefox and Chrome every year or two since Chrome came out. I’ve been back on Firefox for about 2 years now and I don’t see myself going back to Chrome anytime soon. It’s just better.

                                                        1. 3

                                                          Vertical tabs or bust.

                                                        1. 2

                                                          Mostly still trying to get my footing as a Dev Lead. A couple of months ago, I got moved officially into a Dev Lead role and that’s been interesting - trying to determine costing and coordinate a team is a new and fun ball game I’ve not played before.

                                                          Besides that, writing a lot of Rust which isn’t new.

                                                          1. 4

                                                            Feel like I’d rather just sit quietly with a notebook and think about code than work with this janky-at-best setup.

                                                            To each their own though.

                                                            1. 30

                                                              I enjoyed the author’s previous series of articles on C++, but I found this one pretty vacuous. I think my only advice to readers of this article would be to make up your own mind about which languages to learn and use, or find some other source to help you make up your mind. You very well might wind up agreeing with the OP:

                                                              Programmers spend a lot of time fighting the borrow checker and other language rules in order to placate the compiler that their code really is safe.

                                                              But it is not true for a lot of people writing Rust, myself included. Don’t take the above as a fact that must be true. Cognitive overheads come in many shapes and sizes, and not all of them are equal for all people.

                                                              A better version of this article might have gone out and collected evidence, such as examples of actual work done, experience reports, or a real comparison of something. It would have been a lot more work, but it wouldn’t have been vacuous and might have actually helped someone answer the question posed by the OP.

                                                              Both Go and Rust decided to special case their map implementations.

                                                              Rust did not special case its “map implementation.” Rust, the language, doesn’t have a map.

                                                              1. 16

                                                                Hi burntsushi - sorry you did not like it. I spent months before this article asking Rust developers about their experiences, concentrating on people actually shipping code. I found a lot of frustration among the production programmers, less so among the people who enjoy challenging puzzles. The latter mostly like the constraints and in fact find it rewarding to fit their code within them. I did not write that sentence without making sure it at least reflected the experience of a lot of people.

                                                                1. 20

                                                                  I would expect an article on the experience reports of production users to have quite a bit of nuance, but your article is mostly written in a binary style without much room for nuance at all. This does not reflect my understanding of reality at all—not just with Rust but with anything. So it’s kind of hard for me to trust that your characterizations are actually useful.

                                                                  I realize we’re probably at an impasse here and there’s nothing to be done. Personally, I think the style of article you were trying to write is incredibly hard to do so successfully. But there are some pretty glaring errors here, of which lack of nuance and actual evidence are the biggest ones. There’s a lot of certainty expressed in this article on your behalf, which makes me extremely skeptical by nature.

                                                                  (FWIW, I like Rust. I ship Rust code in production, at both my job and in open source. And I am not a huge fan of puzzles, much to the frustration of my wife, who loves them.)

                                                                  1. 4

                                                                    I just wanted to say I thought your article was excellent and well reasoned. A lot of people here seem to find your points controversial but as someone who programs C++ for food, Go for fun and Rust out of interest I thought your assessment was fair.

                                                                    Lobsters (and Hacker News) seem to be very favourable to Rust at the moment and that’s fine. Rust has a lot to offer. However my experience has been similar to yours: the Rust community can sometimes be tiresome and Rust itself can involve a lot of “wrestling with the compiler” as Jonathan Turner himself said. Rust also provides some amazing memory safety features which I think are a great contribution so there are pluses and minuses.

                                                                    Language design is all about trade-offs and I think it’s up to us all to decide what we value in a language. The “one language fits all” evangelists seem to be ignoring that every language has strong points and weak points. There’s no one true language and there never can be since each of the hundreds of language design decisions involved in designing a language sacrifices one benefit in favour of another. It’s all about the trade-offs, and that’s why each language has its place in the world.

                                                                    1. 10

                                                                      I found the article unreasonable because I disagree on two facts: that you can write safe C (and C++), and that you can’t write Rust with fun. Interpreted reasonably (so for example, excluding formally verified C in seL4, etc.), it seems to me people are demonstrably incapable of writing safe C (and C++), and people are demonstrably capable of writing Rust with fun. I am curious about your opinion of these two statements.

                                                                      1. 7

                                                                        I think you’re making a straw man argument here: he never said you can’t have fun with Rust. By changing his statement into an absolute you’ve changed the meaning. What he said was “Rust is not a particularly fun language to use (unless you like puzzles).” That’s obviously a subjective statement of his personal experience so it’s not something you can falsify. And he did say up front “I am very biased towards C++” so it’s not like he was pretending to be impartial or express anything other than his opinion here.

                                                                        Your other point, “people are demonstrably incapable of writing safe C”, is similarly plagued by absolute phrasing. People have demonstrably used unsafe constructs in Rust and created memory safety bugs, so if we’re living in a world of such absolute statements, then you’d have to admit that the exact same statement applies to Rust.

                                                                        A much more moderate reality is that Rust helps somewhat with one particular class of bugs - which is great. It doesn’t entirely fix the problem because unsafe access is still needed for some things. C++ from C++11 onwards also solves quite a lot (but not all) of the same memory safety issues as long as you choose to avoid the unsafe constructs, just like in Rust.

                                                                        An alternative statement of “people can choose to write safe Rust by avoiding unsafe constructs” is probably matched these days with “people can choose to write safe C++17 by avoiding unsafe constructs”… And that’s pretty much what any decent C++ shop is doing these days.

                                                                        1. 5

                                                                          somewhat with one particular class of bugs

                                                                          It helps with several types of bugs that often lead to crashes or code injections in C. We call the collective result of addressing them “memory safety.” The extra ability to prevent classes of temporal errors… easy-to-create, hard-to-find errors in other languages… without a GC was a major development. Saying “one class” makes it seem like Rust is knocking out one type of bug instead of piles of them that regularly hit C programs written by experienced coders.

                                                                          An alternative statement of “people can choose to write safe Rust by avoiding unsafe constructs” is probably matched these days with “people can choose to write safe C++17 by avoiding unsafe constructs”

                                                                          Maybe. I’m not familiar with C++17 enough to know. I know C++ was built on top of unsafe language with Rust designed ground-up to be safe-as-possible by default. I caution people to look very carefully for ways to do C++17 unsafely before thinking it’s equivalent to what safe Rust is doing.

                                                                  2. 13

                                                                    I agree wholeheartedly. Not sure who the target survey group was for Rust but I’d be interested to better understand the questions posed.

                                                                    Having written a pretty large amount of Rust that now runs in production on some pretty big systems, I don’t find I’m “fighting” the compiler. You might fight it a bit at the beginning in the sense that you’re learning a new language and a new way of thinking. This is much like learning to use Haskell. It isn’t a good or bad thing, it’s simply a different thing.

                                                                    For context for the author: I’ve got 10 years of professional C++ experience at a large software engineering company. Unless you have a considerable amount of legacy C++ to integrate with or an esoteric platform to support, I really don’t see a reason to start a new project in C++. The number of times Rust has saved my bacon by catching a subtle cross-thread variable-sharing issue, or by enforcing strong requirements through the borrow checker, has spared me many hours of debugging.

                                                                    1. 0

                                                                      I really don’t see a reason to start a new project in C++.

                                                                      Here’s one: there’s simply not enough lines of Rust code running in production to convince me to write a big project in it right now. v1.0 was released 3 or 4 years ago; C++ in 1983 or something. I believe you when you tell me Rust solves most memory-safety issues, but there’s a lot more to a language than that. Rust has a lot to prove (and I truly hope that it will, one day).

                                                                      1. 2

                                                                        I got convinced when Rust in Firefox shipped. My use case is a Windows GUI application, and if Firefox is okay with Rust, so is my use case. I agree that I, too, would be uncertain if I were doing, say, embedded development.

                                                                        1. 2

                                                                          That’s fair. To flip that, there’s more than enough lines of C++ running in production and plenty I’ve had to debug that convinces me to never write another line again.

                                                                          People have different levels of comfort for sure. I’m just done with C++.

                                                                    1. 16

                                                                      Having spent the last 6 months working Rust, I’d disagree with his conclusion around Rust being prideful about pushing the borrow checker.

                                                                      The language is explicit in how you should use it - the rules are different than someone coming from C++ or C# might expect. It is to be expected that you fight the compiler - it’s a different way of thinking. That doesn’t mean Rust is right but it also doesn’t mean it’s wrong. It’s an opinion.

                                                                      This is similar to new users of Haskell fighting purity. Just because Haskell is pure doesn’t mean it’s right or wrong - it’s a design choice that you have to learn to work with.

                                                                      1. 1

                                                                        I think we have to admit that, for most people, the just get it to compile->runtime error->printf debugging loop is preferable. Even if only because it feels more productive.

                                                                        1. 8

                                                                          There is quite a large class of bugs that Rust purports to fix in which you only get a runtime error if you’re lucky. At least, when compared to C or C++.

                                                                          1. 0

                                                                            You’re right. It’s closer to a (wait for the world to explode->printf debugging->wait again) loop. This goes along with the metaphor of building software as construction. There’s a whole group of people who just want to duct tape that leak under the sink. They aren’t building a house or even a shed, as much as they are temporary tenants. You know you won’t be living in the house forever, so it’s not worth it to actually fix the problem.

                                                                            1. 2

                                                                              I do think most source codes live too short to care, but shouldn’t systems be built to last? I think Rust is a better systems programming language than C or C++ in that sense.

                                                                              1. 3

                                                                                I do think most source codes live too short to care

                                                                                Even then, Wirth’s languages showed you could get fast compiles, be safe by default, have a clean module system, and support concurrency built in. C the language is still worse if one aims to quickly throw together code that doesn’t crash or get hacked as often.

                                                                              2. 1

                                                                                I’d contend that maybe those folks shouldn’t be building systems :). Until you have to deal with servicing a huge number of client machines, the guarantees don’t really set in as to how much they help.

                                                                            2. 2

                                                                              I’d disagree. That feels considerably less productive for systems programming. In fact, it’s infuriating. I mostly work in client-side software developed on a large-to-huge scale. Runtime failures are the last thing I want to deal with - it means I have to update upwards of 500k clients.

                                                                              While it might be acceptable to deal with compile->runtime error->printf debug on the server side, it’s hardly a good solution on the client side – even if it was how we dealt with it for many years.

                                                                              1. 1

                                                                                Yes, I agree that different tasks require different tools. I was trying to point out that the generalization holds for most people, e.g. quick data analysis jobs, internal web UIs, etc. Obviously dynamic or interpreted languages are better for such tasks than something like C. Personally, I see the future of C being in microcontroller projects and toy ISAs, where you care about ease of implementation, while better-defined languages take over primary systems. That may take another half-century at this rate, though.

                                                                              2. 1

                                                                                Well, there are those who feel that compile/type errors hold back their unbounded creativity, but that doesn’t mean those analyses are bad.

                                                                            1. 1

                                                                              A very cool project. Not sure I’d have defaulted to translating the ISO in EFS format to a TAR file, but in retrospect this is a good fit for a simple tool. She notes it’s over-engineered at the beginning, but I think it’s actually a better approach than fighting with dated hardware, having to compile a kernel module, or setting up an OS VM.

                                                                              1. 2

                                                                                This is neat. While I’m less keen on the APL-like languages, I’ve often felt Forth-like languages would also be quite useful for creating music.

                                                                                Maybe, in my infinite (not) time, I’ll try to create an example stack-based language for music… Maybe…

                                                                                  1. 2

                                                                                    You should. I could see things like chord substitutions lending themselves really well to a stack-based language.

                                                                                  1. 3

                                                                                    For those unaware, https://startpage.com is excellent and has a great privacy policy. I actually prefer it to DuckDuckGo these days because I feel its default search is of higher quality.

                                                                                    Reminds me a lot of the Google from 10-15 years ago.

                                                                                    1. 2

                                                                                      The default search quality is higher probably because they act as a Google proxy sometimes (offering privacy by being between you and Google).

                                                                                    1. 8

                                                                                      They did all this before they learned Worse is Better. Now that we know it wins, we have to sneak The Right Thing into what otherwise looks like Worse is Better. Alternatively, do Worse is Better in a way where good interface design lets us constantly improve on the worse parts inside if the project/product gets adoption. Likewise, I say put new things into products that people would find useful even without those things. Parts of it build on proven principles, with the new thing an extra differentiator that might or might not pan out. If it’s a language or environment, they can discover it when trying to modify the product.

                                                                                      One thing that should be considered for this list is the Burroughs architecture. It made low-level operations high-level, safe, and maintainable, with the OS written in ALGOL. Although it was commercialized, the hardware enforcement got taken out, if I’m remembering correctly. The market only cared about price/performance for a long time. Only a few projects applied those concepts later on. A recent one was the SAFE architecture, which started like it in the original proposal but changed to do something more flexible. Dover Microsystems finally released it commercially as CoreGuard in late 2017. Quite a long delay for anyone to deploy a Burroughs-inspired solution, despite the fact that it was solving many of today’s problems in 1961.

                                                                                      1. 6

                                                                                        One of my favorite courses in college was an OS course where we had to build an OS inside of a VM. The VM was a simplified Burroughs Large System architecture.

                                                                                        I enjoyed that course a lot and learned so much. It was a refreshing change from x86 and MIPS assembly.

                                                                                        1. 2

                                                                                          That’s really neat. I wouldn’t have expected people building on Burroughs VMs unless Unisys had a deal with the college to make them some talent. ;)

                                                                                          Did the VM have the pointer, bounds and argument checks like the B5000? And did the experience teach you anything that impacted later work?

                                                                                          1. 3

                                                                                            The VM was written by our professor - quirky guy but I learned a huge amount from him. My understanding is it was a simplified version of the B5000 but it did have bounds checking.

                                                                                            As to what I learned - I’m not sure I got any insight about computer architecture because it was a Burroughs ISA. I think a lot of what I learned was more around the trade offs you make in process scheduling and building rudimentary filesystems.

                                                                                            One big aspect of this project was he gave us an incomplete compiler for a Pascal-like language. You had to extend it to support things like arrays and loops. The compile target was the Burroughs VM. I recall thinking that the ISA was quite clean to generate for.

                                                                                            I’m sure if I’d had to reimplement the same project on x86, I’d have seen a lot of the advantages of the B5000.

                                                                                            A lot of what I recall specific to Burroughs ISA was that it was very easy to understand. I was a CS major so I only had 2 or 3 courses that dealt with hardware directly. For me, x86 was very frustrating to work with.

                                                                                      1. 34

                                                                                        I think you could reimplement it easily yourself with a small shell script and some calls to mount; but I haven’t bothered.

                                                                                        I don’t have the expertise to criticize the content itself, but statements like the above make me suspect that the author doesn’t know nearly as much about the problem as they think they know.

                                                                                        1. 32

                                                                                          This reminds me of a trope in the DIY (esp. woodworking DIY) world.

                                                                                          First, show video of a ludicrously well equipped ‘starter shop’ (it always has a SawStop, Powermatic Bandsaw, and inexplicably some kind of niche tool that never really gets used, and a CNC router).

                                                                                          Next, show video of a complicated bit of joinery done using some of the specialized machines.

                                                                                          Finally, audio: “I used for this, but if you don’t have one, you can do the same with hand tools.”

                                                                                          No, asshole, no I can’t. Not in any reasonable timeframe. Usually this happens in the context of the CNC. “I CNC’d out 3 dozen parts, but you could do the same with hand tools.”

                                                                                          I get a strong whiff of that sort of attitude from this. It may be that the author is capable of this. It may be possible to ‘do this with hand tools’ like shell and some calls to mount. It might even be easy! However, there is a reason Docker is so popular: it’s cheap, does the job, and lets me concentrate on the things I want to concentrate on.

                                                                                          1. 9

                                                                                            As someone who can do “docker with hand tools,” you and @joshuacc are completely correct. Linux does not have a unified “container API,” it has a bunch of little things that you can put together to make a container system. And even if you know the 7 main namespaces you need, you still have to configure the namespaces properly.

                                                                                            For example, it isn’t sufficient to just throw a process in its own network namespace: you’ve got to create a veth pair, put one end of it into the namespace with the process, and attach the other end to a virtual bridge interface. Then you’ve got to decide if you want to allocate an IP for the container on your network (common in Kubernetes), or masquerade (NAT) on the local machine (common in single-box Docker). If you masquerade, you must add SNAT and DNAT iptables rules to port-forward to the veth interface, and enable the net.ipv4.ip_forward sysctl.

                                                                                            So the “small shell script” is now also a management interface for a network router. The mount namespace is even more delightful.
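                                                                                            For the curious, here’s a rough sketch of just the network half of that “small shell script” (interface names and addresses are made up for illustration, everything needs root, and a real tool would also need cleanup and error handling):

```shell
# Create a network namespace and a veth pair, then move one end inside.
ip netns add ctr0
ip link add veth-host type veth peer name veth-ctr
ip link set veth-ctr netns ctr0

# Address both ends and bring them up.
ip addr add 10.200.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec ctr0 ip addr add 10.200.0.2/24 dev veth-ctr
ip netns exec ctr0 ip link set veth-ctr up
ip netns exec ctr0 ip route add default via 10.200.0.1

# Masquerade (NAT) so processes in the namespace can reach the outside world.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.200.0.0/24 -j MASQUERADE
```

                                                                                            And that’s only the network namespace; the mount, pid, user, and other namespaces each need their own setup on top of this.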

                                                                                            1. 8

                                                                                              Exactly this! One of the most egregious things about the ‘… you could do it with hand tools’ is that it is dismissive of people who really can do it with hand tools and dismissive of the folks that can do it with CNC.

                                                                                              In woodworking, CNC work is complicated, requires a particular set of skills and understanding, and is prone to a totally different, equally painful class of errors that hand tools are not.

                                                                                              Similarly, Hand tool work is complicated, requires a particular set of skills and understanding, and is prone to a totally different, equally painful class of errors that power/CNC work is not.

                                                                                              Both are respectable, and both are prone to be dismissive of the other, but a hand-cut, perfect half-blind dovetail drawer is amazing. Similarly, a CNC cut of 30 identical, perfect half-blind dovetail drawers is equally amazing.

                                                                                              The moral of this story: I can use the power tool version of containers. It’s called docker. It lets me spit out dozens of identically configured and run services in a pretty easy way.

                                                                                              You are capable of ‘doing it with hand tools’, and that’s pretty fucking awesome, but as you lay out, it’s not accomplishing the same thing. The OP seems to believe that building it artisanally is intrinsically ‘better’ somehow, but that’s not necessarily the case. I don’t know what the OP’s background is, but I’d be willing to bet it’s not at all similar to mine. I have to manage fleets of dozens or hundreds of machines. I don’t have time to build artisanal versions of my power tools.

                                                                                            2. 2

                                                                                              And then you have Paul Sellers. https://www.youtube.com/watch?v=Zuybp4y5uTA

                                                                                              Sometimes, doing things by hand really is faster on a small scale.

                                                                                              1. 2

                                                                                                He’s exactly the guy I’m talking about though in my other post in this tree – he’s capable of doing that with hand tools and that’s legitimately amazing. One nice thing about Paul though is he is pretty much the opposite of the morality play from above. He has a ludicrously well-equipped shop, sure, but that’s because he’s been doing this for a thousand years and is also a wizard.

                                                                                                He says, “I did this with hand tools, but you can use power tools if you like.” Which is also occasionally untrue, but the sentiment is a lot better.

                                                                                                He also isn’t elitist. He uses the bandsaw periodically, power drillmotors, and so on. He also uses panel saws and brace-and-bit, but it’s not an affectation, he just knows both systems cold and uses whatever makes the most sense.

                                                                                                Paul Sellers is amazing and great and – for those people in the back just watching – go watch some Paul Sellers videos, even if you’re not a woodworker (or a wannabe like me), they’re great and he’s incredible. I like the one where he makes a joiner’s mallet a lot. Also there’s some floating around of him making a cabinet to hold his planes.

                                                                                            3. 1

                                                                                              My reaction was “if you had to write this much to convince me that there are easier ways than Docker, then it sounds like this is why Docker has a market.”

                                                                                              I’m late to the Docker game - my new company uses it heavily in our infrastructure. Frankly, I was impressed at how easy it was for me to get test environments up and running with Docker.

                                                                                              I concede it likely has issues that need addressing but I’ve never encountered software that didn’t.

                                                                                            1. 2

                                                                                              (Preface: I didn’t know much, and still don’t, about the *Solaris ecosystem.)

                                                                                              So it seems like the evolution of *Solaris took an approach closer to Linux, where there’s a core chunk of the OS (kernel and core build toolchain?) that is maintained as its own project, and then there are distributions built on top of illumos (or Unleashed) that make them ready to use for end users?

                                                                                              For some reason, I had assumed it was closer to the *BSD model where illumos is largely equivalent to something like FreeBSD.

                                                                                              If I wanted to play with a desktop-ready distribution, what’s my best bet? SmartOS appears very server-oriented – unsurprising given *Solaris was really making more in-roads there in recent years. OpenIndiana?

                                                                                              1. 3

                                                                                                If Linux (kernel only) and BSD (whole OS) are the extremes of the scale, illumos is somewhere in the middle. It is a lot more than just a kernel, but it lacks some things to even build itself. It relies on the distros to provide those bits.

                                                                                                Historically, since Solaris was maintained by one corporation with lots of release engineering resources and many teams working on subsets of the OS as a whole, it made sense to divide it up into different pieces. The most notable one being the “OS/Net consolidation” which is what morphed into what is now illumos.

                                                                                                Unleashed is still split across more than one repo, but in a way it is closer to the BSD way of doing things rather than the Linux way.

                                                                                                Hope this helps clear things up!

                                                                                                If I wanted to play with a desktop-ready distribution, what’s my best bet? SmartOS appears very server-oriented – unsurprising given *Solaris was really making more in-roads there in recent years. OpenIndiana?

                                                                                                OI would be the easiest one to start with on a desktop. People have gotten Xorg running on OmniOS (and even SmartOS), but it’s extra work vs. just having it.

                                                                                                1. 1

                                                                                                  Solaris is like BSD in that it includes the kernel + user space. In Linux, Linux is just the kernel and the distros define user space.

                                                                                                  1. 1

                                                                                                    So…. is there no desktop version of Illumos I can download? Why does their “get illumos” page point me at a bunch of distributions?

                                                                                                    Genuine questions - I’m just not sure where to start if I want to play with illumos.

                                                                                                    1. 3

                                                                                                      illumos itself doesn’t have an actual release. You’re expected to use one of its distributions as far as I can tell, which should arguably be called “derivatives” instead. OpenIndiana seems to be the main desktop version.

                                                                                                      1. 1

                                                                                                        I don’t know. I know there are some people who run SmartOS on their desktop, but I get the feeling it’s not targeting that use case, or at least there isn’t a lot of work going into supporting it.

                                                                                                  1. 1

                                                                                                    Despite the dopey cover, Smith’s book is IMO the best introductory text in the subject.

                                                                                                    1. 2

                                                                                                      Based on the index and introduction, that definitely looks like a good starting point. I will have a look. Thanks!

                                                                                                    1. 7

                                                                                                      I have done some audio programming, and am studying engineering, so I guess I have some knowledge about it. There are many who are better than me, though. I hope this isn’t too mathematical, but you need to have some grasp on differentiation, integration, complex numbers and linear algebra anyway. Here’s a ‘short’ overview of the basics:

                                                                                                      First of all, you need to know what happens when an analog, continuous signal is converted to digital data and back. The A->D direction is called sampling. The number of times the data can be read out per second (the sampling rate) and the accuracy (bit depth) are limited for obvious reasons, and this needs to be taken into account.
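                                                                                                      As a tiny illustration of those limits (the 100 Hz sample rate and 4-bit depth below are arbitrary choices), quantizing a sampled sine shows the bounded error that limited accuracy introduces:

```python
import numpy as np

# Sample a 5 Hz sine at 100 Hz, then quantize it to 4 bits.
fs = 100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5 * t)

bits = 4
scale = 2 ** (bits - 1) - 1          # 7 representable positive steps
xq = np.round(x * scale) / scale     # quantized signal

# The quantization error is bounded by half a quantization step.
err = np.max(np.abs(x - xq))
print(err <= 0.5 / scale)            # True
```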

                                                                                                      Secondly, analysing a signal in the time domain doesn’t yield much interesting information; it’s much more useful to analyse the frequencies in the signal instead.

                                                                                                      Fourier’s theorem states that every signal can be represented as a sum of (co)sines. Getting the amplitude of a given frequency is done through the Fourier transform (F(omega) = integrate(lambda t: f(t) * e^(-j*omega*t), 0, infinity)). It works a bit like the following:

                                                                                                      1. Draw the function on a long ribbon
                                                                                                      2. Twist the ribbon along its longest axis, with an angle proportional to the desired frequency you want the amplitude of (multiplying f(t) by e^(-j*omega*t); omega is the pulsation of the desired frequency, i.e. omega = 2pi*f, and j is the imaginary unit. j is used more often than i in engineering.)
                                                                                                      3. Now smash it flat. In the resulting (complex) plane, take the average of all the points (i.e. complex numbers). (This is the integration step.)
                                                                                                      4. The sines will cancel themselves out, except for the one with the desired frequency. The resulting complex number’s magnitude is the amplitude of the sine, and its angle is the sine’s phase.
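                                                                                                      The four steps above map almost line for line onto code. Here’s a sketch (the signal parameters are arbitrary choices, and I probe the signal’s own frequency over a whole number of periods so the averaging converges cleanly):

```python
import numpy as np

fs = 1000                      # samples per second
t = np.arange(fs) / fs         # one second of samples
f = 50                         # frequency we want to probe, in Hz
x = 3.0 * np.sin(2 * np.pi * f * t + 0.5)   # amplitude 3, phase 0.5 rad

# Step 2-3: multiply by e^(-j*omega*t) ("twist"), then average ("smash flat").
omega = 2 * np.pi * f
mean = np.mean(x * np.exp(-1j * omega * t))

# Step 4: for a sine, the average is (amplitude/2) * e^(j*(phase - pi/2)).
amplitude = 2 * np.abs(mean)
phase = np.angle(mean) + np.pi / 2
print(round(float(amplitude), 3), round(float(phase), 3))   # 3.0 0.5
```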

                                                                                                      (Note: the Fourier transform is also known as the Laplace transform when substituting omega*j with s (or p, or z; they’re “implicitly” complex variables), and as the Z-transform when dealing with discrete signals. It’s still basically the same, though, and I’ll be using the terms pretty much interchangeably. The Laplace transform is also used when analyzing linear differential equations, which is, under the hood, what we’re doing here anyway. If you really want to understand most/everything, you need to grok the Laplace transform first, and how it’s used to deal with differential equations.)

                                                                                                      Now, doing a Fourier transform (and an inverse afterwards) can be costly, so it’s better to use the information gained from a Fourier transform while writing code that modifies a signal (i.e. amplifies some frequencies while attenuating others, or adding a delay, etc.), and works only (or most of the time) in the time domain. Components like these are often called filters.

                                                                                                      Filters are linear systems (they can be nonlinear as well, but that complicates things). They are best thought of as components that scale, add, or delay signals, combined like this. (A z^-1-box is a delay of one sample; the Z-transform of f(t-1) is equal to the Z-transform of f(t), divided by z.)

                                                                                                      If the system is linear, such a diagram can be ‘transformed’ into a bunch of matrix multiplications (A, B, C and D are matrices):

                                                                                                      • state[t+1] = A*state[t] + B*input[t]
                                                                                                      • output[t]  = C*state[t] + D*input[t]

                                                                                                      with state[t] a vector containing the state of the delays at t.
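
A minimal sketch of those two update equations in Python, using a one-pole lowpass (y[t] = 0.9*y[t-1] + 0.1*x[t]) as an assumed example; the coefficients are mine, not from any diagram above:

```python
import numpy as np

# State-space form of the one-pole lowpass y[t] = 0.9*y[t-1] + 0.1*x[t].
# Here the single state is the previous output, so A, B, C, D are all 1x1.
A = np.array([[0.9]])  # state transition
B = np.array([[0.1]])  # input -> state
C = np.array([[0.9]])  # state -> output
D = np.array([[0.1]])  # direct feedthrough

def run_filter(x):
    state = np.zeros(1)
    out = []
    for sample in x:
        u = np.array([sample])
        out.append((C @ state + D @ u)[0])  # output[t] = C*state[t] + D*input[t]
        state = A @ state + B @ u           # state[t+1] = A*state[t] + B*input[t]
    return np.array(out)

step = run_filter(np.ones(100))  # step response: rises from 0.1 towards 1.0
```

Feeding it a step shows the lowpass behaviour: the first output is 0.1 and the output settles towards 1.0.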

                                                                                                      Analyzing them happens as follows:

                                                                                                      1. Take the Z-transform of the input signal (Z{x(t)}=X(z)) and the output signal (Z{y(t)}=Y(z)).
                                                                                                      2. The proportion between Y and X is a (rational) function in z, the transfer function H(z).
                                                                                                      3. Now find the zeros of the numerator and denominator. The zeros of the latter are called the poles; signals at (or near) those frequencies are amplified. Zeros of the numerator are (boringly) called zeros, and they attenuate signals. These poles and zeros are also related to the eigenvectors and -values of the matrix A.

                                                                                                      However, if the poles are outside of the unit circle, the system is ‘unstable’: the output will grow exponentially (i.e. “explode”). If the pole is complex or negative, the output will oscillate a little (this corresponds to complex eigenvalues, and complex solutions to the characteristic equation of the linear differential equation).
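
In code, the poles can be read straight off the denominator of H(z) with a root finder. The one-pole transfer function below (H(z) = 0.1 / (1 - 0.9 z^-1)) is an assumed example, not anything from the thread:

```python
import numpy as np

# H(z) = 0.1 / (1 - 0.9 z^-1). Multiplying through by z gives the
# denominator z - 0.9, so its coefficients in descending powers of z are:
den = [1.0, -0.9]
poles = np.roots(den)

# Stable iff every pole lies strictly inside the unit circle.
stable = bool(np.all(np.abs(poles) < 1))
```

Here the single pole sits at 0.9, inside the unit circle, so the filter is stable; move it outside (say, to 1.1) and the output of the corresponding filter grows exponentially.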

                                                                                                      What most often is done, though, is making filters using some given poles and zeros. Then you just need to perform the steps in reverse direction.

                                                                                                      Finally, codecs simply use that knowledge to throw away uninteresting stuff. (E.g. data is stored in the frequency domain, and very soft sines, or sines outside the audible range, are discarded. With images and video, it’s the same thing but in two dimensions.) I don’t know anything specific about them, though, so you should look up some stuff about them yourself.

                                                                                                      Hopefully, this wasn’t too overwhelming :). I suggest reading Yehar’s DSP tutorial for the braindead to get some more information (but it doesn’t become too technical), and you can use the Audio EQ Cookbook if you want to implement some filters. [This is a personal mirror, as the original seems to be down - 509.]

                                                                                                      There’s also a copy of Think DSP lying on my HDD, but I never read it, so I don’t know if it’s any good.

                                                                                                      1. 3

                                                                                                        The amount of times the data can be read out per second (the sampling rate) and the accuracy (bit depth) are limited for obvious reasons

                                                                                                        Interesting post. I wanted to highlight this part where you say it’s limited for “obvious reasons.” It’s probably better to explain that since it might not be obvious to folks trained to think transistors are free, the CPUs are doing billions of ops a second, and everything is working instantly down to nanosecond scale. “How could such machines not see and process about everything?” I thought. What I learned studying hardware design at a high-level, esp on the tools and processes, was that the digital cells appeared to be asleep a good chunk of the time. From a software guy’s view, it’s like the clock signal comes as a wave, starts lighting them up to do their thing, leaves, and then they’re doing nothing. Whereas, the analog circuits worked non-stop. If it’s a sensor, it’s like the digital circuits kept closing their eyes periodically where they’d miss stuff. The analog circuits never blinked.

                                                                                                        After that, the ADC and DAC tutorials would explain how the system would go from continuous to discrete using the choppers or whatever. My interpretation was the digital cells were grabbing a snapshot of the electrical state as bit-based input kind of like requesting a picture of what a fast-moving database contains. It might even change a bit between cycles. I’m still not sure about that part since I didn’t learn it hands on where I could experiment. So, they’d have to design it to work with whatever its sampling rate/size was. Also, the mixed-signal people told me they’d do some components in analog specifically to take advantage of full-speed, non-blinking, and/or low-energy operation. Especially non-blinking, though, for detecting things like electrical problems that can negatively impact the digital chips. Analog could respond faster, too. Some entire designs like control systems or at least checking systems in safety-critical stuck with analog since the components directly implemented mathematical functions well-understood in terms of signal processing. More stuff could go wrong in a complex, digital chip they’d say. Maybe they just understood the older stuff better, too.

                                                                                                        So, that’s some of what I learned dipping my toes into this stuff. I don’t do hardware development or anything. I did find all of that really enlightening when looking at the ways hardware might fail or be subverted. That the digital stuff was an illusion built on lego-like, analog circuits was pretty mind-blowing. The analog wasn’t dead: it just got tamed into a regular, synthesizable, and manageable form that was then deployed all over the place. Many of the SoCs still had to have analog components for signal processing and/or power competitiveness, though.

                                                                                                        1. 3

                                                                                                          You’re right, of course. On the other hand, I intended to make it a bit short (even though it didn’t work out as intended). I don’t know much about how CPUs work, though; I’m only in my first year.

                                                                                                          I remember an exercise during maths class in what’s probably the equivalent of middle or early high school, where multiple people were measuring the sea level at certain intervals. To one, the level remained flat; to another, it fluctuated wildly; to a third, only slightly, and at a different frequency.

                                                                                                          Because of the reasons you described, the ADC can’t keep up when the signal’s frequency is above half the sampling frequency (i.e. the Nyquist frequency).

                                                                                                          (Interestingly, this causes the Fourier transform of the signal to be ‘reflected’ at the Nyquist frequency. There’s a graph that makes this clear, but I can’t find it. Here’s a replacement I quickly hacked together using Inkscape. [Welp, the text is jumping around a little. I’m too tired to fix it.])
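
That reflection is easy to see numerically. A small sketch with my own numbers: at a 100 Hz sampling rate the Nyquist frequency is 50 Hz, and a 70 Hz cosine produces exactly the same samples as its 30 Hz reflection:

```python
import numpy as np

fs = 100            # sampling rate; Nyquist frequency is fs/2 = 50 Hz
n = np.arange(200)  # two seconds of sample indices

high = np.cos(2 * np.pi * 70 * n / fs)   # 70 Hz: above Nyquist
alias = np.cos(2 * np.pi * 30 * n / fs)  # its reflection: 100 - 70 = 30 Hz
# The two sample sequences are indistinguishable, which is why the
# spectrum appears mirrored ("reflected") at the Nyquist frequency.
```

Once sampled, no amount of processing can tell the two apart, which is why ADCs put an analog anti-aliasing filter in front.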

                                                                                                          The “changing a bit between cycles” might happen because the conversion doesn’t happen instantaneously, so the value can change during the conversion as well. Or, when converting multiple values that should happen “instantaneously” (such as taking a picture), the last part will be converted a little bit later than the first part, which sounds analogous to screen tearing to me. Then again, I might be wrong.

                                                                                                          P.S. I’ll take “interesting” as a compliment, I just finished my last exam when I wrote that, so I’m a little tired now. Some errors are very probably lurking in my replies.

                                                                                                          1. 3

                                                                                                            I’ll take “interesting” as a compliment

                                                                                                            You were trying to explain some hard concepts. I enjoy reading these summaries since I’m an outsider to these fields. I learn lots of stuff by reading and comparing explanations from both students and veterans. Yeah, it was a compliment for the effort. :)

                                                                                                        2. 3

                                                                                                          Even though I learned about the Fourier transformation in University this video gave me a new intuition: https://www.youtube.com/watch?v=spUNpyF58BY

                                                                                                          1. 2

                                                                                                            Thanks very much for your detailed reply :). The math doesn’t scare me, it’s just very rusty for me since a lot of what I do doesn’t have as much pure math in it.

                                                                                                            I appreciate the time you put into it.

                                                                                                            1. 2

                                                                                                              Speaking specifically of Fourier transform: it behaves well for infinite signals and for whole numbers of periods of strictly periodic signals.

                                                                                                              But in reality the period usually doesn’t divide the finite fragment we have (and also there are different components with different periods). If we ignore this, we effectively multiply the signal by a rectangle function (0… 1 in the interval… 0…) — and Fourier transform converts pointwise multiplication into convolution (an operation similar to blur). Having hard edges is bad, so the rectangle has a rather bad spectrum with large amplitudes pretty far from zero, and it is better to avoid convolution with that — this would mix rather strongly even frequencies very far from each other.

                                                                                                              This is the reason why window functions are used: the signal is multiplied by something that goes smoothly to zero at the edges. A good window has a Fourier transform that falls very quickly as you go away from zero, but this usually requires the spectrum to have high intensity on a wide band near zero. This tradeoff means that if you want less leakage between vastly different frequencies, you need to mix similar frequencies more. It is also one of the illustrations of the reason why a long recording is needed to separate close frequencies.
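
A small numerical illustration of that tradeoff (the numbers are my own): a sine whose period doesn’t divide the record, analysed once with the implicit rectangle window and once with a Hann window:

```python
import numpy as np

N = 1024
t = np.arange(N) / N
# 100.5 cycles in the record: the period doesn't divide the fragment,
# so the spectrum leaks.
x = np.sin(2 * np.pi * 100.5 * t)

rect_spec = np.abs(np.fft.rfft(x))                  # implicit rectangle window
hann_spec = np.abs(np.fft.rfft(x * np.hanning(N)))  # smooth Hann window

# Far from the signal's frequency (bin ~100.5), the rectangle window's
# slowly-decaying sidelobes dominate; the Hann spectrum is far cleaner there,
# at the price of a slightly wider peak around the true frequency.
far_rect = rect_spec[400]
far_hann = hann_spec[400]
```

Plotting both spectra on a dB scale makes the effect obvious: the rectangular window smears energy across the whole band, while the Hann window trades a broader main lobe for much faster sidelobe decay.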

                                                                                                            1. 2

                                                                                                              I’d really like to write more D. In my particular case, I couldn’t have a GC in play (self-imposed memory constraints), but there’s a lot about it that’s attractive to me. I don’t have any desire to choose Go over it: the power of the language is considerably greater, in my limited experience.

                                                                                                              That said, Go does have a big package community behind it, like Rust.

                                                                                                              1. 14

                                                                                                                Stick a @nogc on your main function and you have a compile-time guarantee that no GC allocations will happen in your program.

                                                                                                                1. 6

                                                                                                                  Neat - I didn’t realize this. Too late now for the current project, but good to know for the future. I’m particularly interested in its C++ FFI story. There’s a couple of specialized C++ libraries I’d like to use without having to write flat-C style wrappers just to call them sanely from Rust.

                                                                                                                  Thanks for that!

                                                                                                                  1. 3

                                                                                                                    That’s exactly the kind of tip I was hoping for in the comments. Thanks!

                                                                                                                    1. 5

                                                                                                                      It’s always the same arguments with D discussions:

                                                                                                                      • I don’t like D that has a GC!
                                                                                                                      • Just use @nogc!
                                                                                                                      • But then some stuff from the standard library does not work anymore!
                                                                                                                      • How much of the standard library?
                                                                                                                      • Nobody knows and how would you measure it anyway?
                                                                                                                      1. 1

                                                                                                                        It’s at least a pattern that’s solvable. Someone just has to attempt to compile the whole standard library with no GC option. Then, list the breakage. Then, fix in order of priority for the kind of apps that would want no-GC option. Then, write this up into a web page. Then, everyone shares it in threads where pattern shows up. Finally, the pattern dies after 10-20 years of network effects.

                                                                                                                        1. 2

                                                                                                                          People are doing that. Well, except for the “write this up into a web page” part. I guess you are thinking of web pages like http://www.arewewebyet.org/

                                                                                                                          1. 1

                                                                                                                            Yeah, some way for people to know that they’re doing it with what level of progress. Good to know they’re doing it. That you’re the first to tell me illustrates how a page like that would be useful in these conversations. People in D camp can just drop a link and be done with it.

                                                                                                                  2. 3

                                                                                                                    I find D has a lot of packages too. Not an explosive smörgåsbord, but sufficient for my purposes.

                                                                                                                    The standard library by itself is fairly rich already.

                                                                                                                    1. 1

                                                                                                                      I guess the question would be whether unsafe or smart pointers are about as easy to use in D as C or C++. If so, the GC might not be a problem. In some languages, GC is really hard to avoid.

                                                                                                                      Maybe @JordiGH, who uses D, can tell us.

                                                                                                                      1. 5

                                                                                                                        I write D daily. Unsafe pointers work the same as in C or C++. I wrote a GC-less C++-like smart pointer library for D. It’s basically std::unique_ptr and std::shared_ptr, but no std::weak_ptr because 1) I haven’t needed it and 2) one can, if needed, rely on the GC to break cycles (although I don’t know how easy that would be to do currently in practice).

                                                                                                                        1. 1

                                                                                                                          D is a better C++, so pointers are easier to use than in C++. As I understand it, the main problem is that the standard library used to use the GC freely, making the GC hard to avoid if you used the standard library. I understand there is an ongoing effort to clean this up, but I don’t know the current status.

                                                                                                                          1. 3

                                                                                                                            It depends on which part of the standard library. These days, the parts most often used have functions that don’t allocate. In any case it’s easy to avoid by using @nogc.