Threads for kristoff

  1. 3

    RAII comes with its own set of interesting footguns. Not to say that it’s a bad feature, but it’s not perfect. Languages that don’t employ RAII have a right to exist, and not just in the name of variety.

    1. 11

      That particular example is not a problem with RAII, though; it is specific to the API of shared_ptr and to C++’s flexible evaluation order.

      1. 5

        This should be fixed in C++17, at least partially. TL;DR – see the “The Changes” section.

        https://www.cppstories.com/2021/evaluation-order-cpp17/

        1. 4

          As that article points out, this is solved in C++11 with std::make_shared: any raw construction of std::shared_ptr is a code smell. This kind of footgun is not really intrinsic to RAII, but to the way that C++ reports errors from constructors: the only mechanism is via an exception, which means that anything constructing an object as an argument needs to be excitingly exception safe. The usual fix for this is to have factory methods that validate that an object can be constructed from the arguments and return an option type.
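          A minimal sketch of that factory-style approach (the type and function names here are hypothetical, purely for illustration):

          ```cpp
          #include <optional>
          #include <stdexcept>

          // Hypothetical resource type whose constructor can only
          // report failure by throwing.
          struct Connection {
              explicit Connection(int port) {
                  if (port <= 0) throw std::invalid_argument("bad port");
              }
          };

          // Factory that validates the arguments up front and reports
          // failure through the return type instead of an exception.
          std::optional<Connection> make_connection(int port) {
              if (port <= 0) return std::nullopt;
              return Connection{port};
          }

          int main() {
              return (make_connection(8080).has_value() &&
                      !make_connection(-1).has_value()) ? 0 : 1;
          }
          ```

          Callers then handle the failure case at the call site, so no throwing constructor ever runs in the middle of a larger full-expression.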

          The more subtle footgun with RAII is that it requires the result to be bound to a variable that is not necessarily used. In C++, you can write something like:

          {
            std::lock_guard(mutex);
            // Some stuff that expects the lock to be held
          } // Expect the lock to be released here.
          

          Only, because you didn’t write the first line as std::lock_guard g{mutex} you’ve locked and unlocked the mutex in the same statement and now the lock isn’t held. I think the [[nodiscard]] attribute on the constructor can force a warning here but I’ve seen this bug in real-world code a couple of times.
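          A sketch of the fixed version, with the guard bound to a name (variable names are mine):

          ```cpp
          #include <cassert>
          #include <mutex>
          #include <thread>

          std::mutex mtx;
          int counter = 0;

          void increment() {
              // Naming the guard keeps the mutex locked until the closing
              // brace; an unnamed temporary would unlock it immediately,
              // within the same statement.
              std::lock_guard<std::mutex> g{mtx};
              ++counter;
          } // g is destroyed here; the mutex is released.

          int main() {
              std::thread t1(increment), t2(increment);
              t1.join();
              t2.join();
              assert(counter == 2);
          }
          ```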

          The root cause here is that RAII isn’t a language feature, it’s a design pattern. The problem with a defer statement is that it separates the initialisation and cleanup steps. A language that had RAII as a language feature would want something that allowed a function to return an object that is not destroyed until the end of the parent scope even if it is not bound to a variable.
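          A hand-rolled scope guard illustrates RAII-as-a-pattern: the cleanup is written right next to the acquisition, yet the guard still has to be bound to a name or the same unnamed-temporary footgun applies (a minimal sketch, not a production implementation):

          ```cpp
          #include <cassert>
          #include <string>
          #include <utility>

          // Minimal scope guard: runs the stored callable when it
          // goes out of scope.
          template <typename F>
          struct ScopeExit {
              F f;
              ~ScopeExit() { f(); }
          };

          template <typename F>
          ScopeExit<F> on_scope_exit(F f) {
              return ScopeExit<F>{std::move(f)};
          }

          int main() {
              std::string log;
              {
                  log += "acquire;";
                  auto guard = on_scope_exit([&] { log += "release;"; });
                  log += "work;";
              } // guard is destroyed here, so the cleanup runs last
              assert(log == "acquire;work;release;");
          }
          ```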

        1. 8

          It is pretty damning to the Go language that you can’t use any existing code. Just about every other language provides relatively straightforward (if not seamless) interop with any other C ABI. Only in Go have I heard such consistent and negative opinions on the FFI. Java and JNI are close, but JNI still seems better received, and in that case at least there is a decent reason: it ruins your portability once you add native code.

          The fact that someone would recommend “reimplementing a large piece of C code in Go” instead of just binding to it is exposing a huge downside of the language.

          1. 4

            The fact that someone would recommend “reimplementing a large piece of C code in Go” instead of just binding to it is exposing a huge downside of the language.

            The main reason is so you “get” effortless portability as a result. I can only think of Zig, where you get some out-of-the-box portability without rewriting your C in Zig (since it ships with libc for various platforms/archs and has the whole toolchain nicely set up).

            1. 2

              i immediately thought of zig’s self contained compiler when i saw this post… and i recall things being posted to show how you can integrate zig cc in w/ go/cgo to have portable c compilers

              seems like it would be a good thing for these project maintainers to get on board with…

              1. 6

                I wrote a blog post where I used Zig to cross-compile the CGo SQLite library for all the major OSes without too much fuss.

                https://zig.news/kristoff/building-sqlite-with-cgo-for-every-os-4cic

            2. 3

              I can’t wait for Go to have an FFI some day!

              1. 1

                As mentioned above, I believe this to be simply untrue: Go has an FFI today and it’s called cgo. What is it about cgo that does not make it an FFI?

                1. 1

                  cgo is basically a separate language. It is a separate implementation.

                  1. 1

                    I can’t see how it’s a separate language. You embed a bit of C code in a special place within a Go file. The C code is compiled by a C compiler, the Go code by the Go compiler. And from the C and the Go code, cgo generates some interface code to make C names known to the Go compiler and some Go names known to the C compiler. How is cgo (which, to me, is a program) a separate language?

                    It is a separate implementation.

                    cgo is a separate implementation of what?

              2. 2

                Yes 100%, here is my lament from 4 years ago on that topic.

                https://news.ycombinator.com/item?id=16741043

                A big part of my pain, and the pain I’ve observed in 15 years of industry, is programming language silos. Too much time is spent on “How do I do X in language Y?” rather than just “How do I do X?”

                For example, people want a web socket server, or a syntax highlighting library, in pure Python, or Go, or JavaScript, etc. It’s repetitive and drastically increases the amount of code that has to be maintained, and reduces the overall quality of each solution (e.g. think e-mail parsers, video codecs, spam filters, information retrieval libraries, etc.).

                There’s this tendency of languages to want to be the be-all end-all, i.e. to pretend that they are at the center of the universe. Instead, they should focus on interoperating with other languages (as in the Unix philosophy).

                One reason I left Google over 6 years ago was the constant code churn without user visible progress. Somebody wrote a Google+ rant about how Python services should be rewritten in Go so that IDEs would work better. I posted something like <troll> … Meanwhile other companies are shipping features that users care about </troll>. Google+ itself is probably another example of that inward looking, out of touch view. (which was of course not universal at Google, but definitely there)

                This is one reason I’m working on https://www.oilshell.org – with a focus on INTEROPERABILITY and stable “narrow waists” (as discussed on the blog https://www.oilshell.org/blog/2022/02/diagrams.html )

                (copy of HN comment in response to almost the same observation!)

                I’m also excited about Zig for this reason. e.g. “maintain it with Zig” https://kristoff.it/blog/maintain-it-with-zig/

                1. 1

                  On the other hand, oilshell is not(?) compatible with the piles of bash (and sh, and…) scripts out in the world, so folks have to rewrite them to be compatible with your shell. Is this not contradicting what you said earlier?

                  1. 3

                    Hm honest question: Why do you think it’s not compatible?

                    It’s actually the opposite – it’s the ONLY alternative shell that’s compatible with POSIX sh and bash. It’s the most bash compatible shell by a mile.

                    Try running osh myscript.sh on your shell scripts and tell me what happens!


                    The slogan on the front page is supposed to emphasize that, but maybe it’s not crystal clear:

                    It’s our upgrade path from bash to a better language and runtime.

                    Also pretty high up on the FAQ is the statement:

                    http://www.oilshell.org/blog/2021/01/why-a-new-shell.html#introduction

                    OSH is a shell implementation that’s part of the Oil project. It’s compatible with both POSIX and bash. The goal is to run existing shell scripts. It’s done so since January 2018, and has matured in many regular releases since then.


                    Nonetheless I think it could be clearer, so I filed a bug to write a project tour and put it prominently on the home page:

                    https://github.com/oilshell/oil/issues/1127

                    It is disappointing to me that this hasn’t been communicated after so many years … I suspect that some people actually think the project is impossible. It’s both compatible AND it’s a new language.

                    I described in the latest blog post how that works:

                    http://www.oilshell.org/blog/2022/05/release-0.10.0.html#backward-compatibility-the-eternal-puzzle

                    Here are all your other alternative shell choices, NONE of which have the compatibility of OSH.

                    https://github.com/oilshell/oil/wiki/Alternative-Shells

                    (That is why the project is so large and long)

                    1. 1

                      Ah, I’m sorry. I skimmed the FAQ but missed that sentence. For some reason, the impression I got from your FAQ is that it’s basically yet another shell that doesn’t offer backwards compatibility. Obviously I was terribly wrong. I’m not sure how to suggest changes that may have prevented that (other than it’s completely my fault for misreading/skimming and getting the wrong impression.) So, sorry for the noise.

                      1. 1

                        OK no worries … I think it actually did point out that this crucial fact about the project is somewhat buried. Not entirely buried but “somewhat”.

                2. 2

                  It is simply not true that “you can’t use any existing code” in Go. There’s cgo, and it allows you to call into C code and provides a way for C code to call into Go code - that’s pretty much the definition of using existing code. I think a big reason people complain about JNI is the same reason people complain about cgo: because you are dealing with a garbage-collected language, there are rules about what you can do with memory and pointers. The same applies to .NET as well.

                  The fact that someone would recommend “reimplementing a large piece of C code in Go” instead of just binding to it is exposing a huge downside of the language.

                  As the article points out, in the very first sentence, most people use mattn/go-sqlite3, which is in fact a wrapper around the canonical C SQLite implementation. A “decent reason” (your words) not to use that library is that “it ruins your portability” once “you add native code”. That reason is at play here.

                  This being said, the port to Go shown here is a respectable effort. While it is impressive, I’d probably use one of the bindings to the canonical C code if possible, as that uses a highly tested implementation. If that’s not possible, the cznic port provides an interesting alternative.

                  1. 1

                    Yes and no. I mean, there is cgo, which you can use. But the FFI story is worse in Go, partly because of threading, and especially on Linux you’ll still find “pure” implementations of things that would usually use a C library; sometimes they are faster, because calling through the FFI can be slow. Database interfaces are one such example, where people sometimes find the FFI to be the bottleneck.

                    You also get certain benefits from not using C. I already mentioned the threading part, which sometimes bites people in Go, but you can also be sure about memory safety, debugging will be easier, all the people using the project can contribute even when they are not fluent in C, etc.

                    And if you still want/need to use C, there is cgo.

                    There certainly have been cases in other languages where I wished a library wasn’t just a wrapper around C, be it in Python, Java, Objective-C/Swift or node.js projects. Under the right circumstances they can be a source of headaches.

                  1. 22

                    I imagine that many people will be wondering how Hare differs from Zig, which seems similar to me as an outsider to both projects. Could someone more familiar with the goals of Hare briefly describe why (assuming a future in which both projects are reasonably mature) someone may want to choose Hare over Zig, and Zig over Hare?

                    1. 19

                      I imagine that many people will be wondering how Hare differs from Zig, which seems similar to me as an outsider to both projects.

                      As someone who used Hare briefly last year when it was still in development (so this may be slightly outdated), I honestly see no reason to use Hare for the time being. While it provides huge advances over C, it just feels like a stripped-down version of Zig in the end.

                      1. 18

                        My understanding is that Hare is for people who want a modern C (fewer footguns, etc) but who also want a substantially more minimalist approach than what Zig offers. Hare differs from Zig by having a smaller scope (eg it doesn’t try to be a C cross-compiler), not using LLVM, not having generic/templated metaprogramming, and by not having async/await in the language.

                        1. 2

                          That definitely sounds appealing to me as someone who has basically turned his back on 95% of the Rust ecosystem due to it feeling a bit like stepping into a candy shop when I just wanted a little fiber to keep my programs healthier by rejecting bad things. I sometimes think about what a less-sugary Rust might be like to use, but I can’t practically see myself doing anything other than what I am doing currently - using the subset of features that I enjoy while taking advantage of the work that occasionally improves the interesting subset to me. And every once in a while, it’s nice to take a bite out of some sugar :]

                          1. 2

                            If I remember correctly, there was some discussion about a kind of barebones Rust at some point around here. Is that what you would ideally have/work with? Which features would survive, and which be dropped?

                        2. 14

                          It looks like it’s a lot simpler. Zig is trying to do much more. I also appreciate that Hare isn’t self-hosting and can be built using any standard C compiler, and that it chooses QBE over LLVM, which is simpler and more lightweight.

                          1. 13

                            As I understand it, the current Zig compiler is in C++; they are working on a self-hosting compiler, but intend to maintain the C++ compiler alongside it indefinitely.

                            1. 4

                              Ah, thanks for the correction!

                              1. 2

                                Indefinitely, as in, there will always be two official implementations?

                                1. 3

                                  Well, at some point it would make sense, much like C compilers are ubiquitously self-hosted. As long as it doesn’t make it too hard to bootstrap (for instance, if it has decent cross-compilation support), it should be fine.

                            1. 19

                              We have just published the Zig Roadmap 2023 talk that Andrew gave at the recent Zig meetup in Milan. It talks about what’s next for the self-hosted compiler. (spoilers: tons of speedups)

                              https://youtu.be/AqDdWEiSwMM

                              1. 17

                                I apologize in advance for giving a lazy answer to this (I’m running a Zig meetup atm), but I’ll just quote my comment from HN, written in reply to this comment.

                                That’s exactly it. It just enables code reuse. You still have to think about how your application will behave, but you won’t have to use an async-flavored reimplementation of another library. Case in point: zig-okredis works in both sync and async applications, and I don’t have to maintain two codebases.

                                https://github.com/kristoff-it/zig-okredis

                                I thought using “colorblind” in the title of my original blogpost would be a clear enough hint to the reader that the colors still exist, but I guess you can never be too explicit.

                                1. 7

                                  I enjoyed the progression from Amdahl’s Law to Brooks’s to Conway’s, but I find the conclusion of the talk too weak to be worth using as ammunition when discussing the problems with modern software development. To be clear: I too believe that hype-driven modern software development is bad, but I also think that we need better arguments than the ones presented in this video.

                                  The most poignant example of this is how Casey mentions microservices at the end of the video in the naughty list of software practices. What he seems to ignore is that all good literature about microservices recommends placing the boundaries between different systems precisely where the boundaries in the company org chart are. This is done in order to minimize the amount of effort required to react to a change request by the business (Conway’s Law). So why break everything into services in the first place? To lower communication overhead between development teams (Brooks’s Law) in order to parallelize the development work as much as possible (Amdahl’s Law). This is not to say that microservices are great and the future™, but they are a reasonable approach to enterprise software development even within the mental framework that Casey himself delineated in this talk, which instead seemed meant to argue against the practice with enough clarity to justify a self-evident naughty list at the end.

                                  Microservices (in enterprise software development) are just one example of a situation where software is a support activity and not what’s supposed to drive the design, which in my opinion is the key insight missing from this talk.

                                  This video itself, for example, was most probably created using OBS and a window capture of PowerPoint plus color keying. Under a stringent interpretation of the talk, surely a custom-built integrated solution would be better than this mess of abstraction layers, but the reality is that, from the perspective of just wanting to make a video, color keying is good enough, and the same applies to most companies that just want to get their business done.

                                  So in my opinion the fact that Windows has that many glaring flaws is more of a reflection of how Microsoft’s priorities have shifted over time away from selling software, more than how bad they are at software architecture.

                                  To me the real unbreakable law is:

                                  The less software is core to a company’s business model, the more it will suck. Conversely, the more software sucks, the less it is core to a company’s business model (regardless of how the company wants to position itself).

                                  1. 18

                                    This article incorrectly states that Zig has “colored” async functions. In reality, Zig async functions do not suffer from function coloring.

                                    Yes, you can write virtually any software in Zig, but should you? My experience in maintaining high-level code in Rust and C99 says NO.

                                    Maybe gain some experience with Zig in order to draw this conclusion about Zig?

                                    1. 5

                                      Not sure if he changed the text, but the article mentions the async color problem in a way that could be read as applying generally. The article doesn’t link that to Zig explicitly, though, or did I miss it?

                                      It would be fair to mention how Zig solved it as he mentions it for Go.

                                      1. 9

                                        This response illustrates the number one reason I am not a fan of Zig: its proponents, like the proponents of Rust, are not entirely honest about it.

                                        In reality, Zig async functions do not suffer from function coloring.

                                        This is a lie. In fact, that article, while a great piece of persuasive writing, is also mostly a lie.

                                        It tells the truth in one question in the FAQ:

                                        Q: SO I DON’T EVEN HAVE TO THINK ABOUT NORMAL FUNCTIONS VS COROUTINES IN MY LIBRARY?

                                        No, occasionally you will have to. As an example, if you’re allowing your users to pass to your library function pointers at runtime, you will need to make sure to use the right calling convention based on whether the function is async or not. You normally don’t have to think about it because the compiler is able to do the work for you at compile-time, but that can’t happen for runtime-known values.

                                        In other words, Zig still suffers from the function coloring problem at runtime. If you do async in a static way, the compiler will be able to cheese the function coloring problem away. In essence, the compiler hides the function coloring problem from you when it can.

                                        But when you do it at runtime and the compiler can’t cheese it, you still have the function coloring problem.

                                        I think it is a good achievement to make the compiler able to hide it most of the time, but please be honest about it.

                                        1. 17

                                          Calling this dishonest and a lie is an incredibly uncharitable interpretation of what is written. Even if you’re right that it’s technically incorrect, at worst it’s a simplification made in good faith to be able to talk about the problem, no more of a lie than teaching Newtonian mechanics as the laws of physics in middle school is a lie because of special relativity, or teaching special relativity in high school is a lie because of general relativity.


                                          Also, I’m not familiar with zig, but from your description I think you’re wrong to claim that functions are colored. Your refutation of that argument is that function pointers are colored, but functions are a distinct entity from function pointers - and one used much more frequently in most programming languages that have both concepts. Potentially I’m misunderstanding something here though, there is definitely room for subtlety.

                                          1. 11

                                            Calling this dishonest and a lie is an incredibly uncharitable interpretation of what is written.

                                            No, it’s not. The reason is that they know the truth, and yet they still claim that Zig functions are not colored. It is dishonest to do so.

                                            It would be completely honest to claim that “the Zig compiler can make it appear that Zig functions are not colored.” That is entirely honest, and I doubt it would lose them any fans.

                                            But to claim that Zig functions are not colored is a straight-up lie.

                                            Even if you’re right that it’s technically incorrect,

                                            I quoted Kristoff directly saying that Zig functions are colored. How could I be wrong?

                                            at worst it’s a simplification made in good faith to be able to talk about the problem, no more of a lie than teaching Newtonian mechanics as the laws of physics in middle school is a lie because of special relativity, or teaching special relativity in high school is a lie because of general relativity.

                                            There are simplifications that work, and there are simplifications that don’t.

                                            Case in point: relativity. Are you ever, in your life, going to encounter a situation where relativity matters? Unless you’re working with rockets or GPS, probably not.

                                            But how likely is it that you’re going to run into a situation where Zig’s compiler fails to hide the coloring of functions? Quite likely.

                                            Here’s why: while Kristoff did warn library authors about the function coloring at runtime, I doubt many of them pay attention because of the repetition of “Zig functions are not colored” that you hear all of the time from Andrew and the rest. It’s so prevalent that even non-contributors who don’t understand the truth jump into comments here on lobste.rs and on the orange site to defend Zig whenever someone writes a post about async.

                                            So by repeating the lie so much, Zig programmers are taught implicitly to ignore the truthful warning in Kristoff’s post.

                                            Thus, libraries get written. They are written ignoring the function coloring problem because the library authors have been implicitly told to do so. Some of those libraries take function pointers for good reasons. Those libraries are buggy.

                                            Then those libraries get used. The library users do not pay attention to the function coloring problem because they have been implicitly told to do so.

                                            And that’s how you get bugs.

                                            It doesn’t even need to be libraries. In my bc, I use function pointers internally to select the correct operation. It’s in C, but if it had been in Zig, and I had used async, I would probably have been burned by it if I did not know that Zig functions are colored.

                                            Also, I’m not familiar with zig, but from your description I think you’re wrong to claim that functions are colored. Your refutation of that argument is that function pointers are colored, but functions are a distinct entity from function pointers - and one used much more frequently in most programming languages that have both concepts. Potentially I’m misunderstanding something here though, there is definitely room for subtlety.

                                            You are absolutely misunderstanding.

                                            How can function pointers be colored? They are merely pointers to functions. They are data. Data is not colored; code is colored. Thus, function pointers (data that just points to functions) can’t be colored but functions (containers for code) can be.

                                            If data could be colored, you would not be able to print the value of the pointer without jumping through hoops, but I bet if you did the Zig equivalent of printf("%p\n", function_pointer); it will work just fine.
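                                            In C++ terms (the C version with printf is analogous), the point that a function pointer is ordinary, printable data can be sketched like this (names are illustrative):

                                            ```cpp
                                            #include <cstdio>

                                            int add(int a, int b) { return a + b; }

                                            int main() {
                                                // A function pointer is plain data: it can be stored in a
                                                // variable, printed, and called like any other value.
                                                int (*op)(int, int) = add;
                                                std::printf("%p\n", reinterpret_cast<void*>(op));
                                                std::printf("%d\n", op(2, 3)); // prints 5
                                            }
                                            ```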

                                            So if there is coloring in Zig, and Kristoff’s post does admit there is, then it has to be functions that are colored, not function pointers.

                                            In Kristoff’s post, there is this comment in some of the example code:

                                            // Note how the function definition doesn't require any static
                                            // `async` marking. The compiler can deduce when a function is
                                            // async based on its usage of `await`.
                                            

                                            He says “when a function is async…” An async/non-async dichotomy means there is function coloring.

                                            What the compiler does is automagically detect async functions (as Kristoff says) and inserts the correct code to call it according to its color. That doesn’t mean the color is gone; it means that the compiler is hiding it from you.

                                            For a language whose designer eschews operator overloading because it hides function calls, it feels disingenuous to me to hide how functions are being called.

                                            All of this means that Zig functions are still colored. It’s just that, at compile time, it can hide that from you. At runtime, however, it can’t.

                                            And that is why Zig functions are colored.

                                            1. 7

                                              I have a hard time following all the animosity in your replies. Maybe I’m just not used to having fans on the internet :^)

                                              In my article, and whenever discussing function coloring, I, and I guess most people, define “function coloring” as the problem of having to mark functions as async and having to prepend their invocation with await when you want to get their result. The famous article by Bob Nystrom, “What Color is Your Function?”, also focuses entirely on the problem of syntactical interoperability between normal, non-async code and async code, and how the second infects codebases by forcing every other function to be tagged async, which in turn forces awaits to be sprinkled around.

                                              In my article I opened by mentioning aio-libs, which is a very clear-cut example of this problem: those people are forced to reinvent the wheel (i.e. reimplement existing packages) because the original codebases simply cannot reasonably be used in the context of an async application.

                                              This is the problem that Zig solves. One library codebase that, with proper care, can run in both contexts and take advantage of parallelism when available. No async-std, no aio-libs, etc. This works because Zig changes the meaning and usage of async and await compared to all other programming languages (that use async/await).

                                              You seem to be focused on the fact that by doing async you will introduce continuations in your program. Yes, you will. Nobody said you won’t. What you define as “cheesing” (lmao) is a practical tool that can save a lot of wasted effort. I guess you could say that levers and gears cheesed the need for more physical human labor, from that perspective.

                                              Sure, syntax and the resulting computational model aren’t completely detached: if you do have continuations in your code, then you will need to think about how your application is going to behave. Duh, but the point is libraries. Go download OkRedis. Write an async application with it, then write a blocking application with it. You will be able to do both, while importing the same exact declarations from my library, and while also enjoying speedups in the async version, if you allowed for concurrent operations to happen in your code.

                                              But how likely is it that you’re going to run into a situation where Zig’s compiler fails to hide the coloring of functions? Quite likely.
                                              Thus, libraries get written. They are written ignoring the function coloring problem because the library authors have been implicitly told to do so. Some of those libraries take function pointers for good reasons. Those libraries are buggy.

                                              No. Aside from the fact that you normally just pass function identifiers around, instead of pointers, function pointers have a type, and that type also tells you (and the compiler) what the right calling convention is. On top of that, library authors are absolutely not asked to ignore asyncness. In OkRedis I have a few spots where I explicitly change the behavior of the Redis client based on whether we’re in async mode or not.

                                              The point, to stress it one last time, is that you don’t need to have two different library codebases that require duplicated effort, and that in the single codebase needed, you’re going to only have to make a few changes to account for asyncness. In fact, in OkRedis I only have one place where I needed to account for that: in the Client struct. Every other piece of code in the entire library behaves correctly without needing any change. Pretty neat, if you ask me.

                                              1. 2

                                                I have a hard time following all the animosity in your replies. Maybe I’m just not used to having fans on the internet :^)

                                                The “animosity” (I was more defending myself vigorously) comes from Andrew swearing at me and accusing me, though he might have had a reason for that.

                                                In his post, he claimed I said he was maliciously lying, but I only said that he was lying. I separate unintentional lies from intentional lies, and I believe all of you are unintentionally lying. Once I realized he thought that, I made sure to clear it up and to tell him what I would like to see.

                                                In my article, and whenever discussing function coloring, I, and I guess most people, define “function coloring” as the problem of having to mark functions as async and having to prepend their invocation with await when you want to get their result. The famous article by Bob Nystrom, “What Color is Your Function?”, also focuses entirely on the problem of syntactical interoperability between normal, non-async code and async,

                                                In Bob Nystrom’s post, this is how he defined function coloring:

                                                The way you call a function depends on its color.

                                                That’s it.

                                                Most people associate color with async and await because that’s how JavaScript, the language from his post, does it. But that’s not how he defined it.

                                                After playing with Zig’s function pointers, I can say with confidence that his definition, “The way you call a function depends on its color,” does apply to Zig.

                                                and how the second infects codebases by forcing every other function to be tagged async, which in turn forces awaits to be sprinkled around.

                                                This is what Zig does better. It limits the blast radius of async/await. But it’s still there. See the examples from my latest reply to Andrew. I had to mark a call site with @asyncCall, including making a frame. But then, I couldn’t call the blue() function because it still wasn’t async. So if I were to make it work, I would have to make blue() async. And I could do that while still making the program crash half the time.

                                                (Side note: I don’t know how to write out the type of an async function. Changing blue() to async is not working with the [2]@TypeOf(blue) trick that I am using. It’s still giving me the same compile error.)
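                                                To make the call-site difference concrete, here is a minimal sketch in the Zig 0.8/0.9-era syntax this thread is discussing (async was still experimental at the time; `tick` is a made-up function):

```zig
fn tick() void {
    suspend {} // the suspension point is what makes this function async
}

pub fn main() void {
    // A bare `tick();` would make main async as well (the color
    // propagates); driving the frame explicitly avoids that:
    var frame = async tick();
    resume frame; // the suspend must be matched by a resume
}
```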

                                                In my article I opened by mentioning aio-libs, which is a very clear-cut example of this problem: those people are forced to reinvent the wheel (i.e. reimplement existing packages) because the original codebases simply cannot be reasonably used in the context of an async application.

                                                This is the problem that Zig solves. One library codebase that, with proper care, can run in both contexts and take advantage of parallelism when available. No async-std, no aio-libs, etc. This works because Zig changes the meaning and usage of async and await compared to all other programming languages (that use async/await).

                                                This is not what you are telling people, however. You are telling them that Zig does not have function colors. Those two are orthogonal.

                                                And I also doubt that Zig actually solves that problem. I do not know Zig, and it took me all of 30 minutes to 1) find a compiler bug and 2) find an example where you cannot run code in both contexts.

                                                You seem to be focused on the fact that by doing async you will introduce continuations in your program. Yes, you will. Nobody said you won’t. What you define as “cheesing” (***) is a practical tool that can save a lot of wasted effort. I guess you could say that levers and gears cheesed the need for more physical human labor, from that perspective.

                                                I have no idea what swear word you used there (I have a filter that literally turns swear words into three asterisks like you see there), but this is why I am not happy with Andrew. Now, I am not happy with you.

                                                I used “cheesing” because while it is certainly a time saver, it’s still cheating. Yes, levers and gears cheese the application of force. That’s not a bad thing. Computers are supposed to be mental levers or “bicycles for the mind.” Cheesing is a good thing.

                                                And yes, I am focused on introducing continuations into the program because there is a better way to introduce continuations and still get concurrency.

                                                In fact, I am going to write a blog post about that better way. It’s called structured concurrency, and it introduces continuations by using closures to push data down the stack.

                                                Sure, syntax and the resulting computational model aren’t completely detached: if you do have continuations in your code, then you will need to think about how your application is going to behave. Duh, but the point is libraries. Go download OkRedis. Write an async application with it, then write a blocking application with it. You will be able to do both, while importing the same exact declarations from my library, and while also enjoying speedups in the async version, if you allowed for concurrent operations to happen in your code.

                                                Where’s the catch? There’s always a catch. Please tell me the catch.

                                                In fact, this whole thing is about me asking you, Andrew, and the others to be honest about what catches there are in Zig’s async story.

                                                Likewise, I’m going to have to be honest about what catches there are to structured concurrency, and you can hold me to that when the blog post comes out.

                                                No. Aside from the fact that you normally just pass function identifiers around, instead of pointers, function pointers have a type and that type also tells you (and the compiler) what the right calling convention is.

                                                That is just an admission that functions are colored, if they have different types.

                                                On top of that, library authors are most absolutely not asked to ignore asyncness. In OkRedis I have a few spots where I explicitly change the behavior of the Redis client based on whether we’re in async mode or not.

                                                They are not explicitly asked. I said “implicitly” for a reason. “It’s not what programming languages do, it’s what they [and their communities] shepherd you to.” By telling everyone that Zig does not have function colors, you are training them not to think about it, even the library authors. As such, you then have to find those library authors, tell them to think about it, and explain why. It would save you and Andrew time if you were just upfront about what Zig does and does not do. And you would have, on average, better libraries.

                                                The point, to stress it one last time, is that you don’t need to have two different library codebases that require duplicated effort, and that in the single codebase needed, you’re going to only have to make a few changes to account for asyncness. In fact, in OkRedis I only have one place where I needed to account for that: in the Client struct. Every other piece of code in the entire library behaves correctly without needing any change. Pretty neat, if you ask me.

                                                That is neat. I agree. I just want Zig users to understand that, not be blissfully unaware of it.

                                                1. 1

                                                  The “animosity” (I was more defending myself vigorously) comes from Andrew swearing at me and accusing me, which he might have had a reason.

                                                  You called me a liar in the first comment you wrote.

                                                  Where’s the catch? There’s always a catch. Please tell me the catch.

                                                  Since I’m such a liar, why don’t you write some code and show me, and everyone else, where the catch is.

                                                  1. 1

                                                    Since I’m such a liar, why don’t you write some code and show me, and everyone else, where the catch is.

                                                    Well, I don’t need to write code, but I can use your own words. You said that, “Every suspend needs to be matched by a corresponding resume” or there is undefined behavior. When asked if that could be a compiler warning, you said, “That’s unfortunately impossible, as far as I know.”

                                                    That’s the catch.

                                                    1. 2

                                                      Why would you even use suspend and resume in a normal application? Those are low level primitives. I didn’t use either in any part of my blog post, and in fact you won’t find them inside OkRedis either. Unless you’re writing an event loop and wiring it to epoll or io_uring, you only need async and await.
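                                                      For instance, an application-level sketch needs nothing beyond async and await (Zig 0.8/0.9-era syntax; `fetchFoo`/`fetchBar` are hypothetical stand-ins for OkRedis-style calls):

```zig
const std = @import("std");

// Evented I/O for the whole program; drop this line and the very same
// code runs in blocking mode.
pub const io_mode = .evented;

fn fetchFoo() u32 { return 1; } // imagine these doing real I/O
fn fetchBar() u32 { return 2; }

pub fn main() void {
    var foo_frame = async fetchFoo();
    var bar_frame = async fetchBar();
    // In evented mode the two operations can overlap; in blocking mode
    // `async`/`await` degenerate to ordinary calls.
    const total = (await foo_frame) + (await bar_frame);
    std.debug.print("{}\n", .{total});
}
```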

                                                      This is not a philosophical debate: talk is cheap, as they say, so show me the code. I showed you mine, it’s OkRedis.

                                                      1. 1

                                                        Why would you even use suspend and resume in a normal application? Those are low level primitives.

                                                        Then why are they the first primitives you introduce to new users in the Zig documentation? They should have been last, with a clear warning about their caveats, if you even have them in the main documentation at all.

                                                        This is not a philosophical debate: talk is cheap, as they say, so show me the code. I showed you mine, it’s OkRedis.

                                                        I’m not going to download OkRedis or write code with it. I only learned enough Zig to make my examples to Andrew compile, and I have begun to not like Zig at all. It’s confusing and a mess, in my opinion.

                                                        But if you think that the examples I gave Andrew are not good enough, I don’t know what to tell you. I guess we’ll see if they are good enough for the people that read my blog post on it.

                                                        But I do have another question: people around Zig have said that its async story does not require an event loop, but none have explained why. Can you explain why?

                                                        1. 3

                                                          Then why are they the first primitives you introduce to new users in the Zig documentation? They should have been last, with a clear warning about their caveats, if you even have them in the main documentation at all.

                                                          They’re the basic building block used to manipulate async frames (Zig’s continuations). First you complained that my blog post didn’t talk about how async frames work, and that I meant to deceive people by not talking about it, then you read the language reference and say it should not even mention the language features that implement async frames.

                                                          With your attitude in this entire discussion, you put yourself in a position where you have an incentive to not understand things, even well-established computer science concepts such as continuations. If we talk at a high level, it’s a lie; if we get into the details, it’s confusing (and at this point we know what you mean to say: designed to be confusing). I can’t help you once you go there.

                                                          I’m looking forward to reading your blog post, although in all frankness you should consider doing some introspection before diving into it.

                                                          1. 1

                                                            They’re the basic building block used to manipulate async frames (Zig’s continuations). First you complained that my blog post didn’t talk about how async frames work, and that I meant to deceive people by not talking about it, then you read the language reference and say it should not even mention the language features that implement async frames.

                                                            That’s the language reference? I thought it was the getting started documentation. Those details are not good to put in documentation for getting started, but I agree that they are good for a language reference. I would still put them last, though.

                                                            With your attitude in this entire discussion, you put yourself in a position where you have an incentive to not understand things, even well established computer science concepts such as continuations.

                                                            That’s a little ad hominem. I can understand continuations and not understand how they are used in Zig because the language reference is confusing. And yes, it is confusing.

                                                            If we talk at a high level, it’s a lie; if we get into the details, it’s confusing

                                                            It turns out that the problem is in your documentation and in your blog post. You can talk about it at a high level as long as your language about it is accurate. You can talk about the low level details once the high level subtleties are clarified.

                                                            (and at this point we know what you mean to say: designed to be confusing). I can’t help you once you go there.

                                                            I do not believe Zig was designed to be confusing, but after using it, I can safely say that the language was not designed well enough to prevent such confusion.

                                                            As an example, and as far as I understand at the moment, the way Zig “gets around” the function colors problem is to reuse the async and await keywords slightly differently from other languages and to use suspend to actually make a function async. So in typical code, async and await do not have the function coloring problem. Which is great and all, but the subtleties of using them are usually lost on programmers coming from other languages.

                                                            When I first heard about Zig, by the way, I was excited about it. This was back in 2018, I think, during the part of its evolution where it had comptime but not much more complexity above C. I thought comptime was great (that opinion has changed, but that’s a different story), and that the language looked promising.

                                                            Fast forward to today: Zig is immensely more complex than it was back then, and I don’t see what that complexity has bought.

                                                            That’s not a problem in and of itself, but complexity does make things harder, which means the documentation should be clearer and more precise. And the marketing should be the same.

                                                            My beef with Zig boils down to those things not happening.

                                                            Well, okay, I do have another beef with Zig: it sets the wrong tone. Programming languages, once used, set the tone for the industry, and I think Zig sets the wrong tone. So does Rust for that matter. But I can talk about that more in my blog post.

                                                            I’m looking forward to reading your blog post, although in all frankness you should consider doing some introspection before diving into it.

                                                            I have done introspection. I’ve learned where the function coloring problem actually is in Zig, and I’ve adopted new language to not come off in the wrong way. And I’ll do that in my blog post.

                                        2. 3

                                          For me, the coloring problem describes both the static and runtime semantics. Does Zig handle the case where a function called with async enters some random syscall or grabs a mutex that blocks for a long time and isn’t explicitly handled by whatever the runtime system is, or does that end up blocking the execution of other async tasks?

                                          The reason why the runtime semantics matter to me when it comes to concurrency is because if you can block threads, then you implicitly always have a bounded semaphore (your threadpool) that you have to think about at all times or your theoretically correct concurrency algorithm can actually deadlock. That detail is unfortunately leaked.

                                          1. 6

                                            If you grab a standard library mutex in evented I/O mode then it interacts with the event loop, suspending the async function rather than e.g. a futex() syscall. The same code works in both contexts:

                                            mutex.lock();
                                            defer mutex.unlock();
                                            

                                            There are no function colors here; it will do the correct thing in evented I/O mode and blocking I/O mode. The person who authored the zig package using a mutex does not have to be aware of the intent of the application code.

                                            This is what the Zig language supports. Let me check the status of this feature in the standard library… looks like it’s implemented for file system reads/writes but it’s still todo for mutexes, sleep, and other kinds of I/O. This is all still quite experimental. If you’re looking for a reason to not use Zig, it’s that - not being stable yet. But you can’t say that Zig has the same async function coloring problem as other languages since it’s doing something radically different.

                                            1. 4

                                              Thanks for the explanation and standard library status information.

                                              I think the ability to make a function async at call time rather than at definition time is the best idea in Go’s concurrency design, and so, bringing something like that to a language with a much smaller runtime and no garbage collector is exciting. I look forward to seeing how this, and all of the other interesting ideas in Zig, comes together.

                                              (p.s. thanks so much for zig cc)

                                        1. 8

                                          Zig forces the programmer to worry about error handling. The way I see it, things should not error; if they do, stop everything and print a stack trace, please don’t bother me with it…

                                          run_this_erroring_function() catch @panic("nope");
                                          

                                          I believe this will produce a stack trace in Zig

                                          Otherwise, a good article!

                                          1. 21

                                            I think that the author’s mindset is perfectly understandable for a data scientist and at the same time absolutely inadequate for software engineering (the author does point out their appreciation for that distinction right after that quote). The two fields (data science, SWE) both make use of a computer but for very different goals.

                                            A high-quality application needs to be able to gracefully handle failures, and explicitness in the language helps do the methodical work required to get there. This is obviously true for stuff like kernel modules, but it’s also true for user-facing applications. I recently used video editing software that would use a lot of RAM while encoding a video, and it would have been absolutely unacceptable for it to crash the instant it hit system limits. Coincidentally, encoding a video would instantly crash Discord and Firefox for me.

                                            And it’s not just allocation problems, but also all kinds of errors when there’s unsaved user data at hand, critical systems, etc. In data science crashing is the best strategy because at that point the program will not be computing the right answer anyway, but the same doesn’t hold as true for other applications.

                                            Also, explicitness helps with deciding whether an error can be safely ignored or not. If you’re writing the above-mentioned video editing application and trying to print to a log file fails, you might want to continue running anyway, as it should not be a show-stopper for a lengthy encoding process, for example. If instead you’re writing a command-line tool whose main job is to print to stdout and printing fails, then that’s a different story, which is why Zig has no print statement.
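                                            As a rough sketch of that distinction (made-up function names; std APIs as of the 0.8/0.9 era):

```zig
const std = @import("std");

// Logging is best-effort during a lengthy encode: a failed write is
// explicitly, visibly ignored rather than crashing the job.
fn logProgress(log_file: std.fs.File, msg: []const u8) void {
    log_file.writeAll(msg) catch {};
}

// Printing IS the job of a CLI tool, so here the error must propagate.
fn printResult(msg: []const u8) !void {
    try std.io.getStdOut().writeAll(msg);
}
```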

                                            1. 3

                                              In data science crashing is the best strategy

                                              Also, you know very well what your target system looks like and want to run as fast as possible, using every speed improvement you can get. For example, we tested OpenMPI stuff on a local node (64+ cores); using the “let it crash” style was easy there, and we compiled on that target hardware too. Then we submitted the batch job to the university’s supercomputer, which distributed it across all nodes. Later on you got your reports back.

                                            2. 9

                                              I might be missing something, but it seems to me that using try everywhere is the most concise way to let an error bubble up without dealing with it.

                                              Coming from Python, I admit I was a bit annoyed to use it in places like adding an item to a list, as it can fail to allocate, but in the end you get to like it.

                                              1. 2

                                                Yeah, you’re absolutely right, I didn’t even think of that!

                                                1. 1

                                                  With ArrayList specifically, you can also use initCapacity() and/or ensureTotalCapacity()/ensureUnusedCapacity() and then appendAssumeCapacity() (without try), which can lead to nicer code in some situations.
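                                                  For instance (a sketch against the ArrayList API of that era, with a pointer-style allocator from before “allocgate”):

```zig
const std = @import("std");

fn firstThree(allocator: *std.mem.Allocator) !std.ArrayList(u32) {
    var list = std.ArrayList(u32).init(allocator);
    errdefer list.deinit();
    try list.ensureUnusedCapacity(3); // the one fallible step
    list.appendAssumeCapacity(1); // no `try` needed from here on
    list.appendAssumeCapacity(2);
    list.appendAssumeCapacity(3);
    return list;
}
```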

                                              1. 7

                                                The author about generics with comptime functions:

                                                Despite my enthusiasm, I am simultaneously a bit apprehensive about how far this can be extended to stuff that can’t be fully inferred at compile time.

                                                Funnily enough, my concerns point in nearly the opposite direction: the mechanism is so general that an IDE or other tool needs to evaluate Zig code for code completion etc.
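                                                A tiny example of why (a sketch of the canonical comptime-generics pattern from the docs):

```zig
// The type of a generic pair is computed by *running* this function at
// compile time, so a tool must evaluate Zig code to know its fields.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,
    };
}

const P = Pair(u32); // resolved entirely at comptime
```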

                                                But it does seem super elegant! (No Zig experience here yet.)

                                                1. 13

                                                  That’s true, comptime can do a lot of things that an IDE would have a hard time understanding. The current plan is to add a subcommand to the Zig compiler to make it provide this information to the IDE. This is something that will be explored once the main work on the self-hosted compiler is complete.

                                                1. 1

                                                  I really like where Zig is going. My main worry design-wise at this point is anytype duck typing. This relates to the recently discussed “allocgate” because it’s another way to do polymorphism. It’ll be great if there’s a standard way to do runtime polymorphism, as planned in ziglang/zig#10184. But for compile-time, you have to use (foo: anytype) or (T: type, foo: T). Either way, zls can’t provide completion or anything else because it knows nothing about T. This is just like C++ template duck typing, which C++20 Concepts is fixing.

                                                  I asked about this in the Discord and people seemed to think that std.io.{Reader,Writer} is the only case where this pattern is used pervasively. But I have a hard time imagining it will stay restricted to that as people start to write lots of libraries in Zig. Imagine reading/editing Rust code with all trait bounds hidden (to you and to rls); it would be a nightmare. I’m happy to be proven wrong!

                                                  1. 2

                                                    Either way, zls can’t provide completion or anything else because it knows nothing about T. This is just like C++ template duck typing, which C++20 Concepts is fixing.

                                                    We plan to basically upstream zls into the self-hosted compiler and have it provide compile-time “understanding” to zls using the same --watch mechanism that should enable incremental compilation.

                                                    1. 2

                                                      I want to clarify that I have not looked at the ZLS source code and cannot vouch for its quality or whether or not we will literally upstream it. Also, the protocol that the compiler will support will be our own language-specific protocol which is more powerful and performant than LSP. There will need to be a third party proxy/adapter server to convert between what e.g. VSCode supports and what the Zig compiler provides.

                                                      1. 1

                                                        ZLS is a bit of a red herring, I didn’t actually mean to focus on it. Consider C++ and Rust here:

                                                        // C++: runtime polymorphism
                                                        void foo(MyReader* r) { ... }
                                                        // C++: compile-time polymorphism
                                                        template <typename R> void foo(R r) { ... }
                                                        // C++: compile-time polymorphism with concepts
                                                        template <typename R> requires MyReader<R> void foo(R r) { ... }
                                                        
                                                        // Rust: runtime polymorphism
                                                        fn foo(r: &mut dyn MyReader) { ... }
                                                        // Rust: compile-time polymorphism
                                                        fn foo<R: MyReader>(r: &mut R) { ... }
                                                        

                                                        C++ templates are a lot more powerful than Rust traits. You can do all kinds of things with SFINAE, static assertions, etc. In Rust you can’t express even simple logic like negative trait bounds. However, pre-C++20 templates suck because you have no idea what R is. It’s duck typed: if it compiles, it compiles. This means:

                                                        • You have to rely on non-machine-readable comments, much like types in dynamic languages before TypeScript/Mypy/Sorbet/etc. These can be inaccurate or outdated.

                                                        • Editor tooling is hamstrung: no jumping to the definition of MyReader, no autocompletion after typing “r.” in the body of foo, no way to find all uses of MyReader (it’s just in a comment).

                                                        Zig feels similar to C++ without concepts in this respect. If you want to change from dynamic dispatch (e.g. Allocator) to static (e.g. reader/writer), you sacrifice a lot. If you take reader: anytype, you have a Turing-complete language at your disposal to determine what types are allowed. It seems to me that in order to get the benefits of a restricted system, e.g. “any type that implements MyReader”, you need language support.

                                                        I suppose there could be a convention of calling (e.g.) std.io.AssertReader(@TypeOf(reader)) at the beginning of methods. This would give nice error messages, at least. But it feels a bit hacky for something like ZLS to hardcode recognizing this convention.
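                                                        For completeness, the Zig analogue of the duck-typed versions above (a sketch; the body compiles only for types that happen to have a matching readByte):

```zig
// Zig: compile-time polymorphism, duck typed like a pre-concepts C++
// template. Nothing in the signature constrains `reader`; errors only
// surface when a call site instantiates `foo` with an incompatible type.
fn foo(reader: anytype) !u8 {
    return try reader.readByte();
}
```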

                                                      2. 1

                                                        This sounds cool, but can you elaborate on how that helps the situation? Will it get extra “understanding” from all the call sites? It seems to me that by design, we’re stuck with English prose describing what sort of types T are expected, and it won’t be possible to (e.g.) command-click into the Reader interface or get autocompletion for its methods. I know Zig’s comptime programming is a lot more powerful than Rust traits, but I wonder if there’s a way to get the best of both worlds.

                                                        1. 2

                                                          I think I can’t really give you a good answer until we start working on the thing, but in general ZLS right now can only look at the AST to reason about types, while the compiler also implements semantic analysis, so ideally some of that machinery can be used to provide suggestions etc.

                                                    1. 16

                                                      Mitchell Hashimoto has published an insanely easy way to get an amazing setup on M1 machines. Basically you develop inside a NixOS VM and use graphical applications in the macOS host. Best of both worlds IMO.

                                                      https://twitter.com/mitchellh/status/1452721115009191938?s=20

                                                      1. 9

                                                        Not sure what the point of NixOS is here, though…

                                                        1. 13

                                                          Relentless advocacy. You can’t have a thread without a Nix user promoting it.

                                                          1. 4

                                                            Can’t really speak for Mitchell, but AFAIK he’s also generally using nix and nix-darwin on macOS, so I’d guess it’s NixOS for continuity and code re-use.

                                                            Personally, I would still re-iterate your question–a big part of what motivated me to climb Nix mountain was to be able to uninstall VirtualBox and Vagrant (sorry, mitchellh!) and reclaim the storage, memory, and battery-life that they’d usually squander. (There’s obviously some stuff that just won’t run on macOS; if one of those is a pillar of your dev env I understand settling on a VM.)

                                                            1. 3

                                                              Developing on macOS in a bare-metal language is becoming increasingly annoying and slow. NixOS, even when virtualized, feels snappier, and you can use the usual tooling (gdb, valgrind, etc.) without issues, while on macOS there’s a new hurdle put up by Apple every release. As for NixOS specifically vs another distro, I guess that was just his preference, but I’m personally falling in love with it too because of the reproducibility of changes. I now have this setup on both an M1 air and an M1 mini, and I can sync them with a single command.

                                                              1. 3

                                                                Hmm, it’s pretty painless for me. The LLVM toolchain is pretty good, you have lldb and address sanitizer, and homebrew for libraries. It’s not as nice as developing on Linux, but I’d say it seems to be getting better over time as llvm matures, not worse.

                                                                The lack of valgrind is sometimes a bit painful, but AFAIU that’s mostly just because it hasn’t been ported to macOS-on-ARM yet, not due to anything fishy Apple is doing. Besides, ASan has mostly replaced valgrind for me.

                                                                I may be wrong though, if you have any concrete examples of hurdles Apple has introduced recently I’d be very interested to hear about them.

                                                                1. 1

                                                                  Literally one button to download Xcode… what’s the problem? Maybe learn some LLVM tooling.

                                                                  1. 1

                                                                    Curious… I am new to NixOS and keep seeing references to it here on Lobsters. I want to try it, but I seem to be hitting a hurdle, as there appears to be no ARM ISO available (that I can tell, anyhow).

                                                                    I am running Parallels, and machine is a MacBook Pro M1 Max.

                                                                    Can I get some more details on your setup?

                                                                    1. 2

                                                                      My setup is based on the github repo mentioned in the tweet I linked above.

                                                                2. 3

                                                                  I wonder what this setup does to your battery life?

                                                                  1. 2

                                                                    I haven’t used it for long enough on the laptop yet (I mainly use it on an M1 mini), but the fact that once you’re done programming you can pause the VM and just browse stuff on the host machine makes it seem like a reasonable compromise from that angle too.

                                                                1. 4

                                                                  I posted this partially because of the new Unicode version, and partially as an answer to people who ask why Zig doesn’t have a built-in Unicode string type.

                                                                  My argument is that if you want to support Unicode, you have to do so knowingly.
                                                                  No built-in type can exempt you from that.

                                                                  1. 2

                                                                    My argument is that if you want to support Unicode, you have to do so knowingly. No built-in type can exempt you from that.

                                                                    From this comment, it sounds like Swift successfully exempts developers from thinking about Unicode – if they work on non-performance-sensitive programs. Swift’s abstraction over Unicode strings could lead to unexpectedly slow operations on certain strings, so I understand why Zig wouldn’t want that.

                                                                    To avoid the impression that Zig doesn’t support Unicode at all, I’ll note that though the Zig language doesn’t have a Unicode type, the Zig standard library has a std.unicode struct with functions that perform Unicode operations on arrays of bytes.

                                                                    Do you know if there are any plans to update std.unicode given the issues raised by the author of Ziglyph in this comment – that graphemes would be a better base unit than codepoints? I only just started trying to write Unicode-aware code in Zig, but after reading about the available libraries, I wish for Zigstr or something like it to replace std.unicode in the standard library. Otherwise, I worry about developers finding std.unicode in the standard library, using it to read strings a codepoint at a time, and thinking they’ve handled everything they need to.

                                                                    The comments I linked to were left on an issue that was closed because no Zig language changes were needed. Would it be well-received if I opened a new issue about updating the Zig standard library as I described above?

                                                                    1. 2

                                                                      I think that followup comment isn’t quite correct. Swift’s strings cannot be indexed using a plain numeric index as in other languages. Instead, they are indexed using the String.Index type, which must be constructed from the String instance in question and advanced and manipulated using String index methods. All this ceremony makes it rather obvious that it’s not an O(1) operation.

                                                                      1. 2

                                                                        I added a comment to one of the threads you linked just now, and I will reproduce it here:

                                                                        @jecolon thank you for your comments. Before tagging 1.0, I will be personally auditing std.unicode (and the rest of std) while inspecting ziglyph carefully for inspiration. If you’re available during that release cycle I would love to get you involved and work with you on achieving a reasonable std lib API.

                                                                        In fact, if you wanted to make some sweeping, breaking changes to std.unicode right now, upstream, I would be amenable to that. The only limitation is that we won’t have access to the Unicode data for the std lib. If you want to make a case that we should add that as a dependency of zig std lib, I’m willing to hear that out, but for status quo, that is a limitation because of not wanting to take on that dependency.

                                                                        In summary, std.unicode as it exists today is mainly used to serve other APIs such as the file system on Windows. It is one of the APIs that I think is far from its final form when 1.0 is tagged, and someone who has put in the work to make ziglyph is welcome to go in and make some breaking changes in the meantime.

                                                                    1. 35

                                                                      return err is almost always the wrong thing to do. Instead of:

                                                                      if err := foo(); err != nil {
                                                                      	return err
                                                                      }
                                                                      

                                                                      Write:

                                                                      if err := foo(); err != nil {
                                                                      	return fmt.Errorf("fooing: %w", err)
                                                                      }
                                                                      

                                                                      Yes, this is even more verbose, but doing this is what makes error messages actually useful. Deciding what to put in the error message requires meaningful thought and cannot be adequately automated. Furthermore, stack traces are not adequate context for user-facing, non-programming errors. They are verbose, leak implementation details, are disrupted by any form of indirection or concurrency, etc.

                                                                      Even with proper context, having lots of error paths like this is potentially a code smell. It means you probably have broader error-strategy problems. I’d try to give some advice on how to improve the code the author provided, but it is too abstract to provide any useful insights.

                                                                      1. 18

                                                        I disagree on a higher level. What we really want is a stack trace so we know where the error originated, not manually dispensed breadcrumbs…

                                                                        1. 32

                                                          Maybe you do, but I prefer an error chain that was designed. A Go program rarely has just one stack, because every goroutine is its own stack; a trace of that one stack isn’t really a statement about the program as a whole, since there are many stacks, not one.

                                                          Additionally, stack traces omit the parameters to the functions at each frame, which means that understanding the error means starting with your stack trace and then bouncing all over your code, reading it and running it in your head, in order to understand the trace. This is even more annoying if you’re looking at an error several days later in a heterogeneous environment, where you have the added complication of figuring out which version of the code was running when that trace originated.

                                                          Or you could just have an error like “failed to create a room: unable to reserve room in database ‘database-name’: request timed out”. Hand-crafted error chains also have the advantage that they are often much easier to understand for people who operate but don’t author something; they may never have seen the code before, so understanding exactly what a stack trace means may be difficult for them, especially if they’re not familiar with the language.

                                                                          1. 6

                                                                            I dunno. Erlang and related languages give you back a stack trace (with parameters) in concurrently running processes no problem

                                                                            1. 5

                                                                              It’s been ages since I wrote Erlang, but I remember that back then I rarely wanted a stack trace. My stack were typically 1-2 levels deep: each process had a single function that dispatched messages and did a small amount of work in each one. The thing that I wanted was the state of the process that had sent the unexpected message. I ended up with some debugging modes that attached the PID of the sending process and some other information so that I could reconstruct the state at the point where the problem occurred. This is almost the same situation as Go, where you don’t want the stack trace of the goroutine, you want to capture a stack trace of the program at the point where a goroutine was created and inspect that at the point where the goroutine failed.

                                                                              This isn’t specific to concurrent programs, though it is more common there, it’s similar for anything written in a dataflow / pipeline style. For example, when I’m debugging something in clang’s IR generation I often wish I could go back and see what had caused that particular AST node to be constructed during parsing or semantic analysis. I can’t because all of the state associated with that stack is long gone.

                                                                          2. 10

                                                                            FWIW, I wrote a helper that adds tracing information.

                                                                            I sort of have two minds about this. On the one hand, yeah, computers are good at tracking stack traces, why are we adding them manually and sporadically? OTOH, it’s nice that you can decide if you want the traces or not and it gives you the ability to do higher level things like using errors as response codes and whatnot.

                                                                            The thing that I have read about in Zig that I wish Go had is an error trace which is different from the stack trace, which shows how the error was created, not the how the error propagates back to the execution error boundary which is not very interesting in most scenarios.

                                                                            1. 7

                                                                              The nice thing about those error traces is that they end where the stack trace begins, so it’s seamless to the point that you don’t even need to know that they are a thing, you just get exactly the information that otherwise you would be manually looking for.

                                                                            2. 8

                                                                              In a multiprocess system that’s exchanging messages: which stack?

                                                                              1. 2

                                                                                see: erlang

                                                                              2. 5

                                                                                You don’t want stack traces; you want to know what went wrong.

                                                                                A stack trace can suggest what may have gone wrong, but an error message that declares exactly what went wrong is far more valuable, no?

                                                                                1. 8

                                                                  An error message is easy; we already have that: “i/o timeout”. A stack trace tells me the exact code path that led to that error. Building up a string of breadcrumbs that led to that timeout is just a poorly implemented, ad-hoc stack trace.

                                                                                  1. 5

                                                                                    Indeed and I wouldn’t argue with that. I love a good stack trace, but I find they’re often relied upon in lieu of useful error messages and I think that’s a problem.

                                                                                    1. 2

                                                                                      Building up a string of breadcrumbs that led to that timeout is just a poorly implemented, ad-hoc stack trace.

                                                                                      That’s a bit of an over-generalization. A stack trace is inherently a story about the construction of the program that originated the error, while an error chain is a story about the events that led to an error. A stack trace can’t tell you what went wrong if you don’t have access to the program’s source code in the way that a hand crafted error chain can. A stack trace is more about where an error occurred, while an error chain is more about why an error occurred. I think they’re much more distinct than you are suggesting.

                                                                      And of course, if people are just bubbling up errors without wrapping them, yeah, you’re going to have a bad time. But I think attacking that case is like suggesting that every language that has exceptions encourages Pokémon exception handling: that’s a bad exception-handling pattern, but its mere possibility isn’t a fair indictment of exceptions generally. Meanwhile, you’re using examples of bad error-handling practices that are not usually employed by Go programmers with more than a few weeks’ experience to indict the entire paradigm.

                                                                                  2. 4

                                                                                    Stack traces are expensive to compute and inappropriate to display to most users. Also, errors aren’t exceptions.

                                                                                    1. 1

                                                                                      That’s why Swift throws errors instead. Exceptions immediately abort the program.

                                                                                    2. 3

                                                                                      What really is the “origin” of an error? Isn’t that somewhat arbitrary? If the error comes from a system call, isn’t the origin deeper in the kernel somewhere? What if you call in to a remote, 3rd party service. Do you want the client to get the stack trace with references to the service’s private code? If you’re using an interface, presumably the purpose is to abstract over the specific implementation. Maybe the stack trace should be truncated at the boundary like a kernel call or API call?

                                                                                      Stack traces are inherently an encapsulation violation. They can be useful for debugging your internals, but they are an anti-feature for your users debugging their own system. If your user sees a stack trace, that means your program is bugged, not theirs.

                                                                                      1. 5

                                                                                        I get a line of logging output: error: i/o timeout. What do I do with that? With Ruby, I get a stack trace which tells me exactly where the timeout came from, giving me a huge lead on debugging the issue.

                                                                                        1. 6

                                                                                          I get a line of logging output: error: i/o timeout. What do I do with that?

                                                                                          Well, that’s a problem you fix by annotating your errors properly. You don’t need stack traces.

                                                                                          1. 3

                                                                                            When your Ruby service returns an HTTP 500, do you send me the stack trace in the response body? What do I do with that?

                                                                                            Go will produce stack traces on panics as well, but that’s precisely the point here: these are two different things. Panics capture stack traces as a “better than nothing” breadcrumb trail for when the programmer has failed to account for a possibility. They are for producers of code, not consumers of it.

                                                                                          2. 2

                                                                                            There’s definitely competing needs between different audiences and environments here.

                                                                                            A non-technical end user doesn’t want to see anything past “something went wrong on our end, but we’re aware of it”. Well, they don’t even want to see that.

                                                                                            A developer wants to see the entire stack trace, or at least have it available. They probably only care about frames in their own code at first, and maybe will want to delve into library code if the error truly doesn’t seem to come from their code or is hard to understand in the first place.

                                                                                            A technical end user might want to see something in-between: they don’t want to see “something was wrong”. They might not even want to see solely the outer error of “something went wrong while persisting data” if the root cause was “I couldn’t reach this host”, because the latter is something they could actually debug within their environment.

                                                                                        2. 9

                                                                                          This is one reason I haven’t gone back to Go since university - There’s no right way to do anything. I think I’ve seen a thousand different right ways to return errors.

                                                                                          1. 10

                                                                            Lots of pundits say lots of stuff. One good way to learn good patterns (I won’t call them “right”) is to look at real code by experienced Go developers. For instance, if you look at https://github.com/tailscale/tailscale you’ll find pervasive use of fmt.Errorf. One thing you might not see – at least not without careful study – is how to handle code with lots of error paths. That is by its very nature harder to see, because you have to read and understand what the code is trying to do and what has to happen when something goes wrong in that specific situation.

                                                                                            1. 6

                                                                                              there is a right way to do most things; but it takes some context and understanding for why.

                                                                                              the mistake is thinking go is approachable for beginners; it’s not.

                                                                                              go is an ergonomic joy for people that spend a lot of time investing in it, or bring a ton of context from other languages.

                                                                                              for beginners with little context, it is definitely a mess.

                                                                                              1. 9

                                                                                                I thought Go was for beginners, because Rob Pike doesn’t trust programmers to be good.

                                                                                                1. 19

                                                                  I’d assume that Rob Pike, an industry veteran, probably has excellent insight into precisely how good the average programmer at Google is, and what kind of language will enable them to be productive at the stuff Google makes. If this makes programming-language connoisseurs sad, that’s not his problem.

                                                                                                  1. 9

                                                                                                    Here’s the actual quote:

                                                                                                    The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.

                                                                                                    So I have to wonder who is capable of understanding a “brilliant language” …

                                                                                                    1. 8

                                                                                                      So I have to wonder who is capable of understanding a “brilliant language” …

                                                                                                      Many people. They don’t work at Google at an entry-level capacity, that’s all.

                                                                                                      There’s a subtle fallacy at work here - Google makes a lot of money, so Google can afford to employ smart people (like Rob Pike!) It does not follow that everyone who works at Google is, on average, smarter than anyone else.

                                                                                                      (edited to include quote)

                                                                                                      1. 8

                                                                                                        Let’s say concretely we are talking about OCaml. Surely entry-level Googlers are capable of understanding OCaml. Jane Street teaches it to all new hires (devs or not) in a two-week bootcamp. I’ve heard stories of people quickly becoming productive in Elm too.

                                                                                                        The real meaning of that quote is not ‘entry-level Googlers are not capable of it’, it’s ‘We don’t trust them with it’ and ‘We’re not willing to invest in training them in it’. They want people to start banging out code almost instantly, not take some time to ramp up.

                                                                                                        1. 8

                                                                                                          Let’s say concretely we are talking about OCaml. Surely entry-level Googlers are capable of understanding OCaml. Jane Street teaches it to all new hires (devs or not) in a two-week bootcamp.

                                                                                                          I suspect that Jane Street’s hiring selects for people who are capable of understanding OCaml; I guarantee that the inverse happens and applicants interested in OCaml self select for careers at Jane Street, just like Erlang-ers used to flock towards Ericsson.

                                                                                                          Google has two orders of magnitude more employees than Jane Street. It needs a much bigger funnel and is likely far less selective in hiring. Go is “the law of large numbers” manifest as a programming language. That’s not necessarily bad, just something that is important for a massive software company and far less important for small boutiques.

                                                                                                          1. 3

                                                                                                            And I remember when Google would require at minimum a Masters Degree before hiring.

                                                                                                            1. 1

                                                                                                              I had a master’s degree in engineering (though not in CS) and I couldn’t code my way out of a paper bag when I graduated. Thankfully no-one cared in Dot Com Bubble 1.0!

                                                                                                            2. 2

                                                                                                              applicants interested in OCaml self select for careers at Jane Street,

                                                                                                              As I said, they teach it to all hires, including non-devs.

                                                                                                              Google has two orders of magnitude more employees than Jane Street. It needs a much bigger funnel and is likely far less selective in hiring

                                                                                                              Surely though, they are not so loose that they hire Tom Dick and Harry off the street. Why don’t we actually look at an actual listing and check? E.g. https://careers.google.com/jobs/results/115367821606560454-software-developer-intern-bachelors-summer-2022/

                                                                                                              Job title: Software Developer Intern, Bachelors, Summer 2022 (not exactly senior level)

                                                                                                              Minimum qualifications:

                                                              Pursuing a Bachelor’s degree program, or post-secondary or training experience, with a focus on subjects in software development or another technical related field.

                                                              Experience in software development and coding in a general-purpose programming language.

                                                              Experience coding in two of C, C++, Java, JavaScript, Python, or similar.

                                                                                                              I’m sorry but there’s no way I’m believing that these candidates would be capable of learning Go but not OCaml (e.g.). It’s not about their capability, it’s about what Google wants to invest in them. Another reply even openly admits this! https://lobste.rs/s/yjvmlh/go_ing_insane_part_one_endless_error#c_s3peh9

                                                                                                            3. 3

                                                                                                              They want people to start banging out code almost instantly, not take some time to ramp up.

                                                                                                              Yes, and? The commodification of software developers is a well-known trend (and goal) of most companies. When your assets are basically servers, intangible assets like software and patents, and the people required to keep the stuff running, you naturally try to lower the costs of hiring and paying salary, just like you try to have faster servers and more efficient code.

                                                                                                              People are mad at Rob Pike, but he just made a language for Google. It’s not his fault the rest of the industry thought “OMG this is the bee’s knees, let’s GO!” and adopted it widely.

                                                                                                              1. 1

                                                                                                                Yes, I agree that the commodification of software developers is prevalent today. And we can all see the result, the profession is in dire straits–hard to hire because of bonkers interview practices, hard to keep people because management refuses to compensate them properly, and cranking out bugs like no tomorrow.

                                                                                                              2. 3

                                                                                                                on the contrary, google provides a ton of ramp up time for new hires because getting to grips with all the internal infrastructure takes a while (the language is the least part of it). indeed, when I joined a standard part of the orientation lecture was that whatever our experience level was, we should not expect to be productive any time soon.

                                                                                                                what go (which I do not use very much) might be optimising for is a certain straightforwardness and uniformity in the code base, so that engineers can move between projects without having to learn essentially a new DSL every time they do.

                                                                                                                1. 1

                                                                                                                  You may have a misconception that good programming languages force people to ‘essentially learn a new DSL’ in every project. In any case, as you yourself said, the language is the least part of the ramp-up of a new project, so even if that bit were true, it’s still optimizing for the wrong thing.

                                                                                                                  1. 1

                                                                                                                    no, you misunderstood what i was getting at. i was saying that go was optimising for straightforwardness and uniformity so that there would be less chance of complex projects evolving their own way of doing things, not that better languages would force people to invent their own DSLs per project.

                                                                                                                    also the ramp-up time i was referring to was for new hires; a lot of google’s internal libraries and services are pretty consistently used across projects (and even languages via bindings and RPC) so changing teams requires a lot less ramp up than joining google in the first place.

                                                                                                                    1. 1

                                                                                                                      i was saying that go was optimising for straightforwardness and uniformity so that there would be less chance of complex projects evolving their own way of doing things,

                                                                                                                      Again, the chances of that happening are not really as great as the Go people seem to be afraid it is, provided we are talking about a reasonable, good language. So let’s say we leave out Haskell or Clojure. The fear of language-enabled complexity seems pretty overblown to me. Especially considering the effort put into the response, creating an entirely new language and surrounding ecosystem.

                                                                                                        2. 9

                                                                                                          No, Rob observed, correctly, that in an organization of 10,000 programmers, the skill level trends towards the mean. And so if you’re designing a language for this environment, you have to keep that in mind.

                                                                                                          1. 4

                                                                                                            it’s not just that. It’s a language that has to reconcile the reality that skill level trends toward the mean, with the fact that the way that google interviews incurs a selection/survival bias towards very junior programmers who think they are the shit, and thus are very dangerous with the wrong type of power.

                                                                                                            1. 4

                                                                                                              As I get older and become, presumably, a better programmer, it really does occur to me just how bad I was for how long. I think because I learned how to program as a second grader, I didn’t get how much of a factor “it’s neat he can do it all” was in my self-assessment. I was pretty bad, but since I was being compared to the other kids who did zero programming, it didn’t matter that objectively I was quite awful, and I thought I was hot shit.

                                                                                                            2. 4

                                                                                                              Right! But the cargo-cult mentality of the industry meant that a language designed to facilitate the commodification of software development for a huge, singular organization escaped and was inflicted on the rest of us.

                                                                                                              1. 4

                                                                                                                But let’s be real for a moment:

                                                                                                                a language designed to facilitate the commodification of software development

                                                                                                                This is what matters.

                                                                                                                It doesn’t matter if you work for a company of 12 or 120,000: if you are paid to program – that is, you are not a founder – the people who sign your paychecks are absolutely doing everything within their power to make you and your coworkers just cogs in the machine.

                                                                                                                So I don’t think this is a case of “the little fish copying what big bad Google does” as much as it is an essential quality of being a software developer.

                                                                                                                1. 1

                                                                                                                  Thank you, yes. But also, the cargo cult mentality is real.

                                                                                                            3. 3

                                                                                                              Go is for compilers, because Google builds a billion lines a day.

                                                                                                        3. 2

                                                                                                          return errors.Wrapf(err, "fooing %s", bar) is a bit nicer.

                                                                                                          1. 13

                                                                                                            That uses the non-standard errors package and has been obsolete since 1.13: https://stackoverflow.com/questions/61933650/whats-the-difference-between-errors-wrapf-errors-errorf-and-fmt-errorf

                                                                                                            1. 1

                                                                                                              Thanks, that’s good to know.

                                                                                                            2. 8

                                                                                                              return fmt.Errorf("fooing %s %w", bar, err) is idiomatic.

                                                                                                              1. 9

                                                                                                                Very small tweak: normally you’d include a colon between the current message and the %w, to separate error messages in the chain, like so:

                                                                                                                return fmt.Errorf("fooing %s: %w", bar, err)
                                                                                                                
                                                                                                            3. 1

                                                                                                              It makes error messages useful but if it returns a modified err then I can’t catch it further up with if err == someErr, correct?

                                                                                                              1. 2

                                                                                                                You can use errors.Is to check wrapped errors - https://pkg.go.dev/errors#Is

                                                                                                                Is unwraps its first argument sequentially looking for an error that matches the second. It reports whether it finds a match. It should be used in preference to simple equality checks

                                                                                                                1. 2

                                                                                                                  Thanks! I actually didn’t know about that.

                                                                                                                2. 2

                                                                                                                  Yes, but you can use errors.Is and errors.As to solve that problem. These use errors.Unwrap under the hood. This error chaining mechanism was introduced in Go 1.13 after being incubated in the “errors” package for a long while before that. See https://go.dev/blog/go1.13-errors for details.

                                                                                                              1. 1

                                                                                                                One thing that LLVM can’t do, is link MachO executables for Apple Silicon (the new Apple ARM chips)

                                                                                                                Wait, what? I’ve not seen this mentioned anywhere else. I’m sure the XCode version of ld is customized by Apple, but I’d be amazed if vanilla lld doesn’t work on an M1…

                                                                                                                1. 3

                                                                                                                  Apple doesn’t yet use the LLVM linker in the Apple-provided toolchains. Their own linker, ld64, does not yet have a completely finished drop-in replacement in LLVM, though one is underway. Every pair of binary format and architecture needs custom code in the linker. Apple’s linker is pretty good and so this hasn’t been a priority for anyone who works outside of Apple (just use ld64 - it works fine with LLVM LTO modes and it’s pretty fast, much faster than GNU BFD ld) or inside Apple (just use ld64, it’s their system linker). I believe the Apple folks would like to move to an lld-based linker at some point so that they’re not the only ones maintaining the linker but moving in that direction will always be lower priority than making sure that the system linker works well.

                                                                                                                  1. 1

                                                                                                                    When Jakub explained it to me, I too was amazed both at that fact, and at the horrible choice of names.

                                                                                                                  1. 19

                                                                                                                    Yesterday on Twitter, someone said:

                                                                                                                    The success of docker was always based in the fact that the most popular web technologies of that day and age sucked so bad at dependency management that “just download a tarball of a stripped down os image” seemed like a reasonable solution.

                                                                                                                    This is true, but it’s sort of more true that as TFA says,

                                                                                                                    The reason why we can often get away with using languages like Python or JavaScript to drive resource-intensive computations, is because under the hood somebody took years to perfect a C implementation of a key procedure and shared it with the world under a permissive license.

                                                                                                                    And C/C++ have an ugly Makefile where an actual dependency manager should be, which makes Docker feel like a solution and not a bandaid.

                                                                                                                    I think TFA is correct that moving forward, it’s not going to be possible to boil the ocean and throw out all existing unsafe software, but we can at least simplify things by using simpler and more direct dependency management in C/C++.

                                                                                                                    1. 29

                                                                                                                      And C/C++ have an ugly Makefile where an actual dependency manager should be, which makes Docker feel like a solution and not a bandaid.

                                                                                                                      I completely disagree. Makefile/CMake/Meson/whatever are convoluted, difficult to learn, etc but they are fundamentally different from what docker gives you. They plug in to the existing ecosystem, they compose nicely with downstream packages, they’re amenable to distro packaging, they offer well-defined, stable, and standardized interfaces for consumption. They are part of an ecosystem and are great team players.

                                                                                                        A docker image says “f this, here’s everything and the kitchen sink in the exact version and configuration that worked for me, don’t change anything, good luck maintaining dependencies when we don’t bother to update fast enough. Screw your system preferences for the behavior of dependency x, y, or z (which they rightly have no need to know about or concern themselves with - but the user very much has the right to), this is what works for me and you’re on your own if you want to diverge in the slightest.”

                                                                                                                      I write and maintain open source software (including things you might use). It’s hard to use system dependencies and abstract away our dependency on them behind well-defined boundaries. But it’s important because I respect that it’s not my machine the code will run under, it’s the users’.

                                                                                                        Docker - like Electron but let’s not get into that here - isn’t about what’s better in principle or even in practice - it’s solely about what’s easier. At some point, it was universally accepted that things should be easy for the user even if that makes the developer’s job a living hell. It’s what we do. Then sometime in the past ten years, it all became about what’s easiest and most pain-free for developers. Software development (don’t you dare say software engineering) became lazy.

                                                                                                                      We can argue about the motives but I don’t blame the developers, I think they are following a path that was paved by corporations that realized users don’t know any better and developers were their only advocates. It was cheaper to invent these alternatives that let you push software out the door faster with greener and greener developers than it was investing in the existing ecosystem and holding the industry to a higher standard. Users have no one advocating for them and they don’t even realize it.

                                                                                                                      1. 4

                                                                                                                        Software development (don’t you dare say software engineering) became lazy.

                                                                                                                        This sentiment is as old as Unix: https://en.wikipedia.org/wiki/Worse_is_better

                                                                                                                        1. 10

                                                                                                                          Docker is neither simple nor correct nor consistent nor complete in either the New Jersey or MIT sense.

                                                                                                                          I think that if the takeaway from reading Worse Is Better is that lazy software development is acceptable, then that is the incorrect takeaway. The essay is about managing complexity in order to get a rough fit sooner than a perfect fit perhaps too late to matter. Read the essay.

                                                                                                                          1. 8

                                                                                                                            I read the essay. The essay itself codifies a position that it opposes, based on the author’s observations about the state of the New Jersey/MIT split. It’s one person’s idea of what “Worse Is Better” means, with the essay created to criticize the self-defined argument, not the definitive idea. But we can split semantics about the essay some other time.

                                                                                                                            When someone says that “software development has become lazy” and adds a bunch of supporting information around that for a specific situation, what I read is “I am frustrated with the human condition”. Software developers have been lazy, are lazy, and will continue to be lazy. Much like a long-standing bug becomes an expectation of functionality. Fighting human nature results in disappointment. To ignore the human factors around your software is to be willingly ignorant. Docker isn’t popular in a vacuum and there’s no grand capitalist conspiracy to convince you that Docker is the way to make software. Docker solves real problems with software distribution. It may be a hamfisted solution, but railing against the human condition and blaming corporate interests is not the way to understand the problems that Docker solves, it’s just an ill-defined appeal to a boogeyman.

                                                                                                                            1. 8

                                                                                                                              Docker isn’t popular in a vacuum and there’s no grand capitalist conspiracy to convince you that Docker is the way to make software.

                                                                                                                              You, uh, you sure about that? Like, really sure?

                                                                                                                              1. 4

                                                                                                                                Our community’s discourse is so dominated by cynicism, we need to find a way to de-escalate that, not add fuel to the fire. So the benefit of the doubt is more important now than ever. That means that whenever there’s a benign explanation for something, we should accept it.

                                                                                                                                1. 11

                                                                                                                                  Our community is split into two groups:

                                                                                                                                  • Those exploiting software and human labor for financial gain at the expense of the Earth and its inhabitants.
                                                                                                                                  • Those engaging in craftsmanship and improving the state of technology for end users, by creating software you can love.

                                                                                                                                  Think carefully before choosing to defend the former group.

                                                                                                                                  1. 3

                                                                                                                                    I don’t think it’s that simple. I definitely feel the pull of the second group and its ideals, but sometimes the practices of the first group can be put to good use to, as you say, improve the state of technology for end-users. Consider: if there’s an unsolved problem affecting end-users, e.g. one caused by the sudden changes that happened in response to the pandemic, and the most practical way to solve that problem is to develop and deploy a web application, then if I spend time crafting an elegant, efficient solution that I would be proud to show to people here, then I’ve likely done the actual users a disservice, since I could get the solution out to them sooner by taking the shortcuts of the first group. That’s why I defend those practices.

                                                                                                                                    1. 3

                                                                                                                                      This fast-to-market argument only has a home because the world is run so much by the former group.

                                                                                                                                      Consider the case of email vs instant messaging. Email was standardized and made ubiquitous at a time before market forces had a chance to spoil it with vendor lock-in. Meanwhile, text messaging, and messaging in general is incredibly user-hostile. But it didn’t have to be this way. If messaging were orchestrated by the second group, with the end-user experience in mind as the primary concern, we would have widely popular federated messaging with robust protocols. Further, many other technologies would exist this way, with software of the world, in general, being more cooperative and reusable. In such case, total time to develop and deploy a web application would be decreased from where it is today, and furthermore it would have more capabilities to aid the end-user.

                                                                                                                                      All this “glue” code that needs to be written is not fundamentally necessary in a technical sense; it’s a direct result of the churn of venture capital.

                                                                                                                                      1. 8

                                                                                                                                        The friendliest ways of building websites, with the least amount of code, right now are things like Wix, Wordpress, cPanel, and so forth. These are all very much commercial ventures, squarely from the first camp.

                                                                                                                                        Your example of messaging is also questionable, because the successful messaging stuff was taken over by the first camp while the second camp was screwing around with XMPP and IRCv3 and all the rest.

                                                                                                                                        The killer advantage the first camp has over the craftsmen in the second camp is that they’re not worried about “quality” or “products people love”…they are worried about the more straightforward (and sustainable) goal of “fastest thing we can put out with the highest profit margin the most people want”.

                                                                                                                                        I wish–oh how much do I wish!–that the second group was favored, but they aren’t as competitive as they need to be and they aren’t as munificent or excellent as they think they are.

                                                                                                                                2. 2
                                                                                                                                  1. 5

                                                                                                                                    In my eyes that’s proof that Docker failed to build a moat more than anything else, and in fact it has greater chances to be evidence in support of friendlysock’s theory than the opposite: companies don’t go gently into the night, VC funded ones especially, so you can be sure that those billions fueled pantagruelian marketing budgets in a desperate scramble to become the leading brand for deploying distributed systems.

                                                                                                                                    Unfortunately for them the open source game didn’t play out in their favor.

                                                                                                                                    1. 4

                                                                                                                                      Unfortunately for them the open source game didn’t play out in their favor.

                                                                                                                                      I don’t think there’s any actual disagreement here; just differences about how snarky we want to be when talking about the underlying reality. Yes, Docker is a company with VC cash that had an incentive to promote its core offering. But no, Docker can’t actually make the market accept its solutions, so e.g. Docker Swarm was killed by Kubernetes.

                                                                                                                                      Okay, maybe you can say, but Kubernetes was just promoted by Google, which is an even bigger capitalist nightmare, which okay, fine is true, but at the end of the day, propaganda/capitalism/whatever you want to call it can only go so far. You can get to a certain point by just being big and hyped, but then if you aren’t actually any good, you’ll eventually end up crashing against reality, like Docker Swarm or Windows Mobile or XML or the Soviet Union or whoever else tries to substitute a marketing budget for reality.

                                                                                                                                      1. 2

                                                                                                                                        but at the end of the day, propaganda/capitalism/whatever you want to call it can only go so far.

                                                                                                                                        I do agree that containers are a solution to a problem. An imperfect solution to a problem we should not have in the first place but, regardless, it’s true that they can be a useful tool in the modern development world. That said, I fear that it’s the truth that can only go so far, and that skilled use of a communication medium can produce much bigger impact in the short to medium term.

                                                                                                                                    2. 5

                                                                                                                                      That article suggests they raised more than a quarter of a billion dollars, and then talks about how they lost to the even more heavily propagandized (by Google) Kubernetes meme when they couldn’t figure out how to monetize all the victims. Neither of those seems a clear counter to there being a vast capitalist conspiracy.

                                                                                                                                      Like, devs get memed into dumb shit all the time by sales engineers.

                                                                                                                                      If they didn’t there wouldn’t be devrel/devangelist positions.

                                                                                                                                      Edit:

                                                                                                                                      (and just to be clear…I’m not denying that Docker has some use cases. I myself like it for wrapping up the seeping viscera of Python projects. I’m just disagreeing that it was from some spontaneous outpouring of developer affection that it got where it is today. See also, Java and React.)

                                                                                                                                      1. 4

                                                                                                                                        Like, devs get memed into dumb shit all the time by sales engineers.

                                                                                                                                        If they didn’t there wouldn’t be devrel/devangelist positions.

                                                                                                                                        Yeah, true enough based on my experience as a former dev advocate.

                                                                                                                                        1. 2

                                                                                                                                          Neither of those seems a clear counter to there being a vast capitalist conspiracy.

                                                                                                                                          There can’t be two vast capitalist conspiracies. If there are two, it’s not a vast conspiracy. Calling it a “capitalist conspiracy” either means that there is only one or that you like using snarky names for perfectly ordinary things.

                                                                                                                                          1. 2

                                                                                                                                            I would call a conspiracy of half the capitalists pretty vast, FWIW.

                                                                                                                                3. 2

                                                                                                                                  Yes. But that was only the conclusion of my argument; I think it’s fair to say that the actual points I was making regarding dependencies are pretty objective/factual and specific to the docker situation.

                                                                                                                                4. 2

                                                                                                                                  While I agree and am loath to defend Docker in any way, if instead of a Docker image we were talking about a Dockerfile, then that is comparable to a build system that also declares its dependencies.

                                                                                                                                  1. 2

                                                                                                                                    I completely disagree. Makefile/CMake/Meson/whatever are convoluted, difficult to learn, etc but they are fundamentally different from what docker gives you.

                                                                                                                                    Agreed.

                                                                                                                                    They plug in to the existing ecosystem, they compose nicely with downstream packages, they’re amenable to distro packaging, they offer well-defined, stable, and standardized interfaces for consumption.

                                                                                                                                    I disagree. The interfaces aren’t stable or standardized at all. Distros put a huge amount of effort into plugging holes in the leaking dam, but the core problem is that Make is a Turing-complete language with extreme late binding of symbols. The late binding makes it easy to write a Makefile that works on one machine but not another, and adding more layers of autoconf and whatnot does not really solve the core problem.

                                                                                                                                    The thing C/C++ build systems are trying to do is… not actually that hard at all? It’s just compiling and linking files and trying to cache stuff along the way. Newer languages just include this as part of their core toolchain. But because every C/C++ project has its own Turing-complete bespoke solution, they are incompatible and can’t be moved to new/different platforms without a ton of human effort. It’s a huge ongoing PITA for everyone.
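                                                                                                                                    To illustrate the late-binding problem, here is a small hypothetical Makefile fragment (libfoo and the paths are invented for illustration):

                                                                                                                                    ```make
                                                                                                                                    # Recursively-expanded (=) variables are resolved at use time, not at
                                                                                                                                    # definition time, so CFLAGS quietly depends on whatever PREFIX turns
                                                                                                                                    # out to be -- even though PREFIX is defined further down.
                                                                                                                                    CFLAGS = -O2 -I$(PREFIX)/include

                                                                                                                                    # $(shell ...) bakes in whatever this particular machine answers, so
                                                                                                                                    # the "same" Makefile can produce different builds on different hosts.
                                                                                                                                    PREFIX = $(shell pkg-config --variable=prefix libfoo 2>/dev/null || echo /usr/local)

                                                                                                                                    hello: hello.c
                                                                                                                                    	$(CC) $(CFLAGS) -o $@ $<
                                                                                                                                    ```

                                                                                                                                    Nothing here is an error as far as Make is concerned; the build just silently differs depending on the host environment.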

                                                                                                                                    The thing that would actually be good is to standardize a declarative, non-Turing-complete configuration language that can just describe dependencies between files and version constraints. If you had that (big if!), then it wouldn’t be a big deal to move to new platforms, deal with platforms changing, etc. Instead we get duplication, where every distro does its own work to fill in the gaps by being the package manager that C/C++ need.
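                                                                                                                                    To make the idea concrete, such a manifest might look something like this (an entirely hypothetical sketch: the format, field names, and the libfoo dependency are all invented):

                                                                                                                                    ```toml
                                                                                                                                    [package]
                                                                                                                                    name = "mytool"
                                                                                                                                    version = "1.2.0"

                                                                                                                                    [dependencies]
                                                                                                                                    libfoo = ">=2.4, <3"   # version constraints a distro tool could solve
                                                                                                                                    zlib   = "*"

                                                                                                                                    [build]
                                                                                                                                    sources  = ["src/main.c", "src/util.c"]
                                                                                                                                    include  = ["include"]
                                                                                                                                    artifact = "bin/mytool"
                                                                                                                                    ```

                                                                                                                                    Because nothing in it is Turing complete, any distro or platform could parse it and map it onto its own packaging and build conventions mechanically.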

                                                                                                                                    1. 2

                                                                                                                                      Sorry if I wasn’t clear: the abstracted interfaces I’m referring to aren’t provided by the Makefile or whatever. I meant standardized things like pkgconf definition files in their place, man files in their place, using the packages made available by the system package manager rather than bringing in your own deps, etc.
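                                                                                                                                      For example, the pkgconf/pkg-config interface mentioned here is just a small text file installed at a known location (sketched below for a hypothetical libfoo):

                                                                                                                                      ```
                                                                                                                                      # /usr/lib/pkgconfig/libfoo.pc (hypothetical library)
                                                                                                                                      prefix=/usr
                                                                                                                                      libdir=${prefix}/lib
                                                                                                                                      includedir=${prefix}/include

                                                                                                                                      Name: libfoo
                                                                                                                                      Description: Example library
                                                                                                                                      Version: 2.4.1
                                                                                                                                      Libs: -L${libdir} -lfoo
                                                                                                                                      Cflags: -I${includedir}
                                                                                                                                      ```

                                                                                                                                      Consumers then ask `pkg-config --cflags --libs libfoo` instead of hard-coding paths, which is what makes the interface stable across distros.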

                                                                                                                                1. 6

                                                                                                                                  I agree with the sentiment that rewrite-it-in-X is not viable for many software projects, though I can’t fully agree with the reasoning for Zig over Rust. Rust’s support for interfacing with C is really good, and for C++ there is https://cxx.rs emerging. Don’t get me wrong, I like Zig and want it to succeed; it’s just that for this specific purpose Rust might be the better target because of the guaranteed absence of undefined behavior in safe Rust code. See e.g. https://daniel.haxx.se/blog/2020/10/09/rust-in-curl-with-hyper/ for how curl allows itself to be configured with certain Rust-based components.

                                                                                                                                  1. 15

                                                                                                                                    The post is almost in its entirety about using Zig to compile (zig cc) and build (zig build) C/C++ projects. This is not something that Rust intends to offer and has very little to do with interfacing with C. It’s about being a C/C++ compiler.
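                                                                                                                                    For instance, assuming a Zig toolchain is installed, the documented `zig cc` frontend works as a drop-in C compiler, including for cross-compilation:

                                                                                                                                    ```
                                                                                                                                    # Compile a C file natively, as a drop-in replacement for cc/gcc/clang:
                                                                                                                                    zig cc -o hello hello.c

                                                                                                                                    # Cross-compile the same file for aarch64 Linux against musl libc,
                                                                                                                                    # with no separately installed cross toolchain or sysroot:
                                                                                                                                    zig cc -target aarch64-linux-musl -o hello hello.c
                                                                                                                                    ```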

                                                                                                                                    The only part where I mention extending a C project with Zig I also mention that you can do the same with Rust.

                                                                                                                                    1. 8

                                                                                                                                      Rust has a weaker build.rs and cc crate, but even this is often sufficient to throw away the C build system and Ship-Of-Theseus it in Rust.
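                                                                                                                                      As a sketch of that approach (assuming a Cargo project with `cc` declared under `[build-dependencies]`, and a hypothetical leftover C file at src/legacy.c):

                                                                                                                                      ```rust
                                                                                                                                      // build.rs -- compile a remaining C file as part of the Rust build.
                                                                                                                                      fn main() {
                                                                                                                                          cc::Build::new()
                                                                                                                                              .file("src/legacy.c")   // hypothetical C source still in the tree
                                                                                                                                              .include("include")     // where its headers live
                                                                                                                                              .compile("legacy");     // builds liblegacy.a and links it in
                                                                                                                                          // Rebuild only when the C source actually changes.
                                                                                                                                          println!("cargo:rerun-if-changed=src/legacy.c");
                                                                                                                                      }
                                                                                                                                      ```

                                                                                                                                      Each C file migrated to Rust is then just one fewer `.file(...)` call, which is what makes the Ship-of-Theseus strategy workable.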

                                                                                                                                      1. 2

                                                                                                                                        Indeed, my comment was tangential and in regards to

                                                                                                                                        Instead of running away from the C/C++ ecosystem, we must find a way of moving forward that doesn’t start by throwing in the trash everything that we have built in the last 40 years.

                                                                                                                                        with which I very much agree. If you talk about this, then of course the question of C/C++ interop comes up. In my own cursory attempts at making Zig talk to C libs, I ran into problems pretty quickly trying to consume Win32 APIs, as the header files could not be parsed. This can of course be worked around and fixed, but it is pretty central to the question. (And Rust’s cbindgen has its limits as well, of course.) My understanding is also that Zig provides no direct way to call into C++, which is undoubtedly a tricky subject but also quite central to the topic.

                                                                                                                                        But back to the build system: my very incomplete understanding of the redis-command example that you provide is that it also required reworking some of the Makefiles. This is expected of course, but I doubt a little that you can just throw Zig into a decades-old codebase of any size without issues. It might still be easier than integrating with Rust/cargo, but there is work there either way. And so far, having a custom build.rs that uses the cc and/or cmake crate has provided good build support for the things I attempted.

                                                                                                                                        And don’t get me wrong, I like zig and the approach it takes. The compile time meta programming is really cool and the focus on a fast and versatile toolchain is great. I definitely want it to succeed and to offer a real “systems programming” alternative.

                                                                                                                                    1. 6

                                                                                                                                      We have already seen how disruptive changing language can be when the Python cryptography package added a Rust dependency which in turn changed the list of supported platforms and caused a lot of community leaders to butt heads.

                                                                                                                                      My understanding of the situation was that it only broke the package on unsupported platforms, that others were unofficially supporting downstream. Said others also missed the warning on the mailing list months in advance (AFAICT because they simply weren’t following it and/or it wasn’t loud enough), and frankly that’s kind of alarming given that it’s a security package.

                                                                                                                                      Link to the previous discussion of this whole controversy: https://lobste.rs/s/f4chm2/dependency_on_rust_removes_support_for

                                                                                                                                      1. 5

                                                                                                                                        Since a few people on HN misinterpreted the purpose of mentioning that example, I’ll preemptively quote here my reasoning behind its addition to the article:

                                                                                                                                        The point about the Python package example is not to say that Zig can get on platforms where Rust can’t, but rather that the C infrastructure that we all use is not that easy to replace and every time you touch something, regardless of how decrepit and broken it might have been, you will irritate and break someone else’s use case, which can be a necessary evil sometimes but not always.

                                                                                                                                        1. 3

                                                                                                                                          I’m still not sure you’ve chosen a good example – that Python cryptography package was already de facto broken on those unsupported platforms (in the sense that crypto the developers never even attempted to make work correctly with the build dependencies in use should never have been trusted to encrypt anything) long before Rust ever showed up on the scene. The entire controversy around that package was about people mistaking “can be coerced into compiling on” for “works correctly on”, and thereby assuming, and foisting on to end users, a lot of ill-conceived, dangerous risks.

                                                                                                                                          “Zig would have allowed people to keep lying to themselves in this case” seems…uncompelling. There must be better examples & arguments that could be made here? Not “irritating and breaking” somebody’s broken, dangerous usecase is just not a selling point. It’s rather the opposite if you’re a package author who would prefer that people didn’t continue to build an unsupported footgun for users to shoot themselves with out of your work.

                                                                                                                                          And this all misses teh_cyanz’s larger point, which is that the quote

                                                                                                                                          changed the list of supported platforms

                                                                                                                                          is simply incorrect. It continued to build on all supported platforms. It broke the build for some people on platforms that had never been supported (this was explicitly stated by the maintainer), but who had just happened to hack together builds that may or may not have ever even worked correctly.

                                                                                                                                      1. 2

                                                                                                                                        Standups are a tool to make tactical decisions between team members working on the same set of tasks, where one’s progress might depend on someone else’s. Unless you have this precise situation happening, nothing can be gained by having standup meetings.

                                                                                                                                        Frankly, based on my experience, this post feels like a knee-jerk reaction against the agile fad. While it correctly identifies that there is a lot of superficiality around those topics, just being a contrarian doesn’t really make you any better.

                                                                                                                                        I wrote about standups a while ago: https://kristoff.it/blog/good-bad-ugly-standup/

                                                                                                                                        1. 3

                                                                                                                                          This was really cool. I’ll need to watch that a few times to understand Zig’s callsite behavior. I’ve never seen that design before, but it seems nice to have this kind of flexibility.

                                                                                                                                          1. In Zig, if you naively wrote an async function and then called it without async, would it behave as expected synchronously?

                                                                                                                                          2. What are the historical influences on each language’s async design? What was the first language with Zig’s callsite behavior?

                                                                                                                                          1. 3
                                                                                                                                            1. Calling without async means that you await the function immediately.
                                                                                                                                            2. Not sure, but to make this work, Zig uses comptime laziness a lot, and I don’t know of other languages that have the same approach to metaprogramming.
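                                                                                                                                            A rough sketch of point 1 (untested, and based on Zig’s async semantics as of roughly the 0.6–0.8 era; `double` is a made-up function):

                                                                                                                                            ```zig
                                                                                                                                            fn double(x: i32) i32 {
                                                                                                                                                return x * 2;
                                                                                                                                            }

                                                                                                                                            pub fn main() void {
                                                                                                                                                // Plain call: even if double contained a suspend point, calling it
                                                                                                                                                // without `async` runs it to completion (an implicit await).
                                                                                                                                                const a = double(2);

                                                                                                                                                // Explicit async call: returns a frame that is awaited separately.
                                                                                                                                                var frame = async double(3);
                                                                                                                                                const b = await frame;

                                                                                                                                                _ = a;
                                                                                                                                                _ = b;
                                                                                                                                            }
                                                                                                                                            ```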

                                                                                                                                            If you want to read an introductory post to Zig’s Async/Await, I wrote this a while ago: https://kristoff.it/blog/zig-colorblind-async-await/