Threads for riki

    1. 24

      I was hoping this article would compare if err != nil to more modern approaches (Rust’s ?) and not just Java-style exceptions, but unfortunately it doesn’t.

      I’d be more interested to read an article that weighs the approaches against each other.


      One point the article misses is how value-based error handling works really nicely when you don’t have constructors (either in your language, or at least your codebase, in case you’re using C++.)

      1. 8

        I’ve been pretty disappointed by Rust’s approach to error handling. It improves upon two “problems” in Go which IMHO are not actually problems in practice: if err != nil boilerplate and unhandled return values, but making good error types is fairly hard–either you manually maintain a bunch of implementations of the Error trait (which is a truly crushing amount of boilerplate) or you use something like anyhow to punt on errors (which is generally considered to be poor practice for library code) or you use some crate that generates the boilerplate for you via macros. The latter seems idyllic, but in practice I spend about as much time debugging macro errors as I would spend just maintaining the implementations manually.

        In Go, the error implementation is just a single method called Error() that returns a string. Annotating that error, whether in a library or a main package, is just fmt.Errorf("calling the endpoint: %w", err). I don’t think either of them does a particularly good job of automating stack traces (I’m less sure about Rust), and Go in particular does not have a good solution for getting more context out of the error beyond the error message–specifically parameter values: if I’m passing a bunch of identifiers, filepaths, etc. down the call stack that would be relevant for debugging, you have to pack them into the error message, and they often show up several times in the message or not at all, because few people have a good system for attaching that metadata exactly once.
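
        To make that concrete, the whole story on the Go side looks roughly like this (a minimal sketch; the endpoint and error names are made up):

        package main

        import "fmt"

        // NotFoundError is a hypothetical error type: the entire "implementation"
        // is the single Error() string method.
        type NotFoundError struct{ Key string }

        func (e *NotFoundError) Error() string {
            return fmt.Sprintf("key %q not found", e.Key)
        }

        func callEndpoint(key string) error { // stand-in for a real call
            return &NotFoundError{Key: key}
        }

        func fetch(key string) error {
            if err := callEndpoint(key); err != nil {
                // Annotate with context; %w keeps the wrapped error reachable
                // for errors.Is / errors.As further up the stack.
                return fmt.Errorf("calling the endpoint: %w", err)
            }
            return nil
        }

        func main() {
            fmt.Println(fetch("user-42")) // calling the endpoint: key "user-42" not found
        }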

        A lot of people have a smug sense of superiority about their language’s approach to error handling, which (beyond the silliness of basing one’s self-esteem on some programming language feature) always strikes me as silly because even the best programming languages are not particularly good at it, or at least not as good as I imagine it ought to be.

        1. 9

          a bunch of implementations of the Error trait (which is a truly crushing amount of boilerplate)

          you usually just need to impl Display, which I wouldn’t call a “crushing” amount of boilerplate

          or you use some crate that generates the boilerplate for you via macros.

          thiserror is pretty good, although tbqh just having an enum Error and implementing display for it is good enough. I’ve done some heavy lifting with error handling before but that’s usually to deal with larger issues, like making sure errors are Clone + Serialize + Deserialize and can keep stacktraces across FFI boundaries.

          1. 3

            It’s pretty rarely “just” impl Display though, right? If you want automatic conversions from some upstream types you need to implement From, for example. You could not do it, but then you’re shifting the boilerplate to every call site. Depending on other factors, you likely also need Debug and Error. There are likely others as well that I’m not thinking about.

            1. 3

              #[derive(Debug)] and impl Display makes the impl of Error trivial (impl Error for E {}). If you’re wrapping errors then you probably want to implement source(). thiserror is a nice crate for doing everything with macros, and it’s not too heavy so the debugging potential is pretty low.

              One advantage of map_err(...) everywhere instead of implementing From is that it gives you access to file!() and line!() macros so you can get stack traces out of your normal error handling.

              1. 4

                thiserror is a nice crate for doing everything with macros, and it’s not too heavy so the debugging potential is pretty low.

                I’ve used thiserror and a few other crates, and I still spend a lot more time than I’d like debugging macro expansions. To the point where I waffle between using it and maintaining the trait implementations by hand. I’m not sure which of the two is less work on balance, but I know that I spend wayyy more time trying to make good error types in Rust than I do with Go (and I’d like to reiterate that I think there’s plenty of room for improvement on Go’s side).

                One advantage of map_err(…) everywhere instead of implementing From is that it gives you access to file!() and line!() macros so you can get stack traces out of your normal error handling.

                Maybe I should try this more. I guess I wish there was clear, agreed-upon guidance for how to do error handling in Rust. It seems like lots of people have subtly different ideas about how to do it–you mentioned just implementing Display while others encourage thiserror and someone else in this thread suggested Box<dyn Error> while others suggest anyhow.

                1. 2

                  The rule of thumb I’ve seen is anyhow for applications, and thiserror for libraries, or your own custom error type if thiserror doesn’t fit your needs (for example, needing clone-able or serializable errors, stack traces, etc.). Most libraries I’ve seen either use thiserror if they’re wrapping a bunch of other errors, or just have their own error type, which is usually not too complex.

          2. 8

            a smug sense of superiority about their language’s approach to error handling

            Surprisingly, you don’t see people mention Common Lisp’s condition system in these debates

            1. 3

              That’s too bad, I genuinely enjoy learning about new (to me) ways of solving these problems, I just dislike the derisive fervor with which these conversations take place.

            2. 6

              You discount anyhow as punting on errors, but Go’s Error() with a string is the same strategy.

              If you want that, you don’t even need anyhow. Rust’s stdlib has Box<dyn Error>. It supports From<String>, so you can use .map_err(|err| format!("calling the endpoint: {err}")). There’s downcast() and .source() for chaining errors and getting errors with data, if there’s more than a string (but anyhow does that better with .context()).

              1. 2

                Ah, I didn’t know about downcast(). Thanks for the correction.

              2. 3

                One source of differences in different languages’ error handling complexity is whether you think errors are just generic failures with some human-readable context for logging/debugging (Go makes this easy), or you think errors have meaning that should be distinguishable in code and handled by code (Rust assumes this). The latter is inherently more complicated, because it’s doing more. You can do it either way in either language, of course, it’s just a question of what seems more idiomatic.

                1. 1

                  I don’t think I agree. It’s perfectly idiomatic in Go to define your own error types and then to handle them distinctly in code up the stack. The main difference is that Rust typically uses enums (closed set) rather than Go’s canonical error interface (open set). I kind of think an open set is more appropriate because it gives upstream functions more flexibility to add error cases in the future without breaking the API, and of course Rust users can elect into open set semantics–they just have to do it a little more thoughtfully. The default in Go seems a little more safe in this regard, and Go users can opt into closed set semantics when appropriate (although I’m genuinely not sure off the top of my head when you need closed set semantics for errors?). I’m sure there are other considerations I’m not thinking of as well–it’s interesting stuff to think about!

                  1. 3

                    Maybe “idiomatic” isn’t quite the right word and I just mean “more common”. As I say, you can do both ways in both languages. But I see a lot of Go code that propagates errors by just adding a string to the trace, rather than translating them into a locally meaningful error type. (E.g.,

                    return fmt.Errorf("Couldn't do that: %w", err)
                    

                    so the caller can’t distinguish the errors without reading the strings, as opposed to

                    return &ErrCouldntDoThat{err} // or equivalent
                    

                    AFAIK the %w feature was specifically designed to let you add strings to a human-readable trace without having to distinguish errors.

                    Whereas I see a lot of Rust code defining a local error type and an impl From to wrap errors in local types. (Whether that’s done manually or via a macro.)

                    Maybe it’s just what code I’m looking at. And of course, one could claim people would prefer the first way in Rust, if it had a stdlib way to make a tree of untyped error strings.

                    1. 2

                      But I see a lot of Go code that propagates errors by just adding a string to the trace, rather than translating them into a locally meaningful error type

                      Right, we usually add a string when we’re just passing it up the call stack, so we can attach contextual information to the error message as necessary (I don’t know why you would benefit from a distinct error type in this case?). We create a dedicated error type when there’s something interesting that a caller might want to switch on (e.g., resource not found versus resource exists).

                      AFAIK the %w feature was specifically designed to let you add strings to a human-readable trace without having to distinguish errors.

                      It returns a type that wraps some other error, but you can still check the underlying error type with errors.Is() and errors.As(). So I might have an API that returns *FooNotFoundErr and its caller might wrap it in fmt.Errorf("fetching foo: %w", err), and the toplevel caller might do if errors.As(err, &fooNotFoundErr) { return http.StatusNotFound }.
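
                      Spelled out end to end, that flow looks roughly like this (a compilable sketch with made-up names):

                      package main

                      import (
                          "errors"
                          "fmt"
                          "net/http"
                      )

                      // FooNotFoundErr is the hypothetical concrete type the storage layer returns.
                      type FooNotFoundErr struct{ ID string }

                      func (e *FooNotFoundErr) Error() string { return "foo " + e.ID + " not found" }

                      func loadFoo(id string) error { // stand-in for the lower-level API
                          return &FooNotFoundErr{ID: id}
                      }

                      func fetchFoo(id string) error {
                          if err := loadFoo(id); err != nil {
                              // The middle layer only annotates; the concrete type stays reachable.
                              return fmt.Errorf("fetching foo: %w", err)
                          }
                          return nil
                      }

                      func statusFor(err error) int {
                          var nf *FooNotFoundErr
                          if errors.As(err, &nf) { // unwraps through the fmt.Errorf layer
                              return http.StatusNotFound
                          }
                          if err != nil {
                              return http.StatusInternalServerError
                          }
                          return http.StatusOK
                      }

                      func main() {
                          fmt.Println(statusFor(fetchFoo("42"))) // 404
                      }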

                      Whereas I see a lot of Rust code defining a local error type and an impl From to wrap errors in local types. (Whether that’s done manually or via a macro.)

                      I think this is just the open-vs-closed set thing? I’m curious where we disagree: In Go, fallible functions return an error which is an open set of error types, sort of like Box<dyn Error>, and so we don’t need a distinct type for each function that represents the unique set of errors it could return. And since we’re not creating a distinct error type for each fallible function, we may still want to annotate it as we pass it up the call stack, so we have fmt.Errorf() much like Rust has anyhow! (but we can use fmt.Errorf() inside libraries as well as applications precisely because concrete error types aren’t part of the API). If you have to make an error type for each function’s return, then you don’t need fmt.Errorf() because you just add the annotation on your custom type, but when you don’t need to create custom types, you realize that you still want to annotate your errors.

                      1. 1

                        This is true; usually you create a specific error type on the fly when you realize that the caller needs to distinguish it.

                  2. 3

                    I tend to agree that Rust’s error handling is both better and worse. In day-to-day use I can typically get away with anyhow or dyn Error, but it’s honestly a mess, and one that I really dread when it starts barking at me.

                    On the other hand… I think being able to chain ‘?’ blocks is a god send for legibility, I think Result is far superior to err.

                    I certainly bias towards Rust’s overall, but it’s got real issues.

                    1. 5

                      There is one thing to be said against ?: it does not encourage the addition of contextual information, which can make diagnosing an error more difficult when e.g. it gets expect-ed (or logged out) half a dozen frames above with no indication of the path it took.

                      However, that is hardly unsolvable. You could have e.g. ?("text") which wraps with text and returns, and ?(unwrapped) which returns directly (the keyword being there to encourage wrapping; one could even imagine extending this to more keywords, e.g. ?(panic) would be your unwrap).

                      1. 1

                        In a chain I’ll just map_err, which looks and works well as soon as the chain is multiline. Inline it’s not excellent, ha.

                        1. 1

                          Oh yeah I’m not saying it’s not possible to decorate things (it very much is), just pointing out that the incentives are not necessarily in that direction.

                          If I was a big applications writer / user of type-erased errors, I’d probably add a wrapping method or two to Result (if I was to use “raw” boxed error, as IIRC anyhow has something like that already).

                  3. 4

                    I’ve often wondered if people would like Java exceptions more if it only supported checked exceptions. You still have the issue of exceptions being a parallel execution flow / go-to, but you lose the issue of random exceptions crashing programs. In my opinion it would make the language easier to write, because the compiler would force you to think about all the ways your program could fail at each level of abstraction. Programs would be more verbose, but maybe it would force us to think more about exception classes.

                    Tl;Dr Java would be fine if we removed RuntimeException?

                    1. 12

                      You’d need to make checked exceptions not horrendous to use to start with, e.g. genericity, exception transparency, etc.

                      It would also necessarily yield a completely different language; consider what would happen if NPEs were checked.

                      1. 1

                        consider what would happen if NPEs were checked.

                        Basically, Kotlin, so yeah, totally agree with you.

                      2. 6

                        No, Go has unchecked exceptions. They’re called “panics”.

                        What makes Go better than Java is that you return the error interface instead of a concrete error type, which means you can add a new error to an existing method without breaking all your callers and forcing them to update their own throws declarations.

                        The creator of C# explains the issue well here: https://www.artima.com/articles/the-trouble-with-checked-exceptions#part2

                        1. 3

                          If all you want is an “error interface”, though, you can just throw Exception (or even a generic) in Java just fine.

                          Java’s problem with checked exceptions is simply that checked exceptions would probably require effect types to be ergonomic.

                      3. 4

                        Looks like it’s been updated:

                        Rust, for example, has a good compromise of using option types and pattern matching to find error conditions, leveraging some nice syntactic sugar to achieve similar results.

                        I’m also personally quite fond of error handling in Swift.

                        1. 2

                          Rust, Zig, and Swift all have interesting value-oriented results. Swift more so since it added, well, Result and the ability to convert errors to that.

                          1. 5

                            Zig’s not really value oriented. It’s more like statically typed error codes.

                      4. 13

                        I like this project. I held hands with a friend, and it made me contemplate the nature of this simple gesture. Something that’s natural to do with your lover, but awkward to do with a friend.

                        It was an oddly comforting experience. We should let ourselves hold hands more irl. And hug each other more, too. They’re nice feelings.

                        1. 45

                          The qmark-noglob feature, introduced in fish 3.0, is enabled by default. That means ? will no longer act as a single-character glob.

                          Hell yeah! This is gonna make pasting links into the shell so much easier ^^

                          1. 30

                            I think I’ve used ? as a shell metacharacter on purpose about twice in my entire life. I strongly agree that dropping it is nice.

                            1. 3

                              I wonder if it would work well to use bracketed paste for something like this. If you are pasting something that is mostly text but has a wildcard or two, auto-escape it. I can also imagine similar features, like if you type a quote, then paste, it will auto-escape any quotes in the pasted text.

                              It would probably do what you want most of the time but would probably have false-positives commonly enough that it would be negative overall. But maybe there are specific cases that are clear enough to be handled (like pasting something that starts with “https:” or immediately after a single quote).

                              1. 11

                                I can also imagine similar features like if you type a quote, then paste, it will auto-escape any quotes in the pasted text.

                                Fish 3.7.1 already does this.

                                1. 2

                                  Kitty just asks you every time.

                                  But I think a control sequence which pastes raw would also work.

                                2. 2

                                  Wouldn’t many URLs still contain the & character which would incorrectly break off the URL part-way and spawn background jobs that would almost certainly fail?

                                  1. 7

                                    No, echo a&b works for me in fish, as does echo a&b://%20?q. I think fish might require a space before and/or after the & for it to create a background process.

                                    In bash, echo a&b does not work.

                                3. 3

                                  I’ve been trying to make a game for the past few years. Every single time I’d fall into the trap of overengineering and getting bored of not having a nice toy to play around with.

                                  So this year I’m making something dumb. Like, really dumb. A renderer full of global variables, static arrays everywhere, a single Entity struct with data for all types of entities at once.

                                  It’s really fun. My dumb, ugly code is fun, easy to navigate, easy to read, easy to maintain. And I have a nice interactive toy to play around with on the screen. A level editor, a player I can move around the level, a ball I can toss. It reacts to what I’m doing!

                                  I should write more ugly code.

                                  1. 1

                                    I wonder how hard it is to make those sorts of mods. The PS Vita’s Bluetooth support is notoriously unreliable - maybe it could be fixed with a software patch too?

                                    1. 3

                                      I tried porting VVVVVV to the PSP once. It was hella hard, because there’s barely any documentation for the APIs that were reverse engineered by homebrewers.

                                      No API docs, no source code. Now imagine you’re not writing a game, but modifying the system. Sounds like something that’s not for the faint of heart.

                                      Could be that PS Vita has better docs, but I don’t own one so I haven’t checked.

                                    2. 7

                                      I mostly agree, but aren’t a few of these (a little bit) mutually exclusive? Like

                                      We are destroying software telling new programmers: “Don’t reinvent the wheel!”. But, reinventing the wheel is how you learn how things work, and is the first step to make new, different wheels.

                                      versus

                                      We are destroying software pushing for rewrites of things that work.

                                      Like obviously they’re not exact opposites, but re-writing something that works is re-inventing the wheel.

                                      1. 2

                                        I feel like this isn’t a binary choice. There’s a balance that can be had here.

                                      2. 4

                                        C++17 string_view came late to the party, but the same applies: Every function that takes a const std::string& should take a std::string_view instead. There is just much more inertia and concern among C++ devs. The main concern I’ve heard – that C++ doesn’t have a borrow checker – isn’t relevant, because both types are non-owning anyway.

                                        1. 7

                                          I hear you re const std::string&, but it’s a bit more complex than that sometimes. That is, std::string is null terminated, and std::string_view is not, so if you’re interacting with libraries that expect C strings, this isn’t always possible.

                                          That said, I agree that the borrow checker argument is a red herring.

                                          Also, things are a little dicier for std::span than std::string_view, at least until C++26 is out. The former doesn’t gain .at until then, whereas the latter has had it since it was introduced.

                                          1. 4

                                            Another limitation of std::string_view is you cannot use std::unordered_map<std::string, int>::operator[] with an std::string_view as an argument. There is no equivalent of Rust’s Borrow<T> in C++, and as such looking up a string in a map forces you to allocate a full std::string.

                                            1. 2

                                              C++20 sort of fixed this, but with suboptimal ergonomics: defining is_transparent on the hasher and comparer (which are not the default, and probably never will be) enables find(), contains(), etc. overloads that don’t need to construct a string just for lookup. For some reason, the corresponding at() and operator[] overloads weren’t added until C++26.

                                              https://en.cppreference.com/w/cpp/container/unordered_map/find

                                            2. 3

                                              Honestly, when I need a 0-terminated string, I always take a const char * and document that it must be 0-terminated. It would have been nice to have a null_terminated_string_view type just to get the implicit cast from std::string, but calling .c_str() isn’t that bad. I typically don’t take a const std::string & because I don’t know if the caller has a std::string or maybe some stack buffer or something else; why should I require that the caller creates a std::string when all I care about is the pointer to null-terminated string?

                                              1. 2

                                                Yeah, I can totally see that.

                                          2. 15

                                            I think one point a lot of these discussions miss is that we don’t need a single tool that is good for everything. There are perfectly reasonable use cases both for a tool like Zig and a tool like Rust. I wouldn’t use Rust for game development. Zig on the other hand, is a much more compelling language for me in that regard. Which doesn’t prevent me from writing a game in C++, because I consider Zig not quite stable enough yet for my project.

                                            Use cases matter.

                                            1. 13

                                              This reminded me of the famous Guy Steele quote about his work on the Java spec:

                                              We were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp.

                                              I don’t think Zig is out to win over the Rust programmers; it’s going after the C++ programmers. In fact, I’m a little surprised that someone with extensive Haskell background, currently working in Rust, would take this level of interest in Zig. I’m not surprised at his conclusion, and I doubt that it will deter many prospective Zig programmers.

                                              1. 8

                                                Why is that surprising you? I’m a Haskeller in the heart; Rustacean in the mind, but what defines me is not the languages I like the most. I keep putting things into question, and there is no doubt that, someday, Rust will be dethroned by a more correct approach, I’m 100% sure. And same with Haskell. So it’s a necessity to keep your eyes open and try out new things, even if you have an initial negative a priori. Being curious and passionate about programming probably helps, and it also explains why I have very, very little patience with old C programmers who refuse to learn something new — but it’s a personality difference and there’s nothing to judge there.

                                                1. 3

                                                  It’s an admirable attitude, and I wish it were more common. But it seems like, for most people, having climbed the learning curve of an ML-derived type system, there’s little or no desire to come back. I moved from Haskell to Scala, myself, at least professionally. If I take on learning another big language at this point, it will almost certainly be Rust, although I’ve had an eye on Ocaml for a while. I know I wouldn’t want to go back to weaker type systems for “serious” work. If I had taken a different career path and come up through C and C++, it’s hard to say what my preferences would be.

                                                  At the same time, I retain a love for Scheme and I still get a lot of faster and more ephemeral work done in Python, so I see the value of “simple”. I just wouldn’t want to have to ship production code without strong compiler support for correctness.

                                                  1. 3

                                                    Yeah, honestly, I have never found a language any better than Haskell, and I’ve been coding in Haskell for even longer than Rust (around 13 years now). Sometimes I read articles about Haskell being faster than Rust or even C for the same amount of time spent on the code (which is expected; GHC is an amazing piece of software), and I have to resist the urge to go back to it. :)

                                                    1. 2

                                                      I definitely have my issues with Haskell, but they’re more about the culture and library ecosystem than the language itself. I wouldn’t have chosen Scala, all else being equal, but it’s hard to get paid to write Haskell.

                                                      1. 2

                                                        I had that chance (in France) once and it was probably my best professional experience — because of the language, but also because I think the average skillset of Haskellers is way higher than that of the engineering population at large, so working with talented and interesting teammates — who all became friends — was an excellent part of my career.

                                                        What don’t you like about the Haskell culture?

                                                        1. 3

                                                          Lackluster tooling, fragmented library ecosystem, lots of cool but abandoned packages, nobody can agree on GHC extensions or custom Preludes or even a consistent style. Kinda like C++ in that way! And the “avoid success at all costs” attitude is pretty deeply embedded. I didn’t even think about that when I was in school, learning the language. It’s an academic culture at its core, with a crust of industry doing plate tectonics on the surface.

                                                          Maybe this was just local circumstance. I was working at a Haskell shop from 2016 through 2021, so the cabal / stack drama was at its height. Also I saw a lot of smart people chasing after shiny things but without much engineering discipline. It was kind of a turnoff, after I realized that I didn’t want to go down that PhD track. Learned a lot though, and it deeply influenced how I think about programming.

                                                2. 4

                                                  I don’t think Zig is out to win over the Rust programmers; it’s going after the C++ programmers.

                                                  I’m skeptical that a language without an answer to RAII-based automatic cleanup will win over many programmers of C++ dialects other than “C with classes”. It looks more like an attempt to win over C programmers to me.

                                                  1. 4

                                                    Andrew himself has been quite explicit that he created Zig to replace C++ for his own use. And I believe he has pushed back against the idea that Zig is just a “C replacement” or “for C programmers”.

                                                    1. 6

                                                      Huh. OK then. I guess it’s a C++ replacement in the same way that Go is a C++ replacement… very tuned to the needs of a specific user of C++.

                                                      1. 3

                                                        As a C programmer I’m still waiting for a new language written for me and not C++ programmers.

                                                        1. 4

                                                          Hare seems to be quite a bit more C-style than Zig, and if that is too much like C++ for you, then I don’t know if there will ever be a new language for you, because it screams “C but with some hindsight” to me.

                                                          1. 1

                                                            Hare is OK, but the standard library is really lacking. It’s 25% slower than C and there’s no reentrancy in the standard library.

                                                  2. 2

                                                    Zig on the other hand, is a much more compelling language for me in that regard.

                                                    Fair. Given the management philosophy in the gaming industry, I don’t think anything less than having companies like Valve force sandboxing on games would be enough to prevent another Dark Souls III exploit.

                                                  3. 7

                                                    It’s a bold decision, but it’s well-argued. I wish them luck!

                                                    That opening argument does seem to miss out an important point about compiler rewrites though…

                                                    When a language gets rewritten in itself, it’s not just a proof that rewrites can be successful, or that “older + wiser = better”. It’s also a chance to put the language’s design and implementation to the test.

                                                    Think you’ve designed the best language for writing programs? Prove that design by using it to write your compiler! Maybe you’ll find it’s not as pleasant to use as you thought, and you’ll have to rework the design.

                                                    Think your compiler’s fast? Prove it by having to live with the compile times yourself! Maybe you’ll find it’s not as fast in practice as it is in benchmarks, and you’ll have to optimise the implementation.

                                                    In writing your language in itself, you’ll inevitably find problems that you missed, and that will force you to make improvements to your language. It’s classic dogfooding, with every improvement you make for yourselves translating into benefits for your users too.

                                                    Or to put that another way, a project this large will inevitably find problems with Zig, and any feedback will go into making Zig better. That’s good for programming in general, but is it a lost opportunity to have made Roc better? Perhaps… 🤔

                                                    1. 9

                                                      There’s a flip side to this. By dogfooding your language this way, you will inevitably make its design more catered towards writing compilers. Which may not be what you want, depending on your target audience.

                                                        1. 3

                                                          Yeah, there are some use cases that I’m just intentionally not targeting with Roc, and compilers are one of them.

                                                          That’s not to say I would discourage someone from writing a compiler in Roc if they want to, but rather that it’s a non-goal to make Roc be a language that’s great for writing a compiler in (except perhaps unintentionally). 😄

                                                    2. 32

                                                      I want to use file names and paths that contain spaces.

                                                      Outrageous!

                                                      1. 7

                                                        Not outrageous; supporting spaces in file names is basic resilience against malicious input.

                                                        If your tool starts misbehaving because someone named their directory Programming Stuff, you’re doing something wrong. No matter your opinion on whether that looks ugly or is inconvenient to use.

                                                        1. 31

                                                          I’m pretty sure the parent comment was sarcastic. It’s insane that we still use tools that can’t do basic things properly.

                                                          1. 11

                                                            Both sarcastic and sincere at once

                                                            1. 2

                                                              I’m with you there!

                                                            2. 2

                                                              I did pick up on the sarcasm, but had a feeling not everyone reading the comment might pick it up, too. Text is an annoyingly emotionless medium…

                                                            3. 1

                                                              Pretty sure they were sarcastic.

                                                          2. 5

                                                            I’m sure I could’ve learned a lot of things in advance if I went to university, but another few years of mental pain and stress caused by our education system were absolutely not worth it.

                                                            Being able to learn these things on my own, at my pace, feels a lot more fun and rewarding than having to cram them for exams that are forced upon my throat, and then be graded with a number that doesn’t reflect any actual skills or knowledge.

                                                            I’m sure it looks different in other countries. But for me, living in Poland, the faults of the education system that scarred me through years of going to school were enough to make me not want to touch school ever again. Even if people say universities are better than that.

                                                            1. 7

                                                              I can’t speak for Poland, but universities in the UK are nothing like schools. There is no one in a university whose title is teacher. They are places you go to learn and there are people who will help you learn (and, especially, point you at things that are not part of the formal curriculum but might be interesting to you) but the responsibility for learning is entirely yours.

                                                              1. 5

                                                                I’ve heard good things about universities in the UK, but as a student raised in a working-class family I couldn’t imagine moving to a different country to study abroad. So my options were purely domestic, where unfortunately universities are what they are.


                                                                  For context, Poland’s education system is very exam-heavy and metric-oriented. This permeates higher education as well. I guess you could even call it an education crisis.

                                                                To my knowledge this happened because formal education titles were perverted into being something you must have in your CV/resume to be considered credible. A lot of folks from older generations still believe that to be the case.

                                                                I don’t know where this belief came from, but I think you can imagine how it negatively affected the quality of higher education here—therefore also negatively affecting the quality of schools, because you now have professors who completed university just for the title, raising teachers who are also completing university just for the title. (Who knew that bad education had such far-reaching consequences.)

                                                                So the system is full of people with no interest in teaching, doing it only for… I don’t know what, money? with no pedagogic competencies whatsoever. Exams are everywhere because people with no interest in education are just doing what their teachers did, without any will to change the status quo for the better.


                                                                I guess that just makes universities abroad look that much better, huh.

                                                                Makes me think that higher education might be an interesting avenue to explore in life, maybe, if I ever feel adventurous. Thank you for giving me pause about this topic!

                                                                1. 3

                                                                  Yeah, they are nothing like any Eastern European university :-). Many – especially second-tier universities – are still pretty firmly anchored in their 1950s Soviet mold from which they were either cast or into which they were shoved if they predated it.

                                                                  I haven’t studied abroad (some of those roots meant that even Erasmus mobility was kind of tricky back when I went to school) but, ironically enough, I’ve worked with people who taught at UK and US universities, including during my undergrad studies. At least for technical higher education (so EE/CompEng – I can’t speak for “purer” CS and Math degrees) they couldn’t be more different.

                                                                  There are exceptions everywhere, and you should adjust this for the fact that uni was 15+ years ago for me, but by and large some notable differences include:

                                                                  • Unless you’re an extremely gifted student there is no such thing as “pointing you at things that are not part of the formal curriculum”. The formal curriculum is where education ends. It’s less common nowadays but back when I went to school, if you used something that wasn’t in the formal curriculum to solve an exam question or a homework assignment, there was a pretty good chance that you’d get no points for it. I went to a pretty “liberal” technical university; in more uptight settings there was actually a pretty good chance you’d fail that exam for something like that.
                                                                    • Research grants have changed this to some degree for first-tier universities, but most of them are still run as if their primary activity is teaching, not research. You can go for years without publishing anything substantial, and there’s a whole system in place to enable (at least some) universities to keep staff who can continue to teach and not perish despite failing to publish anything, or not publishing anything that meets minimal academic standards.
                                                                  • Generally, there are very few TAs. When I went to school it wasn’t uncommon to have a single TA assigned to 30, 60, or even 120 students. Some CompEng departments try to fix this to some degree but their means are rather questionable (e.g. third year students teaching second year labs). Consequently, most individual study activity consists of solving homework assignments issued to everyone and pondering over the grades you get 1-3 weeks later.
                                                                  • At least back when I went to school, there were maybe two or three people in the three departments herding us EE students who had office hours. Technically, they were all supposed to have office hours; in practice, due to how teaching was structured, the office hours were practically useless so nobody bothered with them, teachers or students.
                                                                  • The responsibility for learning is obviously primarily the students’ (they’re adults, after all, at this point forcing them to study is not going to work) but they have very little autonomy in it. Lab and homework assignments are usually very strict. In some cases (e.g. some CompEng or more niche final-year disciplines) undergrad semester projects are a little less strict, but for most technical degrees they’re basically large fill-in-the-blanks leaflets.

                                                                  Ironically enough, this is one reason why their graduates tend to do pretty well in leetcode-style challenges (and their fortunately far less popular equivalents for EE). The whole undergrad program, including curricula, is designed along very similar lines.

                                                                  Edit: that being said, I want to emphasise that, if you’re a creative student :-) you can put any university education to very good use and, indeed, it’s primarily useful as a guided tour of your ignorance.

                                                                  Not that I recommend it in general, but some of the things I did included:

                                                                  • The few overworked TAs were usually PhD students, which meant that there was always room for undergrads in research programs – usually in an unofficial capacity but I didn’t care as long as I could get some interesting work done.
                                                                  • I always studied around the formal curricula. If I didn’t care for, or if I just didn’t like a particular course, I’d trade it for studying something I was really interested in (including by going to lectures – presence was more or less compulsory – but not paying attention and studying something else altogether). That meant I’d get some bad grades, absolutely, but the art of not giving a fuck is something you need to master very early if you’re to survive even primary school.
                                                                  • Since a lot of courses or exams were either structured around or easy to pass by rote, I would use what I learned in class as a guide or support to learn from other sources (from textbooks to speaking to actual EEs with expertise in that field) and postpone learning anything exam-related until the night before, so that I could just regurgitate it on paper the day after.
                                                                  • Since no one really did office hours but there was room in the teachers’ schedule for it, if I found a course particularly interesting and tried to go for it, there was usually a non-zero chance I would be able to spend a few hours talking about various topics 1:1 with someone who’d studied them for decades. They’d usually be limited to the formal curricula but that was hardly a problem.
                                                                    • The whole university system is riddled with bullshit rules that are unfair and make absolutely no sense whatsoever, some of them enacted and enforced by bitter people who resent anyone they work with (including students) but have no choice about working with them. Navigating a system like that is a very useful real-life skill. After four years of that, working in the worst multinational corporate environment you can imagine is a breeze. It sucks just as much but you get paid for it.

                                                                  Even 15 years after finishing uni I still find myself regularly reaching out for things I learned back then, and not as in “I learned how to do this specific thing in $class” but in terms of “this belongs to the same general area of things I studied in $class”. Most of the individual things I learned after my second year or so are probably obsolete by now, what stuck – and I couldn’t do without – is the structured, evidence-based, result-driven approach to engineering and the formal vocabulary to enable it.

                                                              2. 7

                                                                In his chip design tools, Chuck Moore naturally did not use the standard equations:

                                                                  Chuck showed me the equations he was using for transistor models in OKAD and compared them to the SPICE equations that required solving several differential equations. He also showed how he scaled the values to simplify the calculation. It is pretty obvious that he has sped up the inner loop a hundred times by simplifying the calculation. He adds that his calculation is not only faster but more accurate than the standard SPICE equation. … He said, “I originally chose mV for internal units. But using 6400 mV = 4096 units replaces a divide with a shift and requires only 2 multiplies per transistor. … Even the multiplies are optimized to only step through as many bits of precision as needed.”

                                                                  This is Forth. Seriously. Forth is not the language. Forth the language captures nothing; it’s a moving target. Chuck Moore constantly tweaks the language and largely dismisses the ANS standard as rooted in the past and bloated. Forth is the approach to engineering that aims to produce as small, simple, and optimal a system as possible, by shaving off as many requirements of every imaginable kind as you can.

                                                                I’ve been practicing a similar philosophy with a recent C++ game dev project of mine, and man. Isn’t it fun to limit your requirements, so that you don’t have to write lines of code that accomplish nothing of value.

                                                                I think it’s super fun to do engineering in this way, following the philosophy that every line of code that accomplishes nothing of value to the final program should never be written. To face the truth, this is only really doable if you’re soloing a project, but it’s surprising how quickly you can get something working yet clean in a day or so this way. Because you’re not wasting time writing boilerplate or worrying about things that you don’t need yet.

                                                                As a simple example, I don’t yet have a hash map in this project. The closest thing I have to one is a flat array of struct Keyboard_Input_Mapping { SDL_Scancode scancode; int player; Input input; }; which maps keys on the keyboard to a player + virtual input button—but there’s like 5 or 6 entries in it, so why would I bother writing a hash-based lookup for it.

                                                                And the best part? Lines of code you don’t write don’t have to be compiled, so your compiler rips through them.

                                                                1. 3

                                                                  What a lovely comment; thank you!

                                                                  Consulting in the past, I loved clients who would trust and let me change the problem/requirements, enabling a more elegant solution. Sometimes, people just really want a chatbot on the website (always without a clear workflow/use for it nor prepared dialogue.) But sometimes you can talk to actual users, watch them visit new sites or use the client’s, understand their workflows, letting you leverage the low hanging fruit for all they’re worth! It’s so empowering, liberating ourselves from this cage of expectation and ceremony we’ve placed ourselves in!

                                                                  Perhaps also because: https://rakhim.org/user-is-dead/

                                                                  why would I bother writing a hash-based lookup for it

                                                                  Funnily, Clojure, the most vital such movement I’ve seen, just throws everything in a map and calls it a day!

                                                                  This seems to convert Moore’s philosophy from mechanical sympathy to primitive set/toolset sympathy. You start with some tools and try to solve your problem/expand your user’s power with minimal extensions to your toolbox. Since software automates, we often distract ourselves by building more tools, trying to automate more (but not surfacing key inflection or decision points). Getting used to the tools we’ve built, we ossify and try to apply them everywhere.

                                                                    …but it’s a matter of perspective. Ossifying with what our language already gives us, or instead building ever newer primitive sets, is what I rail against, so… This argument/tool also doesn’t simplify!

                                                                2. 3

                                                                    I found this website hosting HTML-rendered man pages with a proper OpenSearch-compatible engine, making it possible to add it to Firefox as a search engine. Pretty useful to have when I need to look up some C APIs on a Windows machine, where I don’t just have man handy.

                                                                  It looks kind of old, no idea how up to date the man pages are. (It’s not like C APIs change very often anyways…)

                                                                  If anyone has anything similar that also has OpenSearch support, please share!

                                                                  1. 5

                                                                    I like https://manpages.debian.org/. As far as I can tell, it does have OpenSearch support.

                                                                    1. 4

                                                                      Protip: you can use bookmarks with %s and a keyword to make a search engine out of anything!
                                                                      Since it’s just a bookmark it gets sync’ed, if you enabled that.

                                                                      See https://kb.mozillazine.org/Using_keyword_searches

                                                                      EDIT: since it’s relevant to the thread my man keyword points to https://man.archlinux.org/search?go=Go&q=%s even though I don’t use arch btw.

                                                                    2. 1

                                                                      I wonder, perhaps it was a problem with how the data was presented? I usually do my performance profiling off of timeline traces (i.e. Perfetto & co.), which could perhaps more clearly show the issue in this case—that transformSSA is just taking an insanely long time relative to its caller.

                                                                      1. 3

                                                                        This is precisely the thing I love about C. It makes you think about your data structures. I’m excited to try out the same sort of stuff in Zig when it becomes more stable.

                                                                          While this level of control is absolutely not necessary when you just want to get things done and have them work, sometimes I just wanna play around with low-level code and feel clever. There’s something really satisfying about low-level data structure manipulations, and how it all comes together in the end to create something more meaningful than the sum of its parts.

                                                                        1. 2

                                                                          Yeah, it’s worth knowing that data structures can be really small, both in terms of code and data, because sometimes (rarely) a tiny bespoke hash table is just the thing. I used one when I rewrote BIND’s DNS name compression code which roughly halved the compression code size and massively improved zone transfer performance.
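
                                                                            Just to make “really small” concrete (in a different language than BIND’s, and purely as a hypothetical sketch rather than the actual code): a fixed-size, open-addressing table with linear probing fits comfortably in a few dozen lines.

                                                                            package main

                                                                            import "fmt"

                                                                            // A hypothetical fixed-size, open-addressing table with linear probing.
                                                                            // No deletion, no resizing: sometimes that's all the job needs.
                                                                            const slots = 64 // power of two, so we can mask instead of mod

                                                                            type entry struct {
                                                                                key  string
                                                                                val  int
                                                                                used bool
                                                                            }

                                                                            type tinyTable [slots]entry

                                                                            func fnv1a(s string) uint32 {
                                                                                h := uint32(2166136261)
                                                                                for i := 0; i < len(s); i++ {
                                                                                    h ^= uint32(s[i])
                                                                                    h *= 16777619
                                                                                }
                                                                                return h
                                                                            }

                                                                            func (t *tinyTable) put(key string, val int) bool {
                                                                                for i := uint32(0); i < slots; i++ {
                                                                                    e := &t[(fnv1a(key)+i)&(slots-1)] // linear probing
                                                                                    if !e.used || e.key == key {
                                                                                        *e = entry{key: key, val: val, used: true}
                                                                                        return true
                                                                                    }
                                                                                }
                                                                                return false // table is full
                                                                            }

                                                                            func (t *tinyTable) get(key string) (int, bool) {
                                                                                for i := uint32(0); i < slots; i++ {
                                                                                    e := &t[(fnv1a(key)+i)&(slots-1)]
                                                                                    if !e.used {
                                                                                        return 0, false
                                                                                    }
                                                                                    if e.key == key {
                                                                                        return e.val, true
                                                                                    }
                                                                                }
                                                                                return 0, false
                                                                            }

                                                                            func main() {
                                                                                var t tinyTable
                                                                                t.put("www.example.com", 1)
                                                                                fmt.Println(t.get("www.example.com")) // 1 true
                                                                            }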

                                                                        2. 19

                                                                          From the very little Go I’ve written, I gotta say that the explicit if err != nil error handling struck me as something pretty darn good even if a bit verbose. And in my experience outside Go, just knowing its outlook on error handling has nudged me towards writing more satisfying error handling code, too: https://riki.house/programming/blog/try-is-not-the-only-option

                                                                          I also regularly write C++ code which does error handling in this way and do not find it particularly revolting. (Though most C++ I write is very game devy, where an error usually results in a crash, or is logged and the game keeps on chugging along.)

                                                                          I can see why people would not share the same opinion though. Perhaps I’m just slowly becoming a boomer.

                                                                          1. 14

                                                                            Yeah, I think the grievance about Go boilerplate is somewhat misguided. Most of the “boilerplate” is annotating the error with context, and I’m not really aware of an easy way to alleviate this problem. Sure, a ? operator will help you if you’re just writing return err all over the place, but if that’s your default error handling approach then you’re writing substandard code.

                                                                            That said, I don’t love the idea of wedding the syntax so tightly to the error type. I wish this proposal would take a more general approach so it could be used for functions that return T, bool as well (roughly Go’s way of spelling Option<T>), for example.
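
                                                                              For anyone who doesn’t write Go, the T, bool shape is the comma-ok form; a trivial example with the built-in map type:

                                                                              package main

                                                                              import "fmt"

                                                                              func main() {
                                                                                  ages := map[string]int{"ada": 36}

                                                                                  // The second value reports presence; no error value is involved.
                                                                                  age, ok := ages["grace"]
                                                                                  if !ok {
                                                                                      fmt.Println("no such user")
                                                                                      return
                                                                                  }
                                                                                  fmt.Println(age)
                                                                              }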

                                                                            1. 2

                                                                               You can also add stacktraces to errors early. Then you can just return the error and don’t need to rely on grepping for (hopefully) unique (they are not) error strings to reconstruct a stacktrace by hand.
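                                                                               Something like this is enough (a sketch using only the standard library; stackError and withStack are names I just made up):

                                                                                   package errstack

                                                                                   import (
                                                                                       "fmt"
                                                                                       "runtime/debug"
                                                                                   )

                                                                                   // stackError wraps an error together with the stack captured at wrap time.
                                                                                   type stackError struct {
                                                                                       err   error
                                                                                       stack []byte
                                                                                   }

                                                                                   func (e *stackError) Error() string { return fmt.Sprintf("%v\n%s", e.err, e.stack) }
                                                                                   func (e *stackError) Unwrap() error { return e.err }

                                                                                   // withStack records the current goroutine's stack once, at the point where the
                                                                                   // error first enters our code; callers further up can then just `return err`.
                                                                                   func withStack(err error) error {
                                                                                       if err == nil {
                                                                                           return nil
                                                                                       }
                                                                                       return &stackError{err: err, stack: debug.Stack()}
                                                                                   }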

                                                                            2. 13

                                                                              The problem with the Go error form is not the syntax. It’s the fact that, by using this error-handling form, the Go compiler is basically not involved in enforcing the correctness of your program.

                                                                              Go will fail to compile if you have an unused import, but you can just forget to check an error result. That’s the problem.

                                                                              1. 9

                                                                                you can just forget to check an error result. That’s the problem.

                                                                                 I agree with you that it’s a more serious problem than the boilerplate, but the overwhelming majority of complaints about Go are about the boilerplate. That said, “you can just forget to check an error result” is also not a particularly serious problem in practice. I’m sure it has led to bugs in the past, but these problems are relatively rare, because:

                                                                                1. the errcheck linter exists and is bundled with popular linter aggregators
                                                                                 2. even without a linter, you still get a compiler error when you ignore the error in an assignment (e.g., file := os.Open() instead of file, err := os.Open(); see the sketch below)
                                                                                3. for better or worse, Go programmers are pretty thoroughly conditioned to handle errors. A fallible operation without an error check looks conspicuous.
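                                                                                 A throwaway sketch of (2): the compiler rejects the mismatched assignment, while a bare call compiles and is exactly what errcheck exists to catch.

                                                                                     package main

                                                                                     import "os"

                                                                                     func main() {
                                                                                         // compile error: assignment mismatch: 1 variable but os.Open returns 2 values
                                                                                         // f := os.Open("data.txt")

                                                                                         // compiles: a bare call statement silently drops both results; errcheck flags it
                                                                                         os.Open("data.txt")

                                                                                         // the conventional form; to a Go reader, a fallible call without this check stands out
                                                                                         f, err := os.Open("data.txt")
                                                                                         if err != nil {
                                                                                             panic(err)
                                                                                         }
                                                                                         defer f.Close()
                                                                                     }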

                                                                                Are these as nice as having error-handling checks in the language? No. Is it a real problem in practice? No.

                                                                                1. 4

                                                                                  I think the question I would pose, then, is: does type-safety matter or doesn’t it?

                                                                                  If linting is sufficient to enforce program correctness, why bother with static types? Why not use a dynamic language that’s easier to work with?

                                                                                  If I’m accepting the effort of working within a static type system that requires me to correctly annotate all my types, then I also want the type system to take responsibility for the invariants of my program.

                                                                                  If I have to expend the effort of writing types but also have to write a thousand ad-hoc if statements to check my invariants, then Go feels like the worst of both worlds. At least in a dynamic language, I can build higher-level abstractions to do this in a less tedious way.

                                                                                  1. 2

                                                                                    I think the question I would pose, then, is: does type-safety matter or doesn’t it?

                                                                                    I don’t think it’s a binary proposition. It certainly matters more in some cases than others, and I don’t think forcing you to handle errors ends up being an enormous deal in practice. Would I prefer that Go’s type system enforced handling return values? Sure. Has this ever caused me a problem in my ~15 years of extensive Go use? Not that I recall (though I’m sure it has caused someone somewhere a problem at least one time).

                                                                                    If I have to expend the effort of writing types but also have to write a thousand ad-hoc if statements to check my invariants, then Go feels like the worst of both worlds.

                                                                                    Eh, most of the boilerplate with error handling is in annotating the errors with helpful context, which, as far as I know, doesn’t lend itself well to an automated solution unless your solution is not to annotate at all (in which case you’re just trading a little pain up front for more pain at debug time) or to annotate with stack traces or something similarly painful to consume.

                                                                                2. 4

                                                                                   It’s also not implemented well. It requires observing the difference between defining err and re-assigning it, which matters only because Go complected error handling with variable definitions, and the boilerplate can’t always limit the variable’s scope to just where it’s needed.

                                                                                  When moving code around or temporarily commenting out parts of it, it forces adjusting the syntax. Sometimes reassignment isn’t possible, and you get err2 too, and a footgun of mixing it up with the first err. This problem doesn’t have to exist.

                                                                                  1. 1

                                                                                     Brad Fitzpatrick made a point about this somewhere… I decided to keep a copy for posterity. It’s ugly, no one should use it. But it exposes the issue you put forward of ignored errors.

                                                                                    https://go.dev/play/p/JBQ3zeVMti For your amusement.

                                                                                    1. 1

                                                                                      Go will fail to compile if you have an unused import, but you can just forget to check an error result. That’s the problem.

                                                                                      I did notice that, and totally agree. I’d probably expand that to error out on any unused values overall, not just errors, but at this point it’s probably too big of a breaking change to make it into the compiler itself.

                                                                                    2. 5

                                                                                      You’re comparing it to the wrong thing. The ? propagation is a revolt against C++ style exceptions too:

                                                                                      • Throwing and catching is an alternative parallel way of “returning” values from functions. ? keeps returning regular values in a normal way.

                                                                                      • Exceptions are invisible at the call site. ? keeps the error handling locally visible.

                                                                                       • C++-style exceptions are even invisible in a function’s prototype, and Java’s typed exceptions aren’t liked either. ? simply uses normal function return types, keeping the error type explicit and easy to find.

                                                                                      • Exceptions can unwind past many levels of the call stack. ? keeps unwinding one call at a time.

                                                                                      ? is closer to golang’s philosophy than any other error handling strategy. It agrees that error handling should be locally explicit and based on regular values. It just doesn’t agree that the explicitness needs to go as far as taking 3 lines of almost identical code every time.

                                                                                      Experience from Rust shows that just a single char is sufficiently explicit for all the boring cases, and because the errors are just values, all the non-trivial cases can still be handled with normal code.

                                                                                      1. 1

                                                                                        It just doesn’t agree that the explicitness needs to go as far as taking 3 lines of almost identical code every time.

                                                                                        I usually want to add context to my errors, so the 3 lines are rarely “almost identical”. Do people routinely omit additional context in Rust, or does the ? somehow allow for more succinct annotations than what we see in Go? As far as I can tell, it seems like the ? operator is only useful in the ~5% of cases where I want to do nothing other than propagate an unannotated error, but people are making such a big fuss about it that I’m sure I’m misunderstanding something.

                                                                                        1. 4

                                                                                           Rust solves this without the need to abandon ?.

                                                                                           ? calls a standard From trait that converts or wraps the error type you’re handling into the error type your function returns, and the conversion can have a custom implementation. There’s a cottage industry of macro helpers that make these mappings easy to define.

                                                                                          It works well with Rust’s enums, e.g. if your function returns my::ConfigFileError, you can make std::io::Error convert to ConfigFileError::Io(cause), and another type to ConfigFileError::Syntax(parse_error). Then another function can convert that config error into its own ServerInitError::BecauseConfigFailed(why), and so on. That handles ~80% of cases.

                                                                                          For other cases there are helper functions on Result, like .map_err(callback) that run custom code that modifies the error type. The advantage is that this is still an expression and chains nicely:

                                                                                           let x = do_thing().map_err(Custom::new)?   // `do_thing`, `Custom`, `obj`, `callback` stand in for real code
                                                                                               .do_more().context(obj)?               // .context()/.with_context() as provided by e.g. anyhow's Context trait
                                                                                               .etc().with_context(callback)?;
                                                                                          

                                                                                          And for complex cases there’s always match or if let Err(err) that is like golang’s approach.

                                                                                          The go codebases that I work with are very often just return nil, err, and at best return nil, custom.Wrap(err) which is like the From::from(err) that ? calls.

                                                                                          1. 2

                                                                                             Thanks, I am aware of these but you summarized them well. I guess my feeling is that this largely moves the annotation boilerplate into creating a new error type, which doesn’t seem like an enormous net win to me, if it’s a win at all. In my experience, creating new error types can be quite burdensome, at least if you want to do a good, idiomatic job of it (IIRC, when I used some of the macro libraries, they would produce difficult-to-debug error messages). I may have been holding it wrong, but with Go I can just return fmt.Errorf(…) and move on with life. That’s worth a lot more to me than the absolute lowest character count. 🤷‍♂️

                                                                                            1. 1

                                                                                               fmt.Errorf isn’t strongly typed. bail!("fmt") does that in Rust, and that’s okay for small programs, but libraries typically care to return precise types that allow consumers of the library to perform error recovery, internationalisation, etc.

                                                                                              You’re right that it moves boilerplate to type definitions. I like that, because that boilerplate is concentrated in its own file, instead of spread thinly across the entire codebase.

                                                                                              1. 1

                                                                                                fmt.Errorf isn’t strongly typed. bail!(“fmt”) does that in Rust, and that’s okay for small programs, but libraries typically care to return precise types that allow consumers of the library to perform error recovery, internationalisation, etc.

                                                                                                 I agree, and in Go we only use fmt.Errorf() if we are annotating an error. Error recovery works fine, because Go error recovery involves peeling away the annotations to get at the root error. This is probably not ideal, but it’s idiomatic and doesn’t cause me any practical problems, whereas with Rust I have to choose between being un-idiomatic (and bridging my personal idiom with the idioms used by my dependencies), or writing my own error types for annotation purposes, or using a macro library to generate error types (and dealing with the difficult-to-debug macro errors), all of which involve a pretty high degree of tedium/toil. I don’t love the Go solution, but it mostly gets out of my way, and I haven’t figured out how to make Rust’s error handling get out of my way. :/
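                                                                                                 Concretely (made-up names, but the mechanism is just the standard library): the “peeling” is errors.Is/errors.As walking the %w chain, so annotating doesn’t hide the root cause.

                                                                                                     package main

                                                                                                     import (
                                                                                                         "errors"
                                                                                                         "fmt"
                                                                                                         "io/fs"
                                                                                                         "os"
                                                                                                     )

                                                                                                     // loadProfile is a hypothetical example: annotate for humans, %w keeps the chain for machines.
                                                                                                     func loadProfile(path string) error {
                                                                                                         f, err := os.Open(path)
                                                                                                         if err != nil {
                                                                                                             return fmt.Errorf("loading profile %s: %w", path, err)
                                                                                                         }
                                                                                                         defer f.Close()
                                                                                                         // ... actually read the profile here ...
                                                                                                         return nil
                                                                                                     }

                                                                                                     func main() {
                                                                                                         err := loadProfile("/no/such/profile.json")

                                                                                                         // "peeling away the annotations": errors.Is / errors.As walk the %w chain,
                                                                                                         // so recovery code still sees the root cause through the wrapping
                                                                                                         if errors.Is(err, fs.ErrNotExist) {
                                                                                                             fmt.Println("profile missing, falling back to defaults:", err)
                                                                                                         }
                                                                                                         var pathErr *fs.PathError
                                                                                                         if errors.As(err, &pathErr) {
                                                                                                             fmt.Println("failing path:", pathErr.Path)
                                                                                                         }
                                                                                                     }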

                                                                                                You’re right that it moves boilerplate to type definitions. I like that, because that boilerplate is concentrated in its own file, instead of spread thinly across the entire codebase

                                                                                                 Yeah, I agree that making dedicated error types makes sense when you are reusing errors across the codebase, but for simply annotating errors we are almost always dealing with a one-off error, so there’s no “spread thinly across the entire codebase” to worry about.

                                                                                                I’m not trying to be a Go fanboy here; I don’t particularly like Go’s error handling, but for all its warts and oddity, it mostly gets out of my way. I feel like I spend at least an order of magnitude less time meddling with errors in Go than I do in Rust, even though Rust has ? (my constraint is almost never keystrokes). My issues with Go are mostly theoretical or philosophical, while my issues with Rust are regrettably practical. :(

                                                                                          2. 1

                                                                                            I usually want to add context to my errors, so the 3 lines are rarely “almost identical”. Do people routinely omit additional context in Rust, or does the ? somehow allow for more succinct annotations than what we see in Go?

                                                                                            You’d usually do something like .context(/* error context here */)? in such cases. Though, to be honest, while I’m not a Go programmer, I’ve seen some Go code, and it seems most projects don’t add context to errors in most cases?

                                                                                            Personally I haven’t found the need to add context to errors in Rust anywhere close to 95% of the time. Usually a stacktrace has already been captured and there isn’t much to add.

                                                                                            1. 3

                                                                                              Though, to be honest, while I’m not a Go programmer, I’ve seen some Go code, and it seems most projects don’t add context to errors in most cases?

                                                                                               It seems to vary a lot, and especially older projects (from before the %w wrapping was added to fmt.Errorf()) probably don’t attach context.

                                                                                              Personally I haven’t found the need to add context to errors in Rust anywhere close to 95% of the time. Usually a stacktrace has already been captured and there isn’t much to add.

                                                                                              Usually I want to add identifiers (e.g., filepaths, resource identifiers, etc) to errors, so rather than “permission denied”, you get “opening /foo/baz.txt: permission denied”. I also haven’t found a very satisfying way to attach stack traces to errors–do you check each call site to determine whether it has already attached the stack trace? Maybe that’s reasonable, but what I usually do is make sure each function adds an annotation that says what it’s doing (e.g., OpenFile() will annotate its errors with fmt.Errorf("opening file %s: %w", path, err)). I definitely don’t think that’s an ideal solution; it’s just the best one I’ve found so far. 🤷‍♂️

                                                                                        2. 1

                                                                                          Though most C++ I write is very game devy

                                                                                           Also, no exceptions is probably already a natural technical constraint for you, right?

                                                                                          1. 2

                                                                                            Depends on your requirements; I’d say for a small game, the runtime & binary size cost of exceptions isn’t large enough to matter that much for you.

                                                                                        3. 19

                                                                                           I think it is well-known and little-used. It is a non-standard attribute; it’s not part of the C language. If you are willing to go as far as extending the language, you might as well switch to “C with destructors” by compiling your code as C++.

                                                                                          Folks who don’t switch to C++ generally care about portability across compilers a lot, and for them non-standard extensions are a big no.

                                                                                          A more specific question is why this isn’t being used by the Linux kernel? The answer is that it is being used:

                                                                                          https://github.com/torvalds/linux/blob/v6.12/include/linux/cleanup.h

                                                                                          1. 7

                                                                                            you might as well switch to “C with destructors” by compiling your code as C++.

                                                                                            Nothing wrong with your comment but every single time I think “yeah I’ll just write this in C++ and get destructors for free” I come back crying about the rule of five. It sucks there’s so much boilerplate needed to add a destructor.

                                                                                            Which instead results in me guiding myself to code that does not use destructors at all, because boy I do not want to type out a copy constructor, copy assignment operator, move constructor, and move assignment operator every single time…

                                                                                            1. 14

                                                                                               99 times out of 100 you need the rule of zero. Implementing user-defined destructors is an exceptional case. In rust-analyzer, there are 45 impl Drop blocks for 400k lines of code (and like a third are tests for analyzing impl Drop). I would expect the ratio of cases that need an explicit destructor in C++ to be similar. It probably will be significantly larger in the kernel, but significantly larger than once per 10k lines is still pretty rare.

                                                                                               The value here is not that you can implement a custom destructor; the value (questioned by some) is that the compiler automatically generates all the code to use your custom destructor. There are by far more usages than definitions of true resource types.

                                                                                               Though, it is (part of) a social problem with “just compile as C++” that you need to educate people not to add needless destructors and to lean on existing RAII containers instead.

                                                                                              1. 10

                                                                                                The rule of five / three is not requiring boilerplate, it’s requiring you to be explicit if you want to move away from the default. The default destructor on an object calls the default destructor on all of its members. If you are using types such as std::vector or std::unique_ptr that have destructors, they will be called and you don’t need to write your own.

                                                                                                If you do write your own, it means that you’re doing something unusual with ownership. In this case, you need to design what happens when you copy or move the object. Or, conversely (it’s usually this way around), if you create a custom definition of what copy or move means, you probably also need to define what cleanup means.

                                                                                                If you are writing something that holds raw pointers or some other resource (for example, file descriptors) then you need to specify what happens if someone moves or copies it. The move case is particularly important with respect to destruction: you need to handle being destroyed if you no longer own the resource that you are encapsulating.

                                                                                                Normally, the correct rule is zero: do not implement any of these. You use them in foundational things such as smart pointers or owning wrappers for other types, everything else just composes those.

                                                                                                1. 5

                                                                                                   You only need those if you might copy or move instances (which would require at least as much code to implement in C too!). If you aren’t going to do that, just declare a copy constructor with “=delete” and you’re good to go.

                                                                                              2. 1

                                                                                                Maybe I’m being dense, but:

                                                                                                my_data<enter>
                                                                                                <up arrow><type .some_attr><enter>
                                                                                                my_data.some_attr
                                                                                                
                                                                                                my_data.some_attr<enter>
                                                                                                <up arrow><type )><ctrl-a><type json.loads(><enter without going to the end>
                                                                                                json.loads(my_data.some_attr)
                                                                                                

                                                                                                that’s one keystroke/key combo more.

                                                                                                 Yes, it’s the basic example, but I find myself going to the middle of a pipe’d pipeline in my shell just as often; that’s what alt-left/ctrl-left or whatever are for. If I use vim bindings in an editor it’s also easier.

                                                                                                I may be completely missing the point, but my take is that for this to really matter it needed to be like 90% forward, where in practice it’s maybe 60% and so the perceived benefit is only “a little less going back”.

                                                                                                 And if I sound like I have a strong opinion, it’s because I have a Mac as a work laptop for the first time, and Home/End/ctrl-left/alt-left and so on behave differently than on Linux and Windows, so I’m constantly failing to move around, to the point that only now do I notice how much I usually don’t notice it.

                                                                                                1. 7

                                                                                                  Yet, at the same time, pipes do feel much more satisfying to write to me than wrapping stuff in function calls. I feel like it has a lot to do with not just how you type the expression, but also how it looks in the end.

                                                                                                  When you’re composing a pipe, you can see the pipe growing to the right. The left side remains as it was before. Wrapping something in a function call, on the other hand, means shifting the original expression to the right.

                                                                                                  I feel like that incremental growth that happens with pipes is very satisfying.

                                                                                                  1. 2

                                                                                                    Also half of the occurrences of people shouting “useless use of cat” are not grasping that the person who wrote it probably was just working incrementally from left to right ;)

                                                                                                    So yeah, I’m not disagreeing per se - but I don’t really find it a meaningful difference.

                                                                                                  2. 2

                                                                                                    Perhaps this isn’t the most motivating example, yeah.

                                                                                                    I think that there’s a universe where, in the REPL, typing | right after evaluating an expression would wrap the previous expression in parentheses, put the cursor to the very left, and let you just type the function call.

                                                                                                    where in practice it’s maybe 60%

                                                                                                     So I see this as more of an issue with the status quo in many languages. I’m constantly finding myself wishing I could just continue transforming, and Uiua is the only language that keeps up. That, and shell scripting for the most part. Like 85% of my thinking is in this mode, but I can’t reflect that in my code.

                                                                                                    I do agree about the value of dancing around with Emacs bindings, one of the big reasons for me to stay on a Mac.

                                                                                                    1. 8

                                                                                                       It is this universe! In the LSP/TreeSitter world this is almost a trivial feature to implement, and many editors do it.

                                                                                                      In general, these days everyone has access to a powerful structured editor, not only lispers. It’s just that:

                                                                                                      • few people know how to use it effectively
                                                                                                      • to be useful, it needs to be really polished, and I think only JB has institutional capacity for such polish (though maybe newer editors like helix, which put tree sitter front and center, are also good).
                                                                                                      1. 1

                                                                                                        I’m constantly finding myself wishing I could just continue transforming, and Uiua is the only language that keeps up.

                                                                                                        If you haven’t, I do suggest you give Factor a proper go!

                                                                                                        1. 1

                                                                                                           Factor has been on my radar for a while! I’m having so much fun messing around with Uiua that Factor has fallen a bit by the wayside.

                                                                                                          To be honest what I think I really need is embeddable versions of these languages. So when I’m working in a Python REPL, I can pipe my data into the languages and work off of that for a while, and then escape back out to the real world in the end.

                                                                                                    2. 5

                                                                                                      It interests me that compared to media where ornament, illusion, and excess are valued by some critics (architecture, painting, film, basically any given fine or liberal art) everyone in software seems to agree that simple code is better. It’s somewhere between an axiom and a thought-terminating cliche. Certainly it’s hard to find anyone arguing that code should be complex. And so Socrates asks, innocently: if everyone agrees code should be simple, then all code is simple, right? Because nobody would write any other kind.

                                                                                                      1. 29

                                                                                                        Everyone agrees code should be simple. Everyone agrees it should be as simple as possible, but no simpler. No one agrees on what ‘as simple as possible’ means.

                                                                                                        1. 9

                                                                                                          Writing simple code is difficult. Adding another special case to fix an edge case bug or add one new feature is the easy path. Simplifying code requires thinking hard about the problem domain until we understand the details well enough to see and understand the patterns in the requirements. The day to day work of programming is often about tweaking existing code to adapt to changing requirements, which typically adds complexity; reducing that complexity again might require taking a step back and recognizing that the existing architecture is no longer fit for purpose and re-architecting portions of the code base.

                                                                                                          Simplicity is something we strive for, complexity is something it takes effort to fight against, as the natural evolution of a code base is to increase in complexity.

                                                                                                           There’s also the problem that the value of simplicity isn’t innate; it has to be learned. People who haven’t had to work a lot with someone else’s code, or with code bases too big to fit in your head yet, naturally don’t understand the perils of complexity. They’re often quick to reach for things like unnecessary layers of abstraction (aka layers of obstruction). And most people, when given a task in a task tracking system, will focus on getting that task done; most people aren’t, or don’t perceive themselves to be, in a position to unilaterally decide, “implementing this feature in the current architecture would increase complexity too much, it is time to restructure this part of the code”.

                                                                                                          1. 3

                                                                                                            Some of us strive harder than others, though. And we’re certainly not all equally incentivized to restructure and refine and improve existing code. Too often we are paid only to implement features good enough to ship, and then move on to the next ticket. Only when things start to break do we return to that code, and then only long enough to apply the duct tape.

                                                                                                          2. 6

                                                                                                            I think you’re comparing one-person-or-small-group-art with programming-as-part-of-a-large-group.

                                                                                                            The “simple code is better” mantra comes from the world of “professional software development”, where it’s a given that you’re paid to produce code that can be maintained by people coming after you, that you’re part of a larger-than-you software lifecycle.

                                                                                                            Code that is written for fun, like the “advent of code” answers, a ray-tracer one makes over the weekend, a one-man-project video game… none of that has to be simple, people don’t push for simplicity there as heavily in my opinion.

                                                                                                            In fact, the complex, “I solved advent of code in sql” type solutions seemed to garner far more praise than the “I ran ‘numpy.solve_graph’” solutions, so clearly there’s some appreciation for complexity.

                                                                                                            Going back to art, there also is “art done as part of a bigger machine”. The people making hallmark cards, the people drawing keyframes for a disney movie… I have little doubt that those jobs, just like “software development”, value working in a way that fits into the larger company (i.e. simple processes, minimal individualism).

                                                                                                            1. 5

                                                                                                               I don’t know about this. The code that I’ve written for fun with the intent of open-sourcing it has tended to be a lot simpler than code that I’ve written professionally as part of a team. For solo projects where I don’t have a deadline, I can take the time to polish everything as much as I want, rewrite pieces to simplify them, rewrite bits to unify concepts, and so forth. Professional code tends to be more about writing it and moving on to the next task.

                                                                                                              1. 3

                                                                                                                I don’t think it’s a one person or team thing, the key bit in your post for the second category is that this is code no one is maintaining. You don’t take your advent of code project from this year and use it as a starting point next year. By Christmas, it’s done and you throw it away.

                                                                                                                Even for projects where I’m the sole contributor, I value simple code if I’m going to come back to it later and need to understand it. Especially if it’s not something I’m spending much time on between hacking times, so I want to be able to get back up to speed quickly.

                                                                                                              2. 5

                                                                                                                 The value of simple code is something you only learn at a certain level of experience, though… I can’t say I knew simple code is good code 10 years ago when I was just starting out. Especially when most learning materials were focused on object orientation rather than simplicity.

                                                                                                                everyone in software seems to agree that simple code is better.

                                                                                                                I don’t know what “everyone” you’re referring to, but I’ve met plenty of people who were not on board with this, even despite their seniority.

                                                                                                                1. 5

                                                                                                                  The term “simple” is just useless. It has two meanings: 1 = simple as easy to use or easy to understand, and 2 = simple as consisting of few parts, basic, primitive. They get mixed up all the time.

                                                                                                                  People often create something that is simple(basic), but imply it gives them simple(easy). And this often isn’t true. If you have something simple(basic) that doesn’t handle all the edge cases, you will breed complexity where the edge cases happen. Once you add support for all the edge cases, it may be more robust and simple(easy to use), but it’s not simple(basic) any more.

                                                                                                                   This is why people keep reinventing frameworks, CMSes, ORMs, game engines, GUI toolkits, build systems. Every new project starts out as simple(basic), and because it’s written exactly to the author’s needs, and hasn’t been battle-tested much, it also seems simple(easy) to the author. The author proclaims that their framework is the simple one, with none of the bloat and complexity of the other frameworks. Once the new framework matures, fixes “todo: hack”s, and handles more stuff to make more things simple(easy) with it, it stops being simple(basic), and someone starts the cycle again.

                                                                                                                  And if you’ve ever worked in a web agency, you’ll know that all the clients that ask for “just a simple website” mean they want a simple(easy) website, but only have a budget for simple(basic).

                                                                                                                  1. 5

                                                                                                                    If you have something simple(basic) that doesn’t handle all the edge cases

                                                                                                                    Then you have a big fat bug and failed to fulfil the requirements, of course working around it is going to be more complex than actually fixing the bug.

                                                                                                                    That said, we often overestimate the complexity necessary to handle edge cases. Done well, many edge cases can be merged into the common case. At least if we avoid silly things like throwing an exception on empty lists, when we could instead just do nothing, or return an empty list.
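                                                                                                                     For instance (a trivial sketch in Go, only because that’s the language already floating around this thread; total is a made-up name):

                                                                                                                         package stats

                                                                                                                         // total treats the empty slice as just another input, so callers never need a
                                                                                                                         // special case for "no items": the edge case is merged into the common case.
                                                                                                                         func total(xs []int) int {
                                                                                                                             sum := 0
                                                                                                                             for _, x := range xs {
                                                                                                                                 sum += x
                                                                                                                             }
                                                                                                                             return sum // 0 for an empty slice; no error, no panic
                                                                                                                         }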

                                                                                                                    This is why people keep inventing frameworks, CMSes, ORMs, game engines, GUI frameworks.

                                                                                                                    Perhaps the mistake is generalising the framework? It would make sense that a special purpose framework can be much simpler than one that is supposed to handle everyone’s use case, so perhaps we should keep home made frameworks in their intended niche?

                                                                                                                    1. 3

                                                                                                                      Done well, many edge cases can be merged into the common case.

                                                                                                                      When you have many similar but not identical cases, merging them into something common becomes an abstraction layer. This itself can become a point of contention — it’s now simpler(easy) to handle all the cases, but every indirection and smart solution moves it further away from simple(basic).

                                                                                                                      For example, golang is loved for being simple(basic), even though ad-hoc concurrency with channels can be difficult to get right (e.g. you can corrupt data if you fail to use atomics and locks where necessary). Languages that make this aspect simple(easy) are not simple(basic).

                                                                                                                      Perhaps the mistake is generalising the framework?

                                                                                                                      Probably. But you will be seeing limitations that your solution keeps bumping against, and seeing patterns of circumstances where it keeps failing, and it’s hard to consistently say “no” to fixing your biggest problem.

                                                                                                                  2. 3

                                                                                                                    I’ve seen many instances of people not agreeing on this, but IME it’s typically a misunderstanding. They’ll presume that the simplest implementation is whatever comes into their heads first (the article’s point that this is not the case actually needs to be made), and indeed, that’s often not the best.

                                                                                                                    In other cases, I’ve seen “simple” be conflated with “easy to implement”. For example, I’ve seen an authorization mechanism that’s essentially a stack based virtual method call machine, each one doing a database request. It’s “simple” because to utilize it, you just add a string to an array, and it chugs along from there, but in practice, it’s a disaster.

                                                                                                                    Put differently, the problem we’re facing is indeed epistemic in nature, where the essence of simplicity as agreed upon by people that discuss it is non-transferable to those that are not predisposed to this type of meta-thought, and consequently, communication breaks down. A classic problem of misconception with extremely high cost.

                                                                                                                    We can theorize about what would happen if this kind of information was transferable, but we can also wonder what would happen if pigs could fly, so in the end, this is just how it is. I think the kind of outreach the article tries to do, as well as explicit on-the-job training when possible, is one possible way out.

                                                                                                                    1. 3

                                                                                                                      In other cases, I’ve seen “simple” be conflated with “easy to implement”. For example, I’ve seen an authorization mechanism that’s essentially a stack based virtual method call machine, each one doing a database request. It’s “simple” because to utilize it, you just add a string to an array, and it chugs along from there, but in practice, it’s a disaster.

                                                                                                                      This very article conflates simple with “easy to implement”:

                                                                                                                      Simpler code is faster to write & maintain. That’s cheap.

                                                                                                                      I’ve found many times in my career that the “simple” thing was harder to implement initially, and therefore not the cheapest way. It has probably proven to be the easiest to maintain in every case, but not so frequently “faster to write”.

                                                                                                                      1. 2

                                                                                                                        This very article conflates simple with “easy to implement”:

                                                                                                                        No I do not. In fact, I’m saying exactly what you just said. From the article:

                                                                                                                        See, the simplest solution to a problem is rarely the first one that comes to mind. In the short term it’s this solution, not the simplest, that is cheapest to implement.

                                                                                                                        Though simple code and good tests are cheap in the long term, they significantly slow down the start of any project, and quite visibly so. Promises of speed up are always a couple weeks ahead, and actual return on investment can take even longer.

                                                                                                                        1. 2

                                                                                                                          Sorry I missed that! I think my eyes jumped over that after I read the bullets (that I quoted in my earlier reply) to read the testing section. The testing section (particularly property testing) was the part of the essay that I found most interesting, and I hurried through the rest.

                                                                                                                    2. 3

                                                                                                                      And so Socrates asks, innocently: if everyone agrees code should be simple, then all code is simple, right? Because nobody would write any other kind.

                                                                                                                      There are too many forces in opposition to simple code. Skill, time and (by extension) money. But also the unstoppable march of feature creep. More requirements means more complexity usually. And the most elegant, simplest designs are often the hardest to integrate new features into, especially if the features need cross-cutting information.

                                                                                                                      1. 2

                                                                                                                        “if i had more time, i would have written a shorter letter” – wayne gretzky, probably

                                                                                                                        1. 2

                                                                                                                          I remember taking a class on civil engineering in college. One of the anonymous quips that was quoted was:

                                                                                                                          Any idiot can build a bridge. It takes an engineer to build a bridge that barely stands.

                                                                                                                          It’s one that I often think of while working on a software system.

                                                                                                                          To me, simple code means that the code structure uses the minimum possible set of structural elements to handle all reasonable inputs – where it’s been reduced until the point that taking any single one of the remaining elements away will make the whole thing collapse and fail on the majority of inputs. That also means not special casing anything, since if you take away the special case handling it will only fail for some edge cases.

                                                                                                                          1. 1

                                                                                                                            Complicated code is the wrong tool but it often does the job.

                                                                                                                          2. 1

                                                                                                                            And even better, the type system lines up so well with a certain other language that you can use TypeScript to write it: https://typescripttolua.github.io/

                                                                                                                            1. 4

                                                                                                                              Except that Lua’s type system does not really line up with that of JavaScript, and TSTL is very much aware of this.

                                                                                                                              https://typescripttolua.github.io/docs/caveats#differences-from-javascript

                                                                                                                              So it feels like you’d end up writing TypeScript that is so dependent on Lua quirks that you might as well work with the language and write Lua instead.

                                                                                                                              1. 1

                                                                                                                                I don’t mean the nitty-gritty of the type system like == equality and null vs undefined etc., but rather that the primitives are mostly the same (Lua even treats numbers as floats, like JS!), and you make compound types by using objects/dicts/tables, and arrays are just a special kind of those.

                                                                                                                                They even say:

                                                                                                                                TypeScriptToLua aims to keep identical behavior as long as sane TypeScript is used: if JavaScript-specific quirks are used, behavior might differ.

                                                                                                                                I used it for an ad-hoc NeoVim extension once and found it worked pretty well.