Threads for scraps

    1. 13

      It is both scary and funny that the biggest commercial operating system requires ugly hacks (such as code injection, or writing temporary JavaScript scripts) just to let you delete a file that happens to be currently executing.

      In Linux and macOS, you are free to delete files. Even if they are open or being executed. It’s as simple as that. No hacks required!

      But an even bigger issue is that something such as an uninstaller exists at all. For every piece of software you want to release, you have to write not only the software itself but also a separate program to install and uninstall it. That is crazy! Even though they are not perfect, Linux’s package managers are amazing at solving that problem. macOS is arguably even easier: you literally just copy a .app file into Applications and it’s there, and you delete it and it’s gone! ✨Magic✨


      1. 8

        macOS is arguably even easier: you literally just copy a .app file into Applications and it’s there, and you delete it and it’s gone! ✨Magic✨

        I’ve never been convinced this really worked right when the app will still leave things like launchd plists around that it automatically created…

        1. 3

          True, I have experienced that as well. It is not very common thankfully.

          Also, some applications do require installers even on macOS. An example (shame on you!) is Microsoft Office for Mac. At least those are standardized, but it is annoying. I will not install software that requires an installer on any of my systems.

      2. 4

        Windows has the technology. It’s called “Windows Installer” and it’s built into the OS. However, it requires using an MSI file, which people don’t like because of the complex tooling.

        More recently there is MSIX, which simplifies things greatly while having more features, but people don’t like it because it requires signing.

        1. 6

          Kind of. The root problem here is that you cannot, with the Windows filesystem abstractions, remove an open file. With UNIX semantics, a file is deleted on disk after the link count drops to zero and the number of open file descriptors to it drops to zero.

          This is mildly annoying for uninstallation because an uninstaller can’t uninstall itself. The traditional hack for this was to use a script interpreter (cmd.exe was fine) that read the script and then executed it. This sidesteps the problem by running the uninstaller in a process that was not part of the thing being installed. MSIs formalise this hack by providing the uninstall process as a thing that consumes a declarative description.

          It’s far more problematic for updates. On *NIX, if you want to replace a system library, you install the new one alongside the old and then rename it over the top. The rename is atomic (if power goes out, either the new version will be on disk or the old one), and any running processes keep executing the old one; new processes will load the new one. You probably want to reboot at this point to ensure that everything (from init on down) is using the new version, but if you don’t then the old file remains on disk until the open count drops to zero. You can update an application while it’s running, then restart it and get the new version.
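
          As a minimal sketch of those UNIX semantics (the file names here are invented), the open handle keeps reading the unlinked data, and the atomic rename swaps in the new version:

          ```rust
          use std::fs::{self, File};
          use std::io::Read;

          fn main() -> std::io::Result<()> {
              let dir = std::env::temp_dir();

              // A "running process" holds an open handle to the old library.
              let old = dir.join("libdemo_old.so");
              fs::write(&old, b"old version")?;
              let mut handle = File::open(&old)?;

              // Unlink the name. The data stays on disk until the last open
              // descriptor closes, so the handle still reads the old bytes.
              fs::remove_file(&old)?;
              let mut buf = String::new();
              handle.read_to_string(&mut buf)?;
              assert_eq!(buf, "old version");

              // The update pattern: write the new version alongside the old,
              // then rename it over the top. rename(2) is atomic: the name
              // always resolves to a complete old file or a complete new one.
              let target = dir.join("libdemo.so");
              fs::write(&target, b"old version")?;
              let staging = dir.join("libdemo_new.so");
              fs::write(&staging, b"new version")?;
              fs::rename(&staging, &target)?;
              assert_eq!(fs::read_to_string(&target)?, "new version");

              println!("ok");
              Ok(())
          }
          ```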

          On Windows, this is not possible. You have to drop to a mode where nothing is using the library, then do the update (ideally with the same kind of atomic rename). This is why most Windows updates require at least one reboot: they drop to something equivalent to single user mode on *NIX, replace the system files, then continue the boot (or reboot). Sometimes the updates require multiple reboots because part of the process depends on being able to run old or new versions. This is a big part of the reason that I wasted hours using Windows over the last few years, arriving at work and discovering that I needed to reboot and wait 20 minutes for updates to install (my work machine was only a 10-core Xeon with an NVMe disk, so underpowered for Windows Update), whereas other systems can do most of the update in the background.

          1. 3

            This is mildly annoying for uninstallation because an uninstaller can’t uninstall itself

            I think this is only half-true, because the WinAPI gives you “delay removal until next reboot” (MOVEFILE_DELAY_UNTIL_REBOOT), so it should be possible for the uninstaller to uninstall the application, and then register itself, along with its directory, for removal on the next reboot. Then Windows itself will remove the uninstaller on the next reboot.

            On servers this could mean that it will be removed next month, but that is a theoretical problem, not a real one.

            1. 1

              Windows servers list “application maintenance” as a reason for a reboot, so it’s not culturally weird to reboot after an application update.

          2. 2

            MSIs formalise this hack by providing the uninstall process as a thing that consumes a declarative description.

            Yep, that was my point. Or to put it another way, Windows can handle the management of a package so you don’t have to. Which was the complaint in the OP.

            But on your point, it is totally possible to do in-place updates to user software. On modern Windows most files can be deleted even without waiting for all handles to close. And any executables you can’t immediately delete (because they are running) can be moved. The problem is software that holds file access locks. Unfortunately, standard libraries are especially guilty of doing this by default; even newer ones like Go’s do this for some inexplicable reason.

        2. 2

          True; arguably Windows also has an app store nowadays, plus NuGet and WinGet. I did not know about MSIX! Maybe a bit of an XKCD 927 situation there.

          1. 4

            Windows also has:

            So the existence of installers/uninstallers is a “cultural” thing, not a technical necessity.

            1. 1

              “if you want to use our product, install [my chosen package manager]” is pretty non-viable. I write the installer for a game; none of that would be an option.

              1. 3

                Sure you do. You just call it “Steam” instead.

          2. 2

            WinGet simply downloads installer programs and runs them. This is visible in its package declarations.

            NuGet is a .NET platform development package manager, right? Like Maven for the JVM, it is not intended to distribute finished programs but libraries that can be used to build a program. But perhaps it can be used to distribute full programs, just like pip, npm, et al.

            1. 2

              In theory, NuGet is not specific to .NET. You can build NuGet packages from native code. Unfortunately, it doesn’t have good platform or architecture abstractions and so it’s not very useful on non-Windows platforms for anything other than pure .NET code.

    2. 15

      These changes will make the stdlib mux actually pretty usable. This is one area (there are others, of course) where the Go community’s dogmatic insistence on using the stdlib for everything is just plain wrong. The stdlib mux is anaemic and has footguns.

      The {$} marker is a bit of a hack, but something is obviously necessary for backwards compatibility.

      1. 5

        This is one area (there are others, of course) where the Go community’s dogmatic insistence on using the stdlib for everything is just plain wrong.

        Not really sure what you mean in this regard; almost everyone I’ve talked to uses gorilla mux or chi, not the stdlib mux on its own.

        1. 4

          Yeah, I’ve never heard anyone insist on the stdlib mux and I’ve been active in the community for over a decade. I have heard people advocate for using libraries that follow stdlib abstractions (e.g., they implement http.Handler and so forth).

        2. 2

          Added dollop of anecdata: pretty much the same over the last 5 years. I tried to use stdlib mux once for a coding challenge but pretty quickly swapped out to Gorilla when things got hairy.

    3. 28

      No mention in any of the “cons” lists that some of these have abhorrent runtime costs, require tons of (dynamic heap) allocations, make it expensive to set and query for values, make it difficult to refactor, etc?

      Might as well just use Ruby.

      1. 14

        Might as well just use Ruby.

        For some people Rust is a Ruby-alike with access to low-level when you need it and nice type safety benefits.

        1. 8

          This concept is completely alien to me. In what possible way is Rust like Ruby?

          1. 11

            Great iterator methods, mixed programming paradigms, emphasis on developer happiness, and a bundler-like tooling experience come to mind.

            1. 6

              I was going to say this. It’s interesting how tooling almost matters more than the language itself. I’m in the Ruby camp, but I was pleasantly surprised at how easy it was to pick up Rust (even if it wasn’t idiomatic Rust) because of its stellar editing experience.

            2. 2

              *nod* I came from Python for the maintainability (i.e. the strong type system, lack of None, lack of exceptions, ecosystem that’s much more “fearless upgrades” than Haskell’s, and ease of producing static binaries so a system upgrade can’t force me to confront a maintenance debt at the worst possible time), not for the performance.

              …optimizing my creations for performance is pure nerd-snipe.

          2. 4

            The design perspective that feels similar to me is that both languages allow a lot of things to happen implicitly. The major difference is that in Rust, this happens at compile time, while in Ruby, it happens at run time.

      2. 10

        Yeah, bringing in a whole hash map at runtime just to be able to have some syntactic sugar in the source code isn’t a great thing.

        I’ve often wondered how to actually do this in compiled languages. The problem is that if the language introduces some sort of syntax for named arguments while allowing for default arguments, then the argument names become part of the program’s ABI, or else you have to supply “header files” or similar in order to compile your own code against them.

        The stated solution here is a bit wordy and doesn’t have a zero runtime cost, I don’t think (unless the compiler somehow can elide the options struct you have to carry around for every function call). It does seem like the best solution of the ones presented, though.
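
        For comparison, one near-zero-cost pattern in Rust (all names here are hypothetical) is an options struct with Default plus struct-update syntax, which gives named and defaulted arguments without any runtime map:

        ```rust
        // Hypothetical options struct: every field has a default, and callers
        // name only the fields they want to override.
        #[derive(Debug, Default)]
        struct ConnectOpts {
            timeout_ms: u64,
            retries: u32,
            tls: bool,
        }

        fn connect(host: &str, opts: ConnectOpts) -> String {
            format!(
                "{host} timeout_ms={} retries={} tls={}",
                opts.timeout_ms, opts.retries, opts.tls
            )
        }

        fn main() {
            // Struct-update syntax reads like named arguments; unmentioned
            // fields come from Default. The struct is plain stack data, so
            // no hash map or heap allocation is involved.
            let conn = connect("db.example", ConnectOpts { retries: 3, ..Default::default() });
            assert_eq!(conn, "db.example timeout_ms=0 retries=3 tls=false");
            println!("{conn}");
        }
        ```

        The field names do become part of the source-level API, but not of the ABI: the callee just receives a struct by value.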

        1. 4

          I have seen some libraries do a variadic approach, where you have one “tag” followed by its argument, something like:

          bool f(int first_tag, ...);  /* C requires at least one named parameter */
          #define TAG1 0x1
          #define TAG2 0x2
          #define TAG3 0x3
          f(TAG1, ptr, TAG2, num, NULL);  /* NULL terminates the tag list */

          But this is really error-prone: no type safety, no structs/floats on the stack unless you want pain, and it needs the sentinel NULL.

          1. 3

            That pattern was the bread-and-butter of Amiga programming. It was quite nice for the time.

            1. 2

              Intuition was probably the brightest spot for the Amiga programmer, and it got less pretty the further you went down the stack. (Raw Intuition was hard enough to handle that people wanted BOOPSI and MUI anyway, though.)

              1. 2

                MUI is still the best UI toolkit I’ve ever used but (a) there’s probably some rose-colored glasses being worn and (b) I never really did all that much UI programming.

                But yeah. Intuition to MUI is like Xt to Motif (but more so).

                …and then there was GadTools over there just doing its thing…

        2. 4

          I’ve often wondered how to actually do this in compiled languages.

          OCaml has labeled, and optional (default) arguments.

          argument names become part of the program’s ABI, or else you have to supply “header files” or similar in order to compile your own code against them.

          In OCaml labels are part of the function’s type (this is different from Swift, where they are not). To call a function, you need to know its type. This is true for any language, including dynamically-typed ones. In OCaml that means you need to know the labels.

          Just like in almost any modern language, in OCaml you need to import a module (which contains type definitions) to call functions from that module.

          1. 2

            One reason I turn to OCaml so frequently — whatever so many other languages think can’t be done, it does, and does it right. ;)

      3. 10

        The kinds of use-cases you’d typically consider a lot of these approaches in tend to not be tight inner loop, Thou Shalt Not Allocate type situations, so I don’t expect that runtime costs and heap allocations would be very near the top of my lists of concerns if I were considering a DSL approach like this in the first place. Think a webapp where you’d still be looking at far better performance than Ruby, or initialization of a major subsystem in a game engine, rather than microcontroller firmware.

        Your mileage and use-cases may vary.

      4. 3

        I kind of laughed when I saw the use of .filter().filter() in an example. Yup, nothing wrong there.

        1. 12

          There are no extra heap allocations when doing .filter(...).filter(...); it’s just two chained iterators. What does dynamically allocate on the heap is the final .collect().
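
          A small illustration of that laziness (the data here is made up): the two filters compose into a single lazy pass, and only collect builds a Vec:

          ```rust
          fn main() {
              let nums = vec![1, 2, 3, 4, 5, 6];

              // Two chained filters build one composed lazy iterator; no
              // intermediate Vec is created. Only `collect` allocates, once,
              // for the final Vec.
              let evens_gt_2: Vec<i32> = nums
                  .iter()
                  .filter(|&&n| n % 2 == 0)
                  .filter(|&&n| n > 2)
                  .copied()
                  .collect();

              assert_eq!(evens_gt_2, vec![4, 6]);
              println!("{:?}", evens_gt_2);
          }
          ```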

        2. 5

          Yeah, I had a note saying “I know this is not the most efficient way to filter a list, but that is not the point of the article, so people will get it”, but I ended up removing it. I guess I was wrong.

          1. 2

            I would have been completely okay with it if it had been reduced to one filter; then it might have been passable. I think crunching a ton of logic inside lambdas inside chained methods is a little hectic to read, but otherwise I get the intent. No sweat :)

      5. 2

        abhorrent runtime costs

        Compared to “optimized for bare metal”, sure, but compared to Ruby? Using a hashmap for optional values in Rust will still be orders of magnitude faster than Ruby.

    4. 12

      And, it’s not just correct—it’s also easy to review. More than half of respondents say that Rust code is incredibly easy to review.

      this is the only thing that surprised me. I find Rust code is often fairly hard to review because of things like deref coercion and the ? operator and type inference leaving a lot not shown in the code. Don’t get me wrong, I like those features when I’m writing Rust! I just find they’re a bit of a double-edged sword and sometimes make reviewing Rust less straightforward than other languages.

      1. 11

        The problem with this article is that most of the claims are subjective. I don’t care if the code is subjectively easy to review, I care about the classes of bugs that code review commonly misses (which is something that is objectively measurable but only with several years of a large code corpus).

        1. 5

          I think the theory of change underpinning this article has it exactly backward. It seems to suppose that managers are wanting to migrate to rust for safety and performance advantages but anticipate developer resistance. My sense (which also might be quite wrong) is that developers are generally eager to adopt rust for those reasons but lack a compelling business case to overcome what managers see as re-work when proposed in the context of existing systems.

        2. 2

          I don’t really think the subjectivity is a problem; I think subjective concerns are pretty important. The thing that concerns me more is that the audience is really narrow; it seems like the opinions are all ICs’. Sure, programming Rust is fun and enjoyable and satisfying, but I would be curious to hear from the people around those ICs as to whether they thought the ICs using Rust were doing good work and were good collaborators, and how it compared to the ICs using other tools. That’s not really explored in this analysis, which seems like a big miss to me.

      2. 6

        The deref thing can get a bit hairy, but the ? operator and type inference IME make code simpler to understand. The ? operator takes some convoluted control flow and expresses it in a very intuitive manner, and with well-chosen function names the types exist more as a checking mechanism for whether a type supports an operation than as an explicit thing you need to worry about.

        1. 2

          For reviews I think that’s somewhat true, but I’ve found overall that the ? operator and the error system gives a huge preference to the comfort of the writer and doesn’t create a culture of good error messages for operators.

          1. 1

            Compared to what? I can’t think of a language where this isn’t the status quo. At least in rust the return type tells you what errors are possible (unless you use Box<dyn Error>, which is basically equivalent to exceptions or Go’s error type.)

            1. 1

              although Box<dyn Error> bears many similarities to Go’s error in that both are heap allocated fat pointers that may unwrap recursively, they exist in very different ecosystems and are very different in practice. Since error is used universally in Go, but Rust’s error system is in practice built on Result<T, E>, the systems have many differences. For one thing, Go’s fmt.Errorf is in the standard library while the equivalent in Rust is the anyhow crate’s Context. While fmt.Errorf is used pretty universally in Go in both library and application code, Rust in practice tends to have a split where successive wrapping of heap-allocated error values is seldom done in libraries for good reason; such an approach, while idiomatic in Go, would be irregular in Rust. The capabilities are fairly similar in application code, but substantially different in library code and across their ecosystems, because they are fairly different languages.

          2. 1

            Oh from an operational standpoint yeah ? is just punting the problem up to your caller. You can map_err to at least transform the error into something more intelligent though. That’s at least something the callee can do to make the caller’s life easier but I suspect most people just let the error type propagate upwards unchecked.
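
            A minimal sketch of that map_err pattern (the file name and message strings are made up): a bare OS error is transformed into one that names the failing file before ? propagates it:

            ```rust
            use std::fs;

            // Hypothetical helper: read a TCP port number from a config file,
            // using map_err to attach context before `?` propagates the error.
            fn read_port(path: &str) -> Result<u16, String> {
                let text = fs::read_to_string(path)
                    .map_err(|e| format!("reading {path}: {e}"))?;
                text.trim()
                    .parse()
                    .map_err(|e| format!("parsing {path} as a port: {e}"))
            }

            fn main() {
                // A missing file now produces an error naming the file,
                // not just a bare "No such file or directory".
                let err = read_port("/nonexistent/port.conf").unwrap_err();
                assert!(err.contains("/nonexistent/port.conf"));
                println!("{err}");
            }
            ```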

        2. 1

          The ? operator takes some convoluted control flow and expresses it in a very intuitive manner

          I like the ? operator and all, but IIRC Rust goes out of its way to make the control flow “convoluted” (requiring branch arms to evaluate to the same type, if I’m not mistaken). Most other mainstream languages don’t have the “marching to the right” problem that motivated the ? operator in Rust. Of course, ? could still clean up a bit of boilerplate in those other languages, but there it’s a very minor convenience compared to a virtual necessity in Rust.

          Also, in general I think a lot of people use ? to propagate errors, which is a neat feature, but in practice we should be adding context, so (I think?) you end up needing to unpack those results explicitly anyway? Maybe the idiom is to use some result combinator to add the context?

          1. 2

            It’s relatively easy to make both arm branches evaluate to the unit type (). If you put a semicolon after all your statements in a block then the block will return (). The “having to be the same type” is a consequence of if being an expression, and coming from a more functional background (the first language that I really learned was actually Racket) it’s intuitive for me, but I understand that that’s not going to be the norm.

            I typically find Rust control flow relatively reasonable. You can screw yourself by having mutable logic in the match arms, but flow typically follows a stack (? is an exception in that it acts as an early return, which is otherwise pretty rare in Rust). You enter a block, the block returns a value.

            And yeah, adding context to errors is actually pretty easy all things considered. Result has a map_err function on it that takes a closure. If the Result holds an error value then it’ll call the closure, passing the error value as a parameter, and you return a value that substitutes in. They don’t have to be the same type because technically they’re two different Results.

            Again, I don’t think a lot of people do this. In general people are bad about thinking about error handling (or at least I am), and Rust kinda makes it easy to spot when someone didn’t write the handling for the error case, because they’ll call unwrap or have an empty or panicking match arm if they matched on it. This was probably much more rambly than I intended, but screw it, it’s stream of thought.

            1. 1

              I think Rust’s control flow makes sense for Rust; I’m not objecting to it, but I was responding to the implication that ? makes Rust a more readable language than other languages because it simplifies control flow. My point was that control flow in other languages is already quite a lot simpler because they don’t require branch arms to have matching types, and thus ? is much less valuable outside of Rust. There’s no rightward march to avoid.

          2. 1

            Most other mainstream languages don’t have the “marching to the right” problem that motivated the ? operator in Rust.

            Really? What languages? Nested error handling in python and go certainly marches to the right.

            1. 1

              In Python and Go, you can march your error handling to the right, but you have to be deliberate about it. In Rust, apart from ?, you’re essentially forced to march rightward; at least, it was much more difficult to avoid a rightward march.

          3. 1

            Maybe the idiom is to use some result combinator to add the context?

            This is exactly the idiom with snafu. It can be done too with the far more widely used thiserror but is much less convenient. It also is supported directly in thiserror’s sibling library anyhow (thiserror is intended for use in libraries and anyhow for use in programs), albeit stringly-typed, whereas snafu allows doing it in strongly-typed as well as stringly-typed ways.

            1. 1

              It can be done too with the far more widely used thiserror but is much less convenient.

              Is it really much less convenient though? I’m not familiar with snafu but in anyhow you would do some_result.context("blah")?, whereas with thiserror you’d just use good ol’ map_err: some_result.map_err(BlahError)?.

              Only inconvenience being that you have to explicitly declare your error type, but that’s a win for documentation IMHO.

              #[derive(Debug, thiserror::Error)]
              #[error("blah {0}")]
              pub struct BlahError(UnderlyingError);
              1. 2

                context and map_err are extremely different. context creates a new value that includes the old value, whereas map_err may or may not do so. context could be thought of as treating every error as a node in a linked list of values; using context adds an item to the list, whereas map_err takes a single value of one type and converts it to a single value of another type. Although they have similar-looking APIs, the values produced by the APIs have enormously different affordances.

              2. 1

                Is it really much less convenient though?

                Now that I look at actual code, no, the difference is not as large as I had thought. The difference comes if one wants to add context to errors that is not statically known, i.e., if one wants to have fields in one’s BlahError other than the underlying error. Suppose one has a number of &strs or &Paths around one wants to capture. They need to be converted into Strings and PathBufs or similar to be stored in the BlahError. With thiserror one might have, e.g., some_result.map_err(|error| BlahError { error, host: host.into(), user: user.into(), query: query.clone(), db_dir: db_dir.into() }), vs, with snafu, some_result.context(BlahError { host, user, query, db_dir }). As I said, it’s not as large a difference as I had misremembered, although it would scale linearly with the field count.

      3. 6

        If you find Rust difficult to review, what are you comparing it to? I find it easier than C++, even though I end up writing more C++ than Rust these days.

        1. 3

          Reviewing Go code is a lot easier IMHO, but again, it’s a sword that cuts both ways: Rust is more powerful and makes it easier on the writer because you have more powerful abstractions and a more robust type system, so you can accomplish things using less boilerplate. Boilerplate makes it more tedious to write the code, but boilerplate code is easy to review (although admittedly quite boring).

          1. 9

            In Go I’m shocked that methods can be called on null pointers, and this is sometimes considered a valid thing to do. That’s way worse than Deref IMHO.

            When errors are returned, it’s up to each function to decide whether the ok value is also usable or not. That’s messier than the Result enum.

            Concurrent primitives (like data parallelism) require boilerplate that is not easy to review. It may look simple, but it can have subtle bugs. Rust will tell you when you need a lock or atomic.

            1. 4

              In Go I’m shocked that methods can be called on null pointers, and this is sometimes considered a valid thing to do.

              I’m always a bit surprised that (non-virtual) methods in C++ can’t, because this is a pointer (not a reference). For a long time, the ->dump() methods in LLVM worked on a null this and contained something like if (this == nullptr) { print "<null>"; return; }. Then compilers started optimising on the assumption that this was not null, and it all started to explode because they’d elide that entire branch.

              You either need a type system that includes a concept of nullability and makes it impossible to call a method on a null pointer, or you need some handling of null as a receiver (because you can’t statically guarantee that it won’t happen). Objective-C returns 0 from all methods on a null object. Smalltalk is a bit more fun and has a singleton Null object that becomes the receiver for any method called on null and can do things like throw an exception or pop up a debugger. Go just assumes that the receiver is a nullable pointer. The only deeply wrong choice that I can see here is the C++ one of making it easy to call methods on null objects but making it undefined behaviour, so that you cannot, within the method, program defensively for that case.

            2. 2

              Eh, I can get my head around method calls on null pointers. I have a much more difficult time with Deref. Null pointers are less safe to be sure, but that’s different than “can I grok the code” IMHO.

            3. 1

              calling a method on a null pointer is not particularly common, although it is allowed. An example of where I think this is somewhat normal is if you consider the json.Marshaler interface: a value has a MarshalJSON method that, when called, produces the json representation of that value. Since you can call a method on a null pointer, this allows calling the method on the null pointer to produce the json value null, instead of requiring that the null check be at every call site. Once you’ve allowed null to exist, I’m not really convinced that calling a method on a null pointer is all that strange; it’s the existence of null itself that produces this sort of condition.

              In Rust we find it normal to call a method on an Option<T> that is None. Of course, option types and nulls are different things, but in both cases the situation is that the user is calling a method on a non-value.
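
              For instance, a tiny sketch of the Rust side of that comparison: methods on Option handle the None case inside the method, so the call site needs no explicit check:

              ```rust
              fn main() {
                  let present: Option<i32> = Some(5);
                  let absent: Option<i32> = None;

                  // Methods on Option handle the None case themselves, so the
                  // call site needs no explicit check for the missing value.
                  assert_eq!(present.map(|n| n * 2), Some(10));
                  assert_eq!(absent.map(|n| n * 2), None);
                  assert_eq!(absent.unwrap_or(0), 0);
                  println!("ok");
              }
              ```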

            4. 1

              Rust won’t tell you whether you need a lock or atomic instructions. It will only error out on data that cannot be safely shared (Sync) or sent (Send) with/to other threads. It’s up to you on how to resolve that.
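
              Right; one common way to resolve those Send/Sync errors is Arc<Mutex<_>> (a sketch; a non-thread-safe type like Rc in the same position would be rejected at compile time):

              ```rust
              use std::sync::{Arc, Mutex};
              use std::thread;

              fn main() {
                  // Rc<i32> is not Send, so the compiler would reject moving
                  // it into a thread; Arc<Mutex<_>> satisfies the Send + Sync
                  // bounds that thread::spawn requires.
                  let counter = Arc::new(Mutex::new(0));

                  let handles: Vec<_> = (0..4)
                      .map(|_| {
                          let counter = Arc::clone(&counter);
                          thread::spawn(move || {
                              *counter.lock().unwrap() += 1;
                          })
                      })
                      .collect();

                  for h in handles {
                      h.join().unwrap();
                  }

                  assert_eq!(*counter.lock().unwrap(), 4);
                  println!("{}", *counter.lock().unwrap());
              }
              ```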

          2. 4

            Strong disagree. Boilerplate is noise that people will tune out which means if there’s an issue with it it probably won’t get caught on review. There’s a tradeoff between having more powerful abstractions that are harder to parse as a reviewer vs getting buried in boilerplate and just filtering the code out.

            1. 1

              This has never been my experience. I can scan the visual structure that boilerplate affords a lot more easily than I can grok some dense iterator combinator chain, and that’s after more than a decade of programming in languages like JS, Python, and Rust (all of which use iterator combinators pretty liberally). Maybe I’m an outlier, but there’s definitely a threshold for me beyond which density inhibits readability.

              1. 1

                What if that visual structure is hiding something like an early return? I do agree that straightline code tends to be a lot more readable than complicated iterator chains but straightline code is also less structured. Iterator chains can’t early return or fail implicitly because the code the user writes is in a lambda that can’t affect the function’s control flow short of mutating values. The worst case is a panic! and if you never try to catch panics (please do not try to catch panics) it’s the same as calling exit(0) in C except we call drop on our values. (NOTE: This can actually screw you if you’re writing unsafe code because you might have broken some invariant that unsafe code in your Drop implementation relies on. But hey, that’s unsafe so typically not a concern)

                1. 1

                  What if that visual structure is hiding something like an early return?

                  For me at least, visual structure makes it much easier to spot the early return compared with a single ? surrounded by dozens of other characters.

                  Iterator chains can’t early return or fail implicitly because the code the user writes is in a lambda that can’t affect the function’s control flow short of mutating values.

                  I mean, they absolutely can, whether it’s an errant ? or an unexpected short-circuit (short circuiting isn’t exactly an early return; more like an early break but no less incorrect). I think iterator chains are more readable than for loops in the very simplest cases, but those cases are already so simple in absolute terms that the difference is negligible. And the more complex the case, the more readability favors for loops IMHO.

          3. 4

            Go isn’t really a systems programming language. I wouldn’t expect to be using Rust in the same places I’d be using Go.

            1. 3

              yeah I mean I agree with that, but that’s not how the inventors of Go thought about it

              1. 2

                Pretty sure their notion of “systems” was more like “distributed systems”. The context of Pike’s “systems programming language” remark was clearly about services running at Google, but everyone still latches onto that one poor word choice even a decade later. 🙃

                1. 1

                  Hmmm, yeah that sounds true now that you mention it. That’s certainly what it’s been used for.

            2. 2

              The overwhelming majority of Rust proponents I’ve talked to tell me that Rust is suitable anywhere that Go is suitable (including a bunch of people citing this survey as evidence that Rust is no more difficult/cumbersome than Go).

            3. 1

              I mean… sure but I’m just comparing the ease of code review between the two languages, not comparing which language is a better fit for which project, that’s a separate topic. The article doesn’t give any more context beyond “it’s easy to review”; I found that to be a surprising conclusion.

      4. 4

        These features are safe, and guaranteed to type-check. Even though you don’t see their code inline, you can rest assured there is no UB hiding in there.

        There are also clippy lints that warn about deref edge cases or needless references.

        1. 4

          Yeah, I massively prefer ? to reminding people that *ing a std::optional that doesn’t have a value is UB.
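          For contrast, a minimal Rust sketch (hypothetical `first_char_upper` function): the same “maybe absent” access is a compile-checked early return, whereas dereferencing an empty `std::optional` in C++ compiles fine and is undefined behaviour at runtime:

          ```rust
          fn first_char_upper(s: &str) -> Option<char> {
              // `?` returns None from the function when there is no first
              // char; there is no way to "forget the check" and hit UB.
              let c = s.chars().next()?;
              Some(c.to_ascii_uppercase())
          }
          ```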

      5. 3

        The thing that makes these things less problematic is that Rust tries to make sure that implicit behaviour has as few impacts on program semantics as possible. For example, passing a &my_vec into a function that accepts a slice is convenient but doesn’t fundamentally change the behaviour of the program.
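        A minimal sketch of that point (hypothetical `total`/`demo` names): the coercion changes how the argument is viewed, not what the program does:

        ```rust
        // Accepts any contiguous view of i32s.
        fn total(xs: &[i32]) -> i32 {
            xs.iter().sum()
        }

        fn demo() -> (i32, i32) {
            let my_vec = vec![1, 2, 3];
            // `&my_vec` is a `&Vec<i32>`, which coerces implicitly to
            // `&[i32]`; it behaves identically to the explicit
            // `my_vec.as_slice()` call.
            (total(&my_vec), total(my_vec.as_slice()))
        }
        ```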

    5. 9

      The argument presented here is essentially “Stallman wasn’t persuasive and that’s everyone else’s fault”.

    6. 26

      I’m sorry… This is gonna sound like I am teasing you or that I’m mocking you, but actually I’m really being frank, and I’m gonna give you my advice. It’s just not gonna be great.

      Or you could say “check out the previous commit and force push it”, answering their question. I don’t like this blog post. It seems to be suggesting all our tooling should be geared towards what “humans think about” instead of what engineers need to do. Humans think “I want to build a table”, not “I have to use tools to transform wood into components that assemble into a table”, but they have to do the latter to achieve the end goal. It’s IKEA vs real work.

      1. 10

        The tools we build need to be geared towards what the users think about.

        Engineers should be familiar with the mindsets their tools force upon them. That’s the true meaning perhaps of Englebart’s violin: we adapt to our tools instead of the other way around.

        1. 3

          and when someone else already pulled that commit that you just removed…

          1. 10

            Why not simply use the command that was designed for this? Do a git revert, and then a git push. Works for the whole team.

            This is a nice example of the issue outlined in the post. Only there really is no way to dumb git down so far that you can simply forget its distributed nature. The only wisdom needed here is that you always need to add to the chain, never remove. This principle, and the consequences of not following it, should really be part of your knowledge if you want to be a software developer.

            1. 2

              I think this depends on how you read the scenario in the article - I read it as “I just pushed something that shouldn’t exist in the git history”. I’ve been in situations where someone’s pushed a hard to rotate credential, and so you rewrite history to get rid of it to reduce damage while you work to rotate it.

              1. 4

                a hard to rotate credential

                Isn’t this the real problem? Rather than blaming git for being a distributed version control system, how about solving the credential rotation issue?

          2. 1

            They can still use it to create a new commit if they find a need for the content. It is really only a problem if they want to communicate about it to someone who has not pulled it. IME that is extremely rare because the way to communicate about a commit is to communicate the actual commit.

      2. 2

        It feels a bit like when the author is mentoring less experienced devs, he assumes they can’t grasp the more complex aspects of the tool because it doesn’t fully click the first time.

        Over the past three decades as I’ve learned from all sorts of folks, and done my best to pass on my knowledge on all sorts of things from operating systems to networks and all sort of software development tools, I’ve often had to ask for help on the same things multiple times, esp if I don’t use a particular mechanism often.

        For the past few years, I’ve worked as a build engineer and part of my team’s job has been to help come up with a workflow for a large team of engineers that makes sense. Sometimes we intervene when there are issues with complex merges, or a combination of changes that work on their own, but not together.

        Most people also can sort things out on their own given some time. You don’t have to back out a change and do a force push - I would, because it makes the history cleaner, but there’s absolutely nothing wrong with a revert commit, which is extremely straightforward to create.

    7. 55

      It’s weird to me that people are willing to trust a saas provider with the safe handling of all of their sensitive passwords but then won’t trust them to safely handle measuring how many times you click a button in their app.

      1. 12

        Telemetry has become a curse word. The opposition is totally illogical.

        1. 17

          It’s really painful to handle as a developer. I need to use specific phrasing and be very explicit about “stacktraces get sent to me, because getting you to reproduce the issue, figure out how to use event log, send me the right parts would take days”. Yes it’s telemetry and no, I neither care about nor receive your data.

          But the word telemetry is toxic now - as usual, ad companies are why we can’t have nice things.

          1. 2

            That sounds like a good outcome for everyone - your users are hopefully grateful for the clarity? If a developer can’t come up with a concise and uncontroversial description without invoking the T-word, well, maybe they’re doing things that some users would rather they wouldn’t.

            1. 9

              It’s not great. It basically says: whatever word is chosen by adtech for a dark pattern will be killed for general use.

              What if they start talking about “comment” next? Would you “post a message replying to another message” that it’s a good outcome for clarity?

              This may sound like a slippery slope, but we’ve already lost telemetry and cookie, and are close with user experience and personalisation.

            2. 6

              Not really. If you say “now collects error reporting” users still lose their minds.

            3. 2

              Telemetry is broad, but so are ‘stack traces’. If they are just call stack addresses, then that’s unlikely to leak very much personal data, but a lot of more advanced post-mortem stack traces capture arguments and these can contain a lot of personal information. Error reporting, which @insanitybit mentions, may include complete core dumps, which can contain a load of private data.

              Collecting telemetry that is both privacy preserving for the users and useful for the developers is very hard.

          2. 2

            Does having a “crash report” style dialog work better in terms of report outcomes? I mean the sort of thing where you don’t collect “telemetry” in the usual sense, but then if there’s a (caught) crash you pop up a dialog which allows the user to click a “Submit crash report” button, and is clear what is included in the report.

            Personally, I’m much more inclined to use this sort of feature to submit actual feedback when something goes wrong, rather than allowing generic info on what I’m using in the application all the time.

            1. 6

              “The app crashed then some weird window popped up so I closed it.” If you’re providing software to non-tech people, that kind of request is meaningless to them. When that dialog pops up, is there any reason a user of an app provided by their employer should not report a crash? I don’t believe there is.

              is clear what is included in the report.

              This applies to developers only. A generic user does not understand what a stacktrace is.

        2. 3

          The fact that someone doesn’t want to do something, in this case share x information, is reason enough to respect that preference. The burden isn’t on users to justify why they don’t want to share information.

          Beyond that I don’t think it’s illogical. We live in a world in which, best case scenario, our privacy is perpetually invaded against our will by the technology we rely on. If that means users don’t want to share any additional information to preserve what little privacy they have left that’s completely rational.

          Not to mention that a lot of telemetry isn’t honest. It’s often collected without user consent, or even knowledge, which makes it shady. Often there’s no opt out. And there is a fair amount of academic research on techniques to de-anonymize the supposedly “anonymous” data sets telemetry collects.

          So “telemetry is good, opposing it is illogical” is incorrect and misses all of the nuance related to this topic. Really it’s an attitude that’s disrespectful to users.

          1. 3

            There’s a related problem, which is that end users are not good at understanding the consequences of aggregation. If I show you all of the data collected about you by one application, it’s very hard for you to understand what someone can learn about you by correlating that against other data about you and against other data about users of that app. If you want to try to anonymise it then you need users to understand differential privacy (and, while I understand the concepts, I don’t want to actually do the maths every time I agree to a service).
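            For reference, the arithmetic behind the simplest differential-privacy mechanism is small even if the reasoning isn’t. A toy sketch, with hypothetical function names and a deterministic `uniform` input standing in for a real RNG: noise drawn from a Laplace distribution with scale `sensitivity / epsilon` is added to a true count, so any one user’s presence only slightly shifts the output distribution:

            ```rust
            // Inverse-CDF sample of a Laplace(0, scale) distribution;
            // `uniform` must lie in the open interval (-0.5, 0.5).
            fn laplace_noise(scale: f64, uniform: f64) -> f64 {
                -scale * uniform.signum() * (1.0 - 2.0 * uniform.abs()).ln()
            }

            // Count query with sensitivity 1: adding or removing one user
            // changes the true count by at most 1, so noise with scale
            // 1/epsilon suffices.
            fn noisy_count(true_count: u64, epsilon: f64, uniform: f64) -> f64 {
                let sensitivity = 1.0;
                true_count as f64 + laplace_noise(sensitivity / epsilon, uniform)
            }
            ```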

            The GDPR with a clear privacy policy can improve this. If you have consent to collect data for some explicit purpose then you cannot legally use it for any other purpose without additional consent. A telemetry service that showed me precisely what it collected and had a GDPR-backed guarantee that it would be used only for answering a specific set of questions about the program would be fine. You might also be able to make this dynamic. If you provided users with the set of questions that developers wanted to answer and let them opt into having their data used to compute the result, you might get some very helpful results.

          2. 2

            The fact that someone doesn’t want to do something, in this case share x information, is reason enough to respect that preference.

            No it isn’t.

            The burden isn’t on users to justify why they don’t want to share information.

            Yes it is. It’s everyone’s burden to justify their positions.

            We live in a world in which, best case scenario, our privacy is perpetually invaded against our will by the technology we rely on.

            That is obviously not the best case scenario. One can obviously write software that does not perpetually invade your privacy against your will. I suspect the vast majority of software written is in fact abiding by that.

            Not to mention

            All of these are implementation failures. Not all telemetry is good, not all telemetry is bad. Some may be bad, most might be, in fact.

            Users are welcome to opt out of whatever they want by not participating and I always believe they should have that option, but I’m not going to pretend that their decision is magically rational.

            Really it’s an attitude that’s disrespectful to users.

            It’s disrespectful to everyone, actually. Developers, end users, etc., are really terribly informed about technology and make pretty illogical decisions all the time. I don’t see why it’s taboo to point out that people are irrational.

    8. 8

      I’ve been writing Rust full-time with a small team for over a year now.

      It sounds like you have been building an application, rather than a library with a semver-guarded API. This explains the differences:

      • in a library, taking a proc macro dependency significantly restructures the compilation graph for your consumers, and you, as a library, don’t want to force that. In an application, you control the compilation graph, and adding a proc macro might not be problematic.
      • libraries should work hard on protecting their own abstractions. From only matters when you raise an error; the user of the library has no business raising your errors. Adding From publicly exposes the internal details of how you use ?. In an application, you don’t need to protect abstractions as hard, as you can just refactor the code when issues are discovered

      (Personally, I’m not a fan of using From even in applications, as it makes it harder to grep for the origins of errors)
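      To make the second bullet above concrete, here’s a minimal sketch (hypothetical `LibError`/`read_config` names) of how a From impl, added purely so that `?` compiles, becomes public API:

      ```rust
      #[derive(Debug)]
      pub enum LibError {
          Io(std::io::Error),
      }

      // This impl exists so that `?` can convert io errors automatically --
      // but because it's public, downstream crates can now build a LibError
      // from any io::Error themselves, and removing it later is a breaking
      // change. The internal detail of "we do IO" has leaked into the API.
      impl From<std::io::Error> for LibError {
          fn from(e: std::io::Error) -> Self {
              LibError::Io(e)
          }
      }

      pub fn read_config(path: &str) -> Result<String, LibError> {
          // `?` uses the From impl above to convert the error type.
          Ok(std::fs::read_to_string(path)?)
      }
      ```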

      1. 5

        I’ve been building both applications and libraries. And I 100% agree with both of your points wrt. libraries.

        But, I don’t think the application / library binary is … well … binary. It’s a spectrum.

        In my experience, any application of significant complexity will have internal library-like things. In that kind of situation, protecting their abstractions is vital but a slightly more complex compilation graph less so. (👀 tokio and friends)

        Conversely, a library might exist to make hard things easier. One common case is an abstraction that intentionally exposes its internals to eliminate complexity for the callers.

        (wrt. From in applications and grepping for the origins of errors, I blame Rust’s error handling ecosystem’s anemic support for backtraces and locations. We shouldn’t have to grep! I touch on that at the end of the gist. I’ve daydreamed about thiserror supporting a location field that does something similar to this hack.)

        1. 1

          (Personally, I’d am not a fan of using From even in applications, as it makes it harder to grep for origins of errors)

          Wow. I thought I was the only one. I’m very skeptical of implementing From for error types. At least as a general pattern.

          The main reason I’m skeptical of it is because (obviously) From works based on the type of the value, but not on the context. As a hypothetical example, your code might encounter an std::io::Error during many different operations; some may be a fatal error that calls for a panic, some may be a recoverable/retryable operation, and some may be a business logic error that should be signaled to the caller via a domain-specific error type. When you implement From for a bunch of third-party error types and 99% of your “error handling” is just adding a ? at the end of every function call, it’s really easy to forget that some errors need to actually be handled or inspected.
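            A sketch of the context-preserving alternative (hypothetical `AppError`/`load` names): with no blanket From impl, each call site must say which domain error this particular io::Error means:

            ```rust
            #[derive(Debug)]
            enum AppError {
                ConfigMissing(std::io::Error),
                DataCorrupt(std::io::Error),
            }

            fn load(cfg_path: &str, data_path: &str) -> Result<(String, String), AppError> {
                // Same underlying error type, two different meanings --
                // `map_err` forces the distinction that a blanket From
                // impl would erase.
                let cfg = std::fs::read_to_string(cfg_path).map_err(AppError::ConfigMissing)?;
                let data = std::fs::read_to_string(data_path).map_err(AppError::DataCorrupt)?;
                Ok((cfg, data))
            }
            ```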

      2. 2

        in a library, taking a proc macro dependency significantly restructures compilation graph for your consumer, and you, as a library, don’t want to force that.

        hmmm I’m not sure what the issue is here. Dependencies have dependencies, I don’t think people find that surprising. For me it’s more of a question of how many dependencies and how does that set of dependencies change my tooling?

        A proc macro that only depends on the built-in proc_macro crate is pretty different from a proc macro that depends on syn, quote, and proc_macro2.

        1. 3

          proc macro deps are different, for two reasons:

          • they are not pipelined. The proc macro crate must be compiled to a .so before dependent crates can even start compiling (normally, cargo kicks off downstream compilation as soon as .rmeta is ready)
          • half the world depends on syn, so, if you have many cores, it normally is the case that one core compiles syn while the others are twiddling their thumbs.

          And yes, this mostly doesn’t apply if the proc macro itself is very small and doesn’t depend on parsing Rust code/syn, but thiserror uses syn.

          1. 1

            What kind of macros are compile-time efficient, if any?

            1. 3

              In general, macros that avoid parsing Rust and instead use their own DSL. See eg this bench:

          2. 1

            ah. I personally don’t like syn so I get the gripe. I think “avoid proc macros in libraries” is casting a wide net, but “you probably don’t need syn to write that proc macro” is something I very much agree with.

    9. 2

      I use Linux and Windows and MacOS regularly. Every one of them has bash. Even with every system having the same shell, making a script portable is difficult, because the shell is reaching out to many different things to do its work, and they’re distributed and versioned differently in different environments. The difference between [ and [[ ranks far, far below “every system has a different tar”. tar is not a posix utility, but admin is?

      I often find it easier to write Python using only the standard library, because with a Python install, you have just one version number to worry about. I know my oldest Python is 3.8, so that’s what I target and it’s much easier to make portable than trying to figure out how to do it in a shell script.

      1. 1

        Yes, SCCS is the version control system defined by POSIX, which is the reason why admin is included in POSIX. I agree that unstandardised tar is a problem, but the POSIX answer is pax.

        1. 5

          My whole argument is that the standard requires things so antiquated that they are irrelevant, so I’m not sure why telling me that it’s in the standard is supposed to convince me of anything, anything at all. I know that it’s in the standard. That’s my whole point. The fact that it’s in the standard is what makes the standard bad! Meanwhile tar, an altogether ubiquitous utility that everyone knows, is not in the standard. It is long past time to either evolve this standard or let it go.

          1. 2

            The standard does evolve, but, being a standard, obviously not much. As far as I know, SCCS is in POSIX just so that there is a standardised VCS.

      2. 1

        macOS has bash, but the installed bash is the last GPLv2 version, which is around 15 years old. If you’re using bashisms, there’s a good chance that they won’t work there.

        1. 1

          my entire point is that portability extends beyond the shell, that the posix utilities standard is old, and that every shell script portability concern I’ve ever had has not been within the shell itself

          1. 1

            I’ve recently had to fix a shell script that assumed GNU extensions for sed and awk, so I have some sympathy there, but my experience is that most *NIX platforms support a fairly large overlapping set of extensions to the core UNIX utilities, whereas bash extensions are rarely supported anywhere other than bash.

            1. 2

              the very first thing I said is that I use Windows regularly. This is kinda my entire point: that portable is not actually meaningful when the people saying it are saying “well, it’s portable for me”.

      3. 1

        one of the many reasons i don’t use bash is because there are so many subtle differences between bash implementations, as you pointed out. bash on macos is not bash on linux, and the coreutils are all different.

    10. 14

      One thing that always bothered me about “programmer time is expensive; processors are cheap” is that it’s used to justify slow client software, when nobody is buying cheap processors for those who have to use it.

      (And even if they did, it would still be irresponsible and unsustainable, in my opinion.)

      1. 14

        My issue is that all these threads turn into “we should learn from game devs, they care about performance”. But game developers pretty obviously are not doing any better, on average, than the people they’re sneering at – the industry is infamous for forcing customers onto a hardware upgrade treadmill and for ludicrous “minimum” hardware requirements on new titles. And they don’t even get the optimization of developer time out of it, because the industry is also infamous for extended “crunch” periods.

        1. 12

          At the same time, console game dev is one of the few places where so much time and effort is spent on optimizing consumer facing software, often with great results.

          This is in contrast to my experience with most (if not all?) of the websites where clicking something often results in hundreds if not thousands of milliseconds of delay.

          1. 3

            Console is on an upgrade treadmill, same as other areas of game dev.

            And the kinds of high-impact high-rated big-name games you’re thinking of are, to be frank, a small fraction of the games industry as a whole. And the industry as a whole does not have a great track record on performance. As I mentioned last time around, the single best-selling video game of all time (Minecraft) infamously has a large, dedicated community maintaining third-party addons and mods to make its performance more bearable on average hardware.

            1. 1

              I don’t know what you are talking about with that upgrade treadmill. The fact that every 8 or so years you can buy better hardware doesn’t change the fact that, during a given generation, you can be pretty sure the games you buy will work well on current-gen hardware. This is not something you can expect from typical user-facing software.

              Let’s take as an example one of the biggest and richest companies - Google. Can I expect that Maps or Sheets will work at 30fps (not to mention 60fps) without hiccups on hardware that is ~6 years old? Of course not (and to a big extent that software is way simpler than realtime rendering of complex 3D scenes).

              This is not the case with console games, especially with first party studios. In those cases you can be almost 100% sure that current gen hardware will run the game flawlessly.

              This is what people are talking about when they hold up game dev as an example of a subset of the industry that cares about perf. No one is claiming that every game dev cares about perf. You seem to be strawmanning, and I think you are well aware of this, so this is my last message in this topic.

              1. 3

                This is not something you can expect from typical user facing software.

                Apple pretty commonly supports its hardware for as long or longer; macOS Monterey is still receiving patches and supports hardware Apple manufactured literally 10 years ago. The iPhone/iPad ecosystem similarly is known for long support cycles.

                That doesn’t mean every new app for those platforms is designed to stay within the capabilities of ten-year-old hardware, of course, but as I keep pointing out games go through a hardware upgrade treadmill too. It’s slower in the console world but the treadmill is still there and, if anything, is harsher – an old computer may run new software with reduced performance, but when the console world moves on they often just don’t release a version for the prior generation at all (and plenty of top titles are exclusives with deals locking them to exactly a particular console).

                This is what people are talking about when they put game dev as an example of subset of industry that cares about perf.

                Many people in this thread and the last one have been treating performance as a moral crusade. See, for example, other comments in this thread like “user time is sacred”. If console developers could require you to get a RAM or GPU or SSD add-on to run their games the way PC game developers can, they absolutely would do that without a second thought. We know this because PC game developers already do that. There’s no moral issue for them – the console devs aren’t carefully thinking about how every CPU cycle is a moral affront that steals time from the user, they’re thinking about it as a practical thing imposed on them by the hardware they’re targeting.

                No one is claiming that every game dev cares about perf.

                Well, here’s an example from the last thread where I brought up Minecraft and some other examples in response to someone who was claiming that:

                in a game, if you can’t keep to your frame budget (say, make 60 FPS on a modern PC, where nano/microseconds can add up) then that can lead to poor reviews, and significant loss of potential revenue

                This person wanted to generalize game dev not just to “cares about performance” but must care about it. Yet that’s just completely wrong. So I don’t know how you can reasonably claim I’m “strawmanning”.

        2. 5

          even in games, most people aren’t focused on performance; that’s just one topic. Lots of games also don’t compete on performance metrics. Some games’ value proposition is very cool graphics, so those games are actually competing on performance. It’s one of the few areas where customers actually show up to pay for the more performant thing. That stuff gets noticed by the lobsters crowd, but a lot of other stuff like dialogue systems doesn’t get noticed here as much, even though game devs love that stuff. And lots of people outside of games do care about performance. Performance tends to get talked about more when it’s easy to tie performance to revenue, and games is a domain where it’s often clear how performance relates to revenue, because in games, performance is often the product.

        3. 3

          Yup lol. When I hear ‘gamers are focused on performance’ I can’t help but think ‘but are they focused on user experience?’:

      2. 11

        Programmer time is expensive.
        CPU time is cheap.
        User time is sacred.

        Many, possibly most, programs have many more users than they do developers. Saving only one second a day per user can amount to a huge benefit, but we often don’t pay attention because we can’t multiply. Likewise, while one CPU is cheap, the number of machines that need to be updated because some popular program has chosen Electron definitely is not.
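        The multiplication really is worth doing; a sketch with purely hypothetical numbers (one second saved per user per day, a million daily users, 250 working days a year):

        ```rust
        // Converts per-user daily savings into aggregate person-hours.
        fn person_hours_saved(users: u64, seconds_per_day: u64, days_per_year: u64) -> u64 {
            users * seconds_per_day * days_per_year / 3600
        }
        ```

        With those made-up inputs this comes to roughly 69,000 person-hours a year - on the order of 35 full-time person-years - from a single saved second.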

        1. 4

          Affirming this. My current client is willing to burn years of developer time on shaving literally one second off of an employee process because one second of employee time scaled over their nation-wide business works out to hundreds of millions in additional profit.

        2. 3

          Saving only one second a day per user can amount to a huge benefit, but we often don’t pay attention because we can’t multiply

          Or we can multiply, but we also realize that in many fields of programming we are rarely presented with such clean “this saves one second per day for every user of the software, with no other consequences or side effects” decisions to make.

          Remember: programmers are a finite resource. Assigning a programmer or team of programmers to do Task A means there are fewer available to be assigned to Tasks B, C, D, E, etc. Which is why, despite everyone hating it, we spend so much time in meetings where we try to prioritize different things that need to be done and estimate how long it will take to do them. And that is just the beginning of a complex web of tradeoffs involved in designing and building and shipping software.

          If you want to have, say, a rule that no new feature can be added until every existing feature has been optimized to a performance level than which no greater can be achieved, then you are of course welcome to run your teams that way. I don’t think you’re going to get very far doing it, though, because of your immensely long dev time for even small things.

          Which means that sooner or later you will have to decide on a level of performance that is less than the theoretical ideal maximum, but still acceptable, and aim for that instead.

          And we both know that you and everyone else who claims to “care about performance” already did that. So really the debate is not whether people care about “performance” or value the user’s time or whatever. It’s a debate about where, on a complex multi-axis spectrum of tradeoffs, you’ve decided to settle down and be content. But that doesn’t sound as pure and noble and as satisfyingly moralizing as making absolute proclamations that you care about these things and everyone else either doesn’t or is incompetent or both.

          But we both know the truth, and no amount of absolutist moralizing changes it.

          1. 2

            There’s a reason for grandstanding: the incentives of the programmer (or the programmer’s company) are often misaligned with the interest of the end user. Especially when the end user is locked in this particular software, there’s network effects, or switching costs… or just how performance looks before you’ve even tried the software. So yeah, the dev gonna prioritise. The question is for whom?

            Or we can multiply, but we also realize that in many fields of programming we are rarely presented with such clean “this saves one second per day for every user of the software, with no other consequences or side effects” decisions to make.

            Correct. The actual savings tend to be probabilistic (they affect fewer users), much larger (up to a freeze or crash), and fixing them is never without consequences… though if those consequences are too scary, that would indicate a low-quality code base that should probably be refactored first thing in the morning, because at that point your whole development cycle has seriously slowed down.

        3. 2

          This advice is applicable to any widely-deployed software.

          It does not apply to one-off scripts, except it actually does. You merely need to place the writer of the one-off script in the “user” slot.

        4. 2

          Biiiingo. There’s probably a blogpost in there somewhere, but I really just want to say thank you for highlighting what I’ve attempted (evidently poorly) to articulate elsewhere.

      3. 6

        It was also true more or less between 1990 and 2010, when powerful desktop hardware got 50% faster every year. People were just getting good at working with the implications of that when computers stopped getting trivially faster and battery-powered devices became way more important. I certainly wouldn’t call it true anymore; there’s plenty of people out there paying both developer salaries and AWS bills who will tell you how expensive processors are.

    11. 86

      nearly every professional programmer works in some sort of specialized domain. Most programmers think that the domain they work in is representative of programming in its entirety, and they’re usually wrong. An example of a specialized domain is “stateless http servers”. If most of your work is in stateless http servers, most of your opinions about programming are actually opinions about programming stateless http servers, which is a smaller and more specific topic. If most of your work is in game engine programming, most of your opinions about programming are actually opinions about game engine programming, which is a smaller and more specific topic.

      Nearly every topic involving Casey Muratori and Jonathan Blow boils down to this:

      • Casey and Jon Blow work in the specialized domain of games. They’re both good at these things.
      • They think that what is true of their domain is universal.
      • Techniques that make sense for programming http servers, but do not make sense for game engines or game rendering, they declare “wrong”.
      • They present this to a community of people who largely program stateless http servers, who proceed to lose their minds. Both sides are, in a sense, correct, but both sides see the other side as being wrong, because both sides believe they’re talking about the same topic when they are actually talking about different topics.
      • Repeat ad infinitum.

      That’s not to defend Clean Code specifically though, that book is pretty bad.

      1. 24

        There’s a good saying by Paul Buchheit that captures this:

        Limited life experience + generalization = Advice


      2. 8

        Well said. I’m often baffled by the choices made in new versions of C++, but that’s because I have no idea how people use the language in embedded, real-time, or low-latency systems.

        I do think, though, they proselytize principles that cut across domains. Primarily: actually understanding what a computer is doing, actually caring about performance, actually molding your tool set to your domain. This isn’t all bad.

        1. 1

          Well said. I’m often baffled by the choices made in new versions of C++, but that’s because I have no idea how people use the language in embedded, real-time, or low-latency systems.

          How do you mean? I think the only place I routinely see C++ performance being worse than C is iostreams, which to me is more a byproduct of the era than C++ itself.

          1. 3

            I think what GP was saying was not “these features are slow” but instead “The design of the API has decisions that I find questionable but assume makes sense in other contexts”

      3. 7

        I think you could characterize this as a dynamic we have observed, although I think you’re selling a lot of folks in the general programmer community short by generalizing it to “nearly every” or “most” and by saying they themselves over-generalize from their limited frame. Maybe, maybe not. It’s a vast community. As a stateless http server programmer by trade but a “person who likes to understand how things work” by disposition, I always get a lot of value out of hearing from the wisdom of experts in adjacent domains. It doesn’t always have to be relevant to my job for me to get that value from it. It’s not as if I come back to my team hollering that we have to implement a HTTP handler in assembly, but it does help form mental models that from time to time break through the layers of abstraction at which my code sits, increasing opportunities to pattern-match and make improvements that otherwise would have been hard for me to conceptualize.

        Relatedly, the creator of Zig recently drew on some of the same performance-oriented learning from the games community to restructure his language’s compiler and dramatically speed it up. Seems like he applied good judgment to determine these seemingly disparate areas could benefit each other.

        1. 11

          general programmer community

          I think perhaps the really interesting hot take here is that such a community doesn’t exist in any meaningful sense.

          1. 3

            I should have said the set of all programmers

            1. 6

              Sure, sure, but I think that you touched on a really interesting point, right? I think we could make the credible argument that we don’t have “general” programmers and instead have a (large) cluster of web programmers, an even larger cluster of folks who use SQL and Excel, another cluster of embedded programmers who mostly do C and assembly, another of game developers, and so on and so forth. All of those clusters experience the act of programming very differently.

              Anyways, I think you were on to something or at absolute worst had kicked off a really interesting idea. :)

          2. 3

            yeah, “the general programmer community” is about as substantive a concept as “the general hammering community”. It puts the focus on the hammer instead of the blow. It’s a great way to get people to avoid thinking about the consequences of their work, which is really useful if what you want people to focus on is “I went from using Java to Rust” instead of “I am building systems that violate the consent and boundaries of a large number of people and cause harm to society”.

          3. 1

            It would be a community of moving things around in memory for its own sake and nothing else. Even memtest86 would be too much. “I made a list of things and no one ever used it.” “I printed hello world to /dev/null”. An isolated unapplied spikes-only meetup.

            1. 2

              Hell, some programs for microcontrollers use only CPU registers. ;)

      4. 2

        What was bad about Clean Code?

        1. 6

          Not parent, but I have read the book, and have an opinion: avoid. Much of it teaches fairly bad habits, shows the wrong heuristics, and the code examples range from “meh” to downright awful.

          Strictly speaking the book is not all bad. Much of it is fairly reasonable, and some of its advice, as far as I recall, is actually good. Problem is, the only people capable of distinguishing the good from the bad are people who don’t need the book in the first place. The rest are condemned to take the whole thing at face value, and in the end we get SOLID zealots that blindly follow principles that make their programs 3 to 5 times bigger than they could have been (not even an exaggeration).

          Unless you’re a historian, I would advise you to read A Philosophy of Software Design by John Ousterhout instead. Here’s a teaser.

          1. 4

            I like this article on the subject of Clean Code. In particular, the code examples that have been taken straight from the book just show the kind of havoc that following all the advice blindly can cause. For example, the prime generator example at the end of the article is 70 lines long and requires 7 functions with “readable” names such as smallestOddNthMultipleNotLessThanCandidate. By comparison, a simple sieve of Eratosthenes function takes roughly 20 lines of code and does not needlessly split the logic into unnecessary auxiliary functions.
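            For comparison, a plain sieve really does fit in roughly 20 lines with no auxiliary functions. A sketch in Go (the function name is mine, not from the book or the article):

            ```go
            package main

            import "fmt"

            // primesUpTo returns all primes <= n using a sieve of Eratosthenes:
            // mark every multiple of each prime as composite, keep what's left.
            func primesUpTo(n int) []int {
            	composite := make([]bool, n+1)
            	for i := 2; i*i <= n; i++ {
            		if !composite[i] {
            			for j := i * i; j <= n; j += i {
            				composite[j] = true
            			}
            		}
            	}
            	var primes []int
            	for i := 2; i <= n; i++ {
            		if !composite[i] {
            			primes = append(primes, i)
            		}
            	}
            	return primes
            }

            func main() {
            	fmt.Println(primesUpTo(30)) // [2 3 5 7 11 13 17 19 23 29]
            }
            ```

            The whole algorithm is visible in one screenful, which is the point being made: splitting it into seven named helpers would obscure, not reveal, the logic.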

            1. 2

              The function names you mention were put into an example of refactoring another example source code (in the book). I pick Bob’s exploded version with long names over the original source code every day. It’s not that I like the names. I prefer it to the original, because the original is obfuscated in some places.

              Honestly, I get the impression that most people criticizing the book didn’t read it. There is a lot of good advice in the book, and maybe some shitty details. But those shitty details shouldn’t invalidate the good advice, and I think most people think they should. I’m really glad I didn’t follow the advice that the book shouldn’t be read.

        2. 3

          In a nutshell: it espouses premature abstraction and a dogmatic expert-beginner approach to programming.

          Clean Code leaves the reader in the first of the three states of mastery.

          1. 2

            Too bad the tradition taught in Clean Code is the wrong one to follow.

            We should start from a better one. Such as A Philosophy of Software Design.

      5. 2

        Games are pretty broad.

        Even within a single game there are lots of modules with varied constraints, including rendering, sound, gameplay, controls, platform compatibility, tools… Some of it needs top performance, some of it needs top flexibility. From the outside I would guess expertise acquired while developing indie games as significant as Braid and The Witness is very likely to be relevant in many other domains. Perhaps even most.

        1. 5

          The Witness took seven years to develop, was privately funded off of a prior success, uses a custom engine, has very sophisticated optics and rendering, and has no combat, no networking, and no skeletal animations. Even in games, The Witness itself is highly irregular in terms of development. Most people don’t have seven years and $800k to make a puzzle game with a fine-grained attention to rendering and no combat. It’s an extremely specific context. The other thing I find so weird is that on HN and Lobsters people constantly bring up The Witness, but working in games, it’s not a game I hear talked about often.

          1. 3

            Two points:

            • Before The Witness there’s this success you mention: Braid. So it’s not just the one game, and he worked on other things before.
            • Many things go into the development of a single game. Especially if you do most of those things yourself. The game may be specific, but the development required to achieve it… not so much.

            The other thing I find so weird is that on HN and Lobsters people constantly bring up The Witness, […]

            This is a link involving Casey Muratori, and a comment thread mentioning one of his closest peers, Jonathan Blow. Of course The Witness was gonna come up.

      6. 1

        Really well said. It’s hard, because on the one hand I’d like to think that there are some universal truths about software. And maybe there are. But, so much is context-dependent.

    12. 1

      in the process of transitioning from vim to neovim. I work in gamedev so I do a lot of work on Windows and I’ve found nvim-qt to work really well, plus there’s a lot of neat plugins I want to check out, but I’ve basically just got my legacy vim setup working in neovim and haven’t really checked out any of the stuff that’s native to neovim. I mostly do Rust services and desktop stuff at the moment, with some Python tools stuff that has to run on Windows because it’s used by some gamedev colleagues that aren’t using Linux.

    13. 40

      And so we have moved from ‘no such thing as themes/modes’ to ‘a nice feature some users appreciate’ to ‘not offering this feature is SWATting your users’.

      1. 20

        I realise we’re on the internet, but this seems a bit of an uncharitable interpretation?

        The author’s offering an actual solution to a problem that some people have. I didn’t see any discussion as to the moral rectitude of dark vs light mode.

        I will admit I never really considered dark mode to be an accessibility feature until receiving a request for dark mode on an app that was going to be used in darkened industrial control rooms. For some people, it really is important.

        1. 13

          offering an actual solution to a problem

          It’s not that easy. If you set up a whole theme you have to change a lot more. If you have images, they will still have a white background. If you use SVGs (to scale better, hello fellow 4k users, “please don’t blurry me”), then you probably have black lines on a black background now. There are tons of other tiny things that can break, and I don’t expect people who just write the occasional blog post to test all of this for every configuration.

          1. 5

            You can recolour SVG with CSS, that’s one of the biggest benefits to SVG

          2. 2

            If you use SVGs, then you probably have black lines on black background

            currentColor is a thing.

            1. 1

              I haven’t found a way to do that with multi-colored SVGs in Inkscape

          3. 1

            This is absolutely true and is why personally I’m not a user of custom styles (who has the time?). But it’s a start, if you acknowledge that it’s not the intended use and may have mixed results.

            1. 2

              Maybe ship the CSS of your choice and let your browser’s reader mode take over from there for all the custom requirements of others? Otherwise I’ll probably just default to no CSS, so it looks like disabling CSS completely. And even that won’t work very well if you need colors for code highlighting.

        2. 20

          I didn’t see any discussion as to the moral rectitude of dark vs light mode.

          Describing sites as ‘flashbanging them’ involves a moral judgement.

          1. 27

            Honestly? I feel like you need to personally be slapped with a large trout.

            Is that “advocating physical violence”? or “a ridiculous sentence born of exaggeration that’s telling you to lighten the fuck up”? You decide!

          2. 11

            Not particularly…in many computer games, notably the Counter-Strike series, a flashbang was implemented with a bright white flash that fades. (At LAN parties, you could always tell a successful flash by the CRT lighting up an opponent’s face and the wall behind them. Good fun.)

            I figured the author was alluding to that big white flash upon switching to a new web page that did not handle that dark mode preference gracefully.

          3. 12

            Clearly it’s a joke.

          4. 4

            People tend not to sugarcoat unpleasant things. When I was asking support of a brokerage service/website about their plans on adding dark mode, I described their then-super-bright website just as it felt: that it resembles an interrogation when I’m using it in the evening.

            Got a chuckle from the support person. They suggested using browser addon but added dark mode some time later anyway.

          5. 0

            Wow, website’s really gone downhill hasn’t it

      2. 17

        Why is this unhelpful, unproductive, inflammatory troll comment the most highly upvoted comment?

        1. 21

          Because the title of the post is just as inflammatory (in wording as well) and everyone vibes with this response. If someone doesn’t like how a website looks then I’d suggest using an extension to force it instead of asking politely through a setting that may or may not be supported. dark reader seems decent.

          1. 9

            Nothing about the original post talks about SWAT teams.

            And the post is a solution to a real issue, the comment is just … literally nothing but inflammatory. There is no value there.

            1. 17

              Flashbangs are typically used by SWAT teams when entering premises occupied by possibly hostile individuals. This isn’t that much of a reach tbh. I don’t see why it bothers you so much considering the OP was just as tongue in cheek. It’s web stuff, barely technical, the post has already been flagged a bunch, and the responses reflect the effort.

              No reason to expend a ton of energy here, gonna go play with stable diffusion.

            2. 9

              because upvote-based discussion boards create positive feedback loops for reactionary paranoia and every forum of that structure devolves over time to produce the same type of discussion structure and lenses on how to interpret content. reddit, HN, lobsters, digg before it … this evolution happens over and over and then someone goes “i know, i’ll fix this by making a new community with better moderation” instead of questioning the fundamental voting structure that underpins every one of these doomed communities.

              1. 3

                Don’t forget Slashdot!

              2. 1

                What’s a better voting structure?

                1. 3

                  I’m not at all convinced voting is the right model, because the people that don’t know about a topic vastly outnumber the people that do know about a topic. Maayyyyybe a pagerank-adjusted system but that has a huge bootstrapping problem, I’m not sure how you’d get that off the ground. Voting isn’t really the goal anyway, the goal is good discussion.

          2. 5

            Opening a white webpage while using darkmode feels just like getting hit with a flashbang in a first-person shooter. That’s how I understood the title, and as a darkmode user and Counterstrike player myself, I fully relate to that description. The analogy doesn’t sound inflammatory at all. In fact, it’s quite accurate.

        2. 7

          I can only speak for myself, but I upvoted the parent comment because I feel that the submission doesn’t bring anything to the table other than its baity title. It looks like the work of less than a minute. Measured, descriptive, non-sensationalized post titles are important to me.

          The comment in question has most of the same problems as the story itself, so I don’t exactly love it, but shrug.

      3. 19

        BRB making all my sites even lighter.

        1. 15

          You can take advantage of HDR in Safari:

          1. 5

            oh god, please no

            1. 3

              I’m actually lol-ing. Did you imagine spite being just as powerful as good intentions? I didn’t.

              1. 9

                Truly spiteful would be only going full HDR flashbang if you detect prefers-color-scheme: dark, otherwise showing a normal white.

      4. 8

        There are people with vision impairment problems, that cannot tolerate light themes, so it is a real accessibility issue.

      5. 6

        And so we have moved on from understanding what people are joking about to completely misinterpreting them to farm karma

      6. 5

        it’s a metaphor

    14. 3

      … in the video, there’s text on his shirt, and it reads the correct way, and he writes in front of him, and it also reads the correct way, although he’s viewing the text from the opposite side that the viewer is viewing it from. If it’s correct for us, it’s backwards for him, and vice versa. Is he writing backwards, or did he get a backwards shirt specifically for this and flip the whole video?

      1. 3

        The shirt is printed backwards. He’s a righty, but appears to write with his left hand.

      2. 2

        Yes, he’s writing backwards on a pane of glass.

        1. 3

          No, he just flips the video (as if using a mirror)… and has a backwards-printed shirt.

          1. 2

            Funny, I just assumed, because I’ve seen other people do it the backwards-writing way. Getting the shirt printed backwards just for this instead of wearing a solid one is dedication.

    15. 13

      Ugh, never use git commit -a. So much crap gets committed that way.

      1. 16

        After getting bitten by git commit -a enough in my early career, I switched to using git add -p to review every diff before committing it.

        1. 5

          FYI, there is also git commit -p, which is like doing git add -p and then immediately starting a commit once you’ve decided which pieces to include.

          1. 1

            I run into too many things I need to go back and change in add -p to make commit -p my norm.

      2. 6

        git add -u is definitely my fave and what I want 95% of the time.

      3. 5

        I often default to git commit -av but the diff is right there in my commit window so I can double-check it’s not adding too much. If it is then I back out the changes (git reset) and then selectively add them (git add -p).

        You can enable verbose for all commit commands in git config too, then you never forget to add -v. git config --global commit.verbose true

    16. 30

      for me, the most exciting thing about golang is that i can easily walk junior engineers through a codebase with 0 prep. i love accessible code that doesn’t require a krang-like brain to intuit. rust is so non-intuitive to me that i’ve bounced off of it several times, despite wanting to learn it - and i’m a seasoned engineer!

      i didn’t go to school for CS, and i don’t have a traditional background - there are a lot of people like me in the industry. approachability of languages matters, and golang does a fine job.

      it obv has warts. but between the inflammatory title & the cherry picked “bad things”, the article winds up feeling really cynical, and makes me feel like the author is probably cynical too.

      continues to write fun, stable code quickly in golang

      1. 9

        What to you makes the code written in Go’s monotonous style fun?

        1. 27

          For me—and for most who choose Go—the fun lies in watching your ideas for software come to life. Go is so easy to think in; it enables building stuff without having to fight the language.

        2. 24

          I’d rather work with a stable language, so that I can be creative in the approach to the problem (not the language expression) than a language where I have to spend significant valuable background mental effort on the choice of words

          1. 6

            And you don’t mind having to spend valuable background mental effort on typing if err != nil over and over?

            1. 10

              I do mind, but I think you can argue it produces low cognitive load

            2. 5

              The Rust folks had a similar issue with returning Option and Result and fixed it with the question mark operator.

              The error variable can be named anything, but the community very quickly settled on err, following the convention started by the standard library. The language designers should have just made that the default, and created a construct similar to the question mark.
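              A minimal sketch of the pattern under discussion (the function name is mine): every fallible call is followed by the same three-line check, where Rust’s `?` would collapse the whole chain.

              ```go
              package main

              import (
              	"fmt"
              	"strconv"
              )

              // parseAndDouble parses a number and doubles it. The explicit
              // `if err != nil` block after the fallible call is the boilerplate
              // the thread is talking about.
              func parseAndDouble(s string) (int, error) {
              	n, err := strconv.Atoi(s)
              	if err != nil {
              		return 0, fmt.Errorf("parsing %q: %w", s, err)
              	}
              	return n * 2, nil
              }

              func main() {
              	v, err := parseAndDouble("21")
              	if err != nil {
              		fmt.Println("error:", err)
              		return
              	}
              	fmt.Println(v) // 42
              }
              ```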

            3. 5

              after the first few times, it comes naturally for me and i don’t really think about it much. In fact, in situations where it is unnecessary I often have to stop and think about it more.

        3. 20

          bc go code mostly looks the same everywhere thanks to gofmt and a strong stdlib, i spend a lot less time thinking about package choice & a lot more time doing implementation. yesterday i wrote a prometheus to matrix alert bot from scratch in 30 minutes - i spent most of that time figuring out what the prometheus API layout was. now that it’s deployed, i have faith that the code will be rock solid for basically eternity.

          what’s not fun is writing similar code in ruby/python and having major language upgrades deprecate my code, or having unexpected errors show up during runtime. or, god forbid, doing dep management.

          part of that is thanks to go’s stability, which is another good reason to choose it for the sort of work i do.

          having a binary means not having to worry about the linux package ecosystem either - i just stick the binary where i want it to run & it’s off to the races forever.

          to me, that’s the fun of it. focusing on domain problems & not implementation, and having a solid foundation - it feels like sitting with my back against a wall instead of a window. it saves me significant time & having dealt with a lot of other languages & their ecosystems, golang feels like relief.

        4. 4

          it’s a language created at a specific workplace for doing the type of work that those workers do. Do you think bricklayers worry about how to make laying bricks fun?

          1. 4

            continues to write fun, stable code quickly in golang

            That’s why I was asking why they found writing Go fun, it wasn’t out of nowhere. I have received some satisfactory answers to that question, too.

          2. 3

            If it was possible for brick laying to be fun, I’m sure bricklayers would take it.

            1. 8

              Fun fact, Winston Churchill took up brick laying as a hobby. He certainly seemed to think it was fun!

    17. 2

      they claim that Rocket is production-ready. Is anyone using it in production?

        1. 7

          I’m not intentionally moving the goalpost here, but I just really don’t know what they mean by “production-ready”.

          Assuming this is the right repo:

          It requires nightly and performs no database interactions; what data it has is stored in YAML files and loaded into memory. I would expect it, at a minimum, to work with stable. If all they mean by “production-ready” is “well, we put it on the internet”, I don’t think that’s a meaningful signal.

          For context, I have a handful of Rocket services at my dayjob and have found it to be a bad experience.

          1. 4

            At $job we were using it for several fairly high volume/traffic services for quite a while.

            We did eventually move off it to warp due to the lack of maintenance/forward movement on the 0.5 release for the last 1.5 years~

    18. 5

      The Go stdlib is extensive to the point that it’s clearly framework-adjacent. But it’s also minimal to the point that you’re almost certainly going to need dependencies to do certain things.

      Do you really need to write your own bespoke routing engine? No you don’t. Use a framework for that.

      1. 3

        mux and chi are both very capable routers, they’re not really frameworks. I prefer to use the approach of just using a simple router and the standard net/http handler interface over having a framework that tries to figure out every possible problem.

        1. 3

          I may be showing my inexperience, but I haven’t yet built a project of the scale where I needed more sophisticated routing than the stdlib router

          1. 2

            You don’t need to use another router, but it’s a pain to have to manually do things like reject POST requests to GET endpoints and whatnot. Plus you need to special case / to be both the homepage and the 404 page. Nothing insurmountable, but you can also just write your own 200 line library and not have to deal with it anymore.
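            A sketch of that manual work with plain net/http (handler name and responses are mine): the default mux routes every unmatched path to “/”, so the root handler has to serve the 404 itself, and nothing rejects wrong methods for you.

            ```go
            package main

            import (
            	"fmt"
            	"net/http"
            	"net/http/httptest"
            )

            // home handles "/", but because the default mux sends every
            // unregistered path here too, it must special-case the 404 and
            // reject non-GET methods by hand.
            func home(w http.ResponseWriter, r *http.Request) {
            	if r.URL.Path != "/" {
            		http.NotFound(w, r) // "/anything-else" falls through to here
            		return
            	}
            	if r.Method != http.MethodGet {
            		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
            		return
            	}
            	fmt.Fprintln(w, "homepage")
            }

            func main() {
            	// Exercise the handler in-process instead of binding a port.
            	for _, tc := range []struct{ method, path string }{
            		{"GET", "/"}, {"POST", "/"}, {"GET", "/missing"},
            	} {
            		rec := httptest.NewRecorder()
            		home(rec, httptest.NewRequest(tc.method, tc.path, nil))
            		fmt.Println(tc.method, tc.path, "->", rec.Code)
            	}
            }
            ```

            Prints 200 for GET /, 405 for POST /, and 404 for GET /missing; a small router library does exactly this bookkeeping for you.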

    19. 6

      I work at a startup that uses Rust and this rings very true to me.

      I like programming Rust. It’s fun and its interesting and it makes me feel smart when I use it.

      hiring, onboarding, code reviews, estimates, sprint planning, library maturity, being on-call, maintaining existing projects? yeah. That’s a different thing.

      1. 4

        A lot of people talk up the reliability and ease of maintenance in rust. What do you find challenging about it? Are there edge cases you wish you had known about before?

        1. 1

          well, I can only comment on my experience. When I say maintaining existing projects, I don’t think “edge cases” is the right framing, because the problem is not really something about the language itself; it’s not a problem with Rust’s capabilities as a language. I also don’t think “reliability” is the right framing because it’s assuming that maintenance just means keeping the thing online.

          In reality, maintenance really means that engineer A wrote something that fit requirements alpha, and now the requirements have changed to requirements beta, and engineer B has to modify the project that was built by someone else for different requirements to fit their new requirements. Maybe engineer A is gone, maybe engineer A is busy; you can’t just require that all changes go through the same person, because if you can’t exchange work fluidly between engineers, you’re never going to keep the velocity you need in a startup environment. In a startup, requirements tend to change very rapidly because you’re still finding product-market fit.

          Hiring people that already know how to program Rust is hard enough. Hiring people that know how to operate Rust in production is even harder. Applicants that do know Rust very often have not used Rust in a team or production setting.

          The obvious problem with hiring a bunch of people that don’t really know Rust is that you wind up with a lot of projects that were written by people who were learning Rust when they wrote it.

          Most people think the consequence of this is that you have two styles of code floating around your org: “Rust by experienced Rust programmers” and “Rust by inexperienced Rust programmers”. Reality is actually much worse. In reality, people who come to Rust from Go, Python, JavaScript, Java, etc… each have a specific and idiosyncratic way of learning Rust. Instead of having 2 styles of code in your org, you really wind up with N+1 styles of code in your org: 1 style for each of the N contexts that you hired from and then 1 style for people that are writing code in your org’s native style.

          If you have a stable team that’s been working together and they’re all coming from $language and they all start programming Rust together, you wind up with two kinds of code: “Rust by people thinking like $language” and “Rust by people who have been writing Rust for a while”. When you have a team that’s new to working with one another and comes from a large diversity of contexts and programming backgrounds, what you really wind up with is “Rust by people thinking like $language” for each $language in the set of all languages that describe the prior experience of all of your engineers.

          Operators have a pretty hard time with it. Most people say that learning to write Rust comfortably and productively takes about three months of programming Rust regularly, but what about the people on your team that need to interact with your projects on an infrequent basis? It’s very common for someone to have a role where they do a lot of operations-focused work, but might want to contribute to code on an infrequent basis. Maybe they want to update a library because a vulnerability was found in one of our dependencies: do they have the toolset? Can they update the dependency? What if they notice a small bug in production, will they be able to fix it themselves, or will they have to send it to another engineer? Virtually all of these engineers could fix a small bug in a Go, Python, or Java program, but fixing even a small bug in a Rust program is often very difficult for people who don’t use the language regularly. Oftentimes they can’t even understand what the error messages are talking about.

          The testing ecosystem of Rust is also fairly immature. Support for benches in tests is still in nightly, and none of us want to be depending on nightly in production. Custom test runners are also a nightly feature. Sure, these things exist, but there’s not a lot of stable tooling built on top of them, because their foundations are not stable.

          For context, it’s a game company, we’re making an MMO, and the client uses Unreal Engine. Our engineers come from places like Epic, Google, Amazon, Riot, Blizzard, etc. These are smart, experienced engineers, people with a lot of production experience on large scale server deployments and large scale multiplayer games.

          I think Rust is a very capable language and I think my teammates are very capable engineers; I don’t think the problem is either the capabilities of the language or the capabilities of the people. I enjoy writing Rust, and I think most people I work with would say the same thing, but thinking it’s cool and thinking it’s effective are different things. I’m convinced it’s cool, but I’m not convinced that it’s particularly well-suited for rapidly-growing teams.

          1. 1

            First off, thank you for taking the time to leave a detailed comment!

            I misunderstood what you meant by maintenance. What you are describing I might think of as brownfield development or legacy codebases, i.e. continued development and evolution of a codebase. I was thinking about operational maintenance. I assumed that might imply rough tooling around maintenance tasks and therefore be a bit of an edge case in the program life cycle. I think I have a better idea what you mean now.

            I see how it could be challenging to get everyone to write the same code style in Rust. It’s a bit of a kitchen sink language and it’s fairly young so there may not be enough cultural norms (like the Zen of Python) or people to enforce said norms. By contrast, Go, Python, and Java have normalized conventions, which makes it easier to assimilate into the community.

            It makes me wonder how long it took other languages to “find their voice”. C++ is famous for shops having their own subset, so it seems like not every language necessarily comes to normalize a set of conventions. I’m not deep in the Rust ecosystem so I don’t feel qualified to say how that is developing. Hopefully enough people will develop the cultural knowledge to provide a good base for new shops to adopt Rust.

            Thanks again for your thoughts!

    20. 35

      … until you do.

      I firmly believe a suite of interacting microservices represents one of the worst technical designs in a world where there are many particularly bad options. Microservices are harder to observe, harder to operate, impose rigid boundaries that could make change management harder, and almost always perform worse than the monolith they replace. You should almost never choose them for any perceived technical reasons!

      Where microservices might just suck less than monoliths is organizationally, something that this article doesn’t even touch on – which is honestly infuriating, because it’s easy to build a straw argument for/against on technical grounds, but IMO those barely matter.

      Your monolith probably was great when it had n people hacking on it, because they could all communicate with each other and sync changes relatively painlessly. Now you’ve grown (congrats!) and you have 3n, or god help you 10n engineers. Merging code is a nightmare. Your QA team is growing exponentially to keep up, but regression testing is still taking forever. Velocity is grinding to a halt and you can’t ship new features, so corners get cut. Bugs are going into production, and you introduce an SRE team whose job is to hold the pager and try to insulate your engineers from their own pain. Product folks are starting to ask you for newfangled things that make you absolutely cringe when you think about implementing them.

      You could try to solve this by modularizing your monolith, and that might be the right decision. You land with tight coupling between domains and a shared testsuite/datastore/deployment process, which could be OK for your use case. It might not be, though, in which case you have to deal with tech that’s slower and harder to observe but might actually let your dev teams start shipping again.

      1. 12

        Yeah, services in general (not just “microservices”) are an organizational thing. It’s basically leaning into Conway’s Law and saying that the chart of components in the software was going to end up reflecting the org chart anyway, so why not just be explicit about doing that?

      2. 10

        You could try to solve this by modularizing your monolith, and that might be the right decision. You land with tight coupling between domains and a shared testsuite/datastore/deployment process, which could be OK for your use case. It might not be, though, in which case you have to deal with tech that’s slower and harder to observe but might actually let your dev teams start shipping again.

        There’s a third option here: several monoliths. Or just regular, non-micro services. Not as great as one monolith, but scales better than one, technically better than microservices, and transitions more easily into them if you really need them.

        1. 13

          This is my first port of call after “plain ol’ PostgreSQL/Django” stops cutting it. Some people say services should be “small enough and no smaller”, but I think “big enough and no bigger” is closer to the right way to think about it.

      3. 5

        I wonder if that is true. I used to be a firm believer of what you are saying and have said the same thing in my own words.

        However, I am not so sure anymore. The reason is that a lot of the organizational efforts people make are only made when they run microservices, while I don’t see anything preventing the same efforts (in terms of goals) from being made with monoliths.

        I’ve seen companies shift from monoliths to microservices and they usually end up initially having the same problems and then work around them. There are usually huge process changes, because switching to microservices alone seemed to make things even worse for a while. Another part of those switches tends to be people being granted time to design good interfaces, and here is where I wonder if people are partially misled.

        While microservices to some degree enforce sane interfaces - unless they don’t and your organization still creates a mess - on a technical level nothing hinders you from creating those sane interfaces and boundaries within a monolith. From a technical perspective it doesn’t make a difference whether you call a function, a REST API, RPC, etc., except that the plain function call is usually the most reliable.
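        To illustrate the point (a hypothetical sketch, not something from this thread - the `BillingService` boundary and its names are invented): the same contract discipline that a microservice imposes over HTTP can be expressed in-process with an explicit interface, e.g. a Python `Protocol`:

        ```python
        from typing import Protocol


        # Hypothetical "billing" boundary inside a monolith: callers depend on
        # this interface, never on the implementing module's internals.
        class BillingService(Protocol):
            def charge(self, customer_id: str, cents: int) -> str:
                """Charge a customer; returns an invoice id."""
                ...


        # The concrete implementation lives behind the boundary. It could later
        # be swapped for an HTTP client without touching any caller.
        class InProcessBilling:
            def __init__(self) -> None:
                self._invoices: dict[str, int] = {}

            def charge(self, customer_id: str, cents: int) -> str:
                invoice_id = f"inv-{len(self._invoices) + 1}"
                self._invoices[invoice_id] = cents
                return invoice_id


        def checkout(billing: BillingService, customer_id: str) -> str:
            # A plain function call: the same contract a REST endpoint would
            # carry, but with stack traces and transactions intact.
            return billing.charge(customer_id, 1999)
        ```

        The caller only sees the `Protocol`, so the boundary is just as enforceable as a network one - it is simply cheaper to cross.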

        Of course this is anecdotal, but it has happened more than once that “preparing a monolith for migrating to microservices” resulted in all the benefits being reaped. Of course it’s not likely that anyone stops there. The usual mindset is “our goal was to use microservices; we won’t stop one step before that when all the hard stuff is already done”.

        But there’s a lot more complexity added than just going over HTTP. Adding more moving parts, as you mention, brings a lot of issues with it.

        In other words: I am on the fence. I agree with you, it matches what I have seen, but given that I now see the first companies at least partly converting microservices back into monoliths for various reasons while simply keeping concerns separate - something good software engineering should do anyway - I wonder if it wouldn’t make sense to find a way to organize monoliths like microservices to lower the complexity. Maybe this could be implemented as a pattern, maybe code analysis could help, maybe new programming paradigms, maybe a new or modern way of modularization.

        Or in other words, when even people who make a living off Kubernetes and microservices say that Monoliths are the Future, I’d at least step back and consider.

        But then again I think it might simply depend on company culture or even the particular team implementing a project. People work differently: different tools, frameworks, languages, and ways of handling time and project management work best for different people. So maybe that’s what it boils down to. So maybe just don’t listen to people telling you that you NEED to use one or the other. There are enough highly successful projects and companies out there going in completely opposite directions.

        1. 6

          While microservices to some degree enforce sane interfaces - unless they don’t and your organization still creates a mess - on a technical level nothing hinders you from creating those sane interfaces and boundaries within a monolith. From a technical perspective it doesn’t make a difference whether you call a function, a REST API, RPC, etc., except that the plain function call is usually the most reliable.

          Don’t forget that the other problems - shared persistence layer, shared deployment, etc. are still there.

          As a junior I got to see up close some of the problems a big monolith can present: it was about mid-year and we had to do a pretty big launch that crossed huge parts of that monolith by Q4 to win a (fairly huge for us!) contract. It was going to take many deployments, tons of DB migrations, and an effort spanning multiple teams. We all knew our initial “good plan” wasn’t going to work; we just didn’t know how badly it’d be off. The architects and leads all argued over the path, but we basically realized that we were trying to condense 3 quarters of work into 2.

          We pulled it off, but it sucked:

          • All the other work had to be paused: didn’t matter if it was in a completely unrelated area, we didn’t have the QA bandwidth to be sure the changes were good, and we could not risk a low priority change causing a rollback & kicking out other work

          • We deployed as much as we could behind feature flags, but QA was consistently short of time to test the new work, so we shipped tons of bugs

          • We had to pay customers credits because we gave up on our availability SLAs to eke out a few more release windows

          • We had to relax a ton of DB consistency – I can’t remember how many ALTER TABLE DROP CONSTRAINTs our DBAs ran, but it was a lot. This + the above led to data quality issues …

          • … which led to us hitting our target, but with broken software; we basically hit pause on the next two months of work for the DBAs and devs to go back and pick up the broken pieces

          Much of our problem came about because we had one giant ball of mud on top of one ball of mud database; if we’d been working on an environment that had been decomposed along business domains that had been well thought out and not evolved, we might’ve been fine.

          Or we might’ve still been screwed because even with clean separation between teams, and the ability to independently work on changes, we still were deploying a single monolith - which meant all our DB changes / releases had to go together. Dunno.

          But then again I think it might simply depend on company culture or even the particular team implementing a project. People work differently: different tools, frameworks, languages, and ways of handling time and project management work best for different people. So maybe that’s what it boils down to. So maybe just don’t listen to people telling you that you NEED to use one or the other. There are enough highly successful projects and companies out there going in completely opposite directions.

          ^^ – the best two words any programmer can say are “it depends”, and that goes double for big architectural questions.

      4. 3

        I like microservices because when one fails you can debug that individual component while leaving the others running. Sometimes you can do this with monolith designs, but not typically, in my experience.

        1. 14

          That’s usually a very small advantage compared to the loss of coherent stack traces, transactions, and easy local debugging. Once you have microservices you have a distributed system, and debugging interactions between microservices is orders of magnitude harder than debugging function calls in a monolith.

          1. 5

            Yes, this is what I jokingly call the law of conservation of complexity. Your app has got to do what it has got to do, and this by itself brings a certain amount of interaction and intertwining. That does not magically go away if you cut up the monolith into pieces that do the same thing. You just move it to another layer. For some problems this makes things easier; for others it does not.

            1. 3

              See also “law of requisite variety” in cybernetics.

            2. 1

              I’m a huge fan of the concept of conserved complexity. I find that it particularly shines when evaluating large changes. I’ll often get a proposal listing all the ways some project will reduce complexity. However, if they can’t tell me where the complexity is going, it’s clear they haven’t thought things through enough. It always has to go somewhere.

      5. 3

        I’m genuinely curious if people who claim microservices solve an organisational problem have actually worked somewhere where they have been used for a few years.

        It was so painful to try and get three or four separate teams to work on their services in order to get a feature out. All changes needed to be made in backwards-compatible steps to avoid downtime (no atomic deployment), and anything that needed an interface change was extremely painful. Let’s not even get into anything that needed a data migration.
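        For anyone who hasn’t lived this: the usual workaround for the no-atomic-deployment constraint is the expand/contract pattern, where every interface change is split into steps that each tolerate the old and the new shape at once. A minimal sketch (the field names are hypothetical) of the “expand” phase for renaming a payload field:

        ```python
        # Hypothetical: a service renames "user_name" -> "username" in its API.
        # Because callers and the service deploy independently, the service must
        # accept both shapes for a while ("expand"), and only drop the old key
        # once no deployed caller still sends it ("contract").

        def parse_user(payload: dict) -> str:
            # Expand phase on the read side: prefer the new key, fall back
            # to the old one so un-upgraded callers keep working.
            if "username" in payload:
                return payload["username"]
            return payload["user_name"]


        def serialize_user(name: str) -> dict:
            # Expand phase on the write side: emit both keys so old readers
            # keep working until they are upgraded.
            return {"username": name, "user_name": name}
        ```

        Every rename thus becomes at least three deployments (expand, migrate callers, contract) instead of one commit - which is exactly the kind of overhead the comment above is describing.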

        A lot of places get around this pain by always creating new services instead of modifying old ones, and there is a lot of duplication. It’s not a ball of mud, it’s much worse.

        The idea that you have to communicate or work together less because you’re using microservices is… I’ll be kind here… flawed.

        IME everything slows down to a crawl after a few years.

        1. 2

          I’ve been through this and where I see things slowing to a crawl, it’s where the teams and their connections with the other teams are weak - and the organisational priorities are conflicting.

          This happens with multiple teams working on different parts of a monolith.

          With microservices, we get to avoid everyone _else_ being affected as much as they would have been. This is Conway again. We can fix the teams. We can grow the teams (in maturity and capability). We can fix the organisational boundaries that interfere with communications. We can align priorities.

          All of the above needed to happen anyway with a monolith, but we used to have hundreds - sometimes thousands - of people stuck or firefighting because some teams were unable to collaborate on a feature.

          Feature teams are a great answer to this general problem, but they are hard to make happen where there are huge pieces of tech that require esoteric skillsets and non-transferable skills (‘I only want to write C#’).

          I’m seeing developers enjoying picking up new languages, tools, and concepts, and I’m seeing testers become developers and architects, and us actually getting some speed to market with exceptional quality.

          This isn’t because of microservices. It’s because the organisation needed to be refactored.

          Microservices aren’t what we need. They are slightly wrong in many ways, technically, but we now build with functions, topics, queues (FIFO where we need to avoid race conditions: not all distributed systems problems are hard to solve), step functions, block storage (with a querying layer!) - and other brilliant tools that we wouldn’t have been able to refactor towards if we hadn’t moved to microservices - or something else - first.

      6. 3

        I’ve spent the past 12 years working on a service implemented as a bunch of microservices. At first, the Corporation started a project that required interfacing to an SS7 network, and not having the talent in-house, outsourced the development to write a service that just accepted requests via the SS7 network, and forward them to another component to handle the business logic. The computers running the SS7 network required not only specialized hardware, but proprietary software as well. Very expensive, but since the work this program did was quite small, hardware was minimized, compared to the hardware to run the business logic (and the outsourced team was eventually hired as full time employees).

        A few years down the road, and now we need to support SIP. Since we already had a service interfacing with SS7, it was just easier to implement a service to interface with SIP and have it talk to the same backend that the SS7 service talked to.

        Mind you, it’s the same team (the team I’m on) that is responsible for all three components. Benefits: changes to the business logic don’t require changes to the incoming interfaces (for the most part—we haven’t had to mess with the SS7 interface for several years now for example). Also, we don’t need to create two different versions of our business logic (one for SS7, which requires proprietary libraries, and one for SIP). It has worked out quite well for us. I think it also helps in that we have only one customer (one of the Oligarchic Cell Phone Companies) we have to support.

      7. 2

        Where microservices might just suck less than monoliths is organizationally

        that depends on your perspective. I remain convinced that the primary function of microservices is to isolate the members of a laboring force such that they cannot form a strong union. That is, take Conway’s Law and reverse it; create a policy to specifically -introduce- separation between workers and they won’t have a reason to talk, which makes it less likely that they’ll unionize. In that framing, the primary function of microservices is to prevent programmers from unionizing.

        1. 2

          I chuckled.

          Truly, people around me (as far as I can notice, including public figures covered by media) tend not to think about communicating effectively with others and instead tend to vilify them and otherwise avoid having the conversations necessary for further progress.

          Perhaps it’s just a simple fact that most people are not trained in communication, and IT people specifically have not had that much hands-on experience to compensate. Not that the rest of the population is that much better at it (on average).

          In short, I wouldn’t attribute the phenomenon to malice. I think that IT people are not unionizing simply because that means talking to people, which is (on average) exhausting and hard.

        2. 1

          Would be interesting to know then if microservices are less common in this country where more or less everyone is in a union already.

      8. 1

        if increasing the number of developers meant you can’t ship new features and corners got cut then you need to reduce the number of developers. the shapes of organizations can change, they must. it’s important for us to fight for changing them for material reasons like reducing complexity and friction.

        making the monolith modular is a good example of when we realize solving the real problem is very hard so we solve a different problem instead. problem is we didn’t have that second problem. and in fact maybe it makes our real problem (reality) harder to solve in the future.