Threads for benjajaja

    1.  

      Just tried, instantly sold! Also coming from years of tiling WMs.

      1. 2

        Absolutely loved reading this. Balatro is really super fun and genuinely viral: I found out about it from a friend who got it from some obscure Spanish website. It’s incredibly well made, and the story (this story) of how it got made is an absolute dream for many.

        1. 9

          I would be genuinely curious to know if anyone here knows of a decent European cloud provider. I’m not going to name names, but there is a prominent one that frequently takes down whole regions for days and doesn’t count it toward their downtime. Another prominent one lost a whole datacenter to a fire because it had inadequate fire suppression.

          1. 12

            I keep hearing that Lidl (Schwarz) is building a more serious cloud, but I have never tried it nor met anyone who has: https://www.stackit.de

            1. 9

              You may be interested in the author’s thoughts. In general, Europe has nothing like an Amazon Web Services; Europe does have “Scaleway, OVH, Hetzner, Leaseweb, Contabo and ionos”, but those are not really competitors to the big clouds.

              That said, I strongly feel that hosting a mail server isn’t actually beyond the collective power of the European continent.

              1. 6

                Hetzner is a long-time name in the server/VPS provider space, but they are not exactly in the “cloud” space. Same for OVH, probably.

                1. 48

                  No, there’s a lot of policy discretion. The US government has access to any data stored in the US belonging to non-US persons, without basic due process like search warrants. The data they choose to access is a policy question. The people being installed in US security agencies have strong connections to global far-right movements.

                  1. 12

                    In 2004 servers operated by Rackspace in the UK on behalf of Indymedia were handed over to the American authorities with no consideration of the legal situation in the jurisdiction where they were physically located.

                    /Any/ organisation, governmental or otherwise, that exposes itself to that kind of risk needs to be put out of business.

                    1. 5

                      I seem to remember an incident where Instapaper went offline. The FBI raided a data centre and took a blade enclosure offline that contained servers they had warrants for, along with Instapaper’s, which they didn’t. So accidents happen.

                      Link: https://blog.instapaper.com/post/6830514157

                      1. 2

                        Yes, but in that case the server was in an American-owned datacenter physically located in America (Virginia), where it was within the jurisdiction of the FBI.

                        That is hardly the same as a server in an American-owned datacenter physically located in the UK, where it was not within the jurisdiction of the FBI.

                        Having worked for an American “multinational” I can see how that sort of thing can happen: a chain of managers unversed in the law assumes it is doing “the right thing”. Which makes it even more important that customers consider both the actual legal situation and the cost of that sort of foul-up when choosing a datacenter.

                    2. 2

                      The US government has access to any data stored in the US belonging to non-US persons without basic due process like search warrants.

                        Serious question: who’s putting data in us-west etc. when there are EU data centres? And does that free rein over data extend to data in European data centres? I was under the impression that safe harbour regs protected it? But it’s been years since I had to know about this kind of stuff and it’s now foggy.

                      1. 18

                        It does not matter where the data is stored. Using EU datacenters will help latency if that is where your users are, but it will not protect you from warrants. The author digs into this in this post, but unfortunately, it is in Dutch: https://berthub.eu/articles/posts/servers-in-de-eu-eigen-sleutels-helpt-het/

                        1. 5

                            I re-read the English article more carefully and see that he addresses it with sources and linked articles. Saturday morning, what can I say.

                        2. 8

                          Serious question, who’s putting data in us-west etc when there is eu data centres?

                            A lot of non-EU companies. Seems like a weird question; not everyone is in either the US or the EU. Almost every Latin American company I’ve worked for uses us-east/west, even if it has no US customers. It’s just way cheaper than LATAM data centers and has better latency than the EU.

                          1. 4

                            Obviously the world isn’t just US/EU, I appreciate that. This article is dealing with the trade agreements concerning EU/US data protection though so take my comment in that perspective.

                        3. 1

                          I don’t see how this is at odds with the parent comment?

                        4. 22

                          That is the one good thing. It has always been unsafe, but now people are finally starting to understand that.

                          1. 31

                            Because it’s dramatically less safe. Everyone saying “it’s the same as before” has no clue what is happening in the US government right now.

                            1. 12

                              And everyone saying it’s dramatically different has no clue what has happened in the US government in the past.

                              1. 9

                                I haven’t personally made up my mind on this, but one piece of evidence in the “it’s dramatically different (in a bad way)” side of things would be the usage of unvetted DOGE staffers with IRS data. That to me seems to indicate that the situation is worse than before.

                                1. 8

                                  You’re incorrect. The US has never had a government that openly seeks to harm its own allies.

                                  1. 6

                                    What do you mean? Take Operation Desert Storm. Or the early Cold War.

                                    1. 3

                                        Not sure what you mean—Operation Desert Storm and the Cold War weren’t initiated by the US, nor were Iraq and the USSR allies in the sense that the US is allied with Western Europe, Canada, etc. (yes, the US supported the USSR against Nazi Germany and Iraq against Islamist Iran, but everyone understood those alliances were temporary—the US didn’t enter into a mutual defense pact with Iraq or the USSR, for example).

                                      1. 3

                                        they absolutely 100% were initiated by the US. yes the existence of a mutual defense pact is notable, as is its continued existence despite the US “seeking to harm” its treaty partners. it sounds like our differing perceptions of whether the present moment is “dramatically different” come down to differences in historical understanding, the discussion of which would undoubtedly be pruned by pushcx.

                                  2. 3

                                        My gut feeling says that you’re right, but actually I think practically nobody knows whether you are or not. To take one example, it’s not clear whether the US government is going to crash its own banking system: https://www.crisesnotes.com/how-can-we-know-if-government-payments-stop-an-exploratory-analysis-of-banking-system-warning-signs/. The US government has done plenty of things that are BAD before, but it doesn’t often do anything this STRANGE. I think.

                                      1. 1

                                        Oh, yeah. Clearly I’m bad at parsing indentation on mobile.

                            2. 33

                              Just because it was not safe before, doesn’t mean it cannot be (alarmingly) less safe now.

                              1. 1

                                And just because it logically can be less safe now doesn’t mean it is.

                              2. 10

                                It is not. Not anymore. But I don’t want to get into political debate here.

                                1. 85
                                2. 11

                                      This isn’t true, as the US has been the steward of the Internet, and its administration has turned hostile towards the US’s allies.

                                  In truth, Europe already had a wake-up call with Snowden’s revelations, the US government spying on non-US citizens with impunity, by coercing private US companies to do it. And I remember the Obama administration claiming that “non-US citizens have no rights”.

                                      But that was about privacy, whereas this time we’re talking about a far-right administration that seems to be on a warpath with the US’s allies. The world today is not the same as it was 10 years ago.

                                  1. 2

                                      Hm, you have a good point. I was wondering why it would be different now, but “privacy” has always been too vague a concept for most people to grasp or care about. But an unpredictable foreign government which is actively cutting ties with everyone and reneging on many of its promises to (former?) allies might be a bigger warning sign to companies and governments worldwide.

                                    I mean, nobody in their right mind would host stuff pertaining to EU citizens in, say, Russia or China.

                                  2. 3

                                        Which is to say: it’s not safe at all and never has been a good idea.

                                  3. 9

                                        After years of xmonad, and now sway (with swaymonad, which lacks some features), PaperWM looks appealing.

                                    Personally, the most important feature for me is being able to switch to a specific window (workspace) with a shortcut without delay/animation and guaranteed no floating stuff in the way.

                                        Has anybody who comes from a “traditional” tiling WM made the switch to PaperWM and wants to share their experience? It looks like one can configure such shortcuts to switch to a number out of a fixed workspace list, and then in each workspace you could have one window, or several.

                                    1. 4

                                          I use PaperWM on my work Ubuntu machine and it’s pretty decent as far as tiling WMs go. You can customize a lot. I have noticed that it can sometimes crash or freeze (and subsequently cause a restart of the GNOME shell) if you do anything related to screen sharing or external monitors.

                                          For reference: on my personal machine I use Hyprland (but have used pop-shell and i3 before that).

                                      There is also Niri now if you want a full-fledged WM (and not just a gnome extension) that uses this scrolling mechanism.

                                      1. 3

                                        I recently (~1 week ago) switched to Niri. I think I like it so far, but there are a few minor annoyances I’m writing up:

                                        1. I can’t tell where I am in the “scroll”
                                        2. Fullscreen/unfullscreen expels window from column

                                        However, I am consistently getting two more hours of battery life compared to Sway so that’s incredible.

                                    2. 5

                                      It almost feels like the creators of go are taunting us peasant devs: “here’s a union type. but you can’t use it, it’s only for operators, so a handful of number functions that we already added to the core lib. fuck you.”

                                      You can actually use the switch any(v).(type) hack together with the pipe operator. It’s just almost never useful.
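
                                          For the curious, a minimal sketch of that hack (all names here are illustrative, not from any real API): the union is only legal as a constraint, so the value is converted to any before the type switch.

                                              package main

                                              import "fmt"

                                              // Number is a union constraint: it can only constrain type
                                              // parameters, not describe a value.
                                              type Number interface {
                                                  int | float64
                                              }

                                              // describe works around the lack of value-level unions by
                                              // converting the generic value to any and switching on its
                                              // dynamic type.
                                              func describe[T Number](v T) string {
                                                  switch x := any(v).(type) {
                                                  case int:
                                                      return fmt.Sprintf("int: %d", x)
                                                  case float64:
                                                      return fmt.Sprintf("float64: %g", x)
                                                  default:
                                                      return "unreachable" // the constraint admits only int and float64
                                                  }
                                              }

                                              func main() {
                                                  fmt.Println(describe(42))   // int: 42
                                                  fmt.Println(describe(3.14)) // float64: 3.14
                                              }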

                                      1. 8

                                        I wanted to use the best technologies available

                                        […]

                                        I built the frontend using React and TypeScript

                                        I was furious for a couple of seconds.

                                        1. 1

                                            I feel you, but the React stack nowadays is so deep and diverse that it feels like it’s soon gonna be affecting browsers’ architecture. They literally solved all the hard problems in a rather elegant way. I compare it to my days with PHP3 and jQuery 🙂

                                          1. 4

                                              While my blog is in PHP, I really enjoy React actually. Also, I very much like this component library: https://mantine.dev/

                                            1. 2

                                                I don’t think it’s elegant by any means, in practice.

                                              1. 3

                                                    It basically made functional UIs mainstream, which greatly improved testability and correctness.

                                                    I do remember the millions of websites/small GUIs where you could easily end up in inconsistent states (like a checkbox not being in sync with another related state), and while UI bugs are by no means “over”, I personally experience fewer bugs of this kind.

                                                (Unfortunately, API calls are still a frequent source of errors and those are often not handled properly by UIs)

                                                1. 1

                                                  Why not? Any points against? What would you use for complex web apps?

                                                  1. 1

                                                      I mean React itself, not your particular pick of options inside that stack.

                                                    1. 1

                                                      React itself is also cool.

                                            2. 2

                                                  I always loved the way Hollywood visualised stuff. I’d love a 3-dimensional file browser like in the movie “Hackers”.

                                              1. 11

                                                    Dunno if Hackers used the same one, but the file browser in Jurassic Park used by Lex when she exclaims, “It’s a Unix system, I know this!” is fsn, the OpenGL file system navigator that shipped with IRIX.

                                                1. 2

                                                  I have a special love for the dead end that was 3D interfaces everywhere.

                                                2. 6

                                                  I’ve always loved JT Nimoy’s writeup on his VFX work in Tron: Legacy, where he insisted on showing real Unix process tools and emacs’ windows control. (Someone even worked up an interactive version of what appears in the movie!)

                                                  1. 3

                                                    I can’t find it atm, but I remember watching a long-ish video on YT, with someone building a block device out of redstone, which was then exposed to the OS, could be formatted and mounted. Of course, it was even more prone to corruption than Seagate’s ST3000DM001.

                                                  2. 3

                                                    “it just works” - today maybe, but it’s built from investment money, so it’s just a matter of time until someone pulls the rug.

                                                    1. 5

                                                      Zed is open-source and licensed under a mix of GPLv3, AGPLv3, and Apache2: https://github.com/zed-industries/zed

                                                      Even if there were no money left to pay anyone, it wouldn’t “stop working” - anyone could still build it from source, modify it, continue to make improvements as a community, etc.

                                                      1. 3

                                                        Yep, and it being Rust it is a simple project to just jump in and start fixing things…

                                                        To be honest, I would even pay them for pro features if they ever asked it. Just to support the development…

                                                        1. 2

                                                          It seems to me that you haven’t managed to undercut the maintenance economics of your competition yet, so a lack of investor money would in my mind be a serious problem for even a community-maintained fork of Zed.

                                                              Your problem is that Zed’s maintenance economics only make sense if it wins the market, because you (collectively, not personally I should say) haven’t focused on controlling those costs. Zed owns its own UI framework. You support lots of platforms natively. You integrate with Tree-sitter in a custom way. You’re locked out of providing the Zed experience on the web, and you’re forcing the whole maintenance burden to be borne by the tiny fraction of the programming populace literate in Rust. In other words, by centralizing all your costs you’ve ensured that investor money and a paid core team are likely essential to the project’s ongoing survival.

                                                          1. 3

                                                            I personally wouldn’t say Zed integrates with tree-sitter in a custom way, but I guess a better person to ask would be Max Brunsfeld, who both created tree-sitter and cofounded Zed.

                                                            I also suspect that Zed targeting native OS APIs rather than browser APIs makes it easier to update over time compared to Web-based (which is to say, Electron-based) editors. I suspect this because OS APIs tend to highly value backwards-compatibility because they want old applications to continue working, whereas my understanding is that the relevant browser APIs in question both change more frequently and also have breaking changes more often, because they weren’t intended for public consumption (the way the OS APIs were).

                                                            That said, a better person to ask would be Nathan Sobo, who both created Atom (which Electron spun off from) and cofounded Zed.

                                                            1. 1

                                                              I’d love to talk to Max and Nathan if they’d want to talk to me!

                                                                  I was a Nuclide user when I worked at Facebook and I’ve followed all that they’ve written and vlogged about Zed so far. I also have quite a lot more context on their specific editor design philosophy that I’ve gained by following the Pulsar project that sprang up to maintain Atom post-sunset. I’m well aware of the difficulties Electron has caused there. That ABI-layer schism in the Atom/Pulsar architecture informed my choice to keep all my tools’ state and logic in JS-land: I wanted to target the most stable, broadly supported contracts of the web platform.

                                                          2. 2

                                                            I didn’t mean “stop working” with “pulling the rug”. It’s usually something like, keep adding features that nobody wants, using the product to push other (actually profitable) products, and so on.

                                                        2. 34

                                                          Simply because GHA are not trivially runnable locally, I’ve resorted to never relying on the environments and runtimes they provide. All my actions either use nix or docker to run a command that is just as easily launched locally from a justfile. At this point the GHA yaml files contain only a trigger condition and an entrypoint. Sometimes you also need some glue actions to publish lints to the PR or to upload an artifact, but those are not of interest to run locally anyway.

                                                          1. 3

                                                            Hey great idea! I’m going to float using Nix in our builds with the team. That would assist with local running too.

                                                              1. 2

                                                                Yeah, it is unfortunate. Maybe some day we will be able to bring it back (I’d like to.) To be frank, though, I’m not totally surprised: building out a weird extension to another platform’s primitive isn’t totally “within the envelope” :’).

                                                            1. 2

                                                              Same here. I “use” GitHub Actions in the sense that I do the bare minimum to get a Bazel binary and just execute a hermetic build graph. No more dealing with bundled crap in their image. I have no idea why they stuff it with tools for every single programming language ever (especially since that stuff is usually outdated, why??).

                                                              1. 2

                                                                i’ve found bazel to be an absolute nightmare for hermetic builds

                                                                both the C++ and shell rules just go rummaging through your environment variables and hardcoded paths looking for toolchain dependencies. is it better with other languages, or are you working around that somehow?

                                                                1. 1

                                                                  I’m primarily working with Python and C#, however for the cases when I had to build C++, using the hermetic Zig toolchain helped tremendously with making things “just work”: https://github.com/uber/hermetic_cc_toolchain

                                                                  I used it to build a version of a very large codebase that works on Glibc 2.27+ (Ubuntu 18.04 and higher, don’t ask why) from any computer, no sysroot required.

                                                              2. 2

                                                                For my latest project I bit the bullet with nix, instead of it just being an afterthought or dev shell. Only one nix GitHub action (plus the ones you mention, which should be built into GH, not actions, IMO). It runs all tests and other checks in the flake, and does caching for cargo with crane and cachix, which is super fast. Works like a charm, both locally and on the CI!

                                                              3. 9

                                                                Our code sits in a monorepo which is further divided into folders. Every folder is independent of each other and can be tested, built, and deployed separately.

                                                                So what’s the point of using a single repository then? The code clearly is independent of each other, and they want it to be tested independently of each other.

                                                                1. 20

                                                                  Why would you split it? Maintaining separate repositories just seems like extra bookkeeping and toil.

                                                                  1. 4

                                                                    It is. So much overhead.

                                                                    Multirepo in the same repo is the way to go.

                                                                    1. 3

                                                                      Multirepo in the same repo is the way to go.

                                                                      My understanding of the term “multi-repo” is that it refers to an architecture wherein the codebase is split out into separate repositories. Your use seems to mean something different. Are you referring to Git submodules?

                                                                      1. 2

                                                                        Many people consider a monorepo a situation where all the things in the repo have a coherence when it comes to dependencies or build process. For me a monorepo is also worth it if you just put fully independent things in separate subfolders in the same repository.

                                                                        Are you referring to Git submodules

                                                                        I would never. git submodules are bad.

                                                                    2. 3

                                                                      access control, reducing the amount of data developers have to clone, sharing specific repositories with outside organisations, avoiding problems exactly like the ones this blog post outlines, etc.

                                                                      Now I know you’re going to say “well, we’ve got tooling that reads a code owners file for the first, some tooling on top of git to achieve the second, and an automated sync job with a separate history for the third” but all of that sounds like additional tooling and complexity you wouldn’t need if you didn’t do this. I think the monorepo here is the extra bookkeeping and toil.

                                                                      1. 8

                                                                        We tried this too, releasing loosely coupled software in a monorepo all with the same version numbers. In this case semantic versioning doesn’t make sense since a breaking change in one package would cause a major version bump. But another package might not have any changes at all between those major versions. In this case the only versioning scheme that would make sense is date(time) based versioning. But that can be achieved without using a monorepo. I agree with ~fratti, the benefit of a monorepo is not obvious.

                                                                        1. 4

                                                                          Why do you care about the version number? It’s all at the same commit, you don’t have to care about the version.

                                                                          1. 2

                                                                            In the mono-repos I’ve worked in, there have often been a mixture of apps, APIs, and libraries. If I release a new version of the app, I don’t want to release a new version of the libraries or API because it implies a change to downstream users that doesn’t exist. The fact that mono-repo tools and the people who use them encourage throwing away semver is evidence to me that the modularity pendulum has swung from micro-everything to mono-everything in far too extreme a way.

                                                                            1. 2

                                                                              In the mono-repos I’ve worked in, there have often been a mixture of apps, APIs, and libraries. If I release a new version of the app, I don’t want to release a new version of the libraries or API because it implies a change to downstream users that doesn’t exist.

                                                                              Why do you care? The entire point of a monorepo is saying “Everything in the repo at this point works together, so we release it at that commit ID”. In every monorepo I’ve used, the only identifier we ever used for a version was the commit hash when the release of the software and all its in-repo dependencies was cut.

                                                                              It seems very strange to talk about versions in a monorepo – the entire point of a monorepo is to step away from that.

                                                                              1. 1

                                                                                I think there are some folks who are missing what you describe as the point of monorepos. It sounds like the context(s) in which you use them are basically atomic applications. The parts of the application may be deployed in multiple contexts, but they are not intended to be used separately. I can see the appeal of monorepos there. Unfortunately, my experience has been considerably messier. Where the line gets crossed is where the pieces of such applications become public. Libraries get published as packages to registries. Web services get public docs. Now I don’t just have application users, I have users of the pieces of the application. This is where I start to care about versioning, because the users of these pieces care. Mileage clearly varies, but the tendency of people to treat monorepos as the default choice has, for me, resulted in inheriting monorepos that might have started as atomic applications but are no longer so. The benefit has been a few saved git clone commands and some deployment coordination/ceremony. The loss in time to tooling issues has been considerably more than that.

                                                                            2. 2

                                                                              Why do you care about the version number? It’s all at the same commit, you don’t have to care about the version.

                                                                              Are you asking me or about the original article?

                                                                              We release several of the loosely coupled software pieces within the company. In that sense not everything is in the same commit (or even the same repo), and downstream/outside users aren’t either, so we need to use version numbers. So in my mind a monorepo really only makes sense if you’re okay with datetime-based versioning, or if you’re working on tightly coupled pieces of software that you test and release together.

                                                                              About the original article I don’t know why they care or if they even do.

                                                                        2. 4

                                                                          The code clearly is independent of each other

                                                                        I’ve never used a monorepo, nor do I have any strong feelings for/against them. But I have seen them. This is kind of just how they usually end up; I don’t think this defeats the purpose of a monorepo though.

                                                                          1. 1

                                                                            It can be tested, built, and deployed separately - it can also be done together and without juggling versions, repos, dependencies, rollouts…

                                                                          2. 19

                                                                            From the very little Go I’ve written, I gotta say that the explicit if err != nil error handling struck me as something pretty darn good even if a bit verbose. And in my experience outside Go, just knowing its outlook on error handling has nudged me towards writing more satisfying error handling code, too: https://riki.house/programming/blog/try-is-not-the-only-option

                                                                            I also regularly write C++ code which does error handling in this way and do not find it particularly revolting. (Though most C++ I write is very game devy, where an error usually results in a crash, or is logged and the game keeps on chugging along.)

                                                                            I can see why people would not share the same opinion though. Perhaps I’m just slowly becoming a boomer.

                                                                            1. 14

                                                                              Yeah, I think the grievance about Go boilerplate is somewhat misguided. Most of the “boilerplate” is annotating the error with context, and I’m not really aware of an easy way to alleviate this problem. Sure, a ? operator will help you if you’re just writing return err all over the place, but if that’s your default error handling approach then you’re writing substandard code.

                                                                              That said, I don’t love the idea of wedding the syntax so tightly to the error type. I wish this proposal would take a more general approach so it could be used for functions that return T, bool as well (roughly Go’s way of spelling Option<T>), for example.
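
                                                                    For illustration, here is a minimal sketch of the annotate-and-return pattern being discussed (loadConfig and the context strings are hypothetical):

                                                                        package main

                                                                        import (
                                                                            "fmt"
                                                                            "os"
                                                                        )

                                                                        // loadConfig shows the dominant pattern: three lines per
                                                                        // fallible call, most of which is the context string
                                                                        // rather than control flow.
                                                                        func loadConfig(path string) ([]byte, error) {
                                                                            data, err := os.ReadFile(path)
                                                                            if err != nil {
                                                                                return nil, fmt.Errorf("reading config %s: %w", path, err)
                                                                            }
                                                                            return data, nil
                                                                        }

                                                                        func main() {
                                                                            if _, err := loadConfig("/nonexistent/app.conf"); err != nil {
                                                                                fmt.Println(err)
                                                                            }
                                                                        }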

                                                                              1. 2

                                                                      You can also add stack traces to errors early. Then you can just return the error, and you don’t need to rely on grepping for (hopefully) unique (they are not) error strings to reconstruct a stack trace by hand.
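
                                                                      One way to do that, assuming the third-party github.com/pkg/errors package (step and load here are hypothetical stand-ins):

                                                                          package main

                                                                          import (
                                                                              "fmt"

                                                                              "github.com/pkg/errors"
                                                                          )

                                                                          // step is a hypothetical fallible call; errors.New here
                                                                          // already records a stack trace at its call site.
                                                                          func step() error {
                                                                              return errors.New("boom")
                                                                          }

                                                                          func load() error {
                                                                              if err := step(); err != nil {
                                                                                  // Wrap annotates the error and captures a stack trace here.
                                                                                  return errors.Wrap(err, "load failed")
                                                                              }
                                                                              return nil
                                                                          }

                                                                          func main() {
                                                                              // %+v prints the recorded stack trace along with the messages.
                                                                              fmt.Printf("%+v\n", load())
                                                                          }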

                                                                              2. 13

                                                                                The problem with the Go error form is not the syntax. It’s the fact that, by using this error-handling form, the Go compiler is basically not involved in enforcing the correctness of your program.

                                                                                Go will fail to compile if you have an unused import, but you can just forget to check an error result. That’s the problem.

                                                                                1. 9

                                                                                  you can just forget to check an error result. That’s the problem.

                                                                                  I agree with you that it’s a more serious problem than the boilerplate, but the overwhelming majority of complaints about Go are about the boilerplate. That said, “you can just forget to check an error result” is also not a particularly serious problem in practice. I’m sure it has lead to bugs in the past, but these problems are relatively rare in practice, because:

                                                                                  1. the errcheck linter exists and is bundled with popular linter aggregators
                                                                        2. even without a linter, you still get a compiler error when you ignore the error in an assignment (e.g., file := os.Open() instead of file, err := os.Open(); see the sketch after this list)
                                                                                  3. for better or worse, Go programmers are pretty thoroughly conditioned to handle errors. A fallible operation without an error check looks conspicuous.
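
                                                                        A compilable sketch of point 2 from the list above (data.txt is an arbitrary name; the rejected line is left commented out because it cannot compile):

                                                                            package main

                                                                            import (
                                                                                "fmt"
                                                                                "os"
                                                                            )

                                                                            func main() {
                                                                                // file := os.Open("data.txt")
                                                                                // ^ compile error: assignment mismatch:
                                                                                //   1 variable but os.Open returns 2 values
                                                                                file, err := os.Open("data.txt")
                                                                                if err != nil {
                                                                                    fmt.Println("open failed:", err)
                                                                                    return
                                                                                }
                                                                                defer file.Close()
                                                                            }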

                                                                                  Are these as nice as having error-handling checks in the language? No. Is it a real problem in practice? No.

                                                                                  1. 4

                                                                                    I think the question I would pose, then, is: does type-safety matter or doesn’t it?

                                                                                    If linting is sufficient to enforce program correctness, why bother with static types? Why not use a dynamic language that’s easier to work with?

                                                                                    If I’m accepting the effort of working within a static type system that requires me to correctly annotate all my types, then I also want the type system to take responsibility for the invariants of my program.

                                                                                    If I have to expend the effort of writing types but also have to write a thousand ad-hoc if statements to check my invariants, then Go feels like the worst of both worlds. At least in a dynamic language, I can build higher-level abstractions to do this in a less tedious way.

                                                                                    1. 2

                                                                                      I think the question I would pose, then, is: does type-safety matter or doesn’t it?

                                                                            I don’t think it’s a binary proposition. It certainly matters more in some cases than others, and I don’t think the lack of forced error handling ends up being an enormous deal in practice. Would I prefer that Go’s type system enforced handling return values? Sure. Has this ever caused me a problem in my ~15 years of extensive Go use? Not that I recall (though I’m sure it has caused someone somewhere a problem at least one time).

                                                                                      If I have to expend the effort of writing types but also have to write a thousand ad-hoc if statements to check my invariants, then Go feels like the worst of both worlds.

                                                                                      Eh, most of the boilerplate with error handling is in annotating the errors with helpful context, which, as far as I know, doesn’t lend itself well to an automated solution unless your solution is not to annotate at all (in which case you’re just trading a little pain up front for more pain at debug time) or to annotate with stack traces or something similarly painful to consume.

                                                                                  2. 4

                                                                                    It’s also not implemented well. It requires observing the difference between defining err and re-assigning err, which is important only because go complected error handling with variable definitions, and uses boilerplate that can’t always limit the variable’s scope to just the scope it needs.

                                                                                    When moving code around or temporarily commenting out parts of it, it forces adjusting the syntax. Sometimes reassignment isn’t possible, and you get err2 too, and a footgun of mixing it up with the first err. This problem doesn’t have to exist.

                                                                                    1. 1

                                                                            Brad Fitzpatrick made a point about this somewhere… I decided to keep a copy for posterity. It’s ugly and no one should use it, but it exposes the issue you raised of ignored errors.

                                                                                      https://go.dev/play/p/JBQ3zeVMti For your amusement.

                                                                                      1. 1

                                                                                        Go will fail to compile if you have an unused import, but you can just forget to check an error result. That’s the problem.

                                                                                        I did notice that, and totally agree. I’d probably expand that to error out on any unused values overall, not just errors, but at this point it’s probably too big of a breaking change to make it into the compiler itself.

                                                                                      2. 5

                                                                                        You’re comparing it to the wrong thing. The ? propagation is a revolt against C++ style exceptions too:

                                                                                        • Throwing and catching is an alternative parallel way of “returning” values from functions. ? keeps returning regular values in a normal way.

                                                                                        • Exceptions are invisible at the call site. ? keeps the error handling locally visible.

                                                                                        • C++-style exceptions are even invisible in function’s prototype, and Java’s typed exceptions aren’t liked either. ? simply uses normal function return types, keeping the error type explicit and easy to find.

                                                                                        • Exceptions can unwind past many levels of the call stack. ? keeps unwinding one call at a time.

                                                                                        ? is closer to golang’s philosophy than any other error handling strategy. It agrees that error handling should be locally explicit and based on regular values. It just doesn’t agree that the explicitness needs to go as far as taking 3 lines of almost identical code every time.

                                                                                        Experience from Rust shows that just a single char is sufficiently explicit for all the boring cases, and because the errors are just values, all the non-trivial cases can still be handled with normal code.

                                                                                        1. 1

                                                                                          It just doesn’t agree that the explicitness needs to go as far as taking 3 lines of almost identical code every time.

                                                                                          I usually want to add context to my errors, so the 3 lines are rarely “almost identical”. Do people routinely omit additional context in Rust, or does the ? somehow allow for more succinct annotations than what we see in Go? As far as I can tell, it seems like the ? operator is only useful in the ~5% of cases where I want to do nothing other than propagate an unannotated error, but people are making such a big fuss about it that I’m sure I’m misunderstanding something.

                                                                                          1. 4

                                                                            Rust solves this without needing to abandon ?.

                                                                            ? calls a standard From trait that converts or wraps the error type you’re handling into the error type your function returns, and the conversion can have a custom implementation. There’s a cottage industry of macro helpers that make these mappings easy to define.

                                                                                            It works well with Rust’s enums, e.g. if your function returns my::ConfigFileError, you can make std::io::Error convert to ConfigFileError::Io(cause), and another type to ConfigFileError::Syntax(parse_error). Then another function can convert that config error into its own ServerInitError::BecauseConfigFailed(why), and so on. That handles ~80% of cases.

                                                                                            For other cases there are helper functions on Result, like .map_err(callback) that run custom code that modifies the error type. The advantage is that this is still an expression and chains nicely:

                                                                                // note: `do` is a reserved word in Rust, so `setup` stands in
                                                                                // for the first call; context()/with_context() are as provided
                                                                                // by e.g. anyhow's Context trait
                                                                                let x = setup().map_err(Custom::new)?
                                                                                    .do_more().context(obj)?
                                                                                    .etc().with_context(callback)?;
                                                                                            

                                                                                            And for complex cases there’s always match or if let Err(err) that is like golang’s approach.

                                                                            The Go codebases that I work with very often just return nil, err, and at best return nil, custom.Wrap(err), which is like the From::from(err) that ? calls.

                                                                                            1. 2

                                                                                              Thanks, I am aware of these but you summarized them well. I guess my feeling is that this largely moved the annotation boilerplate into creating a new error type. This doesn’t seem to me to be an enormous net win if at all. In my experience, creating new error types can be quite burdensome, at least if you want to do a good, idiomatic job of it (IIRC, when I used some of the macro libraries, they would throw difficult-to-debug error messages). I may have been holding it wrong, but with Go I can just return fmt.Errorf(…) and move on with life. That’s worth a lot more to me than the absolute lowest character count. 🤷‍♂️

                                                                                              1. 1

                                                                                fmt.Errorf isn’t strongly typed. bail!("fmt") does that in Rust, and that’s okay for small programs, but libraries typically care to return precise types that allow consumers of the library to perform error recovery, internationalisation, etc.

                                                                                                You’re right that it moves boilerplate to type definitions. I like that, because that boilerplate is concentrated in its own file, instead of spread thinly across the entire codebase.

                                                                                                1. 1

                                                                                                  fmt.Errorf isn’t strongly typed. bail!(“fmt”) does that in Rust, and that’s okay for small programs, but libraries typically care to return precise types that allow consumers of the library to perform error recovery, internationalisation, etc.

                                                                                                  I agree, and in Go we only use fmt.Errorf() if we are annotating an error. Error recovery works fine, because Go error recovery involves peeling away the annotations to get at the root error. This is probably not ideal, but it’s idiomatic and doesn’t cause me any practical problems whereas with Rust I have to choose between being un-idiomatic (and bridging my personal idiom with idioms used by my dependencies), or writing my own error types for annotation purposes, or using a macro library to generate error types (and dealing with the difficult-to-debug macro errors) all of which involve a pretty high degree of tedium/toil. I don’t love the Go solution, but it mostly gets out of my way, and I haven’t figured out how to make Rust’s error handling get out of my way. :/
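
                                                                                  A minimal sketch of that peeling with the standard library (the wrap message and path are illustrative):

                                                                                      package main

                                                                                      import (
                                                                                          "errors"
                                                                                          "fmt"
                                                                                          "os"
                                                                                      )

                                                                                      func main() {
                                                                                          _, err := os.Open("/definitely/missing")
                                                                                          // Annotate with %w so the original error stays reachable.
                                                                                          wrapped := fmt.Errorf("loading settings: %w", err)

                                                                                          // errors.Is peels the %w chain until it finds the target sentinel.
                                                                                          fmt.Println(errors.Is(wrapped, os.ErrNotExist)) // true
                                                                                      }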

                                                                                                  You’re right that it moves boilerplate to type definitions. I like that, because that boilerplate is concentrated in its own file, instead of spread thinly across the entire codebase

                                                                                                  Yeah, I agree that making dedicated error types makes sense when you are reusing errors across the codebase but for simply annotating errors we are almost always concerned with a one-off error so there’s no “spread thinly across the entire codebase” to worry about.

                                                                                                  I’m not trying to be a Go fanboy here; I don’t particularly like Go’s error handling, but for all its warts and oddity, it mostly gets out of my way. I feel like I spend at least an order of magnitude less time meddling with errors in Go than I do in Rust, even though Rust has ? (my constraint is almost never keystrokes). My issues with Go are mostly theoretical or philosophical, while my issues with Rust are regrettably practical. :(

                                                                                            2. 1

                                                                                              I usually want to add context to my errors, so the 3 lines are rarely “almost identical”. Do people routinely omit additional context in Rust, or does the ? somehow allow for more succinct annotations than what we see in Go?

                                                                                              You’d usually do something like .context(/* error context here */)? in such cases. Though, to be honest, while I’m not a Go programmer, I’ve seen some Go code, and it seems most projects don’t add context to errors in most cases?

                                                                                              Personally I haven’t found the need to add context to errors in Rust anywhere close to 95% of the time. Usually a stacktrace has already been captured and there isn’t much to add.

                                                                                              1. 3

                                                                                                Though, to be honest, while I’m not a Go programmer, I’ve seen some Go code, and it seems most projects don’t add context to errors in most cases?

                                                                                                It seems to vary a lot, and especially older projects (before the fmt.Errorf() stuff was added) probably don’t attach context.

                                                                                                Personally I haven’t found the need to add context to errors in Rust anywhere close to 95% of the time. Usually a stacktrace has already been captured and there isn’t much to add.

                                                                                                Usually I want to add identifiers (e.g., filepaths, resource identifiers, etc) to errors, so rather than “permission denied”, you get “opening /foo/baz.txt: permission denied”. I also haven’t found a very satisfying way to attach stack traces to errors–do you check each call site to determine whether it has already attached the stack trace? Maybe that’s reasonable, but what I usually do is make sure each function adds an annotation that says what it’s doing (e.g., OpenFile() will annotate its errors with fmt.Errorf("opening file %s: %w", path, err)). I definitely don’t think that’s an ideal solution; it’s just the best one I’ve found so far. 🤷‍♂️

                                                                                          2. 1

                                                                                            Though most C++ I write is very game devy

                                                                                  Also, probably having no exceptions is already a natural technical constraint for you, right?

                                                                                            1. 2

                                                                                              Depends on your requirements; I’d say for a small game, the runtime & binary size cost of exceptions isn’t large enough to matter that much for you.

                                                                                          3. 2

                                                                                            No, seriously, the idea that every terminal should implement support for every image format on its own, and do it in the right way (which we have yet to cover), is just something that will never happen.

                                                                                            It’s already a miracle if one protocol is implemented correctly, beyond being able to “dump” an image at the prompt.

                                                                                  I wrote a library for ratatui that generalizes the three protocols (Sixel, Kitty, and iTerm2). The hardest part is testing an array of terminals! Does anybody know how I could screenshot-test a variety of terminals on CI?