Threads for neilalexander

    1. 6

      See also the draft release notes: https://tip.golang.org/doc/go1.20

      1. 1

        I’m pretty excited for arenas, but it’s a bit unfortunate that it’s behind an environment variable. I guess they couldn’t prototype it outside of the standard library, so this is the UX we’re left with.

        1. 1

          It’s common for new Go features to appear behind flags or environment variables in one version and then to be enabled by default in later versions.

        2. 1

          It’s a cool idea, but I worry that it will lead to crashes in practice.

    2. 5

      Frankly this is a mess. Why are there so many channels? If you want to fan out, it is much cleaner, simpler and less error prone to have a single work queue and N worker goroutines that “steal” work from it, posting back the results.

      Example: https://go.dev/play/p/r2d--RW6L_T

      In addition, “work stealing” makes the best use of worker resources, because workers that complete faster tasks can get through more of the queued items while other workers are busy processing slower tasks.

      If you instead try to subdivide the work up front by creating a channel for each worker and splitting the items across them, and some work takes longer than other work, then you are guaranteed to end up with workers sitting idle because “their” work is done while others are still busy.
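
      Not the code from the playground link above, but a minimal sketch of the shape (names are illustrative):

      package main

      import "fmt"

      func main() {
          jobs := make(chan int)
          results := make(chan int)

          // N workers all pull from the same queue, so workers that finish
          // faster naturally take more items.
          const numWorkers = 4
          for w := 0; w < numWorkers; w++ {
              go func() {
                  for j := range jobs {
                      results <- j * j // stand-in for real work
                  }
              }()
          }

          // Feed the queue and close it so the workers exit.
          go func() {
              for i := 0; i < 10; i++ {
                  jobs <- i
              }
              close(jobs)
          }()

          // Collect exactly as many results as jobs were submitted.
          for i := 0; i < 10; i++ {
              fmt.Println(<-results)
          }
      }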

    3. 2

      I’ve done quite a bit of work with WebAssembly and I really do want to be optimistic about it, but as long as it’s still necessary to trampoline out to JavaScript in order to do anything useful in the browser (like interact with WebSockets, WebRTC etc.) then it’s never going to be more than a toy.

      On the surface, the proposed WASI interfaces seem like they should be the answer to that, but it isn’t really clear that their actual implementations will be if they only provide fragmented access to “web-first” APIs. For example, WebSockets are better than normal sockets in exactly zero ways and are worse than normal sockets in several ways, which at this point is ultimately the story of the “web” platform as a whole.

    4. 12

      Apple calls these “Silicon** CPUs just to throw in a little meaningless terminology confusion

      Not to be overly pedantic, but they don’t call them “Silicon”, they call them “Apple Silicon” — as in “silicon made by Apple”, not “Silicon, the Apple product”.

      1. 1

        Thanks, fixed (and fixed the weird formatting too, I’m still on the fence about emacs smart parens, sometimes it’s great, sometimes it messes up my markdown).

    5. 5

      Is there a reason that you wouldn’t just use the “Require branches to be up to date before merging” option in the branch protection rules? The user will be prompted to update the branch with a button and any merge conflicts with the parent branch will be highlighted automatically. It is far less noisy than bot-driven comments.

    6. 2

      Great for graphics, but I’m itching for browser access to the more modern compute capabilities of GPGPUs. Anyone know what’s up with WebGPU?

      1. 1

        What legitimate use cases exist for this? The only thing I can think of is websites that mine shitcoins on users’ systems… which isn’t a legitimate use case in my book.

        1. 3

          My use case is that students in graphics courses have different operating systems on their laptops. This makes it hard to create exercises everyone can run and modify. The browser is the perfect sandbox.

        2. 2

          I’m interested in running neural networks. Here’s a list of around 30 other applications: https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units#Applications

          1. 2

            I know why GPGPUs are useful, but not why a web browser needs to use them. That’s what I was specifically asking about.

          2. 1

            Why would you want to do this in the browser instead of a standard process?

    7. 4

      The same authors also propose allowing use of 0/8 and 240/4.

      1. 15

        240/4 feels like the only one that could have legs here. I can’t see a world where 0/8 and 127/8 are anything but eternal martians, with anyone unlucky enough to get an IP in that space just doomed to have things never work.

        Can we just have IPv6 already? :/

        1. 4

          totally agree, we should have had IPv6 10 years ago - and yet here in Scotland my ISP cannot give me IPv6.

          1. 2

            Vote with your feet and change ISP.

            1. 4

              Neither of the two broadband ISPs available where I live provide IPv6. Voting with my feet would have to be uncomfortably literal.

        2. 2

          Call me naive, but I’m actually not sure if 0/8 would be such a big problem. I’ve certainly never seen it actively special-cased like 127/8. That might just mean my experience with different makes of switches etc. is not the best, but for 127/8 I don’t even need to think hard to come up with 10 things that would break, whereas 0/8 is more like “I’d have to check all the stuff and it might work”.

        3. 1

          That’s weird, I thought I’d seen an IP address like 0.3.something that was publicly routable. I’m not completely sure, but I vaguely remember seeing something along those lines and thinking it was weird.

      2. 7

        Allocating 240/4 is the most realistic option because equipment that has hardcoded filters for it is either some really obscure legacy stuff that shouldn’t even be allowed access to the public Internet (like any unmaintained networked device) or, if maintained, should have never had it hardcoded and should be fixed.

        Maybe it’s their secret plan: make two infeasible proposals to make the 240/4 proposal look completely sensible by comparison. ;)

        1. 2

          In all seriousness, I don’t think you have any concept of how much aging network kit there is out in the world which will never see a software upgrade ever again (either because the manufacturer doesn’t release them anymore, or because “it ain’t broke, why fix it?”).

          1. 1

            I know it quite well, but whose problem is it? Those people already face much greater risks than not being able to reach newly-allocated, formerly reserved addresses.

            1. 1

              That may be the case but it’s ultimately everyone’s problem — there are network operators who will end up having to take on the support burden from users who can’t reach these services (whose hands may be tied for other reasons, e.g. organisational, budgetary etc), there are service operators who will end up having to take on the support burden from users who can’t reach their services (who can do basically nothing because it’s your network problem not ours), and there are users who will no doubt be unhappy when they can’t reach these services and don’t understand why (my friend or colleague says this URL works but for me it doesn’t).

    8. 6

      A slightly related Go nit: the case of struct members determines whether they’re exported or not. It’s crazy, why not explicitly add a private keyword or something?

      1. 19

        why not explicitly add a private keyword or something?

        Because capitalization does the same thing with less ceremony. It’s not crazy. It’s just a design decision.
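
        For anyone who doesn’t write Go, the entire mechanism is just this (illustrative):

        package user

        type User struct {
            Name string // capitalized: exported, visible to other packages
            age  int    // lowercase: unexported, private to this package
        }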

        1. 4

          And limiting variable names to just “x”, “y” and “z” is also simpler and much less ceremony than typing out full variable names.

          1. 1

            I’m not sure how this relates. Is your claim that the loss of semantic information that comes with terse identifiers is comparable to the difference between type Foo struct and e.g. type public foo struct?

          2. 1

            That is actually a Go convention, too. Two-letter or three-letter variable names like cs instead of customerService.

      2. 6

        This would be a more substantive comment chain if you could express why it’s crazy, rather than just calling it crazy. Why is it important that it should be a private keyword “or something”? In Go, the “or something” is literally the case-sensitive member name… which is an explicit way of expressing whether it’s exported or not. How much more explicit can you get than a phenotypical designation? You can look at the member name and know then and there whether it’s exported. An implicit export would require the reader to look at the member name and at least one other source to figure out if it’s exported.

        1. 7

          It’s bad because changing the visibility of a member requires renaming it, which requires finding and updating every caller. This is an annoying manual task if your editor doesn’t do automatic refactoring, and it pollutes patches with many tiny one-character diffs.

          It reminds me of old versions of Fortran where variables that started with I, J, K, L or M were automatically integers and the rest were real. 🙄

          1. 5

            M-x lsp-rename

            I don’t think of those changes as patch pollution — I think of them as opportunities to see where something formerly private is now exposed. E.g. when a var was unexported I knew that my package controlled it, but if I export it now it is mutable outside my control — it is good to see that in the diff.

          2. 2

            I guess I don’t consider changing the capitalization of a letter to be renaming the variable.

            1. 2

              That’s not the point. The point is you have to edit every place that variable/function appears in the source.

              1. 3

                I was going to suggest that gofmt’s pattern rewriting would help here, but it seems you can’t limit it to a type (although gofmt -r 'oldname -> Oldname' works if the field name is unique enough). Then I was going to suggest gorename, which can limit to struct fields but apparently hasn’t been updated to work with modules. Apparently gopls is the new hotness, but despite the “it’ll rename throughout a package” claim, when I tested it, specifying main.go:9:9 Oldname only fixed it (correctly!) in main.go, not in the other files in the main package.

                In summary, this is all a bit of a mess from the Go camp.

                1. 1

                  It looks like rsc’s experimental “refactor” can do this - successfully renamed a field in multiple files for me with rf 'mv Fish.name Fish.Name'.

        2. 5

          The author of the submitted article wrote a sequel article, Go’ing Insane Part Two: Partial Privacy. It includes a section Privacy via Capitalisation that details what they find frustrating about the feature.

      3. 4

        A slightly related not-Go nit: the private keyword determines whether struct fields are exported or not. It’s crazy, why not just use the case of the field names, saving everyone some keypresses?

      4. 2

        I really appreciate it, and find myself missing it in every other language. To be honest, I have difficulty understanding why folks would want anything else.

      5. 2

        On the contrary, I rather like that it’s obvious in all cases whether something is exported or not without having to find the actual definition.

    9. 22

      (context: I’ve used Go in production for about a year, and am neither a lover nor hater of the language, though I began as a hater.)

      With that said, my take on the article is:

      1. The “order dependence” problem is a non-problem. It doesn’t come up that often, and dealing with it is easy – this is simply low-priority stuff. If I wanted to mention it, it would be as an ergonomic nitpick.
      2. The infamous Go error handling bloat, while annoying to look at, has the great benefit of honesty and explicitness: you have to document your errors as part of your interface, and you have to explicitly deal with any error-producing code you use. Despite personally really caring about aesthetics and hygiene – and disliking the bloat like the author – I’ll still take this tradeoff. I also work in Ruby, and while raising errors allows you to avoid this boilerplate, it also introduces a hidden, implicit part of your interface, which is worse.

      It’s also worth pointing out Rob Pike’s Errors are Values which offers advice for mitigating this kind of boilerplate in some situations.

      1. 22

        There’s a difference between explicitness and pointless tediousness.

        Go’s error handling is more structured compared to error handling in C, and more explicit and controllable compared to unchecked exceptions in C++ and similar languages. But that’s a very low bar now.

        Once you get a taste of error handling via sum types (generic enums with values), you can see that you can have your cake and eat it too. You can have very explicit error documentation (via types), errors as values, and locally explicit control flow without burdensome syntax (via the ? syntax sugar).

        1. 4

          I agree.

          But Go, e.g., is not Haskell, and that’s an explicit language design decision. I think Haskell is a more beautiful language than Go, but Go has its reasons for not wanting to go that direction – Go values simple verbosity over more abstract elegance.

          1. 15

            If it’s Go’s decision then ¯\_(ツ)_/¯

            but I’ve struggled with its error handling in many ways: from annoyances where commenting out one line requires changing = to := on another and silly errors due to juggling err and err2, to an app leaking temp files badly due to the lack of any robust “never forget to clean up after error” feature (defer needs to be repeated in every function, there isn’t errdefer even, and there’s no RAII or deterministic destruction).
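
            To make the first annoyance concrete, a tiny sketch (setup and run are hypothetical stand-ins):

            package main

            func setup() error { return nil }
            func run() error { return nil }

            func do() error {
                err := setup() // first use of err, so it is declared with :=
                if err != nil {
                    return err
                }
                // Commenting out the three lines above makes the next line fail
                // to compile ("undefined: err") until its = is changed to :=.
                err = run()
                return err
            }

            func main() { _ = do() }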

            1. 6

              Sounds like you’re fighting the language 🤷

            2. 5

              there isn’t errdefer even

              I mean, it’s a pretty trivial helper func if you want it:

              func Errdefer(errp *error, f func()) {
                  if *errp != nil {
                      f()
                  }
              }
              
              func whatever() (err error) {
                  defer Errdefer(&err, func() {
                     // cleanup
                  })
                  // ...
              }
              

              In general, to have fun in Go, you have to have a high tolerance for figuring out what 3 line helper funcs would make your life easier and then just writing them. If you get into it, it’s the fun part of writing Go, but if you’re not into it, you’re going to be “why do I have to write my own flatmap!!” every fourth function.

            3. 3

              commenting out one line requires changing = to := on another

              I do not agree that this is a problem. := is an explicit and clear declaration that helps the programmer to see in which scope the variable is defined and to highlight clear boundaries between old and new declarations for a given variable name. Being forced to think about this during refactoring is a good thing.

              1. 1

                Explicit binding definition by itself is good, but when it’s involved in error propagation it becomes a pointless chore.

                That’s because variable (re)definition is not the point of error handling; it’s only a self-inflicted requirement Go made for itself.

                1. 3

                  Go takes the stance that error propagation is not different than any other value propagation. You don’t have to agree that it’s a good decision, but if you internalize the notion that errors are not special and don’t get special consideration, things fall into place.

            4. 1

              commenting out one line requires changing = to := on another

              IMHO := (outside of if, for & switch) was a mistake; I prefer a C-style var block at the top of my function.

              silly errors due to juggling err and err2

              I think that this is mostly avoidable.

          2. 6

            Yup, Go (well, presumably Rob Pike) made a lot of explicit design decisions like this, which drove me away from the language after a year or two and many thousands of LOC written.

            Beside the awfulness of error handling, other big ones were the inane way you have to rename a variable/function just to change its visibility, the lack of real inheritance, and the NIH attitude to platform ABIs that makes Go a mess to integrate with other languages. The condescending attitude of the Go team on mailing lists didn’t help either.

          3. 3

            There is no value in verbosity, though. It’s a waste of characters. The entire attitude is an apology for the bare fact that Go doesn’t have error-handling syntax.

            1. 11

              What you label “verbosity” I see as “explicitness”. What to you is a lack of error-handling syntax is to me a simplification that normalizes execution paths.

              It’s very clear to me that the people who dislike Go’s approach to error handling see errors as a first-class concept in language design, which deserves special accommodation at the language level. I get that. I understand the position and perspective. But this isn’t an objective truth, or something that is strictly correct. It’s just a perspective, a model, which has both costs and benefits. This much at least is hopefully noncontroversial. And Go makes the claim that, for the contexts which it targets, this model of error handling has more costs than benefits. If you want to object to that position then that’s fine. But it’s — bluntly — incorrect to claim that this is some kind of apologia, or that Go is missing things that it should objectively have.

              1. 5

                It often feels to me that people who complain about error handling in Go have never suffered dealing with throwing and catching exceptions in a huge codebase. At least in Go, you can be very explicit about how to handle errors (in particular, non-fatal ones) without the program trying to catapult you out of an escape hatch. Error handling is tedious in general, in any language. I don’t think Go’s approach is really any more tedious than anywhere else.

                1. 5

                  Error handling is tedious in general, in any language. I don’t think Go’s approach is really any more tedious than anywhere else.

                  Yep, and a bit more — it brings the “tedium” forward, which is for sure a short term cost. But that cost creates disproportionate long-term benefits, as the explicitness reduces risk otherwise created by conveniences.

            2. 5

              The argument isn’t that verbosity has a value in itself – it doesn’t.

              The argument is that if you have to choose between “simple, but concrete and verbose” and “more complex and abstract, but elegant”, it’s better to choose the former. It’s a statement about relative values. And you see it everywhere in Go. Think about the generics arguments:

              People: “WTF! I have to rewrite my function for every fucking datatype!”.
              Go: “What’s the big deal? It’s just some repeated code. Better than us bloating the language and making Go syntax more complex”

              They caved on that one eventually, but the argument is still germane.

              As I said, I don’t personally like all the decisions, and it’s not my favorite language, but once I got where they were coming from, I stopped hating it. The ethos has value.

              It all stems from taking a hard line against over-engineering. The whole language is geared toward that. No inheritance. You don’t even get map! “Just use a for loop.” You only see the payoff of the philosophy in a large team setting, where you have many devs of varying experience levels working over years on something. The “Go way” isn’t so crazy there.

      2. 3

        Java included exceptions in the function signature and everyone hated those; even Kotlin made them optional. Just like how this Python developer has grown to enjoy types, I also enjoy the explicit throws declarations.

      3. 3

        you have to explicitly deal with any error-producing code you use.

        Except if you forget to deal with it, forget to check for the error, or just drop it.

    10. 3

      Unicode. Seriously, I’d rewrite Unicode specifications from scratch.

      1. 2

        What would you change?

        1. 7

          I would go back much further and redesign the alphabet and English spelling rules.

        2. 4

          I for one would not admit emojis into Unicode. Maybe let the vendors that want them standardize something in the private use areas. But reading about new versions of Unicode and the number of emojis added has me wondering about the state of progress in this world.

          1. 5

            Customers demand emojis. Software vendors have to implement Unicode support to accommodate that. As a result, Unicode support is more widespread.

            I take that as a win.

            Besides, sponsoring emoji funds Unicode development to some extent.

            1. 3

              MSN Messenger had emoji-like things 20+ years ago, but they were encoded as [[:picture name:]]. This works, because they are pictures, not characters. Making them characters causes all sorts of problems (what is the collation order of lower-case lambda, American flag and poop in your locale? In any sane system, the correct answer is ‘domain error’).

              Computers had been able to display small images for at least a decade before Unicode even existed; trying to pretend that they’re characters is a horrible hack. It also reinvents the problems that Chinese and other ideographic languages have. A newspaper in a phonographic language can introduce a neologism by rearranging existing letters; one in an ideographic language has to either make a multi-glyph word or wait for their printing press to be updated with the new symbols. If I want a new pictogram in a system that communicates images, I can send you a picture. If I want to encode it as Unicode, then I need to wait for a new Unicode standard to add it, and then I need to wait for your and my software to support it.

          2. 1

            On the contrary, shipping new emoji is a great way to trick people into upgrading something when they might not otherwise be motivated. If you have some vulnerability fixes that you need to roll out quickly, bundle them alongside some new emoji and suddenly the update will become much more attractive to your users. Works every time. All hail the all-powerful emoji.

            1. 1

              Sure, let software vendors push security updates with emojis. Unicode the standard doesn’t need to do that.

    11. 6

      I wish this included some explanation of why these are useful, or why they were chosen. Like, why use zerolog over logrus/zap/whatever? Why use gorilla/mux over gin or go-chi or the standard library? Why have that overly-complex unique code thing to determine where a logged error is thrown instead of making sure your errors have stack traces?

      1. 4

        I’m not the author of the post, but I can try to answer some questions:

        Like, why use zerolog over logrus/zap/whatever?

        zerolog and zap are what the author of logrus admits he would write if he were writing logrus/v2. zerolog claims to be a “better” version of zap (there are claims of performance improvements on other people’s computers).

        Why use gorilla/mux over gin or go-chi or the standard library?

        gorilla/mux is much more lightweight than gin, which has renderers, etc… Also, there are claims of very high performance on other people’s computers. With gorilla/mux you can plug in your own JSON de-/serialization library, which is faster than gin’s.

        Why have that overly-complex unique code thing to determine where a logged error is thrown instead of making sure your errors have stack traces?

        ¯\_(ツ)_/¯ No idea… The unique code approach feels very hacky and brittle to me.

        1. 3

          I would even suggest “httprouter” as a more lightweight alternative to gorilla/mux.

          1. 2

            Most “higher level” Go routers are probably based on it anyways.

        2. 1

          We use unique logging codes for our logs, and it makes it easier to locate a particular statement in either the logs or the source code. We generate the unique codes by hand, but it’s not a horrendous issue for us (we have a separate document that records each logging statement).

      2. 1

        I’d be curious to know the best practices for annotating errors with stack traces. Any good library suggestions?

        1. 3

          github.com/pkg/errors has some nice helpers, as described in its documentation. The issue is that it requires changing your code to use errors.Errorf() instead of fmt.Errorf().

          Go 2 will do it differently, but natively.

          1. 3

            Also, errors.Wrapf is very useful when handling errors from packages which don’t use pkg/errors.

          2. 2

            You can wrap errors using fmt.Errorf() and the %w verb, and then later deal with them using errors.Is() or errors.As() and friends.
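
            A minimal sketch of that pattern (the file name is made up):

            package main

            import (
                "errors"
                "fmt"
                "io/fs"
                "os"
            )

            func loadConfig(path string) ([]byte, error) {
                b, err := os.ReadFile(path)
                if err != nil {
                    // %w wraps err so callers can still inspect it later.
                    return nil, fmt.Errorf("loading config %q: %w", path, err)
                }
                return b, nil
            }

            func main() {
                _, err := loadConfig("does-not-exist.conf")
                // errors.Is unwraps through the %w chain.
                fmt.Println(errors.Is(err, fs.ErrNotExist)) // true
            }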

            1. 1

              That doesn’t give you stack traces though, correct?

        2. 2

          YES. This is probably my biggest issue with supporting production Go code. Errors are still too hard to locate.

        3. 1

          I personally use this to manage all that. If you use the errors.Wrap() function to wrap errors, it’ll add stack traces. And if you want to make a new error, the errors.New() function will also add a stack trace from where it was called. Then when you’re logging the error, make sure it’s being logged in a way that will not just print the message. Most loggers should have a way to do that (I know zerolog and logrus do).

        4. 1

          Shameless plug for my errors library: https://pkg.go.dev/github.com/zeebo/errs/v2

          • captures stack traces, but only once even if wrapped multiple times
          • plays nicely with standard library errors package with errors.Is and errors.As
          • has a “tag” feature to let you associate and query tags with errors
          • helper to keep track of groups of errors

    12. 5

      I’ve been using an M1 for a while now. Screen, battery, and performance are great (but performance is not spectacular; there is still a lot of lag, just a lot less than you’re used to). Would have liked a USB port. Didn’t like the software: there’s no proper package manager (wasn’t very pleased with brew), hotkeys are very weird, no ‘snap to left side of the screen’, safari misses many features (like print selection or changing html), I have to reinstall the printer drivers after each update, and I often get random error messages in the terminal.

      1. 4

        I have to reinstall the printer drivers after each update

        Something I unfortunately didn’t know until quite recently: most printer/scanner driver packages for Macs are worthless, because macOS already knows how to talk to most printers and scanners. Scanner drivers especially are bad to install, because there are better scanning packages available that just use the OS’ own scanning framework.

        (I wish I had found this out years ago)

      2. 4

        I have to reinstall the printer drivers after each update

        Are you sure you need printer drivers? I’ve used Macs for 20+ years and various printers, and can’t remember the last time I had to do that (though I remember quite well being annoyed at having to do it on Windows). Are you not connected over USB or the network?

        1. 2

          Good suggestion, but I really need them to enable ‘manual duplex’ printing.

      3. 3

        safari misses many features (like print selection or changing html)

        What do you mean Safari can’t change HTML? The developer tools can do that and more.

        1. 1

          Ah, so it’s an extension, that makes sense!

      4. 1

        wasn’t very pleased with brew

        Have you given MacPorts a try?

      5. 1

        and I often get random error messages in the terminal.

        Curious about this one. What kind of error messages?

        1. 1

          I should have said ‘warning messages’! But what I get a lot is:

          objc[849]: Class AMSupportURLConnectionDelegate is implemented in both /usr/lib/libauthinstall.dylib (0x1fce89160) and /System/Library/PrivateFrameworks/MobileDevice.framework/Versions/A/MobileDevice (0x1166202b8). One of the two will be used. Which one is undefined.
          
      6. 1

        no ‘snap to left side of the screen’

        Check out https://rectangleapp.com/

    13. 26

      The article treats Go and Rust as on equal footing when it comes to safety. This is quite wrong, unless “safety” refers only to memory safety: Rust is far safer than Go in general, for a number of reasons: algebraic data types with exhaustive pattern matching, the borrow checker for statically preventing data races, the ability to use RAII-like patterns to free resources automatically when they go out of scope, better type safety due to generics, etc.

      Of course, both languages are safer than using a dynamically typed language. So from the perspective of a Python programmer, it makes sense to think of Go as a safer alternative.

      There may be certain dimensions along which it makes sense to prefer Go over Rust, such as the learning curve, the convenience of a garbage collector, etc. But it’s important to be honest about each language’s strengths and weaknesses.

      1. 23

        On the other hand, if you do think about memory safety, it’s not quite so clear cut. In Go, I can create a cyclic data structure without using the unsafe package and the GC will correctly collect it for me. In Rust, I have to write custom unsafe code to both create it and clean it up. In Go I can create a DAG and the GC will correctly clean it up. In Rust, I must use unsafe (or a standard-library type such as Rc that uses unsafe internally) to create it, and clean it up.

        However, in Go I can create an object with a slice field and share it between two goroutines, have one goroutine update the field in a loop, alternating between a small slice and a large slice, and have the other goroutine read until it sees the base pointer of the small slice paired with the length of the large slice. I now have a slice whose bounds are larger than the underlying object, and I can violate all of the memory-safety invariants without writing a single line of code using the unsafe package. In Rust, I could not do this without incorrectly implementing the Sync trait, and you cannot implement the Sync trait without unsafe code.

        Go loses its memory safety guarantees if you write concurrent software. Rust loses its memory safety guarantees if you use non-trivial data structures. C++ loses its memory safety guarantees if you use pointers (or references).
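
        For the curious, a sketch of the Go scenario described above (go run -race flags it; whether a torn read actually manifests on a given run is timing-dependent):

        package main

        // A slice header is three words (pointer, length, capacity) and is not
        // updated atomically, so an unsynchronized reader can observe a "torn"
        // header: the pointer of the small slice paired with the length of the
        // large one.

        var shared []byte

        func main() {
            small := make([]byte, 1)
            large := make([]byte, 1<<20)

            done := make(chan struct{})
            go func() {
                for i := 0; i < 1_000_000; i++ {
                    shared = small
                    shared = large
                }
                close(done)
            }()

            for {
                select {
                case <-done:
                    return
                default:
                }
                s := shared // racy read of a multi-word slice header
                if len(s) > 1 {
                    _ = s[len(s)-1] // with a torn header this indexes past the small backing array
                }
            }
        }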

        1. 12

          Go loses its memory safety guarantees if you write concurrent software. Rust loses its memory safety guarantees if you use non-trivial data structures. C++ loses its memory safety guarantees if you use pointers (or references).

          This is fantastically succinct. Thanks, I might use this myself ;)

        2. 6

          In Rust, I have to write custom unsafe code to both create it and clean it up

          No, you really don’t.

          You can create it with no unsafe code (outside of the standard library) and no extra tracking by using Box::leak; it will just never be cleaned up.

          You can create it with no unsafe code (outside of the standard library) and reference counted pointers by using Rc::new for forward pointers and Rc::downgrade for back pointers, and it will be automatically cleaned up (at the expense of adding reference counting).

          You can make use of various GC and GC-like schemes with no unsafe code (outside of well known libraries), the most famous of which is probably crossbeam::epoch.

          You can make use of various arena data structures to do so with no unsafe code (outside of well known libraries), provided that that form of “GC all at once at the end” fits your use case, e.g. typed arena.

          1. 3

            You can create it with no unsafe code (outside of the standard library)

            The parenthetical here is the key part. The standard library implementations for all of the things that you describe all use unsafe.

            1. 6

              No, it isn’t. That’s how the entire language works: encapsulate and abstract unsafe things until they are safe. To argue otherwise is to argue that every allocation is bad, because implementing an allocator requires unsafe (and the standard library uses unsafe to do so)…

              Unsafe code is not viral.

            2. 1

              Note also that the Rust Standard library has special dispensation for unsafe and unstable features – it can assume a particular compiler version, it can use unsafe code that would be unsound without special knowledge of the compiler, and it can compel the compiler to change in order to support what the stdlib wants to do.

      2. 13

        Of course, both languages are safer than using a dynamically typed language.

        I wish people would stop saying that. Especially with “of course”. We can believe all we want, but there is no data supporting the idea that dynamically typed languages are inherently less safe. Again, I don’t care why you think it should be the case. First show me that it actually is, then try to hypothesize as to why.

        1. 21

          I often find production bugs in dynamically typed systems I work on which are due to issues that would be caught by a type checker for a modern type system (e.g., null reference errors, not handling a case of an algebraic data type when a new one is added, not handling new types of errors as the failure modes evolve over time, etc.). That’s an existence proof that having a type checker would have helped with safety. And this is in a codebase with hundreds of thousands of tests and a strict code review policy with code coverage requirements, so it wouldn’t be reasonable to attribute this to an insufficient test suite.

          Very large companies are migrating their JavaScript codebases to TypeScript precisely because they want the safety of static types. Having been privy to some of those discussions, I can assure you those decisions were made with a lot of consideration, given the enormous cost of doing so.

          Going down the academic route, dependent types let you prove things that tests cannot guarantee, such as the fact that list concatenation is associative for all inputs. I personally have used Coq to prove the absence of bugs in various algorithms. As Dijkstra said, “program testing can be used to show the presence of bugs, but never to show their absence”. Types, on the other hand, actually can show the absence of bugs if you go deep enough down the rabbit hole (and know how to formally express your correctness criteria).

          You don’t have to believe me, and perhaps you shouldn’t since I haven’t given you any specific numbers. There are plenty of studies, if you care to look (for example, “A Large Scale Study of Programming Languages and Code Quality in Github; Ray, B; Posnett, D; Filkov, V; Devanbu, P”). But at the same time, there are studies that claim the opposite, so for this topic I trust my personal experience more than the kind of data you’re looking for.

          1. 2

            Yet Go lacks many of those features, such as algebraic data types and exhaustive switches, and it has both nil and default values.

            1. 3

              Yes, hence my original claim that Rust is far safer than Go. But Go still does have a rudimentary type system, which at least enforces a basic sense of well-formedness on programs (function arguments being the right type, references to variables/functions/fields are not typos, etc.) that might otherwise go undetected without static type checking. Since these particular types of bugs are also fairly easy to catch with tests, and since Go programs often rely on unsafe dynamic type casting (e.g., due to lack of generics), Go is not much safer than a dynamically typed language—in stark contrast to Rust. I think one could reasonably argue that Go’s type system provides negligible benefit over dynamic typing (though I might not go quite that far), but I do not consider it reasonable to claim that static types in general are not capable of adding value, based on my experience with dependent types and more sophisticated type systems in general.

              1. 2

                Since these particular types of bugs are also fairly easy to catch with tests, and since Go programs often rely on unsafe dynamic type casting (e.g., due to lack of generics), Go is not much safer than a dynamically typed language—in stark contrast to Rust.

                But only <1% of Go code is dynamically typed, so why would you argue that it’s not much safer than a language in which 100% of code is dynamically typed? Would you equally argue that because some small amount of Rust code uses unsafe that Rust is no safer than C? These seem like pretty silly arguments to make.

                In my experience writing Go and Rust (and a whole lot of Python and other languages), Go hits a sweet spot–you have significant type safety beyond which returns diminish quickly (with respect to safety, anyway). I like Rust, but I think your claims are wildly overstated.

          2. 2

            so for this topic I trust my personal experience more than the kind of data you’re looking for.

            I wonder how much of this is us as individual programmers falling into Simpson’s Paradox. My intuition says that for large, complex systems that change infrequently, static typing is a huge boon. But that’s only some portion of the total programs being written. Scientific programmers write code that’s more about transmitting mathematical/scientific knowledge and easily changeable for experimentation. Game scripters are looking for simple on-ramps for them to change game logic. I suspect the “intuitive answer” here highly depends on the class of applications that a programmer finds themselves working on. I do think there’s an aspect of personality here, where some folks who enjoy abstraction-heavy thinking will gravitate more toward static typing and folks who enjoy more “nuts-and-bolts” thinking may gravitate toward dynamic typing. Though newer languages like Nim and Julia are really blurring the line between dynamic and static.

          3. 2

            Have you used Coq to prove the correctness of anything that wasn’t easy by inspection? I’ve looked at it and I’m definitely interested in the ideas (I’m working through a textbook in my spare time), but I’ve never used it to prove anything more complicated than, say, linked list concatenation or reversal.

            And how do you generate the program from your code? Do you use the built-in extraction, or something else?

          4. 2

            I often find production bugs in dynamically typed systems I work on that are due to things that would be caught by a type checker

            I can offer equally useless anecdotal evidence from my own practice, where bugs that would be caught by a type checker happen at a rate of about 1/50 compared to those caused by mis-shapen data, misunderstanding of domain complexity and plain poor testing, and when they do, they’re usually trivial to detect and fix. The only thing that tells me is that software development is complex and we are far from making sweeping statements that start with “of course”.

            Very large companies are migrating their JavaScript code bases to TypeScript exactly for that reason.

            Sorry, the “thousand lemmings” defense won’t work here. Our whole industry has been investing countless engineer-years in OO abstractions, but then people started doing things without it, and it turned out OO wasn’t the requirement for building working systems. Software development is prone to fads and over-estimations.

            Types, on the other hand, actually can show the absence of bugs

            That’s just plain wrong. Unless you mean some very specific field of software where you can write a formal specification for a program, but to this day it’s just not practical for anything that’s useful.

            1. 5

              It’s clear that many people find value in static types even if you don’t. Maybe you make fewer mistakes than the rest of us, or maybe you’re working in a domain where types don’t add as much value compared to others. But you shouldn’t try to invalidate other people’s experiences of benefitting from static types.

              they’re usually trivial to detect and fix

              I prefer to eliminate entire categories of errors without having to detect and fix them down the line when they’ve already impacted a user.

              That’s just plain wrong.

              Maybe you haven’t used formal verification before, but that doesn’t mean it isn’t used in the real world. There’s a great book series on this topic if you are interested in having a more open mind. I’ve used these kind of techniques to implement parts of a compiler that are guaranteed correct. Amazon also uses deductive techniques in multiple AWS teams (example), and there’s a decent chance you have indirectly benefitted from some of those efforts. So, my claim is not “just plain wrong”. As you alluded to, it usually doesn’t make sense to invest that much in those kinds of formal guarantees, but it’s nice that types can do that for you when you need them to.

              At this point, it seems like you aren’t interested in having a good faith discussion, with your abrasive comments like “I don’t care why you think it should be the case”, “equally useless anecdotal evidence”, and dismissing a demonstrable claim as “just plain wrong”. I think you have some good points (e.g., I completely agree about your position on OO) and could be more effective at delivering them if you didn’t seem so invested in discounting other people’s experiences.

              I respect your opinion that I should not have stated that Rust and Go are “of course” safer than dynamically typed languages. In particular, Go’s type system is so underpowered that I can see a reasonable argument that the ceremony of appeasing it without reaping the guarantees that a better type system would give makes it more difficult to build robust software than not having types at all. I certainly wouldn’t say the same for Rust, though. Rust often forces me to handle error cases that I didn’t even know were possible and would never think to test.

              1. 1

                But you shouldn’t try to invalidate other people’s experiences of benefitting from static types.

                Go’s type system is so underpowered that I can see a reasonable argument that the ceremony of appeasing it without reaping the guarantees that a better type system would give makes it more difficult to build robust software than not having types at all.

                Do you not think this is invalidating the experience of Go users who benefit from usage of the language?

                1. 1

                  I said I could see it as a reasonable argument, not that I personally agree with it. I’m trying to give some validity to what isagalaev is saying and potentially meet them in the middle by acknowledging that not all type systems provide a clear benefit over dynamic typing. But I already stated my stance in my original top-level comment: that Go’s type system is still better than no type system when it comes to safety (not memory safety, but a more broad notion of safety).

                  It is true, though, that Go’s type system is quite weak compared to other type systems. That’s not the same as saying that people don’t benefit from it. On the contrary, I’ve claimed the opposite—which is what started this whole discussion in the first place.

              2. 1

                abrasive comments

                Apologies on that, fwiw.

            2. 2

              […] caused by mis-shapen data, […]

              Statements like these always make me suspect the author doesn’t appreciate just how much can in fact be captured by even a relatively simple type system. “Make illegal states unrepresentable” has become something of a mantra in the Elm community in particular, and I can’t remember the last time I saw a bug in a typed FP language that was due to “mis-shapen data” get past the type checker.

              I think there’s a tendency to try to compare languages by lifting code wholesale from one language into the other, assuming it would be written more or less the same way, which is often not the case. So, if you see something that throws TypeError, of course you assume that would get caught by a static type system. Folks who have only worked with type systems like those in Java/Go/C generally look at null/nil bugs and assume that those wouldn’t get caught, even though they’re impossible in other systems. It’s easy to point out “null isn’t a thing in this language,” but what’s a bit harder to capture is that a lot of things that aren’t obviously type errors when they crop up at runtime in a dynamically typed language would likely be captured by types in a program written with the benefit of a modern type system. Obviously it won’t if you just write a stringly-typed program, but… don’t do that, use your tools.

              1. 1

                “Make illegal states unrepresentable” has become something of a mantra in the Elm community in particular, and I can’t remember the last time I saw a bug in a typed FP language that was due to “mis-shapen data” get past the type checker.

                It’s not about illegal states. I mean code expecting a response from an HTTP call in a particular schema and getting a different one. I don’t see how this problem can be prevented at compile time. Or more subtly, getting data in a correct shape (say, an ISO-formatted time string), successfully parsing it (into an internal datetime type), but then producing the result that the user doesn’t expect because it assumes the time to be in a different time zone.

                (Also, I don’t see how making illegal states unrepresentable is in any way endemic to type-checked languages. It’s just a good architectural pattern valid everywhere.)

                1. 1

                  I mean code expecting a response from an HTTP call in a particular schema and getting a different one.

                  Ah, you are talking about something somewhat different then. Yes, obviously types can’t statically prove that inputs that come in at runtime will be well-formed. However, they can force you to deal with the case of a parse failure – which many languages make it really easy to forget. “Forgetting a case” is another one of those things that I think people often incorrectly assume aren’t (or can’t easily be made) type errors. It’s hard to say what you should do in that case without more context, but it makes it hard to introduce a bug by omission.

                  If the bug is just that the programmer was mistaken about what the endpoint’s schema was (or the server operator changed it inappropriately), I’ll agree that just having static types in your client program does not really help that much, though it might be a bit easier to track down since the error will occur right at the point of parsing, rather than some time later when somebody tries to use the value.

                  That said, I’ll point to stuff like protobuf, capnproto, and even swagger as things that are trying to bridge this gap to some degree – there is still an opportunity to just assign the entirely wrong schema to an endpoint, but they narrow the space over which that can happen substantially; once you’ve got the right schema rigged up the programmer is unlikely to get the shape of the data wrong, as that’s just defined in the schema.

                  Or more subtly, getting data in a correct shape (say, an ISO-formatted time string), successfully parsing it (into an internal datetime type), but then producing the result that the user doesn’t expect because it assumes the time to be in a different time zone.

                  Dealing with fiddly distinctions like this is something types are really great at. I have some good points of comparison with date/time stuff, as I’ve worked on projects where this stuff is core to the business logic in both Python and Haskell. Having a type system for it in Haskell has been a godsend, and I wish I’d had one available when doing the prior project.

                  Somewhat simplified for presentation (but with the essentials intact), the Haskell codebase has some types:

                  • A date & time with an attached (possibly arbitrary) time zone, “ZonedDateTime”
                  • A date & time in “local time,” where the timezone is implied by context somehow. “LocalDateTime”

                  As an example of where this is useful: the code that renders the user’s feed of events in order expects a list of LocalDateTime values, so if you try to pass it the datetime with some arbitrary timezone, you’ll get a type error. Instead, there’s a function timeInZone which takes a ZonedDateTime and a TimeZone, and translates it to a LocalDateTime with the provided timezone implied. So in order to get an event into a users feed, you need to run it through this conversion function, or it won’t type check.
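
                  The same trick works in any language with nominal types; a rough Go rendering of the idea (names mirror the Haskell ones, details invented for illustration):

                  package main

                  import "time"

                  // ZonedDateTime carries an explicit, possibly arbitrary zone.
                  type ZonedDateTime struct {
                      T  time.Time
                      TZ *time.Location
                  }

                  // LocalDateTime's field is unexported, so code outside this
                  // package can only obtain one via TimeInZone.
                  type LocalDateTime struct {
                      t time.Time
                  }

                  // TimeInZone is the mandatory conversion step.
                  func TimeInZone(z ZonedDateTime, tz *time.Location) LocalDateTime {
                      return LocalDateTime{t: z.T.In(tz)}
                  }

                  // renderFeed accepts only LocalDateTime values; passing a
                  // ZonedDateTime is a compile-time error.
                  func renderFeed(events []LocalDateTime) { /* ... */ }

                  func main() {
                      ny, _ := time.LoadLocation("America/New_York")
                      z := ZonedDateTime{T: time.Now(), TZ: ny}
                      renderFeed([]LocalDateTime{TimeInZone(z, time.UTC)})
                  }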

                  (Also, I don’t see how making illegal states unrepresentable is in any way endemic to type-checked languages. It’s just a good architectural pattern valid everywhere.)

                  It’s a lot easier to do it when you can actually dictate what the possible values of something are; if in the event of a bug a variable could have any arbitrary value, then your options for enforcing invariants on the data are much more limited. You can put asserts everywhere, but having static types is much, much nicer.

        2. 6

          Also, people talk about “dynamic languages” like they’re all the same, while they are often as different as C and Rust, if not more.

          Writing safe JavaScript is an impossible nightmare.

          Writing safe Python is easy and fun. I do think it would be safer with a good type system, but at the same time, shorter and more idiomatic code (where you don’t have to fight the compiler) brings its own sort of safety and comfort.

          1. 4

            Writing safe Python is easy and fun.

            Python has its own share of footguns, such as passing strings where bytes are expected, or new, unhandled exceptions being added to libraries you’re calling.

            Mypy doesn’t completely protect you. IME, many errors occur at the seams of typed/untyped contexts. Which is not surprising, but it is a downside of an after-the-fact, optional type checker.

            1. 1

              Yeah, and mypy has far less coverage for third-party packages than TypeScript. IME when I use TS I’m surprised if I don’t find a type package, whereas with mypy I’m surprised if I do.

              1. 1

                About one in twenty times I do see a types package in DefinitelyTyped which is very incorrect. Almost always only with libraries with small user bases.

                1. 1

                  Incorrect as in “this parameter is a number but is typed as a string”, or too loose/too restrictive?

                  1. 1

                    Varies. I’ve seen symbols typed as strings, core functionality missing, functions that couldn’t be called without “as any”.

            2. 1

              Python isn’t perfect, but unhandled exceptions can happen in C++ too.

              It doesn’t have to be after the fact. You can check types at run-time, and there are libraries that will help you with that. It comes at a slight performance cost, of course (but if that matters, why are you using Python?), but then you gain the ability to implement much more sophisticated checks, like contracts, or dependent types.

              Anyway, at least personally, type errors are rarely the real challenge when writing software.

        3. 3

          there is no data supporting the idea that dynamically typed languages are inherently less safe

          If we use the Gary’s Types document for the definition of a dynamically typed language, I am able to find some research:

          I do agree that this amount of research is far away from us being able to say “of course”.

          1. 3

            Not gonna take a position here on the actual static vs dynamic debate, but the second paper you linked is deeply flawed. I wrote a bit about the paper, and the drama around it, here: https://www.hillelwayne.com/post/this-is-how-science-happens/

            1. 1

              Awesome! Thank you for sharing.

        4. 2

          I think it stands to reason that statically typed languages are safer than dynamically typed languages. I’m vaguely aware of some studies from <2005 that compared C++ to Python or some such and found no significant difference in bugs, but I can’t imagine those findings would hold for modern mainstream statically typed languages (perhaps not even C++>=11). Personally I have extensive experience with a wide array of languages and my experience suggests that static typing is unambiguously better than dynamic typing; I hear the same thing from so many other people and indeed even the Python maintainers–including GVR himself–are compelled. Experience aside, it also stands to reason that languages in which entire classes of errors are impossible would have fewer errors in total.

          So while perhaps there isn’t conclusive data one way or the other, experience and reason seem to suggest that one is better than the other.

          1. 1

            Experience aside, it also stands to reason that languages in which entire classes of errors are impossible would have fewer errors in total.

              What this (very common) argument is missing is that a) those errors are not as frequent as the words “entire classes of errors” may lead one to believe, b) testing covers many of those errors better (by testing for correct values you’re getting correct types for free), and c) instrumenting your code with types isn’t free: you get bloated and more rigid code, which may lead to more serious errors in modeling your problem domain.

            1. 4

              I’ve personally found that relying on a decent type system makes my code more flexible, as in easier to refactor, because I can rely on the type system to enforce that all changes are valid.

            2. 1

              I’ve developed and operated Python services in production for about a decade. Type errors were the most common kind of errors we would encounter by a wide margin. We were very diligent about writing tests, but inevitably we would miss cases. Some of these were “this function is literally just an entry point that unmarshals JSON and passes it into a library function… I don’t need a test” but they would forget to await the library function.

              Moreover, how do you reconcile “annotating your code takes too much time and makes your code too inflexible, but writing tests is good value for money”? Am I misunderstanding your argument? Note also that tests are useful for lots of other things, like documentation, optimizations, refactoring, IDE autocomplete, etc.

              1. 1

                Moreover, how do you reconcile “annotating your code takes too much time and makes your code too inflexible, but writing tests is good value for money”?

                Types are not a replacement for tests; you have to write tests anyway. And tests are good value for money, which is one of the few things we actually know about software development. So the proposition I’m making is that if you write good tests, they should cover everything a type checker would. Because, essentially, if you check that all your values are correct, then it necessarily implies that the types of those values are correct too (or at least, they work in the same way).

                Now, to your point about “but inevitably we would miss cases” — I’m very much aware of this problem. I blame the fact that people write tests in horribly complicated ways, get burnt out, never achieve full test coverage and then turn to type checking to have at least some basic guarantees for the code that’s not covered. I’m not happy with this, I wish people would write better tests.

Your example with a test for a trivial endpoint is very telling in this regard. If the endpoint is trivial, then writing a test for it should be trivial too, so why not?

                1. 1

                  I disagree. In my extensive experience with Python and Go (15 and 10 years, respectively), Go’s type system grants me a lot more confidence even with a fraction of the tests of a Python code base. In other words, a type system absolutely is a replacement for a whole lot of tests.

                  I specifically disagree that checking for a value guarantees that the logic is correct (type systems aren’t about making sure the type of the value is correct, but that the logic for getting the value is sound).

While 100% test coverage would make me pretty confident in a code base, why burn out your engineers in pursuit of it when a type system would reduce the testing load significantly?

                  With respect to “trivial code thus trivial tests”, I think this is untrue. Writing “await foo(bar, baz)” is trivial. Figuring out how to test that is still cognitively burdensome, and cognition aside it’s many times the boilerplate.

Lastly, people rarely discuss how static type systems make it harder to write certain kinds of bad code than dynamic type systems do. For example, a function whose return type varies depending on the value of some parameter. The typical response from dynamic typing enthusiasts is that this is just bad code and that bad Go code exists too, which is true, but these kinds of bad code basically can’t exist in idiomatic Go, while they are absolutely pedestrian in Python and JavaScript.
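For concreteness, a hypothetical sketch of that pattern (not taken from any real code base):

class User:
    def __init__(self, user_id: int) -> None:
        self.id = user_id

def get_user(user_id: int, as_dict: bool = False):
    # The *value* of as_dict determines the return type: sometimes a dict,
    # sometimes a User. Every caller must know which flag was passed to know
    # what it got back; typing this faithfully requires overloads or a union.
    user = User(user_id)
    if as_dict:
        return {"id": user.id}
    return user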

                  At a certain point, you just have to get a lot of experience working with both systems in order to realize that the difference is really quite stark (even just the amount of documentation you get out of the box from type annotations, and the assurance that that documentation is correct and up to date).

                  1. 1

                    I specifically disagree that checking for a value guarantees that the logic is correct

                    I didn’t say anything about the whole logic. I said if your values are correct then it necessarily implies your types are correct. Specifically, if you have a test that does:

                    config = parse_config(filename)
                    assert config['key'] == 'value'
                    

Then it means that parse_config got a correct value in filename that it could use to open and parse the config file. In which case it also means filename was of the correct type: a string, a Path, or whatever the language’s stdlib can use in open(). That’s it, nothing philosophical here.

                    While 100% test coverage would make me pretty confident in a code base, why burn out your engineers in pursuit of it

Let me reiterate: if achieving 100% test coverage feels like burnout, you’re doing it wrong. It’s actually not even hard, especially in a dynamic language where you don’t have to dependency-inject everything. I’m not just fantasizing here: that’s what I did in my last three or four code bases, whenever I was able to sell people on the idea. There’s this whole ethos of it being somehow impossibly hard, which in many applications just doesn’t hold up.
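To give one concrete example of why it’s cheaper in a dynamic language: pytest’s monkeypatch fixture can swap out a collaborator at test time without any dependency injection in the production code. (A sketch; myapp.weather and its functions are hypothetical.)

import myapp.weather  # hypothetical module under test

def test_report_uses_current_temperature(monkeypatch):
    # Replace the network call with a stub; no constructor injection required.
    monkeypatch.setattr(myapp.weather, "fetch_temperature", lambda city: 21.5)
    assert myapp.weather.report("Oslo") == "Oslo: 21.5°C"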

                  2. 1

                    Some more :-)

                    Writing “await foo(bar, baz)” is trivial. Figuring out how to test that is still cognitively burdensome, and cognition aside it’s many times the boilerplate.

                    Huh? The tooling is out there, you don’t have to invent anything: https://pypi.org/project/pytest-asyncio/

                    @pytest.mark.asyncio
                    async def test_some_asyncio_code():
                        res = await library.do_something()
                        assert b'expected result' == res
                    

                    At a certain point, you just have to get a lot of experience working with both systems in order to realize that the difference is really quite stark

Just to clarify here, I mostly work in Python, but I also work extensively in JavaScript, TypeScript, Kotlin and Rust (much less in the latter than I would like). And my experience tells me that types are not the most significant feature that makes a language safe (for whatever value of “safe”). It is also subjective. I do absolutely trust that you find working in Go more comfortable, but it’s important to understand that the feeling doesn’t have to be universal. I would hate to have to program in Go, even though it’s a simple language.

        5. 1

Genuine question: how could it be shown? You would need at least two projects of similar scope, in similar areas, written by programmers of similar skill (which is hard to evaluate on its own) and with a similar level of understanding of the problem area (which means that rewrites of the same code base can’t count), differing only in their choice of static/dynamic typing. How could such research be possible?

More generally: is there any solid research about which languages are better? Is there any data showing that programs written in assembly language are more or less error-prone than those written in Python? This should be intuitively obvious, but is there data? I tried to find anything at all, but only discovered half-baked “studies” that don’t control for either programmer experience or the complexity of the problem area.

          My point is, how can we do better than a bunch of anecdotes here?

          1. 3

            Right, I think one can admit that, as a field, we’re not in amazing shape epistemologically, but we are still left having to actually make decisions in our day to day, so not having opinions isn’t really an option – we’re stuck going on experience and anecdote, but that’s not quite the same as having no information. It’d be nice to have conclusive studies. Unfortunately, all I’ve got is a career’s worth of anecdata.

            I have no idea how to study this kind of thing.

            1. 3

Yep, this was exactly my point: when someone asks for studies that conclusively show that X is better than Y in this context, I think that they are asking for too much, and people are justified in saying “of course” even in the absence of rock-solid evidence.

          2. 2

I would go so far as to argue that, often, when people insist on data for something like this, they are actually being disingenuous. If you insist on (quantitative) data when you know that none exists or is likely to exist in the future, then you are actually just saying that you want to unquestioningly maintain the status quo.

            1. 2

              I would go so far as to argue that, often, when people insist on data for something like this, they are actually being disingenuous.

… Why would it be disingenuous to ask for data? This just sounds absurd. Moreover, there really isn’t consensus around this topic. Take a look at Static v. dynamic languages literature review; this has been an ongoing topic of discussion and there still isn’t a conclusion either way. Regardless, this perspective frightens me. It sounds a lot like “I have an opinion and data is hard, so I’m going to call you disingenuous for disagreeing with me.” This isn’t the way to make good decisions or to tolerate nuanced opinions.

“The problem with any ideology is it gives the answer before you look at the evidence. So you have to mold the evidence to get the answer you’ve already decided you’ve got to have.” – Bill Clinton

              1. 2

                Why would it be disingenuous to ask for data?

                It isn’t, and I didn’t say that it was.

                I said it was potentially disingenuous to insist on (quantitative) data as a prerequisite for having a discussion. If good data don’t exist, then refusing to talk about something until good data do exist is just another way of defending whatever the status quo is. What it basically says is that regardless of how we made the current decision (presumably without data, since data don’t exist), the decision cannot be changed without data.

                I’m honestly not sure how you came up with that interpretation based on what I wrote. I didn’t even say which side of the issue I was on.

                Edit: you can also substitute “inconclusive data” for “no data”.

                1. 2

                  I’m honestly not sure how you came up with that interpretation based on what I wrote. I didn’t even say which side of the issue I was on.

I think this is a difference between our interpretations of “insist”. I tend to read “insist” as an earnest suggestion, not a hard prerequisite, so that’s where my disagreement came from. I didn’t mean to imply anything about which side you were on, since that’s immaterial to my point really. I agree that categorically refusing to discuss without sufficient data is a bit irresponsible, since in real life humans are often forced to make decisions without appropriate evidence.

                  If good data don’t exist, then refusing to talk about something until good data do exist is just another way of defending whatever the status quo is.

My thinking here is, at what point does this become a useless thought exercise? Static typing isn’t new and is gaining ground in several languages. There’s already a “programmer personality” identity based around static typing, and “healthy” Twitter communities of folks who bemoan static or dynamic languages. At some point, the programming community at large gains nothing by having more talking heads philosophizing about where and why they see bugs. You can take a cursory search on the internet and see folks advocating for pretty much any point on the spectrum of this debate. To me, this discussion (not this Lobsters thread, but the greater discussion as a whole) seems to have reached the point where it’s useless to proceed without data, because there’s no consensus around which point on the static/dynamic spectrum actually leads to fewer bugs (if any point does at all). And if more philosophizing doesn’t help us arrive at a conclusion, it really boils down to the same thing: your personal feelings and experiences, in which case the discussion is more a form of socializing than a form of actual discussion. In other words, without data, this discussion trends more toward bikeshedding than actually answering the question under discussion.

                  1. 2

                    In other words, without data, this discussion trends more toward bikeshedding than actually answering the question under discussion.

                    That’s fair. I agree that this particular topic has been pretty well discussed to death.

                  2. 1

there’s no consensus around which point on the static/dynamic spectrum actually leads to fewer bugs (if any point does at all)

                    I think consensus is emerging slowly—static languages seem to have grown more popular in the last decade to the extent that many JavaScript developers are converting to TypeScript, Python developers are embracing Mypy, most (all?) of the most popular new languages of the last 10-15 years have been statically typed (the Go community in particular seems to consist of a lot of former Python and Ruby devs), etc. On the other hand, I scarcely if ever hear about people switching to dynamically typed languages (once upon a time this was common, when the popular static offerings were C, C++, and Java). It’s possible that this emerging consensus is just a fad, but things do seem to be converging.

              2. 1

                I suspect the problem is that this question is enormously multivariate (skill of developers, development methodology, testing effort to find bugs, different language features, readability of the language, etc).

It’s entirely likely that we have been studying this for a long time and yet the variable space is so large that we’ve hardly scratched it at all. And then some people come along and interpret this lack of conclusive data as “well, static and dynamic must be roughly equal”, which seems strictly more perilous than forming one’s own opinion based on extensive experience with both type systems.

                I don’t think your Clinton quote applies because we’re talking about forming an opinion based on “a career’s worth of anecdata”, not letting an ideology influence one’s opinion. Everyone admits that anecdata is not as nice as conclusive empirical data, but we don’t have any of the latter and we have lots of the former and it seems to point in a single direction. In this case the ideological take would be someone who forms an opinion based on lack of evidence and lack of subjective experience with both systems.

          3. 1

Genuine question: how could it be shown? You would need at least two projects of similar scope, in similar areas, written by programmers of similar skill (which is hard to evaluate on its own) and with a similar level of understanding of the problem area (which means that rewrites of the same code base can’t count), differing only in their choice of static/dynamic typing. How could such research be possible?

Remember that “project scope”, “project area” (systems, web, database, etc.), and “license status” (GPLv3, proprietary, etc.) are all dimensions of a project that can be recorded and analyzed. There is a rich literature of statistical methods to compensate for certain dimensions being over- or underrepresented, and for compensating for incomplete data. If we can come up with a metric for being error-prone (which is difficult, so perhaps we need multiple metrics; code complexity metrics are a good place to look to see the challenges), and we can faithfully record the other dimensions of projects, we can try to rigorously answer this question. The big barriers here usually involve data siloing (proprietary projects rarely share relevant data about their projects) and manpower: how many developers, especially of open source projects, really have the time to also gather stats about their contributors and their bugs when they can barely hold the project itself together in their generous free time, or can get approval to do so in a proprietary project?

            That said there’s this stubborn “philosopher-programmer” culture in programming circles that doesn’t seem particularly interested in epistemological work which also often muddles the conversation, especially if the topic under discussion has a lot of strong opinions and zealotry involved.

          4. 1

The short answer is, I don’t know. I had some ideas though. Like an experiment where you solicit public participation from a bunch of people to write something from scratch that is not complicated, and yet not trivial. Like, I don’t know, a simple working chat client for an existing reference server implementation. You don’t impose any restrictions: people can work in any language, use libraries, etc. Time capped at, say, a couple of weeks. And then you independently verify the results and determine defects of any sort. And then you look at correlations: does the number of defects correlate with dynamic/static nature? Programmer’s experience? Programmer’s experience with a language? Amount of active working hours? Something else?

My hypothesis is that we can’t actually evaluate a language in a vacuum. Instead, language+programmer is actually the atomic unit of evaluation. Like in auto racing: you can’t say which driver is the best (although it’s people’s favorite pastime), you can only talk about a driver in a particular car driving on particular tracks.

            1. 2

              There is a 1994 paper called “Haskell vs. Ada vs. C++ an Experiment in Software Prototyping Productivity” which doesn’t touch on static vs dynamic typing, but, I think, follows about the same format as what you’re proposing, right? That paper is widely disliked due to its methodological problems. An example of its discussion: https://news.ycombinator.com/item?id=14267882

        6. 1

          We can believe all we want, but there is no data supporting the idea that dynamically typed languages are inherently less safe

          I think that some people tend to use the term “safe” to mean both “memory-safe” and “bug-resistant”, whereas others would use the term “safe” to refer to “memory-safe” only.

          I can quite believe that applications written in dynamically-typed languages might be vulnerable to a whole class of bugs that aren’t present in their statically-typed equivalents because, unless you’re very careful, type coercion can silently get you in ways you don’t expect. You could write an entire book on the many mysterious ways of the JavaScript type system, for example.

          That said, these aren’t really bugs that make the program unsafe if you’re talking about memory safety. It’s not terribly likely that these sorts of bugs are going to allow unsafe memory accesses or cause buffer overflows or anything of the sort.

          1. 2

            applications written in dynamically-typed languages might be vulnerable to a whole class of bugs [ … ] , type coercion can silently get you in ways you don’t expect.

Dynamic languages are not just JavaScript. For example, Python and Clojure don’t do type coercion, nor do they silently swallow access to non-existent names and attributes.
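For instance, Python fails loudly rather than coercing (a quick sketch):

try:
    1 + "2"  # no silent coercion to "12" as in JavaScript
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'int' and 'str'

class Config:
    pass

try:
    Config().missing_attr  # no silent `undefined`; the lookup fails loudly
except AttributeError as e:
    print(e)  # 'Config' object has no attribute 'missing_attr'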

      3. 4

        Of course, both languages are safer than using a dynamically typed language. So from the perspective of a Python programmer, it makes sense to think of Go as a safer alternative.

Returns also diminish: using Go rather than Python will probably reduce type errors by 95%, while Rust would reduce them by 99%. And that additional 4% of type errors may not be worth the hit to productivity (yes, I assert that Go is quite a lot more productive than Rust for most applications, even though I am a fan of both languages). Note also that these are just type errors; there are lots of errors which neither Rust nor Go can protect against.

        1. 5

          there are lots of errors which neither Rust nor Go can protect against.

          “How does your language handle null references?”

          Prohibited by the type system at compile time.

          “Nice, nice. What about out-of-bounds array accesses?”

          Sometimes detectable even at compile time and at any rate detected and safely handled at runtime.

          “Wonderful. So obviously then if your allocator reports that you’ve run out of memory…”

          Instant, unrecoverable crash, yes.

          1. 2

I’m not sure how that relates to my “there are lots of errors which neither Rust nor Go can protect against” statement that you’re quoting. Yes, those are categories of errors that Rust protects against and Go does not, but there are still lots of other errors that neither language protects against.

            1. 1

My point is that one of the errors that neither protects you against is out-of-memory, which has always baffled me. Rust doesn’t even panic (which could be recovered from), but aborts.

              OOM is much more often treated as a situation where it’s seemingly okay to absolutely crash compared to other resource-exhaustion situations (nobody would be like “oh the disk is full, let’s just crash and not even attempt to let the programmer deal with it”).

              1. 2

                I don’t know the rationale for this in Rust, but I’m aware that there’s been some discussion of this in the C++ standards committee. Gracefully handling out-of-memory conditions sounds really useful, but there are two problems:

• In the several decades of the C++ specification defining exception behaviour for operator new exhausting memory, and longer for C defining malloc as returning NULL when allocation fails, there are no examples of large-scale systems outside of the embedded/kernel space that gracefully handle memory exhaustion in all places where it can occur. Kernels generally don’t use the standard may-fail APIs and instead use two kinds of allocations, those that may block and those that may fail, with the vast majority of uses being the ones that can block.
                • Most *NIX systems deterministically report errors if you exhaust your address space (which is not easy on a 64-bit system) but don’t fail on out-of-memory conditions at the allocation point. They will happily report that they’ve allocated memory but then fault when you try to write to it.

                If you do get an out-of-memory condition, what do you do? If you’re disciplined and writing a very low-level system, then you do all of your allocation up-front and report failure before you’ve tried any processing. For anything in userspace, you typically need to do a load of cleanup, which may itself trigger allocation.

In general, the set of things for which it is possible to gracefully handle allocation failure is so distinct from everyday programming that it’s difficult to provide a generic mechanism that works for both. This is why malloc(3) and malloc(9) are such different APIs.

              2. 2

                Part of the trouble is that in most languages, it’s really hard to actually do much of anything without further allocation. Especially in languages where allocation can happen implicitly, this really does seem like an “above the program’s pay grade” kind of thing.

                That said, Rust is decidedly not one of those languages; this is an API design choice, and it has indeed often felt like an odd one to me.

    14. 11

      I like Apple hardware a lot, and I know all of the standard this-is-why-it-is-that-way reasoning. But it’s wild that the new MacBook Pros only have two USB-C ports and can’t be upgraded past 16GB of RAM.

      1. 18

        Worse yet, they have “secure boot”, where secure means they’ll only boot an OS signed by Apple.

        These aren’t computers. They are Appleances.

Prepare for DRM-enforced planned obsolescence.

        1. 9

          I would be very surprised if that turned out to be the case. In recent years Apple has been advertising the MacBook Pro to developers, and I find it unlikely they would choose not to support things like Boot Camp or running Linux based OSs. Like most security features, secure boot is likely to annoy a small segment of users who could probably just disable it. A relevant precedent is the addition of System Integrity Protection, which can be disabled with minor difficulty. Most UEFI PCs (to my knowledge) have secure boot enabled by default already.

          Personally, I’ve needed to disable SIP once or twice but I can never bring myself to leave it disabled, even though I lived without it for years. I hope my experience with Secure Boot will be similar if I ever get one of these new computers.

          1. 12

            Boot Camp

            Probably a tangent, but I’m not sure how Boot Camp would fit into the picture here. ARM-based Windows is not freely available to buy, to my knowledge.

            1. 7

              Disclaimer: I work for Microsoft, but this is not based on any insider knowledge and is entirely speculation on my part.

              Back in the distant past, before Microsoft bought Connectix, there was a product called VirtualPC for Mac, an x86 emulator for PowerPC Macs (some of the code for this ended up in the x86 on Arm emulator on Windows and, I believe, on the Xbox 360 compatibility mode for Xbox One). Connectix bought OEM versions of Windows and sold a bundle of VirtualPC and a Windows version. I can see a few possible paths to something similar:

              • Apple releases a Boot Camp thing that can load *NIX, Microsoft releases a Windows for Macs version that is supported only on specific Boot Camp platforms. This seems fairly plausible if the number of Windows installs on Macs is high enough to justify the investment.
              • Apple becomes a Windows OEM and ships a Boot Camp + Windows bundle that is officially supported. I think Apple did this with the original Boot Camp because it was a way of de-risking Mac purchases for people: if they didn’t like OS X, they had a clean migration path away. This seems much less likely now.
              • Apple’s new Macs conform to one of the new Arm platform specifications that, like PREP and CHRP for PowerPC, standardise enough of the base platform that it’s possible to release a single OS image that can run on any machine. Microsoft could then release a version of Windows that runs on any such Arm machine.

              The likelihood of any of these depends a bit on the economics. In the past, Apple has made a lot of money on Macs and doesn’t actually care if you run *NIX or Windows on them because anyone running Windows on a Mac is still a large profit-making sale. This is far less true with iOS devices, where a big chunk of their revenue comes from other services (And their 30% cut on all App Store sales). If the new Macs are tied more closely to other Apple services, they may wish to discourage people from running another OS. Supporting other operating systems is not free: it increases their testing burden and means that they’ll have to handle support calls from people who managed to screw up their system with some other OS.

              1. 2

                Apple’s new Macs conform to one of the new Arm platform specifications

                We already definitely know they use their own device trees, no ACPI sadly.

                Supporting other operating systems is not free

                Yeah, this is why they really won’t help with running other OS on bare metal, their answer to “I want other OS” is virtualization.

                They showed a demo (on the previous presentation) of virtualizing amd64 Windows. I suppose a native aarch64 Windows VM would run too.

            2. 2

              ARM-based Windows is available for free as .vhdx VM images if you sign up for the Windows Insider Program, at least

          2. 9

            In the previous Apple Silicon presentation, they showed virtualization (with of-course-not-native Windows and who-knows-what-arch Debian, but I suspect both native aarch64 and emulated amd64 VMs would be available). That is their offer to developers. Of course nothing about running alternative OS on bare metal was shown.

            Even if secure boot can be disabled (likely – “reduced security” mode is already mentioned in the docs), the support in Linux would require lots of effort. Seems like the iPhone 7 port actually managed to get storage, display, touch, Wi-Fi and Bluetooth working. But of course no GPU because there’s still no open PowerVR driver. And there’s not going to be an Apple GPU driver for a loooong time for sure.

          3. 2

I think dual-booting has always been a less-than-desirable “misfeature” from Apple’s POV. Their whole raison d’être is to offer an integrated experience where the OS, hardware, and (locked-down) app ecosystem all work together closely. Rip out any one of those and the whole edifice starts to tumble.

            So now they have a brand-new hardware platform with an expanded trusted base, so why not use it to protect their customers from “bad ideas” like disabling secure boot or side-loading apps? Again, from their perspective they’re not doing anything wrong, or hostile to users; they’re just deciding what is and isn’t a “safe” use of the product.

            I for one would be completely unsurprised to discover that the new Apple Silicon boxes were effectively just as locked down as their iOS cousins. You know, for safety.

            1. 3

              They’re definitely not blocking downloading apps. Federighi even mentioned universal binaries “downloaded from the web”. Of course you can compile and run any programs. In fact we know you can load unsigned kexts.

              Reboot your Mac with Apple silicon into Recovery mode. Set the security level to Reduced security.

              Remains to be seen whether that setting allows it to boot any unsigned kernel, but I wouldn’t just assume it doesn’t.

              1. 4

                They also went into some detail at WWDC about this, saying that the new Macs will be able to run code in the same contexts existing ones can. The message they want to give is “don’t be afraid of your existing workflow breaking when we change CPU”, so tightening the gatekeeper screws alongside the architecture shift is off the cards.

            2. 2

I think dual-booting has always been a less-than-desirable “misfeature” from Apple’s POV. Their whole raison d’être is to offer an integrated experience where the OS, hardware, and (locked-down) app ecosystem all work together closely. Rip out any one of those and the whole edifice starts to tumble.

              For most consumers, buying their first Mac is a high-risk endeavour. It’s a very expensive machine and it doesn’t run any of their existing binaries (especially since they broke Wine with Catalina). Supporting dual boot is Apple’s way of reducing that risk. If you aren’t 100% sure that you’ll like macOS, there’s a migration path away from it that doesn’t involve throwing away the machine: just install Windows and use it like your old machine. Apple doesn’t want you to do that, but by giving you the option of doing it they overcome some of the initial resistance of people switching.

              1. 7

                The context has switched, though.

                Before, many prospective buyers of Macs used Windows, or needed Windows apps for their jobs.

                Now, many more prospective buyers of Macs use iPhones and other iOS devices.

                The value proposition of “this Mac runs iOS apps” is now much larger than the value proposition of “you can run Windows on this Mac”.

                1. 2

                  There’s certainly some truth to that but I would imagine that most iOS users who buy Macs are doing so because iOS doesn’t do everything that they need. For example, the iPad version of PowerPoint is fine for presenting slides but is pretty useless for serious editing. There are probably a lot of other apps where the iOS version is quite cut down and is fine for a small device but is not sufficient for all purposes.

In terms of functionality, there isn’t much difference between macOS and Windows these days, but the UIs are pretty different and both are very different from iOS. There’s still some risk for someone who is happy with iOS on the phone and Windows on the laptop buying a Mac, even if it can run all of their iOS apps. There’s a much bigger psychological barrier for someone who is not particularly computer literate moving to something new, even if it’s quite similar to something they’re more-or-less used to. There are still vastly more Windows users than iOS users, though it’s not clear how many of those are thinking about buying Macs.

                  1. 2

                    There are still vastly more Windows users than iOS users, though it’s not clear how many of those are thinking about buying Macs.

                    Not really arguing here, I’m sure you’re right, but how many of those Windows users choose to use Windows, as opposed to having to use it for work?

                    1. 1

I don’t think it matters very much. I remember trying to convince people to switch from MS Office ’97 to OpenOffice around 2002, and the two were incredibly similar back then, but people were very nervous about the switch. Novell did some experiments just replacing the Office shortcuts with OpenOffice and found most people didn’t notice at all, but the same people were very resistant to switching if you offered them the choice.

          4. 1

That “developer” might mean Apple developers.

        2. 3

          Here is the source of truth from WWDC 2020 about the new boot architecture.

        3. 2

People claimed the same thing about T2-equipped Intel Macs.

On the T2 Intels at least, the OS verification can be disabled. The main reason you can’t just install e.g. Linux on a T2 Mac is the lack of support for the SSD (which is managed by the T2 itself). Even stuff like ESXi can be used on T2 Macs – you just can’t use the built-in SSD.

That’s not to say that it’s impossible they’ve added stricter boot requirements, but I’d wager that, like the other security enhancements in Macs which cause some to clutch their pearls, this too can probably be disabled.

      2. 10

        … This is the Intel model it replaces: https://support.apple.com/kb/SP818?viewlocale=en_US&locale=en_US

Two TB3/USB-C ports; max 16GB RAM.

        It’s essentially the same laptop, but with a non-intel CPU/iGPU, and with USB4 as a bonus.

        1. 1

          Fair point! Toggling between “M1” and “Intel” on the product page flips between 2 ports/4 ports and 16GB RAM/max 32GB RAM, and it’s not clear this is a base model/higher tier toggle. I still think this is pretty stingy, but you’re right – it’s not a new change.

      3. 5

        These seem like replacements for the base model 13” MBP, which had similar limitations. Of course, it becomes awkward that the base model now has a much, much better CPU/IGP than the higher-end models.

        1. 2

I assume this is just a “phase 1” type thing. They will probably roll out additional options when their A15 (or whatever their next CPU model is named) ships down the road. Apple has a tendency to be a bit miserly (or conservative, depending on your take) at first, and then the next version looks that much better when it rolls around.

          1. 2

            Yeah, they said the transition would take ~2 years, so I assume they’ll slowly go up the stack. I expect the iMacs and 13-16” MacBook Pros to be refreshed next.

            1. 3

Indeed. Could be they wanted to make the new models a bit “developer puny” to keep from cannibalizing the more expensive units (higher-end Mac Pros, iMacs) until they have the next rev of CPU ready or something. Who knows the amount of marketing/portfolio wrangling that goes on behind the scenes to suss out timings for stuff like this (billion-dollar industries), in order to hit projected quarterly earnings a few quarters down the road.

              1. 5

I think this is exactly right. Developers have never been a core demographic for Apple to sell to – it’s almost accidental that OS X being a great Unix desktop, coupled with software developers’ higher incomes, made Macs so popular with developers (iOS being an income gold mine helped too, of course).

But if you’re launching a new product, you look at what you’re selling best (iPads and MacBook Airs) and you iterate on that.

                Plus, what developer in their right mind would trust their livelihood to a 1.0 release?!

                1. 9

                  I think part of the strategy is that they’d rather launch a series of increasingly powerful chips, instead of starting with the most powerful and working their way down - makes for far better presentations. “50% faster!” looks better than “$100 cheaper! (oh, and 30% slower)”.

                  1. 2

                    It also means that they can buy more time for some sort of form-factor update while having competent, if not ideal, machines for developers in-market. I was somewhat surprised at the immediate availability given that these are transition machines. This is likely due to the huge opportunity for lower-priced machines during the pandemic. It is prudent for Apple to get something out for this market right now since an end might be on the horizon.

I’ve seen comments about the Mini being released for this reason, but it’s much more likely that the Air is the product that this demographic will adopt. Desktop computers, even if we are more confined to our homes, have many downsides. Geeks are not always able to understand these, but they drive the online conversations. Fans in the Mini and MBP increase the thermal envelope, so they’ll likely be somewhat more favourable for devs and enthusiasts. It’s going to be really interesting to see what exists a year from now. It will be disappointing if at least some broader changes to the form factor and design aren’t introduced.

                2. 1

                  Developers have never been a core demographic for Apple to sell to

                  While this may have been true once, it certainly isn’t anymore. The entire iPhone and iPad ecosystem is underpinned by developers who pretty much need a Mac and Xcode to get anything done. Apple knows that.

                  1. 2

                    Not only that, developers were key to switching throughout the 00s. That Unix shell convinced a lot of us, and we convinced a lot of friends.

                    1. 1

                      In the 00s, Apple was still an underdog. Now they rule the mobile space, their laptops are probably the only ones that make any money in the market, and “Wintel” is basically toast. Apple can afford to piss off most developers (the ones who like the Mac because it’s a nice Unix machine) if it believes doing so will make a better consumer product.

                      1. 2

I’ll give you this: developers are not the top priority for them. Casual users are still number one by a large margin.

                  2. 1

                    Some points

                    • Developers for iOS need Apple way more than Apple needs them
                    • You don’t need an ARM Mac to develop for ARM i-Devices
                    • For that tiny minority of developers who develop native macOS apps, Apple provided a transition hardware platform - not free, by the way.

                    As seen by this submission, Apple does the bare minimum to accommodate developers. They are certainly not prioritized.

                    1. 1

                      I don’t really think it’s so one-sided towards developers - sure, developers do need to cater for iOS if they want good product outreach, but remember that Apple are also taking a 30% cut on everything in the iOS ecosystem and the margins on their cut will be excellent.

              2. 2

higher-end Mac Pros

                Honestly trepidatiously excited to see what kind of replacement apple silicon has for the 28 core xeon mac pro. It will either be a horrific nerfing or an incredible boon for high performance computing.

      4. 4

        and can’t be upgraded past 16GB of RAM.

        Note that RAM is part of the SoC. You can’t upgrade this afterwards. You must choose the correct amount at checkout.

        1. 2

          This is not new to the ARM models. Memory in Mac laptops, and often desktops, has not been expandable for some time.

      5. 2

I really believe that most people (including me) don’t need more than two Thunderbolt 3 ports nowadays. You can get a WiFi or Bluetooth version of pretty much anything, and USB hubs solve the issue when you are at home with many peripherals.

        Also, some Thunderbolt 3 displays can charge your laptop and act like a USB hub. They are usually quite expensive but really convenient (that’s what I used at work before COVID-19).

        1. 4

it’s still pretty convenient to have the option of plugging in on the left or right based on where you are sitting, so it’s disappointing for that reason

        2. 4

          I’m not convinced. A power adapter and a monitor will use up both ports, and AFAIK monitors that will also charge the device over Thunderbolt are pretty uncommon. Add an external hard drive for Time Machine backups, and now you’re juggling connections regularly rather than just leaving everything plugged in.

          On my 4-port MacBook Pro, the power adapter, monitor, and hard drive account for 3 ports. My 4th is taken up with a wireless dongle for my keyboard. Whenever I want to connect my microphone for audio calls or a card reader for photos I have to disconnect something, and my experiences with USB-C hubs have shown them to be unreliable. I’m sure I could spend a hundred dollars and get a better hub – but if I’m spending $1500 on a laptop, I don’t think I should need to.

          1. 2

            and AFAIK monitors that will also charge the device over Thunderbolt are pretty uncommon

            Also, many adapters that pass through power and have USB + a video connector of some sort only allow 4k@30Hz (such as Apple’s own USB-C adapters). Often the only way to get 4k@60Hz with a non-Thunderbolt screen is by using a dedicated USB-C DisplayPort Alt Mode adapter, which leaves only one USB-C port for everything else (power, any extra USB devices).

      6. 1

        I’ve been trying to get a Mac laptop with 32GB for years. It still doesn’t exist. But that’s not an ARM problem.

        Update: Correction, 32GB is supported in Intel MBPs as of this past May. Another update: see the reply! I must have been ignoring the larger sizes.

        1. 3

          I think that link says that’s the first 13 inch MacBook Pro with 32GB RAM. I have a 15 inch MBP from mid-2018 with 32GB, so they’ve been around for a couple of years at least.

        2. 1

          You can get 64GB on the 2020 MBP 16” and I think on the 2019, too.

    15. 18

      Correct me if I’m wrong, but isn’t the notice-and-takedown provision of the DMCA exclusively relevant to copyrighted material per se, and not applicable to the anticircumvention provision? And it even seems a bit of a stretch to claim this is even a circumvention; youtube-dl merely requests the same data as a browser, I don’t think it has any functionality related to DRM.

      1. 3

As I understand it, the root issue was that the source code had test cases that specifically linked to copyrighted material. I suspect it would otherwise have been ignored.

      2. 1

At least under German law, the distribution of copyrighted software and of software to circumvent copyright is illegal.

I’m sure other countries have similar laws.

        1. 28

But youtube-dl is not a tool built to circumvent copyright. Notably, YouTube hosts CC-licensed material, and tons of its material is owned by the creators themselves, who can grant you a license to download and use it at any time. The DMCA notice makes a very careful point of referring only to the YouTube standard license. The problem there is that YouTube does not provide any other method to exercise your right to copy.

Also, while the court in Hamburg is known for its… creativity and industry-friendliness, it is also not unusual for its decisions not to survive appeal.

          1. 3

Good point! Then the software would only go against the end-user license agreement of YouTube? But I guess that can’t be enforced with a DMCA notice. I hope the court takes this into account!

            1. 4

              Especially as YouTube would have to be the party to go to court over this. Also, the Terms of Service are aimed at the user of youtube-dl.

              1. 1

                What’s next? “You use the computer, therefore you’re stealing our content? Even though it’s not our content at all we want you to stop and shut down your computer immediately”.

                1. 1

                  I don’t understand your reply. The only thing I noted is that in the scenario the OP arrived at (ToS enforcement), the legal parties would be different and the RIAA cannot be in the picture. And YouTube has no interest in suing its users.

                  Nothing is next.

                  1. 1

Oh, no, I meant to reply to your previous comment, and in agreement with it. Basically, the RIAA is trying to take down youtube-dl because it was used to download copyrighted content. But so was the entire computer. That’s what I meant when I asked “what’s next”.

                    I don’t know why I replied to this comment, not your previous one though.

                    1. 3

Ah, yes! Thanks for clarifying. Yep, the problem is that we need fundamental reform, not escapism. IMHO, centralisation vs. decentralisation is a red herring. It will just lead to the situation we had years ago: going after the nodes and the creators, with a less clear battlefield.

                      IMHO, this situation is less bad. It’s visible and I’m sure we’ll read about some lawyer filing a counter-claim next week or so.

            2. 3

Also, youtube-dl does not distribute the content. There are laws and enough court decisions in Germany establishing that private copies and the tools to make them are allowed. youtube-dl would probably be seen as this kind of tool.

          2. 1

            But youtube-dl is not a tool built to circumvent copyright.

That may not matter. It is a tool built to circumvent the inconvenience of not having an easy way to download videos from the website. That inconvenience almost certainly counts as a “technical measure” (there’s some obfuscation going on). It doesn’t matter whether the work behind it is protected; circumventing the technical measure does.

Now, while the technical measure does have to restrict access to protected work, that may not have to be its primary purpose. If YouTube obfuscated download capability primarily to get users to come back and see ads, the fact remains that it restricts access to protected works, and that may be enough.

            1. 3

The “may” is very load-bearing here, though. The problem is that every right and rule is subject to weighing in court. And, for example, Germany has the right to create private copies for personal use/archival. Circumvention of copy protection is illegal, but copy protection is not only a technical measure; it also needs a clear marker on the source material. So, in Germany, both must hold: the algorithm must exist and the source must be marked as copyrighted and protected.

              This is actually a reasonable rule: it avoids the situation where “open” things are wrapped to be closed.

If you look closely at what the RIAA quotes (I’m still trying to find the decision they quote): they talk about a “service”, so probably about an intermediary helping users. I have not yet found which decision they actually refer to; someone on Twitter assumed this one: http://www.rechtsprechung-hamburg.de/jportal/portal/page/bsharprod.psml?doc.id=JURE180006255&st=ent&doctyp=juris-r&showdoccase=1&paramfromHL=true#focuspoint

A very tl;dr: this describes a case of a server-based service which allows you to grab the audio track of a YT video (I assume to download albums from YT). This is commercial circumvention of copy protection. The case document even goes into a lot of detail to express how the service is not just a proxy for the individual user in all cases: because it is an ad-based service, the defendant was not able to claim to merely enable easy private copies; it indeed monetises each copy. The question of whether the user was allowed to take this copy was expressly ruled out; the sticking point was that illegal copies are monetised. Sounds like a classic stream-ripping service to me, which are indeed very damaging to video platforms (the classic move was to rip a stream from a player and put it in your own, with your own ads: the platform pays the streaming cost, you get the ad value).

What the RIAA seems to rely on is that this case does mention that it assumes the protective measure is effective (interestingly, by describing how it is not usable through non-developer functions in Mozilla Firefox; maybe that’s a good feature suggestion?), but that may still lose out when weighed against the interest of the user in getting their own copy. But whether they are right does not matter here at this moment. I would not even assume that the RIAA has checked that this case fully applies to their claim: they don’t need to, they just have to present a 50% non-bullshit case to GitHub. GH is not obliged to check further than that.

        2. 5

          For all the complaints about the US DMCA, generally Europe has some of the harshest and most extreme copyright-regime rules, up to and including the disastrous new mandate for basically everyone to implement a YouTube-style pre-filter on all uploads.

          1. 1

            Is there a similar law or not? I think your comment is a little bit off-topic.

            1. 10

The US DMCA is a huge act; it contains all the rules around all things digital. What people usually refer to are DMCA takedowns, which I actually find reasonable, especially as they have a clear procedure. That’s section 512. It actually goes into detail about what platform providers are not liable for (caching, etc.). I’d actually love it if a German law were that direct.

              Broken down, if you are a service provider hosting user content, you are not liable if the following procedure is in place:

• Someone can send you a “takedown notice”, in which they tell you that they are the copyright holder and that they believe this is their content; you must respond to it promptly.
              • As time is of the essence here, you don’t have to check this claim for validity, but instead have to forward this notice to the user, at the same time making their content inaccessible.
              • The user can file a counter-claim, in which case the 2 parties can go to court and will notify you of the results. During this time, the claim is contested and you can continue serving the data.

In theory, fraudulent takedown notices can lead to the other side suing back, but that rarely happens, especially around groups like the RIAA, and that’s where the issue lies.

Now, whether you agree with copyright or not, if you run a public service you will have to implement a procedure here. And the DMCA procedure is actually straightforward and easy to implement. It’s worth it, as it takes you out of the danger zone.

              https://www.law.cornell.edu/uscode/text/17/512

              Background: I was part of the legal review and setup for crates.io around GDPR and DMCA. I can tell you, both are equally often misinterpreted.

The problem here is that the RIAA does not invoke 512, but instead claims the illegality of the tool outright.

Finally, to be clear: I don’t support a lot of this stuff, but I don’t have the liberty to ignore it. Also, the RIAA is very much in the wrong here, in my opinion. And to be clear, there are reasonable takedown requests. On code hosts, it’s usually someone ripping off the license, renaming the library, and publishing a copy. On other sites, it may be nude pictures someone took of his GF.

            2. 6

              Look up the recent EU Copyright Directive (originally known as “Article 13”) for a starter. With the US political system mostly deadlocked these days, the copyright lobby has turned its attention – with much success – to Europe, and the regime which will soon be in place there makes the US DMCA system look almost reasonable by comparison.

        3. 1

          circumvent copyright

          not DRM?

Ytdl simply extracts links.

          1. 4

            Well, not that simply. The takedown letter says it circumvents something called

            YouTube’s “rolling cipher”

which was determined to be an “effective technical measure” by the (apparently copyright-mafia-adjacent, and not under US jurisdiction) Hamburg Regional Court.

Indeed, one of the test cases mentioned by the RIAA is described as ‘Test generic use_cipher_signature video’ (#897).

            And apparently what that means is running some JS function (in a tiny interpreter of a tiny subset of JS) to deobfuscate the links.

            This is absolutely not what we would perceive as “real” DRM, but it does technically attempt to ‘manage’ some ‘digital rights’, lol.

            1. 1

              And apparently what that means is running some JS function (in a tiny interpreter of a tiny subset of JS) to deobfuscate the links.

              That “some JS function” is running the JavaScript sent by YouTube to the user in response to a request for a video, and looks to be fetched each time a video is requested by the YouTube extractor. I could see a stronger argument for “circumvention” if they had re-implemented the logic in Python or saved the JavaScript into the repository. As it stands currently, this seems a really big stretch.

    16. 18
      1. 7

        I clicked this like 5 times :|

      2. 5

        Take your upvote and go!

    17. 6

      Here are a bunch of utilities I use daily and recommend:

      • One Switch, paid - useful for quickly connecting AirPods/bluetooth headphones, toggling multiple settings on/off. I find keep awake useful when I need to monitor something without having to touch keyboard/mouse.
      • Pastebot, paid - a nice clipboard manager. There are open source alternatives available but I like the interface and extensibility.
• Lunar, free - open source utility with a nice UI to automatically manage your external monitor brightness. Warning - I have read in several places that a lot of cheaper monitors have limited read/write cycles in their internal flash, and this may cause them to be bricked after 100,000 or so cycles. Can’t find the link though.
      • 1Password, paid - Great password manager.
• BetterSnapTool, paid - adds useful window controls. Also check out BetterTouchTool to customize mouse/keyboard, but I use Bartender and BetterTouchTool doesn’t play nicely with that.
      • iStat Menus, paid - advanced system monitor.
• Safari, free - Unless you use some Firefox/Chrome-specific extension, Safari is a great, if under-featured, browser. It has the best battery life among browsers on Mac and has great system-specific features like play/pause controls on the keyboard and picture-in-picture (you may need an extension to enable the button on all sites, but the Touch Bar will usually show a PiP button).
• Things, paid - To-do/project manager. Nice UI and a lot of useful features.
      1. 4

        This HN comment is the first time that I have heard about the EEPROMs write limit. Note that I took the comment at face value because of the author and I haven’t researched it further.

      2. 3

        I have to second your Safari recommendation. Even better with an ad-blocker like Better—the architecture of content blocker plugins in Safari means that they generally don’t negatively affect page load performance at all.

        1. 1

I use Better with Safari and it’s my daily driver for personal use. Of course the web dev tools leave a bit to be desired and not all extensions are there. Hopefully the extension issue will be resolved with the upcoming changes to adopt the web extensions API.

      3. 1

        Safari will get WebExtensions soon too, so extension support will likely be similar across all 3 browsers.

        1. 1

Yes, I remember that from WWDC. But Apple still wants extensions in the App Store, and that may limit devs since it costs $99 a year.

          1. 1

            Let’s hope there’s a way to sideload.

      4. 1

        Regarding battery life, I’ve found that Opera also has much better battery life than Chrome or Firefox.

    18. 2

      meaning an array is passed as a so-called “fat pointer”, i.e. a pair consisting of a pointer to the start of the array, and a size_t of the array dimension

      This sounds a lot like Go’s slices in a sense, with syntax like foo[a:b]. It still doesn’t give you much of a guarantee about the underlying memory though.

    19. 1

      The funny thing is that I agree with almost everything the author has said. Yet I still enjoy writing programs in Go quite a lot. Maybe I am not a real programmer :)

      1. 1

Yes, Go is definitely not perfect, but I still very much enjoy using it. It’s relatively easy to predict how Go will behave in a number of cases and there aren’t too many surprises. The opinionated formatting and error handling make it quite easy to pick up someone else’s code and follow it.

        1. 1

Another aspect that I enjoy is its fast compilation. Languages with more complicated type systems tend to be much slower to compile, and I don’t like waiting around.

    20. 1

      In Go, only pointer and interface types can have a value of nil, which they will have if they’re uninitialized.

      Slice and map types can also be nil. If you do var foo map[T]T then foo will be a nil map by default. If you do var foo []T then foo will be a nil slice by default.

      They both happen to act the same as empty maps/slices when given to len() etc.