1. 5

    I feel like most of the code reviews I’ve been doing recently include me saying “Instead of returning null values here, we can make this an Optional so users of this method know they have to handle the empty case”. Sometimes my coworkers grumble, but the number of NPEs we’ve had in production has definitely gone down. Extra boilerplate up front saves time in the future!

    1.  

      The good news is that “instead of returning a null value, make it Optional” is one and the same in Python: If the value can be None, then its type is already effectively Optional and should be marked as such (or equivalently, Union[None, …]). I don’t think there’s anything to disagree about here.

      As for reviewing code when tradeoffs exist (such as in strongly typed languages, where Optional actually does something – it adds a machine word to the size of the type), I’m glad you say “can” and present it as a tip with a concrete reason. In code review, “considerate” is more helpful than “holder of popular opinion”!

    1.  

      This reminds me a lot of the initializer list style that puts the commas on the left. E.g.,

      silly_class::silly_class()
          : fluffy_kittens(1)
          , exuberant_puppies(0)
          , sparkly_unicorns(0)
      {
          // ...
      }
      
      1.  

        That’s also a good point. That’s what I call an infix comma, and it illustrates the common problem with infix syntax (separators that go between elements, as opposed to pre- and postfixes): when writing a list vertically, they either force you to, or permit you to, omit the separator at one end. Neither is obviously good for adding or removing items at either end.

        In the initializer list example, we chose to omit the comma at the top, likely for the aesthetic reason of lining up all the commas below the colon, but the first line is still special, so it would be nicer if the colon were a comma too. That would make the list prefix comma separated.

      1.  

        As an alternative fix, enum switches are better when they return, i.e. when the switch is in its own function.

        As such, switch should have been an expression, but that’s nothing that can’t be fixed by an enclosing closure:

        auto [f_next, s_suffix] = [&]() -> std::tuple<Foo, std::string_view> {
            switch (f)
            {
                case Foo::Alpha:
                    return {Foo::Alpha, "is nothing"};
                case Foo::Beta:
                    return {Foo::Gamma, "is important"};
                case Foo::Gamma:
                    break; // default
            }
            return {Foo::Alpha, "is very important"};
        }();
        

        There are many reasons that together point at this solution:

        • With the appropriate warning level (-Werror=switch for GCC), enum switches with unhandled values become compilation errors – but only when you don’t use default. This is great, because compilation errors are better than runtime errors. Not that we eschew the runtime check, but we must not use the default keyword.
        • When the switch is in a function of its own, there is a better place for the default handling: After the switch. This place is reachable, as the compiler will insist, which means that you can’t simply forget to handle it (control reaches end of non-void function). What happens here is that the compiler reminds us of all the other physically representable values. Which you may actually want to handle if the enum value was received numerically. If you don’t want to treat them specially, it is zero cost to just designate one of the cases as the default, as above. In any case, we have explicitly handled invalid values too, and the compiler will make sure we do.
        • Making the function anonymous makes it an easy choice for the compiler to inline.
        • Making wrong code look wrong: In order to forget the return, you have to forget the return expression. Only this point is taken care of by the break; case solution in the post.
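        For contrast, in a language where the switch construct is already an exhaustive expression, neither the enclosing closure nor the warning flag is needed. A sketch of the same logic in Rust (the enum and return values mirror the hypothetical C++ example above):

```rust
// Mirrors the hypothetical Foo enum from the C++ snippet above.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Foo {
    Alpha,
    Beta,
    Gamma,
}

// `match` is an expression and must be exhaustive: adding a variant
// to Foo is a compile error until it is handled here.
fn next_and_suffix(f: Foo) -> (Foo, &'static str) {
    match f {
        Foo::Alpha => (Foo::Alpha, "is nothing"),
        Foo::Beta => (Foo::Gamma, "is important"),
        Foo::Gamma => (Foo::Alpha, "is very important"),
    }
}

fn main() {
    let (f_next, s_suffix) = next_and_suffix(Foo::Beta);
    println!("{:?} {}", f_next, s_suffix);
}
```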
        1. 10

          If I were to add something:

          • Multiline editing: While you can write multiple lines in other REPLs, like Bash and Python, you can’t navigate up and down in what you type, and you can’t recall a multiliner back in its full glory, because they all use readline. Fish has essentially reimplemented readline. A simple multiline command:
            begin
                make
                and sudo make install
            end
            
            I write multiliners like this all the time, even quite long ones, because it’s easy to edit the previous command. When it’s time to save, just make it a function and funcsave it.
          • Alt+Left/Alt+Right to go back and forth between previous directories.
          • Bracketed paste: It used to be that when you accidentally pasted some substantial text into the shell, the shell would immediately go on to execute each line of it as a command, without waiting for your enter key. Bracketed paste is a protocol between the terminal and the shell, supported by most terminals, that lets the shell distinguish pasted newlines from the enter key.
          • End of quoting hell (POSIX shell’s big mistake): Fish doesn’t go and interpret whitespace and glob characters in the contents of variables – that POSIX shells do is of course completely nuts. Instead, every variable is an array, so you don’t need word splitting or this evil kind of glob expansion.
          • Nullglob and failglob by default: Fish doesn’t just pass failing glob expansions along unexpanded. How many iterations does this loop run?
            for i in nonexisting/*
                echo $i
            end
            
            Zero! And does this execute ls at all?
            ls nonexisting/*
            
            Nope!
          1. 9

            The autocompletion is the killer feature for me as well. I used Bash for years without knowing about Ctrl+R history search, and I never learned how to search backwards with it. Fish just searches the history, so you don’t have to decide that before starting to type, and you don’t need to learn how to use the arrow keys to iterate the results. Fish made me instantly more productive than I ever was with Bash.

            1. 3

              I would wager that fish-style autocomplete is an objective, indisputable advantage. Being able to look at the man-page documentation as you type is practically non-intrusive and adds value that you don’t have in other shells.

              Fish’s history and Bash’s i-search always felt limited to me. My problem with them is that I can only look at one result at a time. Peeking through history never felt smooth to me. Until I found this: https://github.com/oh-my-fish/plugin-percol

              This is the one thing that makes me maintain a fish config rather than just using fish as-is out of the box. I have also written this https://github.com/plainas/icl to keep frequently used commands neatly organised and documented.

            1. 3

              A lot of this is for C11 compatibility (where C11 largely standardised a fairly old GNU extension as-is). It has some interesting interactions with other features.

              The most annoying part that I’ve encountered is that you can’t use the C-style array initialisation for initialiser lists. This is useful when your array indexes are an enum and you want to get a compile failure if you miss one or put them in the wrong order, something like (in C):

              struct Foo *handlers[] = {
                  [Thing]      = &thingHandler,
                  [OtherThing] = &otherThingHandler,
                  [LastThing]  = NULL,
              };
              

              You can’t do this with C++.

              1. 1

                Not to mention C99

              1. 2

                This is a very interesting idea. Python seems like it’s in a weird position. On the one hand, the ecosystem is thriving in a number of niches. It’s huge and hard to see it being displaced. On the other hand, the language designers seem to have lost the thread about what made people like Python in the first place, and packaging in particular is a nightmare that threatens to kill the whole thing. Making a new language that is Zig to Python’s C has a lot of potential: you can fix the language warts, add types in a more integrated way than MyPy, have a solid story about installation, and keep compatibility with Numpy and OpenCV or whatever.

                1. 1

                  add types in a more integrated way than MyPy

                  Yes, the types and also the very existence of variables. It would be nice if they were checked statically. Every time Python throws an AttributeError or a TypeError one hour into a test, I wish I were writing in a statically typed language.

                  Lazy me doesn’t see the point in running MyPy before something already smells wrong.

                1. 13

                  Rust is a really strange choice. It’s a systems programming language. A desktop environment is, by definition, in the domain of application development. I can see wanting to write some core components (bits of the graphics stack, for example, or DBUS broker) in a systems language because you want performance and I can see wanting to write things like video CODECs in a safer language because you want them to work on untrusted data but writing most of a DE in Rust doesn’t make sense to me.

                  Competing stacks are using JavaScript, C#/F#, Java/Kotlin, or Dart. All of these are far easier to be productive in than Rust, at the expense of some memory overhead and tail latency, neither of which matters for typical apps (GC pauses are far smaller than a human perceives these days and the memory overhead from GC is often offset by reduced fragmentation). I wouldn’t recommend any of these for low-latency, low-jitter, scalable server applications or for use in performance-critical parts of an OS kernel (Rust would be a much better fit here) but desktop environments are a very different problem space.

                  1. 10

                    It’s a systems programming language.

                    I think it’s slightly more nuanced than that. Quality of implementation matters a lot, and things like “a build system which just works” play a big role. On the QoI metric, Rust seems to score significantly higher than almost all the listed alternatives. For JS, the fact that it needs a transpilation step adds irreducible accidental complexity, and npm adds a lot of reducible complexity. F#’s build system, last time I looked, was “write an XML file which specifies the order in which to compile the files”. Java/Kotlin need a JVM implementation, which is not something that “just works” (and also Gradle). Don’t know about C#. I do feel that due to the tooling, the language and the ecosystem “just working”, Rust might be more productive for such large collaborative projects, despite the fact that manual memory management is pure mental overhead. It’s easier to figure out lifetimes than to figure out how to ship non-statically-linked software in a reasonable way.

                    Dart (and also Go) is a language whose QoI actually is great. For GUIs specifically, live reloading natively supported by the whole toolchain is a particularly big deal.

                    1. 5

                      C# is IMHO a good fit for desktop stuff, considering it was one of the reasons why MS developed it, and why GNOME developers initially worked on Mono. Their experience of developing desktop apps in C (Evolution specifically) was excruciating, and they thought C# was the ideal. It’s funny how much people forget that Mono tried to integrate with GNOME and thus the typical GNU stack of the time: GTK#, pkg-config and the compiler, and even autotools/GAC integration. Too bad that didn’t work out…

                      1. 5

                        “build system which just works”

                        “ship non-statically linked software in a reasonable way”

                        I am not arguing that Rust the language is necessarily a bad choice here, but isn’t it also true that Rust the build system hasn’t really been used at this kind of scale yet and whether it will “just work” for this case is not a given? Things like compile times and code bloat (due to static linking and potentially multiple versions of the same crate present in the build) come to mind.

                        1. 6

                          Yeah, that’s true! Specifically, if we imagine the thing as one giant monorepo which contains everything from low-level libraries, to frameworks for app developers, to the apps themselves, then I think we don’t have evidence that Cargo works great, and even some evidence to the contrary: “integrating Cargo into Buck/Bazel/Pants” is a common problem.

                          OTOH, if we imagine a more distributed ecosystem, with independent applications having independent repositories and semvers, then I think we have some evidence that Rust works: crates.io ecosystem looks rather healthy.

                          The question of “can you build a runtime ecosystem in Rust?” is completely open. As in, maybe we actually can compile everything statically and exist on top of just kernel syscalls? Or maybe we actually need some runtime sharing via IPC / dynamic libraries? On this topic, I hope Rust’s stability guarantees will be an inspiration – at least some part of the community values stable APIs a lot (see, e.g., serde 1.0.130). Having a win32-stable API on Linux to do things humans interact with would be nice.

                      2. 7

                        My feeling is that desktops need reliability and every bit of space+time performance that can be squeezed out. So it seems like a DE is a perfect fit for Rust, possibly better than many other fields where it’s being forcibly pushed.

                        1. 14

                          Back around 2007ish, I was actively working on Etoile and was starting to replace Objective-C with JIT’d / AoT-compiled Smalltalk for application development. For a few weeks, I accidentally broke the compiler and so I was using the incredibly slow AST interpreter for the Smalltalk code. I was doing this on a 1.2 GHz Celeron M and the only reason that I noticed that the compiler was broken was that I read some logs that spat out an error message that it failed to dlopen the LLVM libraries. It made absolutely no difference to my user-perceptible performance.

                          My 4-year-old phone has more cores, faster cores, and more RAM than the computer I was using back then.

                          1. 2

                            reliability and every bit of space+time performance that can be squeezed out

                            This is the same way that I feel. A DE must be very, very lightweight. It’s what the user interacts with, so it’s got to be simple and reliable, and what better way than a memory-safe, low-level language without external dependencies.

                            1. 2

                              I mean, they do, but applying a little common sense can get you where you want to be in terms of performance, too :-). For example Electron gets a lot of hate but, between the animations and the compositor input lag, Electron applications are pretty much as fast as native GTK applications. Over here in the FOSS world we have PTSD from experiments with JavaScript in the late 00s but that’s on us.

                            2. 1

                              I think to many people, Rust is the default language for any serious new project. Unless of course specific circumstances favor another language, such as interfacing with existing code – the common case.

                              So apart from the common case that something else is more important, my general reasoning would be:

                              • Maintainability: If I’m writing something substantial with a lot of contributors, all dynamic languages are out. I have tried Python with Mypy, but no: If some refactoring didn’t apply cleanly, it’s still a runtime error.
                              • It must not encourage bad practices: Anything that matches *-oriented is likely too narrow-minded. OOP in particular is good for resource wrappers, but not as a default way to structure code: it encourages state, makes code untestable and restricts free code reuse. That disqualifies C++, whereas Go’s interfaces and Rust’s traits (OOP toned way down) are fine and good.
                              • A language can cover many “levels”: I’m not simply buying the argument that you need a high-level language to write high-level code: what matters in this regard is how high-level the abstractions it enables are. C++ is the benchmark in this category, but Zig has better compile-time inference and Rust has things like serialization and typestate machines.
                                • Corollary: I don’t see the point in GC when you can have reference-counted types.
                              • I don’t fight the borrow checker: I don’t know if it’s only me, but I find myself fighting C++’s many papercuts much more than Rust’s borrow checker. Either you care about ownership, in which case the borrow checker is your friend, or you don’t, in which case you use reference counting.

                              (Just my thinking. I may not be representative.)
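                              To illustrate the reference-counting corollary above, a minimal Rust sketch (the data is invented): shared ownership without a GC, with the allocation freed deterministically when the last reference goes away.

```rust
use std::rc::Rc;

// Sketch: shared ownership via reference counting instead of a GC.
// The allocation is freed deterministically when the count hits zero.
fn main() {
    let shared = Rc::new(String::from("shared data"));
    let alias = Rc::clone(&shared); // bumps the refcount, no deep copy
    assert_eq!(Rc::strong_count(&shared), 2);
    drop(alias); // deterministic: the count drops immediately
    assert_eq!(Rc::strong_count(&shared), 1);
}
```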

                            1. 1

                              Why invent paging in userspace when the kernel has better defaults?

                              1. 3

                                The kernel, in general, doesn’t. The kernel’s paging algorithm is completely oblivious to the workload. It has approximately one bit of knowledge per page (was this modified recently?) and it doesn’t know anything about sub-page data structures. It doesn’t know that these (not necessarily contiguous) 4 MiB of state are related to a specific query and so are likely to be accessed together. It doesn’t know that this data structure can be recreated more cheaply than demand-paging it back from disk. It doesn’t know that this data structure will require you to fault these 14 pages back in on demand for this specific access pattern.

                                The kernel’s paging strategy has to be completely generic; userspace can do a lot better because it has a lot of knowledge about data structures and access patterns. The only advantage the kernel has is global knowledge of memory pressure, but a database that is the only process running on a database server also has this. Some operating systems provide interfaces that expose this information to userspace.

                              1. 1

                                Not sure what the point of this is when Wayland has been on EGL already. Why not just deprecate and later remove X11 support? It’s not like the Firefox team has infinite resources.

                                1. 3

                                  There’s a large number of people who will need to get FF updates but won’t be / can’t be on Wayland. I expect another decade of X11 having a significant market share. The Wayland protocols take ages to stabilise… Even the trivial ones like screensaver inhibitor take 6 years and counting.

                                  1. 1

                                    Wayland can’t do anything X can’t do and X can do plenty Wayland can’t. Really, the intelligent, rational decision would be to remove Wayland support…

                                    1. 1

                                      Aside from security, I know 3 things only Wayland can do, which I know off the top of my head because they are that irritating in X:

                                      • Alt+Tab out of any fullscreen program (unless implemented/emulated in said program, if I understand correctly)
                                      • Displaying video without tearing
                                      • Work after a kernel upgrade without rebooting (just my experience: Weston works whenever X doesn’t)

                                      That said, I would agree that Wayland is the less usable of the two right now: Firefox under Xwayland can’t reorder tabs, and Firefox in Wayland mode (MOZ_ENABLE_WAYLAND=1) has broken copy-paste and hover text that flows outside the screen.

                                  1. 7

                                    While the headline surprises nobody, a takeaway from the conclusion is that bug reports are valuable.

                                     But not every product has a bug tracker. Is this underappreciated in commercial software? In particular, I want to say that as a user, it goes just as much the other way: being able to write a good old bug report is a thing to appreciate about the open source development model.

                                    A recent example from my own experience: Where is Microsoft’s bug tracker? I was without Teams for Linux for one month, at great personal cost, because Microsoft doesn’t have a bug tracker. Well, Teams in the browser actually has a “Help” menu to report bugs. Did that; didn’t get any feedback. As a result, I didn’t get the help I needed, and I couldn’t provide any help either. Only after Microsoft found out about it themselves could I get it working again. While I’m fond of the saying that software is like sex – it is best when it is free, if there is one thing that makes me hate commercial software, it is a helpless experience. Let’s not make this an aspect of it!

                                    1. 1

                                      I use histogram, but I keep wishing for a byte level diff algorithm. Not primarily for human consumption, but for rebase to see changes for what they are instead of creating pointless conflicts. It would save me an hour every week.

                                      That would also fix the problem of indentation changes. Yes, there is --ignore-all-space, but no, it doesn’t display indentation changes for what they are. Besides, the shown indentation level is wrong, which is significant for Python.

                                      1. 2

                                        Interesting that they ran into the need for destructive move semantics. That would open up a world of typestate programming – type-level state machines, like the one that was attempted here, that can be used to ensure that an API is used correctly.

                                        I hope that comes to C++ one day (and we can deprecate nondestructive moves). In a way, it seems inevitable: If zero-cost abstractions are a goal at all, then this would be a broader one (in that classes don’t need a moved-from state).

                                        But in case it’s too late to add to C++, what about adding ownership and destructive move semantics to C? Imagine a type attribute, “alive”, that means that the program won’t compile if you forget to end the lifetime of such an object – you have to pass it to a function that destroys it. You get the safety of implicit destructors, but explicit, which is even better: The destructor function would be any normal function: It could take other arguments, so you don’t need to keep references in the object for use by the destructor, and you can enforce correct destruction order.
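                                        Rust already has destructive moves, so the pattern can be sketched there today. A minimal typestate example (all names invented): the “destructor” is an ordinary consuming function, and using the handle after it is a compile-time error, not a runtime one.

```rust
// Minimal typestate sketch (all names invented): a handle whose API
// can only be used in the right order, checked at compile time.
struct Closed;

struct Open {
    log: Vec<String>, // stand-in for real connection state
}

impl Closed {
    // Consumes the Closed handle; the old value is gone after the move,
    // so the same handle can't be opened twice.
    fn open(self) -> Open {
        Open { log: Vec::new() }
    }
}

impl Open {
    fn send(&mut self, msg: &str) {
        self.log.push(msg.to_string());
    }

    // The "destructor" is any ordinary function that consumes self.
    // After close(), any further send() is a compile-time error.
    fn close(self) -> Closed {
        Closed
    }
}

fn main() {
    let mut conn = Closed.open();
    conn.send("hello");
    let _closed = conn.close();
    // conn.send("again"); // error[E0382]: use of moved value: `conn`
}
```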

                                        1. 12

                                          Disjoint captures is my favorite feature, that was a wishlist item for a lot of people for a long time. To be clear, it never stopped me from building anything, but it was a “this should really exist but I don’t want to be the one to work on it” item for me.

                                          1. 2

                                            Possibly related: If the compiler can capture single struct fields, shouldn’t it be able to borrow single slice elements?

                                            error[E0499]: cannot borrow `vec` as mutable more than once at a time
                                               |
                                            10 |     take_three(&mut vec[0], &mut vec[1], &mut vec[2]);
                                               |     ----------      ---          ---          ^^^ third mutable borrow occurs here
                                               |     |               |            |
                                               |     |               |            second mutable borrow occurs here
                                               |     |               first mutable borrow occurs here
                                               |     first borrow later used by call
                                            

                                            It is already possible to solve this, but rather manually: there is a function, slice::split_at_mut(), which partitions the slice in two. It is not very ergonomic to have to use, especially for more than 2 elements.

                                            For lack of a better word, I think “automatically split borrows” should also exist. When I read about disjoint captures, I hoped they would have fixed this too, so I had to try (rustc 1.56, edition 2021), but apparently not.
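                                            For reference, the manual split_at_mut() workaround for the example above looks something like this (take_three stands in for the function from the error message):

```rust
// Sketch: handing out three disjoint &mut borrows from one Vec via
// slice::split_at_mut, since the borrow checker can't see that
// vec[0], vec[1] and vec[2] don't overlap.
fn take_three(a: &mut i32, b: &mut i32, c: &mut i32) {
    *a += 1;
    *b += 10;
    *c += 100;
}

fn main() {
    let mut vec = vec![0, 0, 0];
    // Two calls are needed for three elements -- hence "not very ergonomic".
    let (first, rest) = vec.split_at_mut(1);
    let (second, third) = rest.split_at_mut(1);
    take_three(&mut first[0], &mut second[0], &mut third[0]);
    assert_eq!(vec, [1, 10, 100]);
}
```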

                                          1. 3

                                            It’s even worse: most users are unable to type in a URL when given one verbally. They use the search field on their home page.

                                            1. 2

                                              Yes, it’s pretty much the 60+ method in my impression. I hope kids these days know better.

                                              But even when you know the difference, there is one more hurdle: my memory, at least, is not just case insensitive but TLD insensitive: was it .com, .org or .net again? Sometimes I guess wrong and find a shady spam site instead. I need a TLD resolver, and I use both search engines and Wikipedia for that.

                                              1. 3

                                                High school computer science teacher here, and I can assure you the kids these days do not know better, unfortunately.

                                                1. 2

                                                  More like 50+, at least around me. They also love to share their passwords with me. :shrug:

                                                  1. 1

                                                    I really hate the TLD proliferation. You can’t even assume a company is .com any more; it might be .pizza. Or one of those godawful dots in the middle of the word. Those are the worst.

                                                    I’ve used search even for things I was quite confident of, just because of that.

                                                1. 2

                                                  Why not try to bring these features into the main git project? Then they would not only be an order of magnitude faster, but would also reach orders of magnitude more developers. If it is time for Rust in Linux, it is also time for Rust in Git.

                                                  1. 8

                                                    If it is the time for Rust in Linux, it is also time for Rust in Git.

                                                    I thought Rust in Linux is only for drivers on a few platforms. Rust in git means you will cut off everyone that is on an arch not supported by Rust.

                                                    1. 2

                                                      True, but we can’t wait forever for those platforms to either “Rust or die”. If Rust and Zig are to any extent the new infrastructure languages, and enough good new software is being written in them that nobody wants to rewrite in C, then it’s a bit inevitable.

                                                      1. 1

                                                        Well, it’s not as though I can run Git on my Amiga anyway. There are just too many unixisms in Git.

                                                        1. 1

                                                          I do know someone’s working on an AmigaOS port of libgit2. Never say never!

                                                          1. 1

                                                            I know there’s an AmigaOS 4 (PowerPC) port, but not one for OS 3 (m68k), so it’s obviously nontrivial. It’s not as if Amiga developers are unfamiliar with source control; the OS itself has survived migrations all the way from RCS.

                                                            And, I might add, it’s just in time for m68k support being added to LLVM.

                                                      2. 5

                                                        If it is the time for Rust in Linux, it is also time for Rust in Git.

                                                        That seems like a rather complicated way to say that it’s not time for Rust in Git.

                                                        1. 2

                                                          I imagine they might have some justifiable hesitance to add a dependency to their builds.

                                                          Aside from that, I agree that bringing it in (as well as rewriting “rebase” as a command that invokes “move”, so that it transparently gets faster without users having to change anything) is an obviously good idea.

                                                        1. 1

                                                          Like with API usability, there is something called abstraction inversion: if it’s too user-friendly, you can’t use it without great difficulty.

                                                          Sometimes, I mean always, a bag of tools is more useful than a framework.

                                                          1. 17

                                                            So it surprised me to learn that some (many?) folks in the open source and academic world hate -Werror with passion.

                                                            That’s because the Open Source world deploys source code to the entire world. You cannot possibly come even close to covering all the systems people will try to build your code on, so making the mistake of having Werror on by default means you get a ton of feedback about things not building for unimportant reasons. That gets old very quickly. Warning flags are also quite volatile over time. In short, the in-house proprietary way of thinking just doesn’t work here, because the deployment scenario is entirely different and you have no control over it.

                                                            Since this is a “strong opinions, loosely held” blog, I’ll follow suit:

                                                            The objectively correct approach here is to have a developer mode which enables ultra strict warnings (far more than mentioned in the post) as errors, and use those on CI, but have the default be a more conservative set of warnings with broad compatibility, and without Werror. That way the code quality is as strict as you like (which is, after all, the point), but you aren’t breaking things for users because of some silly warning. No good comes from that - the feedback is almost always useless, and the initial impression of your software is “it doesn’t even compile”.

                                                            1. 3

                                                              a ton of feedback about things not building for unimportant reasons

                                                              And specifically this usually happens when building with a newer or just another compiler. (So many projects just test on whatever gcc they have – every time I upgrade clang I get new stupid Werror fails…)

                                                              -Werror as-is is a complete disaster. “Only use this for development” flags inevitably end up in shipped build systems. No flag should be this fragile.

                                                              If only compiler developers got together and agreed on common warning sets. If one could say -Werror=2021 and any future version of clang and gcc interpreted this as “enable whatever warnings we agreed on in 2021” it would be usable.

                                                              1. 3

                                                                Yeah, I suspect people with the luxury of only “supporting” some small set of compilers/versions really underestimate how hard it is to get a warning-free build across a vast swath of versions, especially if you’re being strict about it, and double plus especially if you’re using -Weverything in clang with explicit exceptions (which is obviously absolute madness in conjunction with default -Werror, but I’ve seen it…).

                                                                “Only use this for development” flags inevitably end up in shipped build systems. No flag should be this fragile.

                                                                Meh. Here I think I disagree. Any build system used to deploy code to users needs to have some kind of configuration mechanism, and if you have to actively opt-in to warnings being errors, well… you specifically asked for warnings to be errors, so of course they are? Maybe I’ve been lucky, but I’ve never really been bothered by people doing this.

                                                                That said, flag stability is annoying, but I think that’s a different problem: -Werror just does what it says on the tin. Compiler authors could never fully agree on such a thing; it’s more or less equivalent to agreeing on a common set of implementation-specific details. The current system of GCC and clang mostly agreeing where possible is as good as that’s going to get, I’d say. Even if they agreed on a common subset, you’d end up using the extra ones anyway (because many are useful but compiler-exclusive), and we’re back to the same problem. MSVC is off in another universe entirely, but it always is.

                                                                1. 1

                                                                  If one could say -Werror=2021

                                                                  You can! You just have to specify which warnings those are:

                                                                  -Werror=format-security -Werror=switch -Werror=maybe-uninitialized …
                                                                  

                                                                  This decouples the “which warnings are errors” question from the general warning level question, which is subject to those volatile warning categories (like -Wall, -Wextra, -Wpedantic). I think this is the only sane way to use -Werror, at least as a default build option in open source.

                                                                  1. 1

                                                                    Sure, but aside from having an ugly loooooong list in CFLAGS, the problem is that I don’t know of a resource that answers questions like “give me the warnings supported by both the N last gcc versions and the last N clang versions”. At least having that as a website would be something.

                                                              1. 10

                                                                I agree with the criticism of the use of Systems Hungarian notation, but I wish people would learn that this wasn’t what Hungarian notation was originally intended for. It was meant to capture things that weren’t part of the type system. For example, in Excel, both the row and column numbers might be ints, but in Hungarian notation one would have a c or col prefix, the other an r or row prefix. If you found yourself writing rCurrent + cNext then you’d double-check and make sure you really wanted to add a row to a column. It was only later, when the Windows team got hold of it, that they started using it to encode things that were already in the type system.
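
                                                                A quick sketch of that original intent, with hypothetical names in Rust for illustration: row and column are the same machine type, so the type checker happily accepts mixing them, and only the prefix convention makes a mix-up visible on review.

                                                                ```rust
                                                                // Both are plain usize: the type system cannot tell rows from
                                                                // columns, so Hungarian-style prefixes carry that information.
                                                                fn below(r_current: usize, c_current: usize) -> (usize, usize) {
                                                                    (r_current + 1, c_current) // moving down one row: r + 1 reads fine
                                                                }

                                                                fn main() {
                                                                    // "r_current + c_current" would compile just as happily; the
                                                                    // prefixes are the only thing that makes it look suspicious.
                                                                    let (r, c) = below(3, 7);
                                                                    println!("{},{}", r, c);
                                                                }
                                                                ```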

                                                                1. 1

                                                                  That’s fascinating! Do you have any further reading to hand on the history of that convention?

                                                                  1. 2

                                                                    The Wikipedia article linked to in the post describes this.

                                                                    I hadn’t heard of that either — I thought HN was just a workaround for BCPL not having data types.

                                                                    1. 5

                                                                      Joel on Hungarian notation (2005) – this may have been the original source (Wikipedia doesn’t consider blogs a reliable source).

                                                                      I would sum it up as: The point wasn’t to repeat what’s already in the type system, but on the contrary, to distinguish semantic differences that are important to keep apart, yet aren’t helped by the type system.

                                                                1. 2

                                                                  I think most of this makes sense outside of Rust too, as far as it applies. It’s just that Rust, more than any other language, has the right abstractions and lacks the bad ones (which, btw, I think should be a fundamental principle of language design).

                                                                  Let me focus on one thing:

                                                                  A common pattern in Object Oriented languages is to accept a reference to some object, so you can call its methods later on. On its own, there is nothing wrong with this

                                                                  I’ve seen this – a network of objects pointing at each other – and I actually think there is something wrong with it (in any language, and it isn’t solved by wrapping things in Rc<RefCell<>> and the like): it makes dependencies implicit. The symptom is having to comment in the code that one function must be called before another, because you can’t see from their call sites that they touch the same data. Better to make dependencies explicit by passing references down the call graph.
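
                                                                  As a minimal sketch of what I mean (the types and names here are invented for illustration): instead of two long-lived objects each holding a handle to shared state, the caller owns the state and passes a borrow down the call graph, so the shared dependency is visible in every signature and at every call site.

                                                                  ```rust
                                                                  struct Config {
                                                                      retries: u32,
                                                                  }

                                                                  // The dependency on Config is explicit in the signature; a reader
                                                                  // can see from the call sites alone which functions touch it.
                                                                  fn connect(cfg: &Config) -> u32 {
                                                                      cfg.retries
                                                                  }

                                                                  fn run(cfg: &mut Config) -> u32 {
                                                                      cfg.retries += 1; // mutation is local and visible, no hidden aliasing
                                                                      connect(cfg)
                                                                  }

                                                                  fn main() {
                                                                      let mut cfg = Config { retries: 2 };
                                                                      println!("{}", run(&mut cfg));
                                                                  }
                                                                  ```

                                                                  The borrow checker then enforces the ordering for free: run must finish with its mutable borrow before anything else can read cfg.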