Threads for samhh

  1. 11

I use bash instead of zsh, but the thing that helped me there is adopting direnv. It allows you to have only the things sourced that you really need in that directory, e.g. some JS build tool or other heavy things don’t have to be initialized everywhere, only in the project directories where you need them.
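
    A minimal sketch of what that looks like, a per-project .envrc that direnv sources on entering the directory and unloads on leaving (the contents are just illustrative):

    # .envrc at the project root
    export NODE_ENV=development
    PATH_add node_modules/.bin   # heavy JS tooling on PATH only inside this project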

    1. 7

      This also pairs extremely well with Nix dev shells.
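
      Roughly (a sketch; the package is just an example):

      # shell.nix: declares the project's dev environment
      { pkgs ? import <nixpkgs> {} }:
      pkgs.mkShell { buildInputs = [ pkgs.nodejs ]; }

      # .envrc: tells direnv to load that Nix shell on cd
      use nix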

      1. 1

        seconding this, direnv is a game changer and combined with editor-project specifics means I don’t have nearly as much pain working on various code stacks (up until I run up against some unfavorable defaults in Emacs or whatever…)

      1. 9

        I find FP in general at a disadvantage in readability, possibly because it works in expressions, not statements.

        In general statements can at least in part be understood separately, expressions tend to make me have to understand the whole thing.

        It could also be just that Haskell/(most FP) code tends to not have enough intermediate variables/functions to explain each chunk, but I don’t think that’s the only reason. I don’t really understand it but I do find it to be true.

        Maybe if the add helper function was left in it’d be easier to read the Haskell insert, but I’ve read it 5-6 times now & I still can’t penetrate it. I’m finding myself having to re-read the definition of Trie many times & I forget the order of arguments for the Map methods, so I’m trying to infer it.

        The code definitely looks & feels “elegant” in a mathematical sense, but I don’t think that means anything for the readability. It just means it has less specific components & more generic… which I’d argue only hurts readability.

        1. 12

          I’d put that down to familiarity.

          Expressions in pure FP are great because there’s no implicit state to think about. Even if you’re in a monadic context, maybe using the State monad even, it’s all encapsulated in the expression. The only thing to track is closure bindings.

          Imperative statements can be brutal. Like with FP you need to track closure bindings, but these can also change between statements! That’s a major, major source of complexity that most programmers have gotten so used to they don’t even question it.

          1. 2

            If there is one thing that (pure) FP does better, it’s referential transparency, which is the actual guarantee that an expression (which is the only way to express a program in Haskell) is readable in isolation, and replaceable by its computed value.

            So it’s definitely a “break down complex expression” problem, which could be eased by using the where syntax in Haskell, which is a way to name sub-expressions in a larger one.
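
            For instance (a toy sketch, names invented here):

            -- each sub-expression gets a name the reader can grasp on its own
            isValidPassword s = longEnough && hasDigit
              where longEnough = length s >= 8
                    hasDigit   = any (`elem` ['0'..'9']) s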

            1. 2

              hm, typically “expression” simply means “statement that has a value”. For example, Python has conditional expressions that allow you to use if as an expression: x = "wtf" if 1 > 2 else "okay". In Ruby you’re able to use the regular if like that as well, because if isn’t a statement: x = if 1 > 2 then "wtf" else "okay" end. Expressions make code more composable in general, which helps a lot with code-generating code (e.g. macros), which is a big reason Lisps prefer expressions over statements.

              Overusing the ability to embed expressions in other expressions makes code less readable, that’s true. But it doesn’t have to be. It’s like overusing operator overloading in C++ or something. When a language is more expressive it also allows for more convoluted code.

              For example, using intermediate variables is a choice that you can use in FP code just as in more “imperative” code. Not using variables is just a way to show off and make code unreadable, undebuggable and unmaintainable.

              it has less specific components & more generic… which I’d argue only hurts readability.

              Fully agree on that one though!

              1. 1

                I think a way to improve functional programming readability is to use point-free style, or other styles that allow breaking down big expressions into small semantic units that are as easy to understand as statements.
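
                For example (a sketch, invented here), the same function as one dense expression and as small named point-free stages:

                import Data.List (group, sort, sortOn)

                -- the most frequent word, as one big expression (assumes non-empty input)
                topWord s = fst (head (sortOn (negate . snd) (map (\g -> (head g, length g)) (group (sort (words s))))))

                -- the same function, broken into named point-free units
                topWord' = fst . head . rank . tally
                  where tally = map (\g -> (head g, length g)) . group . sort . words
                        rank  = sortOn (negate . snd)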

                I found OP’s Haskell code a bit obtuse compared to Python. Generally, writing simple statically typed functional code requires a bit more effort.

              1. 1

                I run NixOS on my personal/dev machine and my homelab, and nix-darwin on my laptop: https://github.com/samhh/dotfiles. Happy to answer any questions.

                We use Nix for dev environments on my team at work. I’d highly recommend it despite its immaturity.

                1. 17

                  I disagree with R6. Specifically, the advice of moving sub-blocks to a separate function in the name of avoiding too much nesting. My problem here is that the nesting is still there. It’s just hidden behind the function call.

                  I’ll concede that in some cases this makes sense, especially when the resulting sub-function ends up performing a well-defined task and needs few arguments. But it’s not what we should go to first. What we should try instead is to flatten the scopes.

                  In practice, the main sources of deeply nested scopes are loops and conditionals. (Like, duh.) And while nested loops can rarely be collapsed, I’ve seen many conditionals that could be flattened. A classic example is the following pyramid, which we often see when applying the braindead “single exit point” fake quality rule:

                  if (test1) {
                      if (test2) {
                          if (test3) {
                              OK();
                          } else { error3(); }
                      } else { error2(); }
                  } else { error1(); }
                  

                  It can generally be flattened to:

                  if (!test1) { error1(); return; }
                  if (!test2) { error2(); return; }
                  if (!test3) { error3(); return; }
                  OK();
                  

                  (When your language doesn’t have destructors or defer it can be a bit more complex. Worst case, you’ll need to use goto to handle the cleanup without duplicating code all over the place.)

                  Only when everything is nice and flattened can we ask ourselves whether the nesting is still too deep.

                  1. 6

                    And while nested loops can rarely be collapsed

                    But often extracted. An extraordinary number of the comments I make while reviewing other people’s code point out where a tiny amount of preprocessing avoids an unnecessary nested loop, or where chaining functions that accept and return iterators replaces nested loops with what are in effect pipelines, all while making the code simpler.
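
                    A sketch of the preprocessing idea (example invented here): instead of scanning one list for every element of the other, build a set up front:

                    import qualified Data.Set as Set

                    -- a nested loop in disguise: `elem` walks ys for every x
                    common xs ys = [x | x <- xs, x `elem` ys]

                    -- a tiny bit of preprocessing flattens it into a pipeline
                    common' xs ys = filter (`Set.member` Set.fromList ys) xs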

                    1. 3

                      It can potentially offer more readable code.

                      hasPineapple *> isFresh *> isNotOnPizza
                      

                      This can’t happen if you inline the conditions and errors. Hiding the specifics is desirable if it’s behind a good function name. Contrast:

                      do
                        when (foo1 `bar1` baz1) $ Left "tastes dull"
                        when (foo2 `bar2` baz2) $ Left "could be fresher"
                        when (foo3 `bar3` baz3) $ Left "heathen"
                      
                      1. 5

                        This is a good example of the tension between naming things and keeping the code local. I’d say it depends on what the emphasis should be. Your 3-function chain at the beginning is perfect for showing the big picture at a glance. The expanded version below, however, is better at showing the internals. The question is, which is more important in any given case?

                        1. 3

                          Big picture, but possibly only when the language gives you the tools to do so comfortably.

                          A delightful feature of Haskell that’s not made it to any other language I use is where clauses, where you can have local definitions after the function body. It’s essentially an inverted let..in.

                          f x = hasPineapple *> isFresh *> isNotOnPizza
                            where hasPineapple = etc
                                  isFresh = etc
                                  etc
                          

                          This allows you to define these bindings at the scope of the outer function, and also hide them below the main function body. In use it acts as if to say “read the main body first, and you can drill down to any of the specifics if/when you care about them”.

                          On the other hand in a language like TypeScript you need to define these bindings above the main body. (You could make an exception for hoisted functions but the scope is still unclear and not all bindings will be functions.)

                        2. 3

                          hasPineapple *> isFresh *> isNotOnPizza

                          I can’t parse this. What is it meant to express?

                          1. 1

                            It’s Haskell’s applicative functor syntax. In fp-ts it’d be:

                            pipe(hasPineapple, apSecond(isFresh), apSecond(isNotOnPizza))
                            

                            It’s similar to monadic bind (which is sort of found in other languages such as JavaScript’s Promise.then and Rust’s Result.and_then). The difference is that here, with the weaker applicative dependency, the code needn’t run sequentially, and we don’t care about the result on one side of the operator assuming it succeeds.

                            If we imagine their types are Either e a, then this will either give us back isNotOnPizza’s a (which we don’t really care about in this example), or the left-most e, representing failure. Here are some REPL-friendly examples:

                            -- Right 'y'
                            Right 'x' *> Right 'y'
                            
                            -- Left 'e'
                            Left 'e' *> Right 'y'
                            
                            -- Left 'e'
                            Right 'x' *> Left 'e'
                            

                            If the types were Validation, then the same code would result in us collecting all the errors in say a list instead of failing fast.
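
                            Roughly (a sketch using the Validation type from e.g. the validation package):

                            -- both failures are collected, rather than failing fast on the left-most
                            Failure "a" *> Failure "b"   -- Failure "ab"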

                            Applicative syntax is also extremely pleasant for writing recursive descent parsers.
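
                            For instance, with applicative parser combinators in the style of parsec/megaparsec (char and number assumed to be library-provided), the parser’s shape mirrors the grammar:

                            -- parse a pair like "(3,4)"
                            pair = (,) <$> (char '(' *> number) <*> (char ',' *> number <* char ')')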

                            1. 1

                              I’m sorry, I still can’t parse the actual intent here. When I read that expression I see two boolean qualifiers (hasPineapple, isFresh) which make sense to apply to a given value (a pizza). But then there’s this parameterized qualifier (isNotOnPizza) without any apparent parameter. Is what not on pizza?

                          2. 1

                            but for chains like this you need languages that support Result-like types well

                            1. 2

                              It’s becoming more common. Rust’s “enums” come to mind. I’m positive there are others in more mainstream, non-functional languages.

                              1. 3

                                Wider use of algebraic data types can’t come soon enough.

                          3. 1

                            And while nested loops can rarely be collapsed, I’ve seen many conditionals that could be flattened.

                            I tend to think of error handling as a special case, where the ideas of structured programming must be suspended. Errors are exceptional states that usually require execution to be aborted, which is why they are often handled using non-structured jumps, like exceptions or gotos.

                            Outside of error handling, I am not sure if early returns are a good idea in general. Compare:

                            void UpdateTheme()
                            {
                                if (!IsThemeActive) return;
                                const bool bThemeActive = IsThemeActive();
                                g_dlv->UpdateTheme(bThemeActive);
                                g_elv->UpdateTheme(bThemeActive);
                            }
                            

                            and

                            void UpdateTheme()
                            {
                                if (IsThemeActive) {
                                    const bool bThemeActive = IsThemeActive();
                                    g_dlv->UpdateTheme(bThemeActive);
                                    g_elv->UpdateTheme(bThemeActive);
                                }
                            }
                            

                            I recently changed the former to the latter, because I think the intent and logical structure are more transparent.

                            1. 5

                              In your specific example I think the before and after are equivalently clear, but in the general case early returns for “exceptional” cases are more readable.

                              Align the happy path to the left edge

                              The nesting forces you to keep more context, and more distant context, in your short-term memory.

                              While technically speaking you could say the same of early returns (all the previous early returns are context, after all), in practice this isn’t really the case, because the early return style allows you to take advantage of the knowledge “ok, there’s a bunch of crap that could go wrong, and we’re dealing with each of them, and now it’s all done and we can handle the main case” – which is an easier way to think about things.

                              It’s also nice to have it highlighted that way in code with a consistent structure. In the nested version, your “main logic”, the happy path, is this deeply indented little branch in the middle of a bunch of other code or, even worse, split up across multiple such branches.

                              1. 1

                                in the general case early returns for “exceptional” cases are more readable.

                                I agree. I suppose I should say that in my specific example, it isn’t really an error or an exceptional case that IsThemeActive is null. That it is non-null is simply a condition for the following three lines of code to be executed. The benefit of using a conditional statement rather than an early return is that it is harder for the reader to miss the condition, and it is easier to refactor the code, e.g. if one were to move the block into another function.

                                My general point is that early returns and gotos are great when they’re needed, but that I’m not sure whether it is a good idea to use them when a normal conditional statement would fit just as well. If the only reason an early return is used is to avoid another level of indentation, then perhaps it actually makes the code less clear and harder to understand. Indentation highlights the logical structure of the code visually in a way that non-structured control flow tends to obscure.

                                1. 1

                                  The thing is that indentation is a pretty good proxy for state, and state is absolutely worth minimizing. It’s not an exceptional case that IsThemeActive is false, but if the majority of the logic of a block of code assumes that IsThemeActive is true, then it’s good if you can eliminate the counterfactual condition early-on, and therefore be able to drop that state going forward.

                                  And I don’t think early returns and gotos are really comparable. Early returns work within the rules of the call stack, but gotos can do basically whatever.

                              2. 5

                                Errors are exceptional states that usually require execution to be aborted, which is why they are often handled using non-structured jumps, like exceptions or gotos.

                                Is it exceptional if an HTTP GET request fails? Or a disk read? Or a JSON parse of untrusted data? Or an execution of a subprocess? Or a function call that takes user input? Assuming you’re programming with syscalls – errors are absolutely normal, equivalent to happy-path control flow, and should be returned to callers same as a successful result.

                                UpdateTheme

                                YMMV, but I wouldn’t approve a PR that changed your first version to the second. Early returns are God-sends! They’re one of the best ways to reduce the amount of state that human beings need to maintain as they read and model a block of code.

                                1. 2

                                  errors are absolutely normal, equivalent to happy-path control flow

                                  The difference is that errors usually require the entire execution to be aborted early, and they may arise in different parts of the logical structure of a function. Exceptions are so useful because they cater to this specific, but very common need. Early returns are another way of handling it.

                                  Early returns are God-sends! They’re one of the best ways to reduce the amount of state that human beings need to maintain as they read and model a block of code.

                                  On the one hand, yes. Outside of error handling, I find early returns to make the code more clear when the execution of an entire function depends on a variety of preconditions.

                                  On the other hand, some problems with early returns are that

                                  1. they’re easy to miss, and
                                  2. they require the reader to carefully read all the code in order to understand the logical structure and control flow.

                                  I think both points apply to the code sample I shared above. In the second example, it is clear visually that the execution of the indented lines depends on some condition. One doesn’t even really have to read the code. It is also easier to refactor.

                                  There is a presentation about structured programming by Kevlin Henney, which influenced some of my thoughts on this.

                                  1. 1

                                    The difference is that errors usually require the entire execution to be aborted early, and they may arise in different parts of the logical structure of a function. Exceptions are so useful because they cater to this specific, but very common need.

                                    I bet we work in pretty different programming contexts, and I’m sure that rule makes sense in yours. But in my world, there is essentially no situation where an e.g. network request error, or input validation error, or whatever else, should abort execution at any scope beyond the current callstack.

                                    On the other hand, some problems with early returns are they’re easy to miss, and they require the reader to carefully read all the code in order to understand the logical structure and control flow.

                                    Early returns model error handling as normal control flow. If errors are normal, and an error is no different than any other value, then readers will naturally need to do these things to understand some code. They have to do that anyway! And exceptions actually make understanding control flow harder rather than easier, I think. With early returns you don’t get any surprises — the function will return when it says return. But with exceptions, all bets are off — the function can return at any expression.

                            1. 31

                              This is as opposed to languages where conformance must be explicitly declared somehow, either with a formal subclass relationship (the class-based portion of C++) or by explicitly declaring conformance to an interface (Java). Not needing to formally declare conformance is powerful because you can take some pre-existing object, declare a new interface that it conforms to, and then use it with your interface.

                              There is a third option: things like Rust’s traits, Swift’s protocols or Haskell’s typeclasses, all of which are like a post-hoc version of Java interfaces. You’re effectively advocating for dynamic/structural typing because it addresses the expression problem. That’s not wrong, but there are ways to do it in more statically/nominally typed systems too.

                              1. 4

                                Even Go, which is not noted for its expressive type system, does this.

                                1. 1

                                  I’m not familiar with Rust’s traits or Swift’s protocols. For Haskell’s type classes, if you want to extend a predefined type to conform to a new type class, you would need to newtype it with that type class, which is still inconvenient, as you need to call existing functions with that predefined type under a Monad that wraps the newtype.

                                  1. 14

                                    if you want to extend a predefined type to conform to a new type class, you would need to newtype it with that type class

                                    You do not need to do this at all.
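
                                      A minimal illustration (the class is invented here): you can declare a brand-new class and give an instance for a long-predefined type directly, no newtype required:

                                      class Describable a where
                                        describe :: a -> String

                                      -- Bool predates this class, yet can conform after the fact
                                      instance Describable Bool where
                                        describe True  = "yes"
                                        describe False = "no"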

                                    1. 3

                                      Seconding this, although if you didn’t define either the type or the typeclass you get into orphan instance territory.

                                      1. 2

                                        I stand corrected. Thanks. I didn’t have enough coffee. You need newtype only if you want to further customize the type.

                                      2. 4

                                        In Haskell, there are no such limitations, as others mentioned. You can define as many instances as you want, as long as they don’t clash when imported.

                                        In fact, the limitation you’re describing is that of OOP interfaces! It is them that require writing adapters all the time if the class itself does not implement an interface.

                                        Rust does have a limitation: instances must be written either alongside the type definition, or alongside the trait definition. Less flexible than Haskell, but still much better than OOP interfaces.

                                    1. 15

                                      Now Go users can clamour for HKTs just like Rust users.

                                      1. 3

                                        What’s an HKT?

                                        1. 7

                                          To elaborate on the other comment - higher kinded types refer to being able to make other parts of the type generic - we can talk about mapMaybe : (a -> b) -> Maybe<a> -> Maybe<b> but without higher kinded types, you can’t talk about genericMap : Mappable m => (a -> b) -> <m><a> -> <m><b> where m could be Maybe or List or Array or IO. In Haskell, we have the Functor class:

                                          class Functor f where
                                              fmap :: (a -> b) -> f a -> f b
                                          

                                          Any type f which can implement this interface, and obey a few simple laws (fmap identity is just identity, so the shape of the structure isn’t changed by fmap), is a Functor:

                                          instance Functor Maybe where
                                              fmap f Nothing = Nothing
                                              fmap f (Just a) = Just (f a)
                                          
                                          instance Functor [] where -- Instance for lists
                                              fmap f xs = map f xs
                                          
                                          etc.
                                          

                                          We can now write algorithms which generalise over things which are Functors, or Monads, such as

                                          -- Run some monadic action N times
                                          replicateM :: Monad m => Int -> m a -> m [a] 
                                          
                                          -- Run an IO action three times
                                          >>> replicateM 3 (putStrLn "Hello!") 
                                          Hello!
                                          Hello!
                                          Hello!
                                          [(),(),()] -- The [()] returned by replicateM - since (putStrLn "..") has type IO () 
                                                     -- the Hello!s are the result of evaluating those actions
                                          
                                          -- Not super useful for Maybe, but it still works
                                          >>> replicateM 3 (Just True)
                                          Just [True,True,True] -- Shows that the Maybe monad is the “fail if anything fails” monad, not just a fancy null

                                          >>> replicateM 3 Nothing
                                          Nothing

                                          -- make all combinations of the elements of a list of length three
                                          >>> replicateM 3 "ABC"
                                          ["AAA","AAB","AAC","ABA","ABB","ABC","ACA","ACB","ACC",
                                          "BAA","BAB","BAC","BBA","BBB","BBC","BCA","BCB","BCC",
                                          "CAA","CAB","CAC","CBA","CBB","CBC","CCA","CCB","CCC"]
                                          

                                          The implementation of replicateM is defined only once, and uses the behaviour of whichever monad you provide it with.

                                          Not sure if that cleared anything up, but the takeaway is probably the

                                          genericMap : Mappable m => (a -> b) -> <m><a> -> <m><b>
                                          

                                          example, where you want to be able to talk not just about generic values within a type, but also generic structures themselves, where you may not care what their generic parameters actually are, as in this case.

                                          1. 3

                                            I appreciate the detailed explanation, although I have encountered HKTs recently while learning OCaml - I just didn’t recognize the acronym (and the results of a Google search were dominated by Hong Kong Telecom).

                                          2. 2

                                            Higher kinded types

                                          3. 1

                                            And TypeScript!

                                          1. 1

                                            In my opinion, server settings are where NixOS shines the most!

                                            I was curious about that one recently. On a Mac if I need to install something, I just do it, and if it’s big enough I’ll check back in 10min - it mostly works (ignoring all the other problems). Maybe run gc once a week and clean up 40GB of trash. But I wouldn’t want to attempt that on RPi-class hardware. How do people deal with that? Some external compilation service/cache? I mean cases where it turns out you’re about to compile LLVM on your underpowered Celeron-based NAS.

                                            1. 3

                                              Some external compilation service/cache?

                                              Basically yeah, it’s pretty easy to set up and defer to something else to build: https://sgt.hootr.club/molten-matter/nix-distributed-builds/
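
                                              The gist on the NixOS side is a couple of options pointing at a beefier machine reachable over SSH (host details invented; see the link for the full setup):

                                              # configuration.nix on the underpowered machine
                                              nix.distributedBuilds = true;
                                              nix.buildMachines = [{
                                                hostName = "big-desktop";   # the machine that actually compiles
                                                system   = "x86_64-linux";
                                                maxJobs  = 8;
                                              }];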

                                              1. 1

                                                Unless I’m misunderstanding, you want to target revisions of nixpkgs channels which are already built as per: https://status.nixos.org

                                                1. 1

                                                  I’m on a rolling release, but it’s not perfect. Specifically, some updates are not built yet at the time of installation. Also, changing some configs means recompiling anyway (for example if you want to enable JIT in Ruby).

                                                  1. 2

                                                    Specifically, some updates are not built yet at the time of installation.

                                                    I guess for something like a Raspberry Pi your best option would be to stick to a release instead of nixos-unstable. For unstable or configs which are not in upstream caches you’d need some kind of external cache (which could be your desktop machine), yes.

                                                    One pitfall I ran into when using a stable release was using release-21.11, which also includes upgrades which are not yet built; switching to nixos-21.11 solved that.

                                              1. 8

                                                Wow.

                                                It’s disappointing how slow the roll-out of fast fibre broadband in the UK has been. A couple of years ago I was living in central London, with 67Mb/s the top download speed I could get. Of course, this was asymmetric, so the upload speed was even worse. After several months, Hyperoptic wired the building up with fibre and I could get a symmetric gigabit connection, which was fantastic.

                                                Then I had to move slightly further out, still in the London area and still with a fibre connection, but now the best I can get is asymmetric 550Mb/s down / 35Mb/s up. Yes, this is still really fast, but… it’s so much worse than it could be!

                                                1. 6

                                                  In Cambridge, I have BT’s FTTP package, which is 900 Mb/s down, 110 Mb/s up. I don’t know why they do the asymmetric thing, I’d more happily pay for a 500 Mb/s symmetric link. CityFibre is rolling out a parallel fibre network, though it doesn’t seem reliable. Folks I know using it frequently report downtime and often have poor quality in video calls, and so I suspect that they’re not getting anything like their promised 1000Mb/s symmetric bandwidth.

                                                  Generally, with 900Mb/s downloads, the bottleneck is elsewhere. A lot of servers top out at 200-400 Mb/s, so with wired GigE I can make a second fast connection somewhere else but can’t get more speed from a single download location. With 802.11ac, the WiFi is more often a bottleneck than the wired connection. I don’t have any 11ax hardware yet, in theory it should move the bottleneck back.

                                                  Upgrading the wiring in my house to handle more than GigE is probably a lot of effort, so I doubt that I’d get much benefit from a faster connection - I only upgraded the switches from 100Mb/s to 1Gb/s a couple of years ago after GigE equipment prices dropped to the dirt-cheap price that I paid for the 100Mb/s hardware I’ve had for 10-15 years. 10GigE switches seem to cost about 50 times as much as 1GigE ones, so I’m in no hurry to upgrade.

                                                  I remember the upgrade from 2400 baud to 14.4 Kb/s and then to 28.8 Kb/s as big jumps that made it possible to load images on web pages by default most of the time. The jump to a 512 Kb/s cable modem in a shared house was a huge improvement: first because it was always on, and second because it meant that downloading entire videos or Linux ISOs was feasible (though with some traffic shaping at the router, especially so that someone using BitTorrent didn’t saturate the upstream and prevent ACK packets getting through for everything else. I learned to use PF / ALTQ on OpenBSD from one of my housemates solving that problem). I was living with geeks, and so when the 1 Mb/s option came along we jumped on it and had enough spare bandwidth that we could listen to decent-quality Internet radio. I did set up a repeater for Radio Paradise so that we weren’t using half of the bandwidth to all download the same stream though.

                                                  I think the provider (NTL, later Virgin Media) upgraded us to 5 Mb/s and then 10 Mb/s at the same price. That was, again, a big jump because we didn’t need to restrict usage at all. I stayed on the 10 Mb/s connection (by then living by myself) as it went from the most expensive package to the cheapest, and as the cheapest connection went from 10 to 20 to 30 Mb/s. Streaming video came out around then and Virgin Media did some annoying rate limiting, which meant if you watched an hour of HD video at peak times you’d be throttled for a few hours. They stopped that after a year or so.

                                                  I think I stayed on 30 Mb/s until moving here. I moved from the cheapest FTTP offering to the most expensive during lockdown when working from home and wanting to make sure the Internet wasn’t a bottleneck (again, mostly for upstream) but 99% of the time I don’t notice the difference. We can play cloud games on Xbox game pass and stream HD video at the same time, but I think you could do that on the 56 Mb/s connection too. Backing up from my NAS to the cloud is faster and downloading games from Game Pass or gog.com is faster (a lot faster from gog.com), but I increasingly don’t install games locally given how good the streaming option is (Game Pass pops up a thing saying ‘Install this game for the best experience’, but I don’t consider worse graphics and longer loading times from my Xbox One S versus the Xbox Series X in the cloud to be the best experience).

                                                  Maybe 3D AR things will drive up the demand again, but since we passed 50Mb/s we’ve been well into diminishing-returns land, unless you have a large family that all wants to watch different HD films at the same time.

                                                  1. 4

                                                    I don’t know why they do the asymmetric thing,

                                                    Sometimes the underlying infrastructure is asymmetrical, e.g. with GPON. But mostly, I guess, the big end-user ISPs optimize their networks for incoming traffic from big content providers.

                                                    1. 1

                                                      I suspect it’s also to discourage people from using residential connections to operate servers.

                                                      1. 2

                                                        It’s usually because residential users tend to consume content rather than produce it. Offering a symmetrical 200 Mbit connection is generally less useful than a 300/100 Mbit connection. This also lets ISPs cut costs more as they try to use available channels for downlink rather than uplink. There are limits to how far this goes, as you definitely don’t want to saturate your uplink while trying to consume content, but that’s typically why.

                                                        1. 2

                                                          This is exactly right. People enshrine the approaches they are currently taking into technologies and solutions. This means asymmetry was an engineering shortcut to maximize the usefulness of the technology for what people actually needed.

                                                          And then the rest of us upload images to the cloud and actually get around to saturating that upload, dreaming of a world with symmetric links.

                                                        2. 1

                                                          Also the reason why you can’t get static IPv6 prefixes from most providers.

                                                    2. 3

                                                      Hello from the North of England! I’m jealous; there are certainly benefits to moving away from London (I lived there for 15 years) but when it comes to internet speeds the saying “it’s grim up north” certainly rings true!

                                                      Speedtest.net reports 25 Mb/s download, 5 Mb/s upload, and 29 ms ping times for my current connection. And that’s a fantastic improvement since I moved 5 months ago: at my old house a few miles away the fastest connection money could buy was 19 Mb/s down, and just over 1 Mb/s upload. I work from home, and Zoom calls can be rough when others in the house are playing online games.

                                                      Edit: fixed MB/s -> Mb/s (oops)

                                                      1. 2

                                                        I still only get 28Mb/s down in zone 3 of London. Our infrastructure is generally awful.

                                                        By the way you should know there’s a big difference between “Mb” and “MB”.

                                                        1. 1

                                                          I’m not sure where Speedtest.net’s edge is, but 29 ms ping times can be killer for video calls depending on the latency to Zoom’s closest video edge. Is the 29 ms over WiFi?

                                                          1. 1

                                                            That’s interesting. Yes, it’s over Wi-Fi. I don’t own any computers with a physical network port any more, but I can try to see if I can do a Speedtest from the router. If I get better ping times from that I’ll try to stretch a cable via the loft to my office and buy a USB-C network dongle.

                                                            1. 2

                                                              On my home network, speed tests tend to read a latency of 35ms ping under load on WiFi. Latency stays lower when I’m using Ethernet (and I’ve corroborated similar numbers using iPerf.) Zoom performance is way better on my home network with an ethernet connection even if I’m the only one using it (many fewer stutters or freezes). When both my partner and I are using Zoom over Wifi, the experience is pretty terrible unless one of us gets on Ethernet (since it’s easy to have frames collide on Wifi, causing retries and latency on the RTP “connections” Zoom uses to send video).

                                                              1. 1

                                                                Pinging your gateway may also give you an approximate picture of how much latency your Wi-Fi leg is contributing to your score, but with less effort.

                                                                1. 1

                                                                  Thanks, that’s a great idea. Running mtr from my laptop to the domain of my ISP yields this for the first two hops:

                                                                                                         Packets               Pings
                                                                   Host                                Loss%   Snt   Last   Avg  Best  Wrst StDev
                                                                   1. 192.168.1.1                       0.0%    67   26.6   7.5   1.8 124.2  15.9
                                                                   2. fritz.box                         0.0%    67    3.2   5.2   2.5  21.0   3.6
                                                                  

                                                                  192.168.1.1 is a TP-link mesh-networking thing that’s plugged into fritz.box (my ADSL router) with a short cat-5 cable.

                                                                  Walking through the ADSL router’s options looking for a speed-test option it looks like it too supports mesh, so I will try to make it the primary. That might let me discard a hop some of the time? I can see the router itself from half my house, but tend to connect to the mesh. (It has a cooler network name ;-) )

                                                                  1. 1

                                                                    Does your ADSL router have an AP as well? If not then this is standard. Your packet first goes to the AP which then pushes your packet to the router and then to the upstream ISP router.

                                                                    Try running an mtr to a remote and see how much time is spent getting to your AP.

                                                          2. 2

                                                            Honestly the state of broadband in the capital was extremely dire 8 years ago. It doesn’t surprise me that you’re not having a good time but I am impressed you’re getting those speeds.

                                                            I was on 16Mbit and it would die every night. 3 places in wildly different areas had the same awful oversubscribed ADSL thing. I even ranted about it at length: http://blog.dijit.sh/the-true-state-of-london-broadband

                                                          1. 1

                                                            I’ve been using TypeScript for about a month, and flow typing is one of my favorite features. (And now I know what it’s called!)

                                                            An interesting feature of TS is user-defined predicates that narrow types, like

                                                             function isFoo(x: Foo | Bar): x is Foo { return x instanceof Foo; }

                                                            The return type x is Foo means the function returns a boolean, but with a compile-time side effect that if it returns true the compiler knows that the argument’s type is Foo.

                                                            1. 3

                                                              User-defined type guards are unsafe so you need to be careful with them. This’ll typecheck in a fully strict environment:

                                                              const absurd = (x: unknown): x is number => true
                                                              
                                                            1. 11

                                                              I’ve been learning Rust the past couple weeks strictly for hobbyist reasons. I’m coming from the dynamic typing world of Python and JavaScript. While I’m familiar with the strongly typed paradigms of C and Java, I figured spending my time in Rust would be more beneficial.

                                                               There are a lot of individual things I like in Rust, but programming in it in general feels like I’m trying to learn a new sport and I can’t quite seem to get my sense of balance. E.g., “I’ve been out on the water a hundred times now, so why can’t I comfortably stand on the surfboard yet?” type of feeling. If that makes sense.

                                                              I’ll keep plowing forward with it though. I’ve got The Rust Programming Language book from No Starch Press, and I have been spending a lot of my time in the Rust Book online. I’ll find my balance, I just wish I could find it sooner than later.

                                                              1. 13

                                                                While I’m familiar with the strongly typed paradigms of C and Java

                                                                 C is barely typed at all, and Java is a weird hybrid of a poor type system and just being dynamically typed anyway. I wouldn’t call either “strongly typed”.

                                                                1. 4

                                                                   C is barely typed at all, and Java is a weird hybrid of a poor type system and just being dynamically typed anyway. I wouldn’t call either “strongly typed”.

                                                                  Maybe I should have chosen “static” instead, as I also referred to “dynamic” typing rather than “weak” typing. Or maybe type safety would have been even a better comparison? Shrug.

                                                                  1. 22

                                                                    I wouldn’t worry too much, internet fights about what counts as “static” or “strong” or “typed” are generally not worth your time.

                                                                    1. 9

                                                                      The key here is that there is no single definition of “strong” or “static” types. It is a spectrum. For example, Rust has algebraic data types, C++ does not. They are both static and strong in the broad sense, but algebraic data types mean that you can express more things at the type level.

                                                                       It’s not like saying “I’m familiar with C, so I know everything that can be done with static types” is accurate; that, I think, is the point of this response.

                                                                      1. 2

                                                                         It’s not like saying “I’m familiar with C, so I know everything that can be done with static types” is accurate; that, I think, is the point of this response.

                                                                        Indeed.

                                                                  2. 6

                                                                    I get what you’re saying and if I may suggest two things which have helped me pick up rust: writing a small but complete CLI application and watching live or recorded streams (for instance Ryan Levick or Jon Gjengset are formidable tutors)

                                                                    1. 4

                                                                      Maybe you should listen to your gut. Rust is a relatively new language, it is not clear it will stand the test of time.

                                                                      1. 8

                                                                        That’s fair. However, even though I’m struggling to find my balance, I find learning it exciting. I think I’d rather learn as much of the ins and outs as I can and walk away with an informed opinion than bail early. At least I’m having fun, even if I keep falling off the surfboard.

                                                                        1. 6

                                                                           I hope you stick with it. It is a relatively hard-to-master language, IMHO. Java, or any GC language, doesn’t force you to think about ownership the way Rust puts that front and center. The ownership model has a dramatic effect on the kind of data structures you can and should design. And it affects the way in which you use and share data across the entire application.

                                                                          I believe the tradeoff is worth it, because you can catch certain classes of memory safety bugs at compile time, which can lurk undetected for years in other codebases. These are the kind of bugs that cause big problems later.

                                                                          1. 2

                                                                            I hope you stick with it.

                                                                            I plan on it.

                                                                             It is a relatively hard-to-master language, IMHO. … The ownership model has a dramatic effect on the kind of data structures you can and should design. And it affects the way in which you use and share data across the entire application.

                                                                             This might be the balance I’m struggling to find. A lot of my compiler errors are related to ownership and I keep having to come back to the docs to remind myself how it works.

                                                                      2. 2

                                                                        I made a similar jump. It informed how I learned TypeScript and then made a whole lot more sense once I learned Haskell. Some tidbits that would’ve helped me follow.

                                                                        Static typing, especially in something like TypeScript, can be thought of as an elaborate linter. At times it can be restrictive, but surprisingly often it’s leading you towards better code.

                                                                        Strong static typing as in Rust’s case is like static typing but where your types are accurate to what will be there at runtime, so you’re probably going to need to do some validation or casting at your application boundaries. In Rust’s case also there’s no duck typing, you can’t just carry around data the compiler doesn’t know about at compile-time.

                                                                         Traits are interfaces for shared abstractions across types. I really struggled with this until I saw typeclasses in Haskell. I think learning something relatively simple there like Functor can be instructive. It’s not entirely unlike extending prototypes in JavaScript, but here the compiler can use that information at compile-time to let you write functions which constrain input to support certain things; for example, an Eq constraint would mean “I’ll accept any input for which I can test equivalence”.
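
                                                                         A tiny sketch of that last point (the function is invented here):

                                                                         -- accepts a list of any type for which equality is defined
                                                                         allEqual :: Eq a => [a] -> Bool
                                                                         allEqual []       = True
                                                                         allEqual (x : xs) = all (== x) xs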

                                                                        1. 1

                                                                           There is always duck typing if you want it, in any language. The best way in Rust IMO is trait objects, but you can even get Java-style dynamic typing if you want (with possibly a single use of unsafe in the library providing it).

                                                                        1. 3

                                                                           I have a project-level justfile that does more or less the same thing as the shell script but with all of the ergonomics of a Makefile. Using https://just.systems to automate common Rails, React, and Postgres tasks is extremely convenient.
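
                                                                           For a flavour of it (recipe names invented here), a justfile is just a list of named recipes:

                                                                           # justfile at the project root
                                                                           db-reset:
                                                                               bin/rails db:drop db:create db:migrate

                                                                           test:
                                                                               bin/rails test && yarn test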

                                                                          1. 4

                                                                            I went to click on this and got a full screen j u s t with no ability to scroll on Bromite or Fennec on Android. Was not giving the user any info intentional?

                                                                            1. 3

                                                                              The letters are clickable and link to the github repo.

                                                                              1. 5

                                                                                An underline would maybe help users know this is a link and clickable eyeroll

                                                                            2. 1

                                                                              I just incorporated this at work, would recommend it.

                                                                          1. 1

                                                                            I have that keyboard and wrote my own keymap in qmk.

                                                                             My keymap treats it as a 30% keyboard to minimise discomfort/reaching in my relatively small hands. The gaming layer retains use of the rest of the keys; I’m not sure, on this basis, how I could ever migrate to a physical 30% keyboard.

                                                                            It was originally Qwerty but is now based upon Colemak-DHm, which I quite like so far. The gaming layer keeps Qwerty so I don’t have to rebind everything I play. This works well except I can’t type very quickly in multiplayer any more: I either need to take a moment to context-switch back to Qwerty, or have to do a layer dance to first go to Colemak and then back again.

                                                                            I don’t love where Cmd is, or rather how it’s activated on the controls layer via oneshot.

                                                                            1. 6

                                                                              I think there are two primary reasons to learn and use a language, library, paradigm, etc.

                                                                              The first and most common is that it’s where the job market is. This might be why one chooses to learn JavaScript for example, particularly early in their career. Anecdotally, PHP roles in my city seem to be just about the worst paying, and if memory serves the Stack Overflow survey backs this up.

                                                                               The second is for enjoyment and/or intellectual intrigue. I don’t think this can apply to such a middle-of-the-road, thoroughly mainstream language, though perhaps I’m wrong.

                                                                              I don’t mean to be rude to anyone who does like PHP by the way, I’d just advise against learning it in favour of almost any alternative for either of the above reasons.

                                                                              1. 34

                                                                                I had to stop coding right before going to bed because of this. Instead of falling asleep, my mind would start spinning incoherently, thinking in terms of programming constructs (loops, arrays, structs, etc.) about random or even undefined stuff, resulting in complete nonsense but mentally exhausting.

                                                                                1. 12

                                                                                  I dreamt about 68k assembly once. Figured that probably wasn’t healthy.

                                                                                  1. 4

                                                                                    Only once? I might have gone off the deep end.

                                                                                    1. 3

                                                                                      Just be thankful it wasn’t x86 assembly!

                                                                                      1. 4

                                                                                        I said dream, not nightmare.

                                                                                        1. 2

                                                                                          Don’t you mean unreal mode?

                                                                                          being chased by segment descriptors

only got flat 24-bit addresses, got to calculate the right segment bases and offsets, faster than the pursuer

                                                                                    2. 7

                                                                                      One of my most vivid dreams ever was once when I had a bad fever and dreamed about implementing Puyo Puyo as a derived mode of M-x tetris in Emacs Lisp.

                                                                                      1. 21

                                                                                        When I was especially sleep-deprived (and also on call) in the few months after my first daughter was born, I distinctly remember waking up to crying, absolutely convinced that I could solve the problem by scaling up another few instances behind the load balancer.

                                                                                        1. 4

                                                                                          Oh my god.

                                                                                          1. 2

Wow, that’s exactly what Tetris syndrome is about. Thanks for sharing!

                                                                                        2. 5

                                                                                          Even if I turn off all electronics two hours before bed, this still happens to me. My brain just won’t shut up.

                                                                                          “What if I do it this way? What if I do it that way? What was the name of that one song? Oh, I could do it this other way! Bagels!”

                                                                                          1. 5

                                                                                            even undefined stuff

                                                                                            Last thing you want when trying to go to sleep is for your whole brain to say “Undefined is not a function” and shut down completely

                                                                                            1. 5

                                                                                              Tony Hoare has a lot to answer for.

                                                                                            2. 2

Different but related: I’ve found out (the hard way) that I need to stop coding one hour before sleeping. If I go to bed less than one hour after coding, I spend the remainder of that hour unable to sleep.

                                                                                              1. 1

I know this all too well. I’d never heard of Tetris syndrome before. I need to investigate this now, right before going to bed.

                                                                                              1. 2

                                                                                                Why would one use this over FairEmail?

                                                                                                1. 7

                                                                                                  UX. I’ve tried FairEmail and everything about the interface just felt wrong. I can’t point at exactly what. Just everything.

                                                                                                  1. 4

                                                                                                    K9 is fully open source. You can download the code, edit it, and recompile it yourself, no sweat.

                                                                                                    K9 has had an awful lot of bugs ironed out of it over the last N years, and there are config workarounds for weird email servers. Many people never encounter a weird email server, or don’t recognize it as an issue.

                                                                                                    1. 3

I don’t like FairEmail’s “pro” feature system. Besides the interface being worse for me than K-9’s (personal preference there), I have K-9 set up to notify me only when I receive email from a contact. That keeps my distractions low while still getting good prioritization. The last time I tried FairEmail, setting up something similar was more involved and required the “pro” features.

                                                                                                      1. 1

I assume it’s because it’s one of the few Android email clients with PGP encryption.

                                                                                                      1. 14

This was an absolutely brilliant article! It was fantastically well researched and written by someone with expert knowledge of the domain. I learned so much from reading it.

The arguments about representing JSON objects in SQL were not persuasive to me. I don’t really understand why this would be desirable. I see the SQL approach as a more statically typed one, where you would process JSON objects and ensure they fit a predefined structure before inserting them into SQL. For a more dynamic approach, where you just throw JSON objects into a database, you have MongoDB. On that note, I think the lack of union types in SQL is a feature more than a limitation, isn’t it?
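To illustrate what I mean by that statically typed approach, here’s a minimal sketch (TypeScript; the table shape and names are hypothetical) of checking a JSON object against a predefined structure before it ever reaches an INSERT:

```typescript
// The table's shape, fixed up front, as a plain type.
interface UserRow {
  id: number;
  email: string;
}

// Reject anything that doesn't fit before it reaches the database.
function toUserRow(value: unknown): UserRow {
  if (typeof value !== "object" || value === null) {
    throw new Error("expected a JSON object");
  }
  const v = value as Record<string, unknown>;
  if (typeof v.id !== "number" || typeof v.email !== "string") {
    throw new Error("JSON does not fit the users schema");
  }
  return { id: v.id, email: v.email };
}
```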

                                                                                                        Excellent point about JOIN syntax being verbose, and the lack of sugar or any way to metaprogram and define new syntax. The query language could be so much more expressive and easy to use.

                                                                                                        It totals ~16kloc and was mostly written by a single person. Materialize adds support for SQL and various data sources. To date, that has taken ~128kloc (not including dependencies) and I estimate ~15-20 engineer-years

I think these line counts say a lot! The extra work of fulfilling all the criteria of the SQL standard isn’t necessary for implementing a database system. A more compact language specification would allow implementations to be shorter and make the language much easier to learn.

                                                                                                        The overall vibe of the NoSQL years was “relations bad, objects good”.

The whole attitude of the NoSQL movement put me off it a lot. Lacking types and structure never sounded like an improvement to me - more like people just wanted to skip the boring work of declaring tables and such. But this work is a foundation for things running smoothly, so I think the more dynamic approach will often bite you in the end. Then the author explains more about GraphQL, and honestly it sold me on it: after reading this I would be very open to using GraphQL in future rather than SQL.

                                                                                                        Strategies for actually getting people to use the thing are much harder.

This is a frustrating part of innovation in programming, but honestly I believe the ideas he has presented are too significant an improvement for people not to start using them.

                                                                                                        1. 7

                                                                                                          If you have data encoded in a JSON format, it often falls naturally into sets of values with named fields (that’s the beauty of the relational model) so you can convert it into a SQL database more or less painlessly.

                                                                                                          On the other hand, if you want to store actual JSON in a SQL database, perhaps to run analytical queries on things like “how often is ‘breakfast’ used as a key rather than as a value”, it’s much more difficult, because “a JSON value” is not a thing with a fixed representation. A JSON number might be stored as eight bytes, but a JSON string could be any length, never mind objects or lists. You could create a bunch of SQL tables for each possible kind of JSON value (numbers, strings, booleans, objects, lists) but if a particular object’s key’s value can be a number or a string, how do you write that foreign key constraint?

                                                                                                          Sure, most applications don’t need to query JSON in those ways, but since the relational model is supposed to be able to represent any kind of data, the fact that SQL falls flat on its face when you try to represent one of the most common data formats of the 21st century is a little embarrassing.

                                                                                                          That’s what the post means by “union types”. Not in the C/C++ sense of type-punning, but in the sense of “a single data type with a fixed number of variants”.
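To make that concrete, the whole of JSON fits in one such type. A sketch in TypeScript (mine, not from the post):

```typescript
// One data type with a fixed number of variants, two of them recursive.
// TypeScript can express this directly; SQL has no equivalent construct.
type Json =
  | null
  | boolean
  | number
  | string
  | Json[]
  | { [key: string]: Json };

// "breakfast" as a key in one place and as a value in another, both well typed:
const doc: Json = [{ breakfast: "eggs" }, "breakfast"];
```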

                                                                                                          1. 4

                                                                                                            A JSON number might be stored as eight bytes

                                                                                                            Sorry to nitpick, but a JSON number can be of any length. I think what you were thinking of was JavaScript, in which numbers are represented as 64-bit values.

                                                                                                            1. 1

No, the JSON standard provides for a maximum number of digits in numbers. Yes, I know this because of a bug I hit after assuming JSON numbers could be any length.

Edit: I stand corrected - I’m certain I saw something in the standard about a limit (I was surprised) but it seems there isn’t. That said, various implementations are allowed to limit the length they process. https://datatracker.ietf.org/doc/html/rfc7159#section-6

                                                                                                              1. 5

                                                                                                                Which standard? ECMA-404 doesn’t appear to have a length limitation on numbers. RFC 8259 says something much more specific:

                                                                                                                This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision. A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.

                                                                                                                In fewer words, long numbers are syntactically legal but might be incorrectly interpreted depending on which implementation is decoding.
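To see what that means in practice, this is what any IEEE 754 binary64 decoder (a JavaScript/TypeScript runtime, say) does with numbers like the RFC’s examples:

```typescript
// Syntactically legal JSON numbers, silently approximated on decode:
JSON.parse("3.141592653589793238462643383279"); // => 3.141592653589793
JSON.parse("1E400");                            // => Infinity
JSON.parse("9007199254740993");                 // => 9007199254740992 (2^53 + 1 rounds down)
```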

                                                                                                                1. 1

The ECMA-404 standard doesn’t talk about any numerical limits at all, and RFC 7159 talks about implementation-specific limitations, which (a) is kinda obvious, because RAM isn’t unlimited in the real world, and (b) doesn’t buy you anything if you are implementing a library that needs to deal with JSON as it exists in the wild.

So yes, JSON numbers can be of unlimited magnitude and precision, and any correct parsing library had better deal with this.

                                                                                                            2. 5

                                                                                                              Lacking types and structure never sounded like an improvement to me - more like people just wanted to skip the boring work of declaring tables and such.

                                                                                                              To some degree it’s the same as the arguments in favor of dynamically-typed languages. Just s/tables/variable types/, etc.

Also, remember the recent post which included corbin’s (?) quote about “you can’t extend your type system across the network” — that was about RPC, but it applies to distributed systems as well, and the big win of NoSQL originally was horizontal scaling, i.e. distributing the database across servers.

                                                                                                              [imaginary “has worked at Couchbase for ten years doing document-db stuff” hat]

                                                                                                              1. 3

                                                                                                                The whole attitude of the NoSQL movement put me off it a lot. Lacking types and structure never sounded like an improvement to me - more like people just wanted to skip the boring work of declaring tables and such.

                                                                                                                I always thought that NoSQL came about because people didn’t feel like dealing with schema migrations. I’ve certainly dreaded any sort of schema migration that did more than just add or remove columns. But I never actually tried using NoSQL “databases” so I can’t speak about whether or not they actually help.

                                                                                                                1. 13

In practice you still need to do migrations, in the form of deploying your code to write the new column in a backwards-compatible way and then later removing that backwards-compatible layer. The intermediate deployments that allow the new and old code to live side by side, as well as a safe rollback, are required whether you use SQL or not. The only difference is that you don’t have to actually run a schema migration. A downside of this is that it’s much easier to miss what actually turns out to be a schema change in a code review, since there are no explicit “migration” files to look for.
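A minimal sketch of what that looks like in application code (TypeScript; the field names are made up):

```typescript
// Old documents store `name`; new code splits it into `firstName`/`lastName`.
// Until every document is rewritten, both shapes coexist in the database.
interface UserDoc {
  name?: string; // legacy field, still written during the transition for rollback safety
  firstName?: string;
  lastName?: string;
}

function displayName(doc: UserDoc): string {
  // This defensive read path *is* the migration, and no "migration" file flags it in review.
  if (doc.firstName !== undefined) {
    return [doc.firstName, doc.lastName].filter(Boolean).join(" ");
  }
  return doc.name ?? "";
}
```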

                                                                                                                  1. 10

This! You’re basically sweeping dirt under the carpet. One day you’re going to have to deal with it.

                                                                                                                  2. 11

                                                                                                                    In my experience this leads to data inconsistencies and the need to code defensively or otherwise maintain additional application code.

                                                                                                                    1. 9

                                                                                                                      Not if you’re hopping jobs every 1-2 years. If you’re out the door quickly enough, you can honestly claim you’ve never run into any long-term maintainability issues with your choice of technologies.

                                                                                                                    2. 3

                                                                                                                      I always thought that NoSQL came about because people didn’t feel like dealing with schema migrations.

I think that’s unlikely; most NoSQL people probably have no idea what schema migrations are.

                                                                                                                  1. 11

                                                                                                                    My number one worry with encrypted email is that I will lose access to it permanently.

                                                                                                                    My number two worry with encrypted email is that nobody uses it so there’s no point.

                                                                                                                    My number three worry is traffic analysis, but it’s a long way behind one and two.

                                                                                                                    1. 8

I’ve created so many public keys over the years because some thing (like the Ubuntu CoC or “signed” commits on GitHub) requires it, and I’ve lost access to literally every one of them, mostly because I use a public key about once every five years. I would literally never set up email encryption because of your reason number one.

                                                                                                                      1. 16

What doesn’t help is the relative opaqueness of how gpg and its “keyring” work. With ssh it’s easy: I just have a ~/.ssh/id_ed25519 and ~/.ssh/id_ed25519.pub file, and public keys for verification are in ~/.ssh/authorized_keys. That’s pretty much all there is to it.

I’ve never lost access to an ssh key or been confused about how this works. With gpg it’s so much harder.

                                                                                                                        1. 5

                                                                                                                          I agree with this so, so much. SSH is almost trivial to understand, and there are even concrete benefits to setting it up and using it!

                                                                                                                          1. 11

                                                                                                                            It’s simple to use for simple cases but a bunch of things are complicated with SSH, for example:

                                                                                                                            • If a key is compromised, how do I revoke access to it from all of the machines that have it in their authorised-keys file?
                                                                                                                            • If I want to have one key per machine, how do I add that key to all of the machines I want it to access?
                                                                                                                            • How do I enforce regular key rollover?

                                                                                                                            You can do these things with PKI but now your key management is both complicated and very different from ‘normal’ SSH.

                                                                                                                            SSH is also solving a much simpler problem because it only needs to support the online use case. If I connect to a machine with SSH and it rejects my key, I know immediately and it’s my problem. The protocol is interactive and will try any keys I let it, so I can do key rollover by generating a new key, allowing SSH to use both, and eventually replacing the public key on every machine I connect to with the new one. If I send an encrypted email with the wrong key, the recipient gets nonsense.

The offline design is the root cause of a lot of the problems with email. It was designed for a world where the network was somewhat ad-hoc and most machines were disconnected. When my father’s company first got Internet email, they ran an internal mail server with a backup MX provided by their ISP. When someone sent an email, it would go to their ISP; when they dialed in (with a modem, which they did every couple of hours), the backup MX would forward email to them and they’d send outgoing mail, which went via the same multi-hop relaying process. Now, email servers are assumed to be online all of the time and it would be completely fine to report to the sender’s mail client that the recipient’s mail server is not reachable and ask them to try again later. If you define a protocol around that assumption from the start, it’s completely fine to build in an end-to-end key-exchange protocol, and a huge number of the problems with encrypted email go away.

                                                                                                                          2. 1

With SSH, however, it’s trivial to create a new key and get it installed on the new server. Because SSH keys don’t worry about a decentralized “web of trust” the way OpenPGP does, there is no historical or technological baggage requiring you to carry around your SSH keypair. I’ve been through so many SSH keys over the years on personal and corporate systems, yet it has never once bothered me.

                                                                                                                          3. 8

This is one reason I started signing every email I send years ago, and it’s also why I sign every git commit. It forces my key to be a core part of my workflow.

                                                                                                                            1. 3

                                                                                                                              Similarly, I also use gpg to encrypt documents, and to encrypt passwords in pass. You’ll also rarely need to interact with it for Arch packages. Can’t beat its ubiquity.

                                                                                                                            2. 2

If you want, you can make a “master” key that you keep in one place permanently (and at backup locations), and then sign stuff with subkeys signed by the master key. That way you will never lose anything if you lose a subkey but not the master key. Not a great solution, but still helpful.

                                                                                                                          1. 4

My email is hosted with Migadu. I sync emails down to my NAS periodically with offlineimap, after which they’re backed up to Backblaze via Duplicity.

I like this approach because I can swap out my domain name provider, email host, or backup provider with minimal impact on any of the other services.

                                                                                                                            1. 32

When an error in your code base can take down millions of users who depend upon it for vital work, you should:

                                                                                                                              1. Have good CI
                                                                                                                              2. Have extensive tests
                                                                                                                              3. Make small changes at a time
                                                                                                                              4. Have at least one set of extra eyes looking at your changes
                                                                                                                              1. 15
                                                                                                                                1. Make use of language features that push you towards correctness, for example static typing.
                                                                                                                                1. 8

                                                                                                                                  I find it shocking how many people love “dynamic languages”

                                                                                                                                  1. 7

                                                                                                                                    I don’t. There’s a lot of neat tricks you can do at runtime in these systems that would require 10x more work to do at build time, because our build tools are awful and far too difficult to work with. Problem is that we only have the build-time understanding of things while we’re actually programming.

                                                                                                                                    Don’t get me wrong, I disagree with taking this side of the trade-off and I don’t think it’s worth it. But I also realise this is basically a value judgement. I have a lot of experience and would expect people to give my opinions weight, but I can’t prove it, and other rational people who are definitely no dumber than me feel the opposite, and I have to give their opinions weight too.

                                                                                                                                    If our tooling was better (including the languages themselves), a lot of the frustrations that lead people to build wacky stuff that only really works in loose languages would go away.

                                                                                                                                    1. 7

                                                                                                                                      I don’t, because I used to be one of those people. Strong type systems are great if the type system can express the properties that I want to enforce. They’re an impediment otherwise. Most of the popular statically typed languages only let me express fairly trivial properties. To give a simple example: how many mainstream languages let me express, in the type system, the idea that I give a function a pointer to an object and it may not mutate any object that it reaches at an arbitrary depth of indirection from that pointer, but it can mutate other objects?

                                                                                                                                      Static dispatch also often makes some optimisations and even features difficult. For example, in Cocoa there is an idiom called Key-Value Coding, which provides a uniform way of accessing properties of object trees, independent of how they are stored. The generic code in NSObject can use reflection to allow these to read and write instance variables or call methods. More interestingly, this is coupled with a pattern called Key-Value Observing, where you can register for notifications of changes before and after they take place on a given object. NSObject can implement this by method swizzling, which is possible only because of dynamic dispatch.
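For anyone who hasn’t met KVC/KVO: here’s a rough analogue in TypeScript using a Proxy (the names are mine, and this is nothing like the real Cocoa API), just to show the kind of interception that dynamic dispatch makes cheap:

```typescript
// Wrap any object so property writes are observable. This works only because
// property access is dispatched dynamically, much like method swizzling.
function observe<T extends object>(
  target: T,
  onSet: (key: string, value: unknown) => void,
): T {
  return new Proxy(target, {
    set(obj, key, value, receiver) {
      onSet(String(key), value); // notify the observer of the change
      return Reflect.set(obj, key, value, receiver);
    },
  });
}

const person = observe({ name: "Ada" }, (k, v) => console.log(`${k} -> ${v}`));
person.name = "Grace"; // logs: name -> Grace
```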

If your language has a rich structural and algebraic type system then you can do a lot of these things and still get the benefits of static type checking.

                                                                                                                                      1. 2

                                                                                                                                        Regarding your example, honestly I am not 100% sure that I grasp what you are saying.

In something like C++ you can define a constant object and then explicitly mark parts of it as mutable. But I don’t think that quite covers it.

I enjoyed some use of Haskell a few years back and was able to grasp at least some of it. But it gets complicated very fast.

But usually I am using languages such as C# and TypeScript. The former is getting a lot of nice features and the latter has managed to model a lot of JavaScript behaviour.

But I have no problem admitting that type systems are restrictive in their expressiveness. Usually, though, I can work within them without too many issues. I would love to see the features of Haskell, Idris, and others become widely available - but the current languages don’t seem interested in that wider adoption.

                                                                                                                                        1. 3

                                                                                                                                          Regarding your example, honestly I am not 100% sure that I grasp what you are saying.

In something like C++ you can define a constant object and then explicitly mark parts of it as mutable. But I don’t think that quite covers it.

I don’t want an immutable object; I want an immutable view of an object graph. In C++ (ignoring the fact that you can cast it away) a const pointer or reference to an object can give you an immutable view of a single object, but if I give you a const std::vector<Foo*>&, then you are protected from modifying the elements by the fact that the object provides const overloads of operator[] and friends that return const references, but the programmer of std::vector had to do that. If I create a struct Foo { Bar *b; ... } and pass you a const Foo* then you can mutate the Bar that you can reach via the b field. I don’t have anything in the type system that lets me exclude interior mutability.

                                                                                                                                          This is something that languages like Pony and Verona support via viewpoint adaptation: if you have a capability that does not allow mutation then any capability that you load via it will also lack mutation ability.
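For comparison, TypeScript’s Readonly<T> has exactly the hole described above: it is only a shallow view, so interior mutability can’t be excluded there either:

```typescript
interface Bar { n: number }
interface Foo { b: Bar }

function f(foo: Readonly<Foo>) {
  // foo.b = { n: 1 }; // rejected: the top-level field is readonly
  foo.b.n = 42;        // accepted: the readonly view does not extend through `b`
}
```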

But usually I am using languages such as C# and TypeScript. The former is getting a lot of nice features and the latter has managed to model a lot of JavaScript behaviour.

TypeScript is a dynamic language with optional gradual typing, but it tries really hard to pretend to be a statically typed language, with type inference and an algebraic and structural type system. If more static languages were like that then I think there would be far fewer fans of dynamic languages. For what it’s worth, we’re aiming to make the programmer experience for Verona very close to TypeScript (though with AoT compilation and with a static type system that does enough of the nice things that TypeScript does that it feels like a dynamically typed language).

                                                                                                                                          1. 1

                                                                                                                                            I really like the sounds of Verona.

                                                                                                                                        2. 1

                                                                                                                                          Strong type systems are great if the type system can express the properties that I want to enforce. They’re an impediment otherwise.

                                                                                                                                          It’s not all-or-nothing. Type systems prevent certain classes of errors. Tests can help manage other classes of errors. There’s no magic bullet that catches all errors. That doesn’t mean we shouldn’t use these easily-accessible, industry-proven techniques.

Now, static typing itself has many benefits beyond correctness: documentation, tooling, runtime efficiency, and enforcing clear contracts between modules, to name a few. And yes, they do actually reduce bugs. This is proven.

                                                                                                                                        3. 4

We have somewhat believable evidence that CI, testing, small increments, and review help with defect reduction (sure, that’s not the same thing as defect consequence reduction, but maybe a good enough proxy?)

                                                                                                                                          I have yet to see believable evidence that static languages do the same. Real evidence, not just “I feel my defects go down” – because I feel that too, but I know I’m a bad judge of such things.

                                                                                                                                          1. 1

There are a few articles to this effect about migrations from JavaScript to TypeScript. If memory serves, they track the number of runtime errors in production, or bugs discovered, or something else tangible.

                                                                                                                                            1. 1

                                                                                                                                              That sounds like the sort of setup that’d be plagued by confounders, and perhaps in particular selection bias. That said, I’d be happy to follow any more explicit references you have to that type of article. It used to be an issue close to my heart!

                                                                                                                                              1. 1

                                                                                                                                                I remember this one popping up on Reddit once or twice.

Airbnb claimed that 38% of their postmortem-analysed bugs would have been avoidable with TypeScript/static typing.

                                                                                                                                            2. 1

                                                                                                                                              Shrug, so don’t use them. They’re not for everyone or every use case. Nobody’s got a gun to your head. I find it baffling how many people like liquorice.

                                                                                                                                              1. 1

                                                                                                                                                Don’t worry, I don’t. I can still dislike the thing.

                                                                                                                                          2. 6

And if you have millions of users, you also have millions of users’ data. Allowing unilateral code changes isn’t being a good steward of that data, from either a reliability or a security perspective.

                                                                                                                                          1. 1

                                                                                                                                            Link dead for me, can’t resolve host. Anyone else?