1. 12

    Looking at Nix DSL code makes my eyes bleed. I realize that this isn’t a fundamental problem, but it sure doesn’t make me want to learn how to use Nix.

    1. 1

      Work is going on to allow people to specify most packages using simple YAML, etc. instead of using full Nix all the time.

      1. 1

        That’s a bit like answering people who complain that this 747 is a bit difficult to fly by giving them a toy plane.

        1. 3

          The Nix language is great. People are always gonna whinge.

    1. 18

      The article conflates two questions. Will Nix be more popular than Docker? Probably not. Will Nix be able to do everything Docker does? It has been able to do that for a couple years now!

      1. 1

        Does Nix let you build a single file package that AWS (for example) can spin up in response to a network event?

        1. 8

          You can build an OCI container using Nix, so my answer would be: probably yes.

          1. 3

            So I guess another question to ask is: will Nix replace the Dockerfile?

            1. 6

              Nix replaced Dockerfiles for my team at work.

              Atlassian Marketplace is a big Scala application; the jar is built via a Nix shell, and then we use Nix to build an OCI container image. The image is pushed to a registry, from which the internal Atlassian PaaS can deploy to production. Every release is completely reproducible, bit for bit!
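
              For the curious, the image-building step looks roughly like this: a minimal sketch using nixpkgs’ dockerTools, with placeholder names and pkgs.hello standing in for the real jar (not our actual build).

              { pkgs ? import <nixpkgs> { } }:

              # Builds a container image as a tarball in the Nix store;
              # push it to a registry (e.g. with skopeo) or docker-load it locally.
              pkgs.dockerTools.buildImage {
                name = "marketplace";
                tag = "latest";
                config.Cmd = [ "${pkgs.hello}/bin/hello" ];
              }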

            2. 2

              neat, and i didn’t know that. thanks.

            3. 6

              Guix (which I think of as a better Nix) lets you do this. You can even produce a pack in the Docker image format directly from Guix.
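
              For reference, a sketch of the command line (hello is a stand-in package; guix pack prints the path of the resulting tarball):

              # Build a Docker-format pack and load it into the local Docker daemon
              guix pack -f docker -S /bin=bin hello
              docker load < "$(guix pack -f docker -S /bin=bin hello)"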

              1. 2

                I had not seen that; thank you.

          1. 9

            I’m one of the package maintainers of Nix for Arch Linux and it’s been a real headache getting this version to compile from source, beginning with the source tarball on the homepage returning a 404.

            There are also 5 unspecified compulsory dependencies:

            • autoconf-archive
            • jq
            • libcpuid
            • gtest
            • lowdown

            And lowdown is patched in Nixpkgs, which adds another package that package maintainers have to juggle. The patches haven’t been accepted upstream either, which makes it difficult for me to justify including them in Arch Linux. What does lowdown even do anyway?

            I’ve spent a few hours today attempting to get this to compile, and it’s been one issue after another.

            1. 7

              We no longer release source tarballs. If you want to build from source, please build from the tags in the Git repository.

              From the post.

              Looks like Lowdown might be used for the new documentation generation. Necessary for generating the man pages, I imagine.

              nixpkgs hacks up a lot of packages to enable dynamic linking but I don’t think that’s relevant to your Arch work. Just use the static version. Doesn’t matter.

              1. 4

                I use Arch Linux and am eagerly awaiting this working so I can upgrade to 2.4 with my normal arch package manager. Thank you for your service.

                1. 2

                  I’ve managed to get it working, the blocker was generally just me being super tired and juggling multiple responsibilities!

                  It’s been through the testing repository, and now in the community repository. 🎉

                  1. 1

                    Awesome, thanks!

                2. 3

                  What’s the goal of a nix package in arch? I thought nix is pretty much self-managing / self-updating in its own environment. That would make the nix package more of a nuisance than useful.

                  Or am I missing some use case where you’d want pacman managing it?

                  1. 2

                    All that linked lowdown patch does is help split up package outputs more finely, which is a Nixpkgs-specific thing. You can have hello.bin, hello.lib, hello.dev, hello.man, &c. Arch doesn’t concern itself with that when packaging.

                  1. 4

                    adb push […]

                    You can simplify this by using TWRP’s sideload mode (Advanced > ADB Sideload), and sideload zips using

                    adb sideload rom.zip
                    

                    directly from your computer.

                    I initially made the mistake of trying to install Magisk by just flashing their zip file to the system partition via TWRP. Do not do this or else your phone will enter a boot loop.

                    Not sure why this happened to you — I’ve been flashing Magisk via TWRP for years.


                    I’ve been flashing ROMs ever since the Galaxy Ace days (2012?). I never ran stock. Always either Lineage/OmniROM without GApps. But that changed about two weeks ago, after I upgraded my OnePlus 6T to the latest firmware, and tried flashing TWRP, resulting in a brick — my phone was stuck in some Qualcomm debug mode thing. Wasted an entire day trying to get it working (this was on a Sunday, and I needed the phone working the next day). I had to use some MSMDownloadTool and put my phone in EDL mode to flash stock firmware — resulting in a hard wipe and a locked bootloader. Turns out the firmware upgrade resulted in some incompatibilities, and the only “solution” was to flash some “patched” boot.img some rando on XDA posted. No thanks.

                    I’m currently running the OnePlus stock ROM, with all the Google stuff disabled, waiting for my iPhone 13 mini to arrive. It’ll be my first ever iPhone but at this point, I just want something that works without having to mess around with it. Hopefully it can last me another 5 years or so, and by then the Linux/BSD mobile ecosystem would’ve developed enough to be daily-driveable.

                    1. 1

                      iPhone has reached a state where it’s almost completely sane and usable, except it lacks native WebM support. Until then I’ll be using MIUI on Xiaomi phones, which is probably the sanest Android experience I’ve had to date; it’s the first time I didn’t even feel the need to root and flash something different. ROM flashing, for all its history, is a painful and annoying experience to me, with custom ROMs always buggy in some aspect different from the last.

                      1. 1

                        Does Xiaomi still ship ads in their apps? I can’t stand their bootleg iOS UI plus the insane amount of telemetry. Then again, I last used MIUI in 2016 or so? I suppose things have changed since, at least visually.

                        1. 1

                          Yes, but they’re disabled by default in EU and Global versions of the ROM, with opt-in pop-ups at initial setup.

                      2. 1

                        I’ve been flashing Magisk via TWRP for years

                        Seems like this has been deprecated somewhat recently.

                        https://topjohnwu.github.io/Magisk/install.html#custom-recovery

                      1. 2

                        What’s the goal of the nondeterminism in your case?

                        1. 13

                          I don’t think it’s that nondeterminism is a goal; it’s that determinism is not always a goal. For instance, if you need a way to shed 20% of your traffic once it goes over some threshold, you could deterministically drop every 5th request, keeping the state and coordination necessary to do so, or you could randomly drop each request with a 20% probability. The latter is much simpler. It’s not that nondeterminism is desirable, it’s that determinism adds more complexity, and we don’t really need it for our goals here.

                          Or a recommender that scores everything that you’re likely to buy. There’s likely to be something that the algorithm thinks you’re really likely to buy but you aren’t, so it will stack it way at the top of the list, and every time you see recommendations it’s always the top item. (If we remove things after you’ve bought them, this result is virtually guaranteed.) We could recompute the scores often, keep track of what you’ve already seen and how frequently we re-show the same product, and enforce diversity by occasionally promoting items lower in the list. That would work. Or we could multiply the score by a random float, effectively weighting the likelihood by the computed score but still letting the list look fresh every time, and get the diversity that way. If you need determinism then this quick hack isn’t available to you, but if it’s not something that you actually need then it is.
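
                          To make that concrete, a sketch (names are illustrative, not from any real system):

                          // Deterministic: drop every 5th request; needs a shared, synchronized counter.
                          let counter = 0;
                          function shouldDropDeterministic(): boolean {
                            counter = (counter + 1) % 5;
                            return counter === 0;
                          }

                          // Probabilistic: drop ~20% of requests; no state, no coordination.
                          function shouldDropRandom(shedFraction = 0.2): boolean {
                            return Math.random() < shedFraction;
                          }

                          // Recommender jitter: ranking stays weighted by score but looks fresh each visit.
                          const jitter = (score: number): number => score * Math.random();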

                          1. 1

                            Such things are perfectly fine to do, if somewhat tricky to test. But randomizing tests is a big no-no in my book (see my note above). In fact, if you drop requests based on a probability rather than deterministically, it will probably also be much harder to abuse the system in a denial of service attack.

                          2. 4

                            Property testing will generate random cases. If you run a test 100 times with random data, you might generate data which breaks a test; that’s good!
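
                            For example, a minimal QuickCheck property (assuming the QuickCheck package):

                            import Test.QuickCheck

                            -- Not a fixed example but a law: QuickCheck generates 100 random lists
                            -- and, on failure, shrinks the input to a minimal counterexample.
                            prop_reverseRoundTrip :: [Int] -> Bool
                            prop_reverseRoundTrip xs = reverse (reverse xs) == xs

                            main :: IO ()
                            main = quickCheck prop_reverseRoundTrip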

                            1. 1

                              Random tests are just as annoying as flaky tests - they break at inopportune moments and can take a lot of effort to track down. They only sound good in theory, but in practice they’re just a headache.

                              1. 3

                                Nah, I use them all day every day. Not a headache, not a lot of effort. Finds bugs all the time and is great documentation.

                          1. 3

                            I’m still looking for an argument/example for why all this abstraction carries its own weight, in a software-development context.

                            1. 3

                              Like most software engineering patterns, it exists to facilitate code reuse in a principled way. An abstraction’s utility can be measured along two axes:

                              1. Working only within the abstraction, what things can I say?
                              2. How many things embody this abstraction?

                              Monad is a useful abstraction because it applies to a surprisingly large range of types, while still allowing a broad vocabulary of functions that work over any Monad. It is also hard to understand for these reasons, which is why the most effective teaching method seems to involve finding some vaguely familiar concept (e.g., promises) that happens to be a Monad, using that to give the student a toe-hold, and then asking the student to write Monad instances for a zillion types, letting the instinctive, pattern-matching part of the student’s brain notice the underlying similarities.

                              The Monad abstraction in Haskell enables (among other things) a large family of functions (when, unless, forever, etc) that would be control structures in other languages. This is handy because a) we don’t have to petition some language steward to add new ones (contrast: Python’s with statement), and b) we can use our normal programming tools to work with them.

                              I can use the Monad abstraction when checking for missing values, to halt my program on the first error, to make an ergonomic database Transaction data type that prohibits non-database I/O, to write deserialisers, to set up eDSLs with great ergonomics, to treat lists as nondeterministic computations, to provide a good interface to software transactional memory, to build up an additional “summary” result in a computation, to pass around additional arguments, and other things I’ve surely forgotten. You could well say (as you said to /u/shapr in a sibling comment) that none of these need the theory of Monads. And they don’t. What the theory of Monads gives you is a useful toolbox to see how they’re all similar, and it’s one that’s usefully expressed only in a few programming languages. A tool like mapM works exactly as well in each of those apparently-diverse cases, and only needed writing once.
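
                              To make the mapM point concrete, a small sketch (halve is a made-up function): the same mapM short-circuits in Maybe and sequences effects in IO.

                              halve :: Int -> Maybe Int
                              halve n | even n    = Just (n `div` 2)
                                      | otherwise = Nothing

                              main :: IO ()
                              main = do
                                print (mapM halve [2, 4, 6])  -- Just [1,2,3]: every element succeeded
                                print (mapM halve [2, 3, 6])  -- Nothing: stops at the first failure
                                _ <- mapM print [1, 2, 3]     -- in IO: runs the actions in order
                                pure ()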

                              1. 2

                                mapM’s functionality is trivial, though, and I expect anything you can do with an arbitrary monad would be equally trivial. In my experience, code reuse is useful when the code carries significant domain knowledge or provides a single point of reference for application details which are subject to change. Abstracting away repeated patterns for the sake of it, or simply for the sake of concision, is often not worth the cognitive load it adds.

                              2. 3

                                I’m not a Haskeller, but I’ve spent a little time with the language. One benefit is that you can write your functions for the success case, and the monadic machinery will short-circuit if there is a failure. This means that you don’t need to litter your code with checks for nulls or nothings.
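
                                A sketch of that short-circuiting with made-up lookups, using Maybe and do-notation:

                                lookupUser :: Int -> Maybe String
                                lookupUser 1 = Just "ada"
                                lookupUser _ = Nothing

                                lookupEmail :: String -> Maybe String
                                lookupEmail "ada" = Just "ada@example.com"
                                lookupEmail _     = Nothing

                                emailOf :: Int -> Maybe String
                                emailOf uid = do
                                  name <- lookupUser uid   -- a Nothing here skips everything below
                                  lookupEmail name         -- no explicit null checks anywhere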

                                1. 2

                                  One thing I like is that I can use the same code with a fake in-memory database without changing the code itself, just feeding a different value into the monad.
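
                                  A minimal sketch of that idea with mtl’s Reader monad, where the “database” is just a pure value you feed in (all names illustrative):

                                  import Control.Monad.Reader

                                  type DB = [(String, String)]

                                  findUser :: String -> Reader DB (Maybe String)
                                  findUser name = asks (lookup name)

                                  greeting :: String -> Reader DB String
                                  greeting name = do
                                    mUser <- findUser name
                                    pure (maybe "who?" ("hello, " ++) mUser)

                                  main :: IO ()
                                  main = print (runReader (greeting "ada") [("ada", "Ada Lovelace")])
                                    -- tests feed a fake in-memory DB; production feeds the real one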

                                  1. 7

                                    You don’t need the theory of monads to enable that.

                                    1. 1

                                      Using a monad to separate concerns and do easy dependency injection is one of many cases where the monad abstraction carries its weight.

                                      I agree, you don’t need the theory to do those things with a monad, you just use it.

                                      1. 3

                                        I can do those things quite easily in languages which don’t even have the concept of “monad.” The abstractions I use might be representable as monads, but I see no benefit to calling them out as such.

                                        1. 1

                                          Consider Elm. Its designer has banned the word monad from all the documentation. Nevertheless, it has a very consistent and intuitive way of handling all monadic types. How can that be? Because the designer knew they were monads.

                                          Most users won’t ever have to declare a monad. They don’t need monad tutorials, or even to know the word, but the world would be a better place if all language designers did.

                                    2. 3

                                      that’s just an interface, isn’t it?

                                      1. 1

                                        If you mean an interface as in an API being a pattern of functions and data, then yes. A good interface can make a problem easier to think about and solve.

                                        1. 2

                                          Or an interface as in literally like a Java interface, i.e. a simple everyday programming concept that doesn’t need any theoretical mathematics to understand.

                                          1. 3

                                            That’s what I was thinking. JDBC is the ultimate example here. You program only against interfaces, and you can swap databases in tests trivially easily. All without reading complex Monad tutorials.

                                            1. 1

                                              Interface theory is very complex and mathematical. You never see blog-sized tutorials about all the theory, because it doesn’t fit in a blog. Monads are stupid simple in comparison, which is why there are so many blogs about them. Get over the abstract nonsense words, implement it yourself in 10 lines, then write a blog about how you finally understood it. That’s all there is to it.

                                              Designing interface rules for your language is hard to get right even if you know all the theory, because there are many tradeoffs and you might make the wrong choice for what you’ll want later on. Getting monads wrong is only possible when you refuse to acknowledge you’re dealing with a monad, like JS’s Promises.
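
                                              That 10-line exercise really is about ten lines; here is a re-derivation of Maybe under a made-up name:

                                              data Option a = None | Some a deriving Show

                                              instance Functor Option where
                                                fmap _ None     = None
                                                fmap f (Some x) = Some (f x)

                                              instance Applicative Option where
                                                pure = Some
                                                None   <*> _ = None
                                                Some f <*> x = fmap f x

                                              instance Monad Option where
                                                None   >>= _ = None   -- a failure stays a failure
                                                Some x >>= f = f x    -- a success feeds the next step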

                                              1. 2

                                                Interface theory is very complex and mathematical. You never see blog sized tutorials about all the theory because it doesn’t fit in a blog.

                                                I think you never see blogs about it because Java programmers, at least, learn about them at the very beginning and then use them. They are trivial to understand and use. Java programmers write them every single day, and most of them do not have deep type theory backgrounds. I think that is what this thread is about: pragmatic programmers using something vs. pure functional programmers exchanging complex Monad tutorials.

                                                1. 2

                                                  Exactly, you can use monad-like things without ever learning about monads. You’ll have a better time if the language designer did know monad theory though. Same goes for interfaces.

                                                  I really don’t want to call monads complex, though. That’s what leads to this “monad tutorial fallacy”: it’s always the first mythical hard thing new Haskellers run into, so when it clicks they must write a tutorial. Haskell has other stuff that is much more complex and never gets blog explanations either. I’d say GADTs are at the level of interfaces, and when a new Haskeller reaches those, suddenly they’re pragmatists again. (And then after a year you make it your master’s thesis and never touch Haskell again, lmao.)

                                    3. 1

                                      Code reuse

                                    1. 5

                                      I saw a lot of confusion on Twitter around what this is for. It’s very similar to systems like Souffle. There’s a great blog post about doing interesting static analysis using Souffle.

                                      According to the authors, Glean is a bit different in that it’s optimised for doing quick analysis, like for an editor or IDE. With Souffle, I previously tried to analyse a medium-sized Java code base using Doop, which consumed all memory and crashed.

                                      Hopefully Glean is more promising for this type of work. Sadly the Java indexer is not open source yet.

                                      1. 3

                                        Souffle

                                        Any opinions on how these relate to semgrep?

                                        1. 2

                                          For history: Semgrep was originally a Facebook project built upon their pfff project. Facebook stopped working on both of those so I guess Glean is being used as their replacement.

                                          From my understanding:

                                          • Semgrep has parsers and typers for all of the languages it supports, while Glean is designed to read data dumped out from a compiler
                                          • Pretty-printing and semantic diffing are part of Semgrep but wouldn’t really be feasible with Glean
                                          • Angle is a query language in Glean allowing abstraction of facts, e.g. you could define the concept of “type class” and have it applied to Scala and Haskell - it wouldn’t be based just on syntax patterns

                                          I’d say that Semgrep is more of a syntactic tool, while Glean is more of a full-on static analysis tool.

                                      1. 3

                                        Not that useful. If you are familiar with the domain, then you would want to look up Google Kythe and how it works (completely open source, with a few talks on YouTube).

                                        Essentially, to extract code intelligence, your ‘indexer’ needs to be very closely intertwined with the language compiler, which holds the source of truth for how the syntax is interpreted into the actual AST/bytecode.

                                        Glean is essentially like Kythe, but more flexible. Instead of a universal schema, they let you define your own schema, so you can decide to get more or less info from a language more easily. E.g. comments might be very useful to extract in Golang or Java but do not exist in JSON, so you can have separate schemas for each. Or if there is some special concept that only your language has (e.g. Golang struct tags), you can add a special schema for it.

                                        However, the bread and butter of this is actually the indexers. Glean’s indexers are closed source, with only Hack and Flow being open-sourced in the compilers (not in Glean), as FB controls the compilers of those languages. To use Glean for other languages, most likely you will need to change the schema in Glean as well as bring your own compiler, which is a huge undertaking, especially for languages without a plugin extraction mechanism built into their compiler.

                                        It would be interesting if somebody started to hook up existing LSP/LSIF implementations into Glean for better/easier adoption.

                                        1. 1

                                          It would be interesting if somebody started to hook up existing LSP/LSIF implementations into Glean for better/easier adoption.

                                          Yes, but this would lose a bunch of the supposed benefit of Glean. The point is to have language-specific schemas, whereas LSP is designed to be the opposite.

                                          1. 2

                                            Every language has its own LSP. The protocol is the only thing in common.

                                            1. 1

                                              Yes, that’s my point

                                            2. 1

                                              That’s OK; the trade-off is ease of adoption. Once users have adopted the tool, the schema/indexer could be modified then.

                                            3. 1

                                              Are these components not available, or simply closed source? There’s a big difference.

                                              1. 2

                                                Facebook uses C++ and has an indexer for it, but it’s closed source.

                                                Other languages that Facebook does not use won’t have an indexer available.

                                                1. 1

                                                  Thx

                                            1. 1

                                              I’ve used

                                              boot.binfmt.emulatedSystems = [ "aarch64-linux" ]; 
                                              

                                              before, and it’s so slick. I know it’s possible to set up binfmt on other distributions, but NixOS just makes it so, so easy.

                                              1. 3

                                                There’s a project that I’ve used for this before:

                                                https://github.com/Gabriel439/nix-diff

                                                1. 1

                                                  But why now?

                                                  Why ignore the problem for 25 years, then turn around and admit it, when null and undefined have been a problem for longer than most of us have been alive?

                                                  1. 22

                                                    Because we are living through a programming language renaissance: it seems that major languages (Java, JavaScript, C++, Python) had a period of stability/stagnation when they were considered “done”, as they had fully implemented their vision of OOP. Then the OOP craze subsided, and people started to look into adopting more FP ideas. Hence, all those languages finally started to get basic conveniences such as:

                                                    • statement-level type inference (auto in C++, var in Java) / gradual types (TypeScript, mypy)
                                                    • optional and result monads (?. in JS, optional/outcome in C++, Optional in Java. Curiously, Python seems to get by without them)
                                                    • data types (records in Java, the <=> operator/hash in C++, NamedTuple/@dataclass in Python. Curiously, JS seems to get by without them)
                                                    • sum types / union types / pattern matching (pattern matching in Java, variant in C++, pattern matching in Python, union types in TS)
                                                    • destructuring declarations (auto [x, y] = in C++, richer unpacking syntax in Python, let { foo } in TypeScript. Nothing yet in Java, I think?)
                                                    • async/await (curiously, Java seems to have chosen stackful coroutines instead)
                                                    • and, of course, lambdas. JS doesn’t get enough credit for being the first mainstream language with closures, though arrow functions later shortened the syntax. Python curiously got stuck in a local optimum, where lambda was pretty innovative at the time but now feels restrictive.
                                                    1. 1

                                                      data types… Curiously, JS seems to get by without them

                                                      FWIW the way anonymous objects (and their types in typescript) work in JS is just as convenient for casually structuring data as e.g. NamedTuple in Python.

                                                      1. 1

                                                        I consider structural eq/hash/ord part of being a data type. I think JS still doesn’t have those?

                                                        1. 1

                                                          No and it’s never getting them, but oh well, eq is in lodash.

                                                    2. 3

                                                      The “billion dollar problem” hasn’t been ignored; modern languages (from the last 5-10 years) usually treat null and undefined completely differently than in the past. Nowadays, they are treated as algebraic data types (like tagged unions) that must be explicitly checked by the developer, avoiding runtime NullReferenceExceptions.

                                                      New syntax, such as ?., tries to make these checks as easy as possible.

                                                      // C: this value may or may not be null
                                                      SomeStruct *s = get_struct();
                                                      // This may or may not be an error
                                                      int value = s->some_field;

                                                      // TypeScript
                                                      type ReturnValue = SomeStruct | null;
                                                      const s: ReturnValue = get_struct();
                                                      // The compiler will not allow this:
                                                      const bad = s.some_field;
                                                      // you have to do this
                                                      const value: number | null = s !== null ? s.some_field : null;
                                                      // or with the new syntax
                                                      const value2 = s?.some_field;
                                                      
                                                      

                                                      Where it really shines, is chaining multiple checks together:

                                                      //Turn this:
                                                      interface I {
                                                        field1?: {
                                                          field2?: {
                                                            field3?: number
                                                          }
                                                        }
                                                      }
                                                      function func(arg: I | undefined) {
                                                        if (!arg) {
                                                          return;
                                                        }
                                                        if (!arg.field1) {
                                                          return;
                                                        }
                                                        if (!arg.field1.field2) {
                                                          return;
                                                        }
                                                        return arg.field1.field2.field3;
                                                      }

                                                      //into this:
                                                      const func = (arg: I | undefined) => arg?.field1?.field2?.field3;
                                                      
                                                      1. 1

                                                        Er, that’s my point. The problem has been known for fifty years, solutions have been known for a long time as well. Why did TS/JS ignore the problem until now?

                                                        1. 5

                                                          Because it used to have much bigger problems to deal with first.

                                                          1. 3

                                                            Just because someone knows the problem doesn’t mean it’s in everybody’s understanding, or on everybody’s roadmap.

                                                            1. 1

                                                              The problem was solved in a few languages for a long time (e.g. Haskell) - it’s just popularity.

                                                          2. 2

                                                            Honestly, having null and undefined doesn’t bother me at all, as long as they are part of the type system, as is the case in TypeScript.

                                                            They mean subtly different things. The most common convention is that null means “this value has been explicitly set to empty/bottom/default” and undefined means “this value has not been set”. (I know this is not the case for the TypeScript compiler project itself, which just prefers undefined for everything.)

                                                            It would be better to just have an Option<T> type, IMO, but TypeScript is trying to be a thin veneer over JavaScript, so that’s out of scope for it.

                                                            1. 2

                                                              In my opinion, null and undefined actually conflate three different cases:

                                                              1. This variable has not been defined
                                                              2. This variable has not been given a value
                                                              3. This variable has no value (or value not found)
                                                              1. 3

                                                                Interesting. If I understand your list, #1 refers to a variable name just not existing at all in scope (never declared with var, let, const), #2 refers to a let or var declaration, but no assignment: let foo;, and #3 is setting something to null.

                                                                I think you’re right and that makes sense, but do you see any value in differentiating scenarios 1 and 2?

                                                                Also, if I understand and remember correctly, #1 is not allowed in TypeScript at all. So I think, for TypeScript, your list and my described convention are the same.
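
                                                                In JavaScript/TypeScript terms, a sketch of the three cases (variable names are illustrative):

                                                                // 1. Never declared at all: a ReferenceError (a compile error in TypeScript).
                                                                // console.log(missing);

                                                                // 2. Declared but never assigned: its value is undefined.
                                                                let unset: string | undefined;
                                                                console.log(unset);   // undefined

                                                                // 3. Explicitly set to "no value": null.
                                                                let cleared: string | null = null;
                                                                console.log(cleared); // null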

                                                                1. 1

                                                                  For languages where you do not have to pre-declare variables, then yes, there is a value in differentiating between 1 and 2.

                                                          1. 26

                                                            I co-authored the Subresource Integrity specification and agree with almost everything this article is saying. The benefits of a public CDN are mostly gone since browsers split caches per first-party origin (which I agree is the right thing to do!). I think SRI is mostly useful when you are using a big paid edge CDN that you can’t fully trust.

                                                            I don’t agree with the IPFS bits. I don’t think that’s realistically usable for any website out there.

                                                            1. 3

                                                              As someone who hosts a lot of client websites with IPFS, I wonder what you think makes it harder?

                                                              1. 2

                                                                How do you do that?

                                                                1. 5

                                                                  Pin the site to at least two IPFS nodes (usually one I run, plus pinata.cloud), then set the dnslink TXT record to /ipfs/hash, and then A or CNAME-flatten to an IPFS gateway (often Cloudflare, because why not make them pay the bills? But pinata.cloud is also a great option if you are paying them anyway).
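
                                                                  The DNS side is just two records; a sketch of hypothetical zone entries (the CID is a placeholder, and the apex CNAME relies on the provider’s flattening):

                                                                  _dnslink.example.com.  300  IN  TXT    "dnslink=/ipfs/<your-site-cid>"
                                                                  example.com.           300  IN  CNAME  cloudflare-ipfs.com.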

                                                                  1. 6

                                                                    So you still need to host a copy, and you need a regular CDN to serve and scale it. That’s exactly like old-school HTTP, but with extra steps.

                                                                    1. 3

                                                                      No, CloudFlare offers a public IPFS gateway, you don’t need a “regular CDN”

                                                                      1. 8

                                                                        Cloudflare abstracts the IPFS away, so that users don’t talk to the IPFS network. Users just connect over HTTP to the company’s own CDN servers running nginx, just like everyone else who’s not using IPFS in any way. IPFS here is not handling the traffic nor distributing the content in a meaningful way.

                                                                        Such setup makes the protocol behind the CDN entirely irrelevant. It could have been RFC 1149 too, or an Apache in someone’s basement, and it wouldn’t make a difference.

                                                                        1. 2

                                                                          Yeah but you don’t have to run anything except the IPFS node. No exposing ports via a firewall, no configuring a public CDN, etc.

                                                                          Pin your files in IPFS and point DNS to CloudFlare. Done!

                                                                    2. 2

                                                                      Can you explain what you are gaining from this? If this is routed through a third party gateway, how do you get baked-in integrity checks? Not for the user, only somewhere in the backend, no? I’m inexperienced with ipfs and I might misunderstand bits. But would be happy to learn more about your setup.

                                                                      1. 2

                                                                        I get redundancy (because my content is in 2+ places) so if my box is down, or even if both the nodes I pin on are down, the site keeps serving due to caching at the edge, but even when the cache expires usually one of my two pins are up.

                                                                        I get simplicity. Some content I can pin on machines in my house without any additional port forwarding, etc. My client can pin the content on their machine or by uploading to pinata.cloud for free and I just add the hash to my deployment and it streams live from them. No more wetransfer.

                                                                        And I get the future. If a user wants integrity checking or p2p serving, they just need the browser extension and it will load my sites from IPFS directly and not use the CDN proxy.

                                                                      2. 1

                                                                        Very cool! Thank you!

                                                                1. 29

                                                                  Well written; these were exactly my thoughts when I read this. We don’t need faster programmers. We need more thorough programmers.

                                                                  Software could be so much better (and faster) if the market valued quality software more highly than “more features”.

                                                                  1. 9

                                                                    We don’t need faster programmers. We need more thorough programmers.

                                                                    That’s just a “kids these days…” complaint. Programmers have always been fast and sloppy and bugs get ironed out over time. We don’t need more thorough programmers, like we don’t need more sturdy furniture. Having IKEA furniture is amazing.

                                                                    1. 12

                                                                      Source code is a blueprint. IKEA spends a lot of time getting their blueprints right. Imagine if every IKEA furniture set had several blueprint bugs in it that you had to work around.

                                                                      1. 5

                                                                        We’re already close though. We have mature operating systems, language runtimes, and frameworks. Going forward I see the same thing happening to programming that happens to carpentry or cars now. A small set of engineers develop a design (blueprint) and come up with lists of materials. From there, technicians guide the creation of the actual design. Repairs are performed by contractors or other field workers. Likewise, a select few will work on the design for frameworks, operating systems, security, IPC, language runtimes, important libraries, and other core aspects of software. From there we’ll have implementors gluing libraries together for common tasks. Then we’ll have sysadmins or field programmers that actually take these solutions and customize/maintain them for use.

                                                                        1. 7

                                                                          I think we’re already completely there in some cases. You don’t need to hire any technical people at all if you want to set up a fully functioning online store for your small business. Back in the day, you would have needed a dev team and your own sysadmins, no other options.

                                                                          1. 1

                                                                            I see the same thing happening to programming that happens to carpentry or cars now. […] From there we’ll have implementors gluing libraries together for common tasks.

                                                                            Wasn’t this the spiel from the 4GL advocates in the 80s?

                                                                            1. 2

                                                                              Wasn’t this the spiel from the 4GL advocates in the 80s?

                                                                              No, it was the spiel of OOP/OOAD advocates in the 80s. Think “software ICs”.

                                                                        2. 1

                                                                          Maybe, maybe not. I just figured that if I work more thoroughly, I get to my goals quicker, as I have less work to do and rewrite my code less often. Skipping error handling might seem appealing at first, as I reach my goal earlier, but the price is that either I or someone else has to fix that sooner or later.

                                                                          Also, mistakes or plain inefficiency in software nowadays have a huge impact, because software is so widespread.

                                                                          One nice example I like to make:

                                                                          The Wikimedia Foundation got 21,035,450,914 page views last month [0]. So if we optimize that web server by a single instruction per page view, assuming the CPU runs at 4 GHz with perfectly optimized code sustaining 1.2 instructions per cycle, we can shave off 4.382 seconds per month. Assuming Wikipedia runs average servers [1], this means we shave off 1.034 watt-hours of energy per month. With an energy price of 13.24 euro cents per kilowatt-hour [2], this means a single instruction per page view costs us roughly 0.013 euro cents per month.

                                                                          Now imagine you can make the software run 1% faster. If each page view costs about one CPU-second, that’s 48,000,000 instructions saved per view, and suddenly we save 6,240 € per month. For a 1% overall speedup!
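
                                                                          Spelling out the arithmetic (the ≈850 W server draw is back-solved from the numbers above; one CPU-second per page view is an assumption):

                                                                          t = \frac{21{,}035{,}450{,}914}{4\times10^{9}\ \text{Hz} \times 1.2\ \text{IPC}} \approx 4.38\ \text{s/month}

                                                                          E \approx 850\ \text{W} \times 4.38\ \text{s} \approx 1.03\ \text{Wh/month}, \quad \text{cost} \approx 0.00103\ \text{kWh} \times 13.24\ \tfrac{\text{cent}}{\text{kWh}} \approx 0.013\ \text{cent}

                                                                          \text{savings} \approx 4.8\times10^{7} \times 0.013\ \text{cent} = 624{,}000\ \text{cent} \approx 6{,}240\ \text{€ per month}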

                                                                          High-quality software is not only pleasant for the user. It also saves the planet by wasting less energy and goes easy on your wallet.

                                                                          So maybe

                                                                          Programmers have always been fast and sloppy and bugs get ironed out over time. We don’t need more thorough programmers,

                                                                          this should change, for the greater good of everyone.

                                                                          [0] https://stats.wikimedia.org/#/all-projects/reading/total-page-views/normal|table|2-year|~total|monthly
                                                                          [1] https://www.zdnet.com/article/toolkit-calculate-datacenter-server-power-usage/
                                                                          [2] https://www.statista.com/statistics/1046605/industry-electricity-prices-european-union-country/

                                                                        3. 9

                                                                          Software could be so much better (and faster) if the market would value quality software higher than “more features”

                                                                          The problem is there just aren’t enough people for that. That’s basically been the problem for the last 30+ years. It’s actually better than it used to be; there was a time not so long ago when everyone who could sum up numbers in Excel was a programmer and anyone who knew how to defrag their C:\ drive was a sysadmin.

                                                                          Yesterday I wanted to generate a random string in JavaScript; I knew Math.random() isn’t truly random and wanted to know if there’s something better out there. The Stack Overflow question is dominated by Math.random() in more variations than you’d think possible (not all equally good, I might add). This makes sense, because for a long time this was the only way to get any kind of randomness in client-side JS. It also mentions the newer window.crypto API in some answers, which is what I ended up using.
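
                                                                          For reference, a sketch of the window.crypto approach (function name and alphabet are my own choices):

                                                                          // crypto.getRandomValues fills a typed array from a CSPRNG.
                                                                          function randomString(length: number): string {
                                                                            const alphabet =
                                                                              "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
                                                                            const bytes = new Uint8Array(length);
                                                                            crypto.getRandomValues(bytes);
                                                                            // Note: the modulo is slightly biased unless the alphabet length
                                                                            // divides 256; fine for IDs, not for secrets that must be uniform.
                                                                            return Array.from(bytes, (b) => alphabet[b % alphabet.length]).join("");
                                                                          }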

                                                                          I can make that judgment call, but I’m not an ML algorithm. And while on Stack Overflow I can add context, caveats, involved trade-offs, offer different solutions, etc., with an “autocomplete code snippet” that’s a lot more limited. And especially for a novice or less experienced programmer, you wouldn’t necessarily know a good snippet from a bad one: “it seems to work”, and without the context a Stack Overflow answer has, you just don’t know. Stack Overflow (and related sites) are more than just “gimme teh codez”; they’re also teaching moments.

                                                                          Ideally, there would be some senior programmer to correct them. In reality, due to the limited number of people, this often doesn’t happen.

                                                                          We’ll have to wait and see how well it turns out in practice, but I’m worried about an even greater proliferation of programmers who can’t really program but instead just manage to cobble something together by trial and error. Guess we’ll have to suffer through even more ridiculous interviews to separate the wheat from the chaff in the future…

                                                                          1. 2

                                                                            We’ll have to wait and see how well it turns out in practice, but I’m worried for an even greater proliferation of programmers who can’t really program

                                                                            I don’t see this as a problem. More mediocre programmers available doesn’t lower the bar for places that need skilled programmers. Lobste.rs commenters often talk of the death of the open web for example. If this makes programming more accessible, isn’t that better for the open web?

                                                                          2. 6

                                                                            We don’t need faster programmers. We need more thorough programmers.

                                                                            Maybe we need more than programmers and should aim to deserve the title of software engineers. Writing code should be the equivalent of nailing wood; whether you use a hammer or an AI-assisted nail gun shouldn’t matter much if you are building a structure that can’t hold the weight it is designed for, or can’t deal with a single plank breaking or rotting.

                                                                            1. 6

                                                                              We don’t need faster programmers. We need more thorough programmers.

                                                                              Not for everything, but given we spend so much time debugging and fixing things, thoroughness is usually faster.

                                                                              1. 6

                                                                                Slow is smooth and smooth is fast.

                                                                            1. 20

                                                                              where even though everything compiled correctly, it didn’t work. For a compiled language, that is not something you expect

                                                                              I would never expect that of C++! Maybe Haskell or Rust, but not C++.

                                                                              C++ has a ton of holes derived from C, and a whole bunch of new features that can be confused. I encountered a couple of surprises in semi-automatically translating Oil to C++ (even though I’ve been using both C and C++ for decades).

                                                                              This is in addition to all the “usual” ones like scope/shadowing, uninitialized variables (usually warned, but not always), leaving off braces like goto fail, unexpected wrapping with small integer types, signed/unsigned problems, dangling pointers, buffer overflows, use after free, etc.

                                                                              string_view is nice but it’s also a “pointer” and can dangle. Those are all reasons that code may not work when it compiles.

                                                                              I think leaning on the compiler too much in C++ gives diminishing returns. It encourages a style that bloats your compile times while providing limited guarantees. With C that’s even more true since I consider it more of a dynamic language (e.g. void* is idiomatic).


                                                                              Historically, C was even more dynamically typed than it is now. Types were only for instruction selection; e.g., a + b for 2 ints generated different code than for 2 floats. That’s about it. You didn’t have to declare function return types or parameter types; they were assumed to be ints. Reading the old Thompson and Ritchie code really underscores this.

                                                                              C++ has more of the philosophy of types for correctness, but it was constrained by compatibility with C in many cases. It comes from a totally different tradition and mindset than say Haskell or ML.

                                                                              1. 3

                                                                                I would never expect that of C++! Maybe Haskell or Rust, but not C++.

                                                                                I am somewhat hesitant to say this about Rust or Haskell, even in jest. It’s at best an aspirational aphorism about code in these languages, and if you’re trying to think seriously about program correctness, it matters that it’s very possible to write code in Rust or Haskell that compiles but is not correct (for some definition of correct). If you want to write code that you can prove is correct at compile time, that’s a noble goal, and you need more sophisticated tools for doing this than the ones Haskell or Rust give you.

                                                                                But yes no one says this even in jest about C++.

                                                                                1. 4

                                                                                  Such generalizations are never true in the absolute sense, but there is a noticeable difference in how often programs are correct the first time they compile in Rust vs. less strict languages.

                                                                                  Rust does a remarkable job eliminating “boring” language-level problems, like unexpected nulls, unhandled errors, use-after-free, and unsynchronized data shared between threads. These things most of the time just work in Rust on the first try. In C++, kinda, maybe, if you’re programming with NASA levels of diligence, but typically I’d expect compiling in C++ to be just the first step before testing and debugging to weed these problems out.

                                                                                  1. 2

                                                                                    I don’t think it’s a binary so much as a spectrum: the language’s compile-time guarantees, through things like the type system or the borrow checker, make it more likely that if it compiles, it’s correct.

                                                                                    1. 2

                                                                                      Yeah honestly I don’t really believe in that whole philosophy – I feel like it leads you into a Turing tarpit of a type system. There are plenty of other engineering tools besides type systems that need more love.

                                                                                      But I think that refactoring can be quite safe in strongly typed languages, and that’s useful. Writing new code isn’t really because you don’t know what you want yet, and you can have logic bugs. But refactoring can be, and that’s what the article is about.

                                                                                      1. 2
                                                                                        id :: a -> a
                                                                                        

                                                                                        Implement this function, as long as you don’t:

                                                                                        • Throw exceptions
                                                                                        • Cast
                                                                                        • Loop infinitely

                                                                                        Then if it compiles, it’s correct.

                                                                                        1. 1

                                                                                          It is true with regard to a property called parametricity. On an intuitive level, it states that type parameters are used as expected. So a function map :: (a -> b) -> [a] -> [b] must satisfy that each element of the resulting [b] is the image under the supplied function of some element of [a] (note that you could just as well return the empty list for every input and it would typecheck, which is why our guarantee is worded a bit cautiously).

                                                                                          1. 1

                                                                                            It is very often correct though. Usually what we say is that if you understand the problem and your solution compiles it probably works. If it doesn’t work you likely don’t understand the problem.

                                                                                            You experience this programming in Haskell more often than Rust (I think because of HKTs) but it is still often the case in Rust.

                                                                                        1. 3

                                                                                          I agree that the term “functional programming” is so vague as to be almost meaningless. But if I were to be as pedantic as this author, then I would also have to say that there is no single thing called “functional programming” as a style, and that it doesn’t exist on a linear spectrum; there are multiple dimensions of variation. The Haskell style is not the only kind of pure functional programming you can point at. Many Haskell people do not seem to distinguish between pure functional programming and programming using category theory, a static type system, and higher-order types. But it is quite possible to have a dynamically typed pure functional language where the idioms are different from Haskell idioms.

                                                                                          1. 1

                                                                                            dynamically typed pure functional language where the idioms are different from Haskell idioms

                                                                                            e.g. Nix

                                                                                          1. 9

                                                                                            Functional programming does not have a single definition that’s easy to agree upon.

                                                                                            I know that for most programmers the term “functional programming” lacks meaning, but there are large groups of programmers who DO use the term to mean just programming with (mathematical) functions. You can ask a question in these groups, “is this functional?”, and get a consistent response; it has meaning!

                                                                                            I completely agree that there’s “no such thing as a functional programming language” because there’s only the question of “to what extent does this facilitate programming with functions?” and I don’t think we should draw a line at any particular point.

                                                                                            1. 4

                                                                                              My main experience with Data.Text is that I have to litter T.pack/T.unpack all over my code in order to glue things together. If it were the standard, I would be much happier.

                                                                                              1. 2

                                                                                                I use lens, which provides isomorphisms between String and Text; that makes it a lot easier: https://hackage.haskell.org/package/lens-4.16/docs/Data-Text-Lens.html
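
                                                                                                For example (a minimal sketch, assuming the strict Data.Text variants of those isomorphisms):

                                                                                                  import Control.Lens (over, view)
                                                                                                  import Data.Char (toUpper)
                                                                                                  import Data.Text (Text)
                                                                                                  import Data.Text.Lens (packed, unpacked)
                                                                                                  
                                                                                                  -- view packed converts String -> Text without an explicit T.pack:
                                                                                                  greeting :: Text
                                                                                                  greeting = view packed "hello"
                                                                                                  
                                                                                                  -- over unpacked applies a String function inside a Text value,
                                                                                                  -- replacing the manual unpack/pack round trip:
                                                                                                  shout :: Text -> Text
                                                                                                  shout = over unpacked (map toUpper)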

                                                                                              1. 12

                                                                                                SQL maps very closely to the underlying set theory, which makes it very easy to reason about.

                                                                                                1. 5

                                                                                                  I do not think it maps closely to set theory at all. There’s the NULL value, rows can be duplicated, and so on. People have been criticising SQL since the 80s because it’s not close to set theory.

                                                                                                  1. 2

                                                                                                    There are variations on set theory (multisets, a.k.a. bags) that account for duplicates.
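
                                                                                                    A rough illustration in Haskell (my example, not from the thread): a SQL table behaves like the list below (a bag), while set semantics would collapse the duplicates, as SELECT DISTINCT does.

                                                                                                      import qualified Data.Set as Set
                                                                                                      
                                                                                                      -- Bag semantics: duplicate rows are allowed, as in a SQL table.
                                                                                                      rows :: [(String, Int)]
                                                                                                      rows = [("alice", 1), ("alice", 1)]
                                                                                                      
                                                                                                      -- Set semantics collapses them, like SELECT DISTINCT:
                                                                                                      distinctRows :: Set.Set (String, Int)
                                                                                                      distinctRows = Set.fromList rows  -- contains a single element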

                                                                                                1. 5

                                                                                                  Defenders of dynamically typed languages sometimes counter that these pitfalls do not matter when runtime failures are mostly harmless. If you want to find errors in your program, just run the program!

                                                                                                  … We do?

                                                                                                  1. 2

                                                                                                    Gabriel qualified the statement (“sometimes”) and gave an example (Nix). Is it not a reasonable accusation? What else can one do with a dynamically typed program, except run it?

                                                                                                    1. 2

                                                                                                      A non-exhaustive list:

                                                                                                      • unit tests against your code
                                                                                                      • property tests against your code (see the sketch below)
                                                                                                      • runtime type pattern matching
                                                                                                      • runtime type conditionals
                                                                                                      • contracts
                                                                                                      • other compile-time checks besides a type system
                                                                                                      • defensive coding (for instance, calling int(x) BEFORE opening a file handle, not after)

                                                                                                      I love type systems, and even more so I love compile time guarantees, but any static typing advocate should be able to speak about the value of typing as a trade-off against other techniques and paradigms. On the other hand, no advocate of dynamic typing would (or should, I suppose) claim that, in the absence of the tangible and often productivity-enhancing benefits of strong static typing, “just run the program” is an acceptable alternative.
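
                                                                                                      For the property-testing bullet, a minimal sketch (written in Haskell with QuickCheck for concreteness; dynamically typed languages have the same technique, e.g. Hypothesis for Python):

                                                                                                        import Test.QuickCheck
                                                                                                        
                                                                                                        -- Property: reversing a list twice yields the original list.
                                                                                                        prop_reverseTwice :: [Int] -> Bool
                                                                                                        prop_reverseTwice xs = reverse (reverse xs) == xs
                                                                                                        
                                                                                                        main :: IO ()
                                                                                                        main = quickCheck prop_reverseTwice  -- checks 100 random cases by default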

                                                                                                      1. 2

                                                                                                        A non-exhaustive list:

                                                                                                        • unit tests against your code
                                                                                                        • property tests against your code
                                                                                                        • runtime type pattern matching
                                                                                                        • runtime type conditionals
                                                                                                        • contracts
                                                                                                        • other compile-time checks besides a type system
                                                                                                        • defensive coding (for instance, calling int(x) BEFORE opening a file handle, not after)

                                                                                                        All but one of these is a variant on running the code.

                                                                                                        1. 4

                                                                                                          All but one of these is a variant on running the code.

                                                                                                          In that sense, one can say that even compilation is just a variant on running the code.

                                                                                                          1. 2

                                                                                                            I disagree. The compiler, a linter, or another static analyzer is another program operating on your program.

                                                                                                            Though, I suppose you could argue the same about unit tests and property tests. I think the difference is that they exercise your code, whereas a compiler may not execute your code at all. Things get a bit weird with compile-time execution, macros, and the like.

                                                                                                            I don’t feel completely satisfied with my response.

                                                                                                            1. 6

                                                                                                              A typed program is two programs at once; the type system’s annotations themselves form a second program which is intertwined with the original program’s shape. This second program is the one which is run by the type-checker, and this is what gives us Rice’s Theorem for type systems; GHC, for example, has extensions that allow type-checking to become undecidable.

                                                                                                              This also explains why static type systems cannot possibly replace runtime testing of some sort: in order to complete in a reasonable time, the typical type system must be very simple, because its expressions are evaluated by the type-checker.
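
                                                                                                              A minimal sketch of such a second program in GHC (my example): type-level Peano addition, evaluated entirely by the type checker at compile time. With extensions such as UndecidableInstances this layer becomes Turing-complete, which is the undecidability mentioned above.

                                                                                                                {-# LANGUAGE DataKinds, TypeFamilies #-}
                                                                                                                
                                                                                                                import Data.Proxy (Proxy (..))
                                                                                                                
                                                                                                                -- Type-level Peano naturals: the "second program" lives here.
                                                                                                                data Nat = Z | S Nat
                                                                                                                
                                                                                                                -- Addition that the type checker evaluates during compilation.
                                                                                                                type family Add (n :: Nat) (m :: Nat) :: Nat where
                                                                                                                  Add 'Z     m = m
                                                                                                                  Add ('S n) m = 'S (Add n m)
                                                                                                                
                                                                                                                -- Accepted only because GHC reduces Add ('S 'Z) ('S 'Z) to
                                                                                                                -- 'S ('S 'Z) while type-checking; no term-level code runs.
                                                                                                                two :: Proxy (Add ('S 'Z) ('S 'Z))
                                                                                                                two = Proxy :: Proxy ('S ('S 'Z))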

                                                                                                              I waffled over whether to post this at the top or the bottom of the thread, since I feel that this fact is at odds with how the discussion progressed. After all, what else can one do with a statically-typed program (that is, a pair of interleaved programs, where one is not Turing-complete and is either sound or conservative over the second program), except run it or run just the type-checker portion? Similarly, it is not just obvious that compilation is a variant on running the code, in that it is an evaluation of one entire layer of the program’s code which produces a residual single-layer program, but also that compilation must run the code.

                                                                                                              1. 2

                                                                                                                A typed program is two programs at once; the type system’s annotations themselves form a second program which is intertwined with the original program’s shape. This second program is the one which is run by the type-checker…

                                                                                                                Okay, I am reasonably convinced by this. My major hang-ups might have been implementation details: the distinction between your program and a program running your program can be blurry in some environments (e.g., Dr. Racket).

                                                                                                                Given that there are two interleaved programs that may be run, the typed program’s advantage is that it is side-effect-free by its nature. Unit tests, property tests, fuzzers, etc. are running arbitrary code and could trigger arbitrary effects. I think that is ultimately the distinction that matters: how protected are you from mistakes in your program?

                                                                                                                (Side note: running property tests on a function that sends email could be hilariously bad.)

                                                                                                                I waffled over whether to post this at the top or the bottom of the thread, since I feel that this fact is at odds with how the discussion progressed.

                                                                                                                Well, I appreciate being called out on my position.

                                                                                                                1. 2

                                                                                                                  On this view, what makes the type annotations the second and final program? Could it be three or more programs, since a linter or documentation generator also operates on a language that is intertwined with your program? This is a serious question, btw. I feel like I can think of some answers, but I don’t know that I can reason through them precisely.

                                                                                                                  1. 2

                                                                                                                    You’re quite right. There’s the view that syntax and semantics are the only two components to consider, and it’s a compelling view, rooted in the typical framing of Turing’s and Rice’s Theorems. There’s also the view that dependently-typed theories are at the ceiling of expressive power, and that our separation of types and values is just a happenstance due to choosing weak type systems, which also is tantalizing because of the universal nature of cubical and opetopic type theories. And to answer your point directly, there’s nothing stopping one syntax from having multiple different kinds of annotations which are projected in different ways.

                                                                                                                    I suppose I’m obligated to go with a category-oriented answer. Following Lawvere, syntax and semantics are adjoint and indeed form a Galois connection when viewed in a certain way. So, when we use Rice’s framing to split the properties of a program along such a connection, what we get is a pair of adjoint functors:

                                                                                                                    • On the left, semantics is a functor from programs to families of computer states
                                                                                                                    • On the right, syntax is a functor from families of computer states to programs
                                                                                                                    • The left side of the adjunction: the semantics of a program can include some particular behaviors, if and only if…
                                                                                                                    • The right side of the adjunction: those behaviors are encoded within the syntax of that program

                                                                                                                    And this is a family of adjunctions, with many possibilities for both functors. Rice’s Theorem says that most of the left-hand functors are not computable, but the right-hand functor is not so encumbered; we can compute some properties of syntax, like well-typedness.

                                                                                                                    We can use the free-forgetful paradigm for analysis too: Semantics freely takes any of the possible execution paths, and syntax forgets which path was taken.

                                                                                                                    There is another functor which is left adjoint to semantics, the structure functor (discussed further here).

                                                                                                                    • On the left, structure is a functor from families of computer states to programs
                                                                                                                    • On the right, the same semantics as before
                                                                                                                    • The left side of the adjunction: the structure of some computations can all be found in a particular program, if and only if…
                                                                                                                    • The right side of the adjunction: the semantics of that program can reach all of those computations

                                                                                                                    In the free-forgetful paradigm, structure freely generates the program which it embodies, and semantics forgets structure.

                                                                                                                    Rice didn’t forbid us from analyzing this structure functor either! As a result, we have two different ways to examine a program without running it and invoking the dreaded semantics functor:

                                                                                                                    • We can study syntax backwards: Which machine states are possibly implied by our program? We can logically simplify the program text if it would help.
                                                                                                                    • We can study structure backwards: Which machine states do our program require or assume? We can simplify the states if it would help.

                                                                                                                    This means that analyzing structure is abstract interpretation! Or I’ve horribly misunderstood something.

                                                                                                                    Now I’m quite sorry to have not posted this at the top of the thread! Oh well.

                                                                                                                  2. 1

                                                                                                                    How does this interpretation handle fully inferred types? Is that not truly an analysis pass on the program, i.e., running a program on the source code, not running the source code itself?

                                                                                                                    I’m not sure if there’s any value in making the distinction… But it feels like there must be, because static types are qualitatively different from dynamic types.

                                                                                                                    1. 1

                                                                                                                      The inference is usually syntactic and doesn’t require examining the semantics of the program. This is the sense in which the second program is intertwined with the first program.

                                                                                                                      For example, in many languages, 42 has a type like Int. This fact could be known by parsers. Similarly, (\x -> x + 1) has a type like Int -> Int. Parsers could know this too, since function types are built out of other types. In general, syntactic analysis can discover both the concrete parse tree and also the inferred type.
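
                                                                                                                      For instance, in GHCi (slightly simplified relative to the comment above, since GHC actually infers the more general Num-polymorphic types):

                                                                                                                        ghci> :type 42
                                                                                                                        42 :: Num a => a
                                                                                                                        ghci> :type \x -> x + 1
                                                                                                                        \x -> x + 1 :: Num a => a -> a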

                                                                                                                2. 2

                                                                                                                  Types categorise values; they don’t run code.

                                                                                                        1. 2
                                                                                                          • Matrix
                                                                                                          • Web server
                                                                                                          • Baby Buddy
                                                                                                          • Grocy
                                                                                                          • Plex
                                                                                                          • Home Assistant

                                                                                                          NixOS on a Dell PowerEdge R720 with 128GB of RAM. I also run a bunch of virtual machines for development and SSH into them.