1. 1

    It would’ve been nice to see more of the do notation examples in whatever the other style is called. Bind style?

    1. 2

      There’s not too much of a difference, because in the bookMeeting examples the operations don’t depend on values from previous operations. You just chain with >>, which means “>>=, but ignore the result”

      So instead of

      bookMeeting participants room details = do
        addMeeting details (calendarOf room)
        for_ participants ((addMeeting details) . calendarOf)
        for_ participants ((sendEmail details) . emailOf)
      

      it is

      bookMeeting participants room details =
        (addMeeting details (calendarOf room)) >>
        (for_ participants ((addMeeting details) . calendarOf)) >>
        (for_ participants ((sendEmail details) . emailOf))
      
        1. 1

          whatever the other style is called. Bind style?

          Maybe “point-free style”?

        1. 3

          Neat idea of using Writer to store undo actions.
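
          A minimal sketch of how that idea might look (createFile and rollback are made-up names; undo actions here are plain IO () values):

          import Control.Monad.Writer (WriterT, execWriterT, tell)
          import Control.Monad.Trans.Class (lift)
          import System.Directory (removeFile)

          -- Each step performs its effect and logs an inverse action.
          type UndoT = WriterT [IO ()] IO

          createFile :: FilePath -> UndoT ()
          createFile path = do
            lift (writeFile path "")   -- perform the action
            tell [removeFile path]     -- record how to undo it

          -- Collect the undo actions and replay them in reverse to roll
          -- the script back (in real code you'd only do this on failure).
          rollback :: UndoT a -> IO ()
          rollback script = do
            undos <- execWriterT script
            sequence_ (reverse undos)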

          1. 8

            In addition to cd and ls, I suggest using pushd/popd. They’re built into practically every shell, so there’s nothing to install.

            Apart from that, Fish autocompletion is good enough to remember my “current” project folders.

            1. 2

              Is there a way to automatically push a directory onto the directory stack? And possibly have it remember just the last 10, 20, or however many? pushd/popd are two commands I could never get used to, because I think of them after I need them, not before.

              1. 5

                I think using popd after pushd is seldom useful.

                What I do is prepopulate the dirs stack using pushd -n and then navigate between the “bookmarks” using tilde expansions like cd ~2 which don’t “pop” folders from the stack.

                1. 1

                  I think using popd after pushd is seldom useful.

                  Why? It would be like a “back” button in a browser/file manager.

                  navigate between the “bookmarks” using tilde expansions like cd ~2

                  Meh, I have a “jump” function already where I can use human-readable names.

                2. 2

                  Unfortunately bash doesn’t have zsh’s autopushd. Years ago I wrote two functions that suppress pushd/popd’s noisy output and aliased cd and bd (“back directory”) to them - I’ve found this very handy and use it daily.

                  # don't echo the current stack on pushd
                  function quiet_pushd() {
                      # preserves functionality when aliasing cd to quiet_pushd
                      if [ $# -eq 0 ]; then
                          builtin pushd "$HOME" > /dev/null;
                      else
                          builtin pushd "$@" > /dev/null;
                      fi
                  }
                  
                  # don't echo the current stack on popd
                  function quiet_popd() {
                      builtin popd "$@" > /dev/null;
                  }
                  
                  1. 1

                    In zsh: setopt autopushd

                    I’m unclear about other shells, though it can be emulated by writing a function / alias for cd.

                  2. 1

                    I know about pushd and popd, have a few scenarios in my head where I think they might be useful, but have never used them productively in practice. Sometimes I note in passing that it would be nice to go back more levels into cd history than a simple cd - allows, but those moments are quite rare. And once you catch yourself wanting to do that, it’s already too late to have used pushd.

                    I secretly wish pushd were automatically implemented as a cd history, with a shortcut to trigger it, like maybe cd ^

                    1. 2

                      autopushd and pushdsilent seem to be what you want. I think oh-my-zsh enables them by default, but I’m in the process of setting up my shell from scratch these days (zsh + starship + cherry-picking the cool parts of oh-my-zsh) for performance reasons, and this discussion finally made me search for it.

                      http://zsh.sourceforge.net/Intro/intro_6.html

                      1. 1

                        Thanks for these.

                        I am somewhat on a streak of getting back to a minimal (almost ascetic) setup on my machines, so things like zsh are out of the question. But I will definitely keep the autopushd thing in mind if someday I change my mind.

                  1. 2

                    It’s been many years since I played with Haskell, but the last time I did I remember running into a major stumbling block when I realized not all Haskell code can be entered at the GHCi repl. Maybe definitions can’t be added? I’d appreciate an explanation of how you use the REPL without hitting your shins on that constraint all the time.

                    1. 5

                      It has been many years! Bindings have been supported for 4 years now.

                      https://gitlab.haskell.org/ghc/ghc/issues/7253

                      1. 3

                        One of the key differences with the REPL is that you used to need to use the let keyword for definitions.

                        1. 2

                          Short definitions can be added in the repl, but it’s more common to write them in a file and reload it on each change.
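
                          For example, in a modern GHCi session (double and triple being throwaway names):

                          ghci> let double x = x * 2   -- the old form, with let
                          ghci> triple x = x * 3       -- bare definitions also work now
                          ghci> triple (double 2)
                          12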

                        1. 2

                          I have this script for writing all accented characters in Spanish, French and German to my heart’s content, after being annoyed by standard international layouts https://gist.github.com/danidiaz/583824e50e3667ab50963cc30c7df0ec#file-acentos-ahk

                          I also use AutoHotKey to swap Ctrl and CapsLock on a work laptop for which I don’t have administrator rights.

                          1. 6

                            I’m afraid Haskell wins this one:

                            • Type declarations and value definitions go on separate lines, reducing the amount of noise that ever goes on a single line.
                            • Fancy polymorphic types can be inferred to a much larger extent than in Scala, further reducing the amount of noise.
                            • Pattern matching (with branching!) on the left-hand side of a function definition is actually possible and idiomatic (see the sketch below).
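
                            For instance (safeDiv is a toy example of mine):

                            -- Signature on its own line; the definition pattern matches
                            -- and branches on the left-hand side.
                            safeDiv :: Int -> Int -> Maybe Int
                            safeDiv _ 0 = Nothing
                            safeDiv x y = Just (x `div` y)
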
                            1. 8

                              Not a fan of Haskell in this regard – I think the split of params from their types comes at a cost, as does fancier type inference.

                              But yeah, Haskell (or rather Idris with its single :) may occupy a different local optimum from the one I described.

                              1. 6

                                Haskell is moving even more in this direction. There was the quirk that, while type signatures could be in separate lines, kind signatures (the “types of types”, useful in places like datatype and type family declarations) could not. But there’s a new extension called StandaloneKindSignatures that allows that. For example:

                                type MyEither :: Type -> Type -> Type
                                data MyEither a b = MyLeft a | MyRight b
                                
                                1. 3

                                  I definitely do not like Haskell (hence “I’m afraid”), but credit is given where credit is due. Using polymorphism in Haskell is much more pleasant than using it in a language that requires you to explicitly spell out every single type variable.

                                  What is the cost in separating function signatures from function argument lists?

                                  1. 3

                                     I don’t see how separating the signature would change the properties of type inference, nor would using a single : improve anything in this regard.

                                    Another aspect that would be more in line with the content of your article would be how class constraints are defined:

                                     fromIntegral :: (Num b, Integral a) => a -> b

                                     Here we can see that Haskell kind of took both roads at the same time :)

                                1. 27

                                   Interesting post. The main issues with Nix are its onboarding curve and its lack of simple-to-grok documentation.

                                   There are a few things in this post worth responding to:

                                  Now, you may ask, how do you get that hash? Try and build the package with an obviously false hash and use the correct one from the output of the build command! That seems safe!

                                  Nix has the prefetch-* commands that can do this for you and output either the hash, or a full Nix expression that refers to that thing.

                                  I could avoid this by making each dependency its own Nix package, but that’s not a productive use of my time.

                                  It depends. My personal view recently has been that Nix should adopt a more Bazel-like model, in which the Nix language is also used for describing the actual software builds rather than just wrapping external package managers.

                                  I have implemented this for Go (buildGo.nix / docs) and Common Lisp (buildLisp.nix), and with the Go one specifically external dependencies can be traversed and automatically transformed into a Nix data structure.

                                  For example, here’s the buildGo.nix packaging of golang.org/x/net (from here):

                                  { pkgs, ... }:
                                  
                                  pkgs.buildGo.external {
                                    path = "golang.org/x/net";
                                    src = builtins.fetchGit {
                                      url = "https://go.googlesource.com/net";
                                      rev = "c0dbc17a35534bf2e581d7a942408dc936316da4";
                                    };
                                  
                                    deps = with pkgs.third_party; [
                                      gopkgs."golang.org".x.text.secure.bidirule.gopkg
                                      gopkgs."golang.org".x.text.unicode.bidi.gopkg
                                      gopkgs."golang.org".x.text.unicode.norm.gopkg
                                     ];
                                   }

                                  This makes every subpackage available as an individual Nix derivation, which also means that those builds are cached across different software using those dependencies.

                                  this is at least 200 if not more packages needed for my relatively simple CRUD app that has creative choices in technology

                                  For most mainstream languages generators have been written to wrap 3rd-party package managers automatically. For some languages (e.g. Python), the nixpkgs tree actually contains derivations for all packages already so it’s just a matter of dragging them in.

                                  Oh, even better, the build directory isn’t writable.

                                  This isn’t true for Nix in general. The build directory is explicitly writable and output installation (into /nix/store) usually (in Nix’s standard environment) happens as one of the last steps of a build.

                                  It might be that this was a case of some language-specific tooling implementing such a restriction, in which case there’s probably also a flag for toggling it.

                                  You know, the things that handle STATE, like FILES on the DISK. That’s STATE. GLOBALLY MUTABLE STATE.

                                  The conceptual boundary is drawn differently here. In some sense, we look at the artefacts of realised derivations (i.e. completed “builds”) as a cache. The hashes you see in the /nix/store reference the inputs, not the outputs.

                                   Also, nothing written to the store is mutated afterwards, so while any given store grows over time, it is strictly append-only.

                                  As a side effect of making its package management system usable by normal users, it exposes the package manager database to corruption by any user mistake, curl2bash or malicious program on the system.

                                  I’m not sure what is meant by this.

                                   Edit 1: Ah, this tweet adds some background on the above (I think):

                                  It doesn’t matter how PURE your definitions are; because the second some goddamn shell script does anything involving open(), you lose. The functional purity of your build is gone.

                                  By default, Nix sandboxes builds which means that this is not a problem. Only explicitly declared dependencies are visible to a builder, and only the build directory and output path are writeable. Users can enable various footguns, such as opting out of sandboxing or whitelisting certain paths for passthrough.

                                  1. 6

                                    By default, Nix sandboxes builds which means that this is not a problem.

                                     Only on Linux, unfortunately. The author seems to be on a Mac, which is probably why they didn’t know about the sandboxing.

                                    1. 3

                                      It seems like sandboxing is available on Mac (thanks puck for pointing this out), but for users running Nix in single-user mode (which OP might be doing) there is currently some extra hoop-jumping required to make it work correctly.

                                      1. 1

                                        I was thinking of this line from https://nixos.org/nix/manual/:

                                        In addition, on Linux, builds run in private PID, mount, network, IPC and UTS namespaces to isolate them from other processes in the system (except that fixed-output derivations do not run in private network namespace to ensure they can access the network).

                                        It looks like on a Mac it’s just a chroot to hide store paths, but you can still curl install.sh | bash in your build. I didn’t know it even had that much sandboxing on a Mac though, so thanks for pointing it out.

                                    2. 4

                                      You know, the things that handle STATE, like FILES on the DISK. That’s STATE. GLOBALLY MUTABLE STATE.

                                      The conceptual boundary is drawn differently here. In some sense, we look at the artefacts of realised derivations (i.e. completed “builds”) as a cache. The hashes you see in the /nix/store reference the inputs, not the outputs.

                                      Also, nothing written to the store is mutated afterwards, so while any given store grows over time, it is strictly append-only.

                                      I really like this aspect of Nix: it’s as if all packages exist in some platonic Library of Babel, and we copy a few of them into our /nix/store cache as they’re needed. This style of reasoning also fits with the language’s laziness, the design of nixpkgs (one huge set, whose contents are computed on-demand) and common patterns like taking fixed points to allow overrides (e.g. all of the function arguments called self).

                                      A similar idea applies to content-addressable storage like IPFS, which I’m still waiting to be usable with Nix :(

                                      1. 2

                                        Nix should adopt a more Bazel-like model, in which the Nix language is also used for describing the actual software builds rather than just wrapping external package managers.

                                        Would that involve “recursive Nix” to allow builders to use Nix themselves, in order to build sub-components?

                                        1. 3

                                          Recursive Nix is not necessary. For some languages this can already be done. E.g. the buildRustCrate function reimplements (most of) Cargo in Nix and does not use Cargo at all. This is in contrast to buildRustPackage, which relies on Cargo to do the builds.

                                          You can convert a Cargo.lock file to a Nix expression with e.g. crate2nix and build crates using buildRustCrate. This has the same benefits as Nix has for other derivations: each compiled crate gets its own store path, so builds are incremental, and crate dependencies with the same version/features can be shared between derivations.

                                          1. 2

                                            No, I’m not using recursive Nix for these. In my opinion (this might be controversial with some people) recursive Nix is a workaround for performance flaws of the current evaluator and I’d rather address those than add the massive amount of complexity required by recursive Nix.

                                            What’s potentially more important (especially for slow compilers like GHC or rustc) is content-addressed store paths, which allow for early build cutoff if two differing inputs (e.g. changes in comments or minor refactorings) yield the same artefact. Work is already underway towards that.

                                          2. 2

                                            Can you please edit documentation somewhere to note the existence of the prefetch commands and how to use them?

                                            Does that buildGo.nix thing support Go modules?

                                            1. 7

                                              Can you please edit documentation somewhere to note the existence of the prefetch commands and how to use them?

                                              nix-prefetch-url is part of Nix itself and is documented here, nix-prefetch-git etc. come from another package in nixpkgs and I don’t think there’s any docs for them right now.

                                              Nix has several large documentation issues and this being undocumented is a symptom of them. The two most important ones that I see are that the docs are written in an obscure format (DocBook) that is not conducive to a smooth writing flow and that the docs are an entirely separate tree in the nixpkgs repo, which means that it’s completely unclear where documentation for a given thing should go.

                                              The community disagrees on this to various degrees and there is an in-progress RFC (see here) to determine a different format, but that is only the first step in what is presumably going to be a long and slow improvement process.

                                              Does that buildGo.nix thing support Go modules?

                                              I’ve never used (and probably won’t use) Go modules, but I believe Go programs/libraries written with them have the same directory layout (i.e. are inspectable via go/build) which means they’re supported by buildGo.external.

                                              If your question is whether there’s a generator for translating the Go module definition files to Nix expressions, the answer is currently no (though there’s nothing preventing one from being written).

                                              1. 1

                                                Is there a way to get a hash of a file without making it available over HTTP?

                                                1. 6

                                                  Yep!

                                                  /tmp $ nix-store --add some-file 
                                                  /nix/store/kwg265k8xn9lind6ix9ic22mc5hag78h-some-file
                                                  

                                                  For local files, you can also just refer to them by their local path (either absolute or relative) and Nix will copy them into the store as appropriate when the expression is evaluated, for example:

                                                  { pkgs ? import <nixpkgs> {} }:
                                                  
                                                  pkgs.runCommand "example" {} ''
                                                    # Compute the SHA256 hash of the file "some-file" relative to where
                                                    # this expression is located.
                                                    ${pkgs.openssl}/bin/openssl dgst -sha256 ${./some-file} > $out
                                                  ''
                                                  

                                                  Edit: Oh also, in case the question is “Can I get this hash without adding the file to the store?” - yes, the nix-hash utility (documented here) does that (and supports various different output formats for the hashes).

                                            2. 1

                                              For example, here’s the buildGo.nix packaging of golang.org/x/net (from here):

                                              Proxy error (the link, obviously).

                                              Edit: Back up!

                                              1. 2

                                                Hah, sorry about that - I’m running that web UI on a preemptible GCP instance and usually nobody manages to catch an instance cycling moment :-)

                                              2. 1

                                                Oh, even better, the build directory isn’t writable.

                                                This isn’t true for Nix in general. The build directory is explicitly writable and output installation (into /nix/store) usually (in Nix’s standard environment) happens as one of the last steps of a build.

                                                It might be that this was a case of some language-specific tooling implementing such a restriction, in which case there’s probably also a flag for toggling it.

                                                It’s most likely caused by the derivation either trying to build inside a store path, e.g. cd "${src}" && build, or inside a copy of a store path (which preserves the read-only flags), e.g. cp -a "${src}" src && cd src && build. We can see if that’s the case by looking at the build script in the failing .drv file: they’re plain text files, although they’re horrible to read without a pretty-printer like pretty-derivation. This is probably quicker than trying to get hold of and inspecting the failing derivation in nix repl, since it may be buried a few layers deep in dependencies.

                                                I actually make this mistake a lot when writing build scripts; I usually solve it by putting chmod +w -R after the copy. If someone else has written/generated the build script it may be harder to override; although in that case it would presumably break for every input, so I’d guess the author might be calling it wrong (AKA poor documentation, which is unfortunately common with Nix :( )

                                                It might be a symptom of the Koolaid, but I find this a feature rather than a bug: Nix keeps each download pristine, and forces my fiddling to be done on a copy; although the need to chmod afterwards is admittedly annoying.

                                              1. 2

                                                Do people find R’s lazy evaluation useful, or would a strict R work just as well? R is one of the few mainstream languages besides Haskell that uses lazy evaluation.

                                                1. 5

                                                  It is essential to R’s non-standard evaluation (NSE), which is in turn essential to R’s many DSLs that make interactive usage a joy. An example:

                                                  countries <- read.csv('countries.tsv', sep='\t')
                                                  ggplot(countries) +
                                                      geom_point(aes(x=tax_rate, y=prosperity, colour=continent))
                                                  

                                                  If R had non-lazy evaluation, you’d get an error “undefined variable: tax_rate”. Because it has lazy evaluation, the geom_point function can say “actually, look for a variable called tax_rate inside the countries data frame, and use that for the x coordinate”.

                                                  Plotnine, the Python ggplot clone, is forced to ask for geom_point({'x': 'tax_rate','y': 'prosperity', 'colour': 'continent'}) instead, and those extra quote marks add surprising friction in interactive usage. OTOH, programming over NSE of variable names can be a right pain in the neck compared to programming over which string literals to pass, to the extent that there is a small cottage industry of packages to decouple ggplot/dplyr code from variable names.

                                                  Stopping due to awful keyboard probs and bedtime, let me know if you want links to such pkgs and I’ll come back tomorrow.

                                                1. 2

                                                  A few languages, including Haskell, allow type variables that abstract over type functions (second-order polymorphism)

                                                  Haskell allows that for type constructors like “Maybe” but not for type families (at least until -XUnsaturatedTypeFamilies arrives).

                                                  In fact, defunctionalization is used to mimic higher-order functions at the type level: https://typesandkinds.wordpress.com/2013/04/01/defunctionalization-for-the-win/ https://free.cofree.io/2019/01/08/defunctionalization/
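
                                                  A minimal sketch of the encoding those posts describe (the TyFun/Apply names follow the singletons convention):

                                                  {-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators #-}
                                                  import Data.Kind (Type)

                                                  -- A defunctionalized "symbol": a first-class stand-in
                                                  -- for a type-level function.
                                                  data TyFun :: Type -> Type -> Type
                                                  type a ~> b = TyFun a b -> Type
                                                  infixr 0 ~>

                                                  -- Applying a symbol is interpreted by an open type family.
                                                  type family Apply (f :: a ~> b) (x :: a) :: b

                                                  -- A symbol standing for Maybe.
                                                  data MaybeSym :: Type ~> Type
                                                  type instance Apply MaybeSym a = Maybe a

                                                  -- A "higher-order" type family: it takes a symbol, so no
                                                  -- type family ever needs to be partially applied.
                                                  type family Map (f :: a ~> b) (xs :: [a]) :: [b] where
                                                    Map f '[]       = '[]
                                                    Map f (x ': xs) = Apply f x ': Map f xs

                                                  -- Map MaybeSym '[Int, Bool] reduces to '[Maybe Int, Maybe Bool]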

                                                  1. 7

                                                    The present of Dependent Haskell looks like this (paper).

                                                    1. 9

                                                      I’ve written Java code professionally for about 15 years. My relationship with singletons have gone through these phases:

                                                      1. Singletons are cool! Let’s use them everywhere. (About a year)
                                                      2. Never use singletons! Singletons are a code smell. They break the single-responsibility principle, cause contention problems, create very tight coupling, and often make code hard to test. (About 5-7 years)

                                                      Until finally, where I am today:

                                                      1. Ok. Let’s use singletons where it makes sense.

                                                      We have a very large enterprise Java application and use singletons in a couple of places. Client-side caches, server-side caches, connectivity classes from client to server, global statistics framework, logging framework, global system wide properties and defaults. That’s about it.

                                                      As with everything in programming, I say as in Hill Street Blues: “Let’s be careful out there.”

                                                      1. 4

                                                        From the examples you listed, why not just inject those as dependencies into the calling scope? Why do they have to be singletons?

                                                        1. 4

                                                          Because they are literally used everywhere. We don’t want to be injecting them everywhere.

                                                          Also, singletons are easy to use safely, if well designed. Sadly, the majority of our developers are very inexperienced and, to be frank, not all that strong as developers. It’s easier for them to get connections, logging, or proper use of caches right when it is available, looks, and feels the same everywhere.

                                                          Also, some core code that is highly sensitive to concurrency problems is hidden and handled behind the singleton. Such that the developers don’t have to think about things like that.

                                                          1. 1

                                                            Don’t they make testing harder?

                                                            1. 2

                                                              Yes. Definitely. Testing-wise, it’s a mess. We have had to create very specialised mock versions of them, easy enough to use that we can swap them in for the real singletons in unit tests.

                                                              We have very few unit tests compared to lines of production code. Instead we try to do more continuous testing using BDD with Gherkin that runs against live environments where we can do more system and end-to-end testing.

                                                              But yes. For unit testing. These large singletons are a headache.

                                                      1. 8

                                                        Wow, that is a very unusual introduction to Haskell — going straight into imperative programs (everything’s a do!) and concurrency. And then it just…stops!

                                                        1. 6

                                                          It’s a phrasebook. It gives a way to do something in a language you don’t really know.

                                                          It isn’t idiomatic, it’s just getting you to have something to show for it as quickly as possible.

                                                          1. 6

                                                            It’s a work in progress:

                                                            We have launched the Phrasebook with 14 demonstrations of topics ranging from if-then-else expressions to transactional concurrency, and there is a lot more to come.

                                                            1. 2

                                                              In… a good way? Bad way?

                                                              1. 5

                                                                I don’t know! Well, it’s not good that it just stops. But I wonder what a Haskell book would be like that started with the imperative and concurrent stuff like “normal” languages have, and ended with the higher-order functions and so on, instead of the other way around, as a Haskell book normally does.

                                                                Like, you would start off thinking it was like Go or something, just with weird syntax. You’d get sucked in that way, but then things would start getting more abstract and powerful, and by the end you’d be using pure functions and GADTs and free monads before you knew what hit you.

                                                                1. 3

                                                                  Like, you would start off thinking it was like Go or something, just with weird syntax. You’d get sucked in that way, but then things would start getting more abstract and powerful, and by the end you’d be using pure functions and GADTs and free monads before you knew what hit you.

                                                                  I suspect you might give up, thinking, “what’s the point of this weirdness” before you got to any real motivation or reason to keep learning.

                                                                2. 4

                                                                  I like it, and I am waiting for it to provide more examples. I went through several books, am still reading, and am still trying to learn. But I have already written programs that I use for my work and that are helpful to me. I still mostly reach for shell scripting, because shell scripts grow naturally by combining commands; I wish I used some Haskell shell for my daily stuff that would easily allow me, at some point, to put together Haskell programs.

                                                                  I like how they are showing ghcid early (how long did it take me to settle on ghcid, how many editor/IDE tools did I try), and I like that ghci is introduced. It’s pragmatic.

                                                                  I hope it will go on with many examples.

                                                                  1. 0

                                                                    In… a good way? Bad way?

                                                                    Definitely a bad way.

                                                                    All the weirdness and higher order stuff is there to give you all kinds of guarantees which can be extremely useful.

                                                                    In fact: if you are not using the higher-order stuff, you might just as well use another language that requires you to jump through fewer hoops, because you are missing the whole point of what Haskell is about.

                                                                    You should start with the higher-order stuff and then bolt this phrasebook on as an afterthought, not the other way around. If you start with this phrasebook, you will essentially be writing a bad code base.

                                                                    Please keep in mind that I have actually reviewed assignments from a “Functional Programming” course, which used Haskell as its primary subject of study.

                                                                    1. 9

                                                                      You are gate-keeping, and this behaviour is definitely worse for the community.

                                                                      I’m one of those developers who had no computer science education, and started programming essentially by banging rocks together, trying to pay the bills with WordPress and jQuery.

                                                                      I learned Haskell the trial-and-error way, and the imperative way. My first foray into Haskell was from the book Seven Languages in Seven Weeks, which necessarily doesn’t go very deep into the languages it exhibits. I got some of the basics there, but otherwise trial-and-error, Google, IRC, etc. My first web apps in Haskell were pretty terrible, but I needed to just get something working for me to be more invested in the technology. Everyone sucks at something before they’re good at it anyway. There’s still an enormous amount of Haskell for me to learn. I see that as compelling, not a hurdle.

                                                                      This has not “destroyed my reputation”, as you asserted. If anything it’s only improved it, especially among people who are interested in Haskell but are discouraged by people like you.

                                                                      Now I run three businesses on Haskell, and employ other Haskellers who have worked at Haskell companies you have heard of.

                                                                      1. 6

                                                                        you will essentially be writing a bad code base.

                                                                        But you WILL be writing a code base.

                                                                        1. 1

                                                                          But you WILL be writing a code base.

                                                                          You will be writing a codebase that will force the next competent Haskell developer to throw out all your work and start over. Also: it will destroy your reputation.

                                                                          Honestly, it’s better to not write anything at all if this is your starting point. Just use something else like Python, C/C++, Java or C#. This is simply not how Haskell should be written, and it will probably also perform worse than the alternatives.

                                                                          Why? Because if you use Haskell the right way, the compiler can throw in all kinds of optimizations, like lazy evaluation and memoization, for free. If you are writing Haskell in the way that is proposed in the Phrasebook, you essentially lose all those perks without gaining anything. In fact your code will be much, much (about a factor of 10, actually) slower than it would be if you’d just started out by using a different language.

                                                                          For an elaborate example, you can look at The evolution of a Haskell programmer. Note that the Juniors and the first Senior developer’s solutions are in fact perfectly valid and viable.

                                                                          However, the second senior (which uses foldl) makes a critical mistake which costs him the “lazy evaluation perk”, which means that his best-case and worst-case performance are both O(n), whereas the senior that uses foldr will have O(1) as best case and O(n) as worst case performance.
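
                                                                          To make the laziness difference concrete (a little sketch of my own, not from the linked page):

                                                                          -- foldr can stop early when the operator is lazy in its
                                                                          -- second argument; foldl must walk the whole list first.
                                                                          anyR, anyL :: (a -> Bool) -> [a] -> Bool
                                                                          anyR p = foldr (\x acc -> p x || acc) False   -- best case O(1)
                                                                          anyL p = foldl (\acc x -> acc || p x) False   -- always O(n)

                                                                          -- anyR even [1 ..] returns True; anyL even [1 ..] never returns.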

                                                                          And it goes downhill from there. However the Haskell code I see in the Phrasebook is similar to what the “Beginning graduate Haskell programmer” would do.

                                                                          The “right” way to do it is the “Tenured professor” way at the bottom. It doesn’t matter that product uses foldl’ internally in this case, which also sacrifices lazy evaluation. It’s about a way of doing things in general, where you rely upon the implementation of libraries getting better. This phrasebook throws a lot of those perks out by manually taking control of nearly the entire control flow (which is something you should do as little as possible when you are writing Haskell).

                                                                          That is the kind of “bad codebase you would be writing” we are talking about here. If you find yourself in the situation where you need this phrasebook to get started, you are simply out of your league. The situation is really not unlike the software engineering team that programmed the flight computers of the 737 MAX 8. You should step away and say: “No, I am not up to this task right now. I need at least 120 hours (but 240 hours is a more reasonable estimate) of study before I can do this”.

                                                                          But if you did invest the hours upfront and are using this Phrasebook as an afterthought… sure! Go ahead! You should now know where the pitfalls in these examples are.

                                                                          1. 7

                                                                            One of the authors of this Phrasebook is also an author of Haskell Programming from First Principles, which starts from the lambda calculus. I think she’s deliberately exploring as different an approach as possible. There isn’t a single Right way to teach; the readers’ varied backgrounds and motivations lead them to really different results.

                                                                            1. 1

                                                                              One of the authors of this Phrasebook is also an author of Haskell Programming from First Principles, which starts from the lambda calculus. I think she’s deliberately exploring as different an approach as possible. There isn’t a single Right way to teach; the readers’ varied backgrounds and motivations lead them to really different results.

                                                                              The approach the author is taking now is an approach which defeats the main purpose of Haskell: its type system and a relatively smart compiler that exploits it through lazy evaluation. Because of this, I simply do not agree with this statement.

                                                                              A pilot needs to learn at least some basic meteorology and aerodynamics, and the same applies here, because if you don’t take the time to properly understand the type system and lazy evaluation, you are basically an unlicensed pilot who knows how to get an airplane off the ground, keep it in the air and land it again, but without any contact with air traffic control.

                                                                              I would not want to fly with such a pilot, nor do I want to use an aircraft he/she has flown. In reality we have systems in place to stop this from happening, and the pilot will be told to stay on the ground and “pilot” something (like a car, for example) he/she knows how to pilot. In the software world, we have no system other than our own sound judgement to prevent this from happening.

                                                                              So please: learn Haskell’s fundamentals first and then add this phrasebook to the mix afterwards, or choose an entirely different technology. Everyone who is currently next to you or who comes after you will thank you for it.

                                                                              1. 3

                                                                                Hopefully Haskell can be many things to many people. I think it makes for a pretty good imperative language.
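
                                                                                A small sketch of what I mean (nothing here beyond base):

                                                                                import Data.IORef
                                                                                import Control.Monad (forM_)

                                                                                main :: IO ()
                                                                                main = do
                                                                                  -- a mutable counter and an imperative loop
                                                                                  counter <- newIORef (0 :: Int)
                                                                                  forM_ [1 .. 10] $ \i -> modifyIORef' counter (+ i)
                                                                                  total <- readIORef counter
                                                                                  print total   -- 55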

                                                                            2. 6

                                                                              I’m currently training a team of engineers to write Scala. We’re experiencing the “no code” problem right now. I prefer that people write bad (but functional) code rather than no code.

                                                                              1. 1

                                                                                I’m currently training a team of engineers to write Scala. We’re experiencing the “no code” problem right now. I prefer that people write bad (but functional) code rather than no code.

                                                                                I would agree with you if this was about any other programming language, but Haskell really is a different beast in this regard.

                                                                                I pose you this question: Would you rather spend some time training your engineers or would you rather have them dive in without them knowing what they are doing?

                                                                                Since you are training a team, you’ve probably chosen the first approach, which is exactly what I am proposing you should do with Haskell as well. You do not hand a pilot the keys to an airplane without making sure they’ve had some proper training. The same applies here (see below). Most other programming languages are like cars or trucks, but Haskell really is more of an aircraft.

                                                                                1. 8

                                                                                  I think this type of elitist gate keeping dissuades people trying to learn Haskell and reflects poorly on the community. Furthermore the creators of the Haskell Phrasebook clearly know a lot about Haskell and have built a business around teaching it to people. Do you think it’s possible for them to have a compelling reason to create a resource like this?

                                                                                  @argumatronic: I’ve seen people do similar with Haskell, starting with very imperative-style Haskell, but in the meantime: I can understand you, thank you for making effort to learn a new language, welcome.

                                                                                  1. 0

                                                                                    I think this type of elitist gate keeping dissuades people trying to learn Haskell and reflects poorly on the community.

                                                                                    Actually, I disagree. There is nothing elitist about it. It’s about using a hammer to turn a screw.

                                                                                    Furthermore the creators of the Haskell Phrasebook clearly know a lot about Haskell and have built a business around teaching it to people.

                                                                                    The fact that someone builds a business around something doesn’t mean they are doing things the right way. Teaching people things the wrong way has a tendency to stick around. Oh, and by the way, I also earned money teaching Haskell (and cryptography and security) to people during my studies at an accredited university, under the oversight of a professor leading the development of the language… so I am no lightweight either. And what I see here makes me cringe and would have earned any student a non-passing grade, with the professor’s approval.

                                                                                    Do you think it’s possible for them to have a compelling reason to create a resource like this?

                                                                                    Yes I do. In fact, they state the same reason as I suspected on the Twitter feed you mention:

                                                                                    IME, people start to write more Haskelly Haskell as they get comfortable with it, but we have the tools to write imperative-style Haskell as a bridge, no shame in using them.

                                                                                    And:

                                                                                    Eventually, by doing that a lot, I became quite fluent in Japanese. And I’ve seen people do similar with Haskell, starting with very imperative-style Haskell, but in the meantime: I can understand you, thank you for making effort to learn a new language, welcome.

                                                                                    And like I said, there is nothing wrong with using the phrasebook, but you have to use it after you at least have a firm grasp of the basic concepts. Doing it the other way around will give the community and the language itself a bad name. If nothing else, the Haskell ecosystem will turn into a hack fest similar to Python or Node.js, with the decrease in quality and performance of everything that comes with it.

                                                                                    That’s what I am worried about and it’s also why I disagree: You want people that write Haskell, to write it in a completely different way than you’d write an imperative language.

                                                                    1. 5

                                                                      I like this approach! In particular how it touches on tooling and hashing. I would perhaps have eschewed forkIO and gone directly to the utilities of the async package.
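
                                                                      For example, something along these lines (a minimal sketch using async’s concurrently):

                                                                      import Control.Concurrent (threadDelay)
                                                                      import Control.Concurrent.Async (concurrently)

                                                                      main :: IO ()
                                                                      main = do
                                                                        -- Unlike a bare forkIO, concurrently waits for both actions,
                                                                        -- returns both results, and rethrows an exception from either.
                                                                        (x, y) <- concurrently
                                                                                    (threadDelay 100000 >> pure "first")
                                                                                    (threadDelay 200000 >> pure "second")
                                                                        print (x, y)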

                                                                      1. 3

                                                                        This “Do-It-Yourself Functional Reactive Programming” talk is also good: https://www.youtube.com/watch?v=Rm7rwWL6lIY

                                                                        1. 3

                                                                          Understanding CTEs was a revelation for me. I find it so much easier to compose a non-trivial query from custom sets built with CTEs than from sub-queries, which tend to get cluttered and don’t stand out as obviously to my perception as set building blocks.

                                                                          1. 2

                                                                            Recursive CTEs are also very cool. They somewhat remind me of “unfolds” or “anamorphisms” from functional programming, in that they start from an initial “seed set” of rows and add new rows by repeatedly applying a query to the results from the previous step. https://stackoverflow.com/questions/3187850/how-does-a-recursive-cte-run-line-by-line/3188127#3188127 Maybe they should be called co-recursive CTEs instead!
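
                                                                            The parallel in Haskell would be unfoldr (my analogy, nothing SQL-specific):

                                                                            import Data.List (unfoldr)

                                                                            -- Like a recursive CTE: start from a seed and keep producing
                                                                            -- rows until the step function says to stop.
                                                                            countdown :: Int -> [Int]
                                                                            countdown = unfoldr step
                                                                              where
                                                                                step n
                                                                                  | n < 0     = Nothing
                                                                                  | otherwise = Just (n, n - 1)

                                                                            -- countdown 3 == [3,2,1,0]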

                                                                            1. 3

                                                                              We use recursive CTEs at work to traverse graph structures that we store within the database. It’s really neat, but also a performance nightmare. For large tables, recursive CTEs have a tendency to hold a lot more stuff in memory than they need to. This has caused lots of unintuitive problems for us e.g. our database running out of space because a large recursive CTE query ate up a ton of memory, swapped it all to disk, then timed out and never cleaned the swap files up.

                                                                              1. 1

                                                                                What database engine are you using?

                                                                                1. 1

                                                                                  We’re running on PostgreSQL.

                                                                          1. 7

                                                                            CTEs are great, but it’s important to understand the implementation characteristics as they differ between databases. Some RDBMSs, like PostgreSQL, treat a CTE like an optimization fence while others (Greenplum for example) plan them as subqueries.

                                                                            1. 2

                                                                              The article mentions offhand they use SQL Server, which AFAIK does a pretty good job of using them in plans. I believe (not 100% sure) its optimiser can see right through CTEs.

                                                                              1. 2

                                                                                … and then you have RDBMSs like Oracle whose support for CTE is a complete and utter disgrace.

                                                                                I’m praying for the day Oracle’s DB falls out of use, because I imagine that will happen sooner than them managing to properly implement SQL standards from 20 years ago.

                                                                                1. 2

                                                                                  At university we had to use Oracle, via the iSQL web interface, for all the SQL-related parts of our database courses. It was the slowest, most painful experience: executing a simple select could take several minutes, and navigating the interface/paginating results would take at least a minute per operation.

                                                                                  I would always change it to show all results on one page (no pagination), but the environment would do a full reset every few hours, requiring me to spend probably 15-30 minutes changing the settings back to my slightly saner defaults. Every lab would take at least twice as long because of the pain of using this system. I loved the course and the lecturer - it was probably one of the best courses I took during my time at university - but I did not want to use Oracle again after that point.

                                                                                  I’ve heard that they nowadays have moved the course to use PostgreSQL instead which seems like a much more sane approach, what I would have given to be able to run the code locally on my computer at that time.

                                                                                2. 1

                                                                                  I didn’t know this, so using a CTE in current Postgres would be at a disadvantage compared to subqueries?

                                                                                  Haven’t really used CTEs in Postgres much yet, but I’ve looked at them and considered them. Are there any plans to enable optimization through CTEs in pg? Or is there a deeper, more fundamental underlying problem?

                                                                                  1. 5

                                                                                    would be at a disadvantage compared to subqueries

                                                                                    it depends. I have successfully used CTEs to circumvent shortcomings in the planner, which was mis-estimating row counts no matter what I set the stats target to (this was also before create statistics).

                                                                                    Are there any plans to enable optimization through CTEs in pg

                                                                                    it’s on the table for version 12

                                                                                    1. 2

                                                                                      It’s not necessarily less efficient due to the optimization fence, it all depends on your workload. The underlying reason is a conscious design decision, not a technical issue. There have been lots of discussions around changing it, or at least to provide the option per CTE on how to plan/optimize it. There are patches on the -hackers mailing list but so far nothing has made it in.

                                                                                    2. 1

                                                                                      Does anyone know if CTEs are an optimization fence in DB2 as well?

                                                                                    1. 9

                                                                                      Annoyingly, this is only a ZIP file with no installer

                                                                                      This counts as an improvement for me. Things like automated deployments and building Docker images will be simpler now.

                                                                                      1. 3

                                                                                        This change is mostly visible to desktop Windows users, where automated deployments are done with .msi files and Docker isn’t really a thing.

                                                                                        1. 2

                                                                                          As a Java developer at $work, having half a dozen JREs and JDKs installed at any one time, with exactly four different shell environments (this is on Windows), I’ve preferred the zip-file installation method for some time. I leave the system-wide path clear of any Java install, too, so that when I try to run something on the JVM, it fails until I explicitly pick an environment. This saves more time than it costs!

                                                                                          sebboh@workstation MINGW64 ~
                                                                                          $ find /c/Program\ Files/Java/ /d/java -maxdepth 1 && find /d -maxdepth 1 -iname \*eclipse*
                                                                                          /c/Program Files/Java/
                                                                                          /c/Program Files/Java/ojdkbuild.windows.x86_64-1.8.0.171-1.b10
                                                                                          /d/java
                                                                                          /d/java/adopt-jdk-10.0.2+13
                                                                                          /d/java/openjdk8u172-b11
                                                                                          /d/java/oracle-jdk-8u181
                                                                                          /d/java/oracle-jre-7u45
                                                                                          /d/java/oracle-jre-8u181
                                                                                          /d/eclipse-201809
                                                                                          /d/eclipse-luna
                                                                                          /d/eclipse-mars
                                                                                          /d/eclipse-neon
                                                                                          /d/eclipse-oxygen
                                                                                          
                                                                                        1. 8

                                                                                          I prefer Vim’s approach of using JSON for communicating with external jobs instead of using MessagePack like Neovim does. I’m also not too keen on inventing a new binary format as a viminfo replacement.

                                                                                          It is my understanding that Neovim wasn’t available on Windows for quite a while. I’m all for shedding features in order to facilitate building new ones; it might take Neovim in interesting new directions. But then the new project should not be considered a drop-in replacement for the old one.

                                                                                          (I even kind of like VimScript, despite its weirdness.)

                                                                                          1. 11

                                                                                            I’m also not too keen on inventing a new binary format as a viminfo replacement.

                                                                                            viminfo is an ad-hoc unstructured format. shada is trivially parsable (and stream-able), because it’s just msgpack. It’s misleading to call it a “new binary format”; it’s just plain old msgpack with some sentinel values for guaranteed round-tripping to/from VimL.

                                                                                            It doesn’t make sense to prefer viminfo, which isn’t a format at all.
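                                                                                            To show how trivially parsable it is, here’s a minimal sketch using Python’s msgpack package (the file path is hypothetical; the four-objects-per-entry layout is described in Neovim’s :h shada-format):

                                                                                            import msgpack

                                                                                            # Stream every msgpack object out of a shada file. Per :h shada-format,
                                                                                            # objects come in groups of four: entry type, timestamp, data length,
                                                                                            # and the entry data itself.
                                                                                            with open("main.shada", "rb") as f:
                                                                                                for obj in msgpack.Unpacker(f, raw=False):
                                                                                                    print(obj)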

                                                                                            1. 1

                                                                                            Is anyone among us using OniVim? I’m wondering how it compares to Vim, Atom, VSCode, and Sublime Text.

                                                                                            1. 3

                                                                                              Could a language like Coq be used for purposes similar to TLA+ while also being able to extract useful programs?

                                                                                              1. 1

                                                                                                Yes. If your specification language of choice (and/or a framework built on top of it) can do synthesis and code generation, it’d be able to derive correct implementations of your spec.
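                                                                                                As a toy illustration of the idea (written in Lean rather than Coq, but the shape is the same): you state a spec, prove an implementation against it, and the implementation stays ordinary executable code rather than a model you can only check.

                                                                                                -- Implementation: an ordinary, executable function.
                                                                                                def myMax (a b : Nat) : Nat := if a ≤ b then b else a

                                                                                                -- Spec and proof: the result is an upper bound on both arguments.
                                                                                                theorem myMax_correct (a b : Nat) : a ≤ myMax a b ∧ b ≤ myMax a b := by
                                                                                                  unfold myMax
                                                                                                  split <;> omega

                                                                                                -- Unlike a pure TLA+ spec, myMax is code you can just run:
                                                                                                #eval myMax 3 7  -- prints 7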

                                                                                              1. 1

                                                                                                I feel your pain, but this is the efficiency of the free market! ;-)

                                                                                                Whether or not a hyperlink is broken on the web still relies entirely upon the maintenance of the page pointed to, despite all hypertext projects prior to the 1992 Berners-Lee project having solved this problem.

                                                                                                Great, you made me feel young! :-)

                                                                                                What are you talking about?

                                                                                                I cannot imagine how they could fix the arcs of a graph they do not entirely control after modifying a bunch of nodes they own…

                                                                                                Can you share more details?

                                                                                                  1. 2

                                                                                                    As the resident hypertext crank, pretty much every time I say “hypertext” I’m referring to Project Xanadu. However, Xanadu was only slightly ahead of the twenty or thirty commercial hypertext systems available in the 1980s in solving this problem.

                                                                                                    TBL’s pre-web hypertext system, Enquire, also didn’t have the breaking-hyperlink problem.

                                                                                                    Other than “make addresses permanent”, I don’t think Xanadu’s solutions to this problem are the best ones, personally. I prefer distributed systems over centralized services, and prior to the early 90s, Xanadu addressing schemes were intended for use with centralized services; lately, Xanadu addressing schemes are actually just web addressing schemes, leaning on The Internet Archive and other systems to outsource promises of permanent URLs. I prefer named-data, and the hypertext systems I’ve worked on since leaving Xanadu have used IPFS.

                                                                                                    “Make addresses permanent” is also a demand made but not enforced by web standards. Nobody follows it, so facilities based on the assumption of permanent addresses are broken or simply left unimplemented.

                                                                                                  2. 1

                                                                                                    Modifying hypertext documents is a no-no (even, theoretically, on the web: early web standards consider changing the content pointed to by a URI to be rude & facilities exist that assume that such changes only occur in the context of renaming or cataclysm; widespread use of CGI changed this).

                                                                                                    The appropriate way to do hypertext with real distribution is to replace host-centric addresses (which can only ever be temporary because keeping up a domain name has nonzero cost) with named-data networking (in other words, permanent addresses agnostic about their location) & rely upon the natural redundancy of popular content to invert the current costs. (In other words, the bittorrent+DHT model).

                                                                                                    A modified version of a document is a distinct document & therefore has a distinct address.
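                                                                                                    A toy sketch of that property (plain Python; no relation to any real IPFS or NDN API): when the address is a hash of the content itself, editing a document necessarily mints a new address, and the old address keeps resolving to the old bytes:

                                                                                                    import hashlib

                                                                                                    store = {}  # toy content-addressed store: address -> document bytes

                                                                                                    def put(document: bytes) -> str:
                                                                                                        """File a document under the hash of its own content."""
                                                                                                        address = hashlib.sha256(document).hexdigest()
                                                                                                        store[address] = document
                                                                                                        return address

                                                                                                    a1 = put(b"original document")
                                                                                                    a2 = put(b"original document, revised")
                                                                                                    assert a1 != a2                           # a modified version is a distinct document
                                                                                                    assert store[a1] == b"original document"  # the old link cannot be broken by edits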

                                                                                                    This kind of model would not have been totally unheard of when TBL was designing the web, but it would have been a lot more fringe than it is now. Pre-web hypertext, however, typically had either a centralized database or a federation of semi-centralized databases to ensure links didn’t break. (XU88 had a centralized service but depended on permanent addresses, part of which were associated with user accounts, and explicit version numbering: no documents were modified in place, and diffs were tracked so that links to a certain sequence in a previous version would link to whatever was preserved, no matter how munged, in later versions.)