Threads for graph_

  1. 4

    What’s the difference between refinement types and dependent types? The types as written in this post seem to meet the definition of dependent types (a type that depends on something that is not a type) because various integer values are present in the type statement. Is the difference that refinement types can only be a subset of a larger type?

    1. 8

      IANATT (type theorist), but I believe refinement types are understood to be written in a limited language that can be fed into a SAT solver, whereas dependent types require that types can be used as values and vice versa ‘anywhere’ in the language.

      1. 4

        SMT solvers as well

      2. 2

        With dependent types you could write down a function that takes a boolean and varies the type of its second parameter according to the value of the boolean, at runtime. I don’t think that liquid-rust and liquid-haskell support this.

        [Edit: You could simulate this in liquid-haskell with something like b:Bool -> {e:Either String Int | if b then (isLeft e) else (isRight e)} -> ... but this is somewhat different from what you get in Agda and Coq. Refinement types here refine the Either String Int to being just one of its variants according to the value of the boolean, but in a dependently typed language you’d just have the two different types.]
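
        To make the contrast concrete, here is a sketch (all names are mine) of the closest standard Haskell gets without full dependent types: the boolean has to be mirrored by a singleton GADT so the type checker can see its value, whereas in Agda or Coq you would branch on the ordinary runtime boolean directly.

        ```haskell
        {-# LANGUAGE DataKinds, GADTs, TypeFamilies #-}

        -- A singleton: one constructor per type-level Bool, so pattern
        -- matching reveals the type-level value at runtime.
        data SBool (b :: Bool) where
          STrue  :: SBool 'True
          SFalse :: SBool 'False

        -- Type-level conditional.
        type family If (b :: Bool) t f where
          If 'True  t f = t
          If 'False t f = f

        -- The second argument's type varies with the (singleton) boolean:
        -- a String when the flag is true, an Int otherwise.
        describe :: SBool b -> If b String Int -> String
        describe STrue  s = "got string: " ++ s
        describe SFalse n = "got int: " ++ show n
        ```

        The singleton is the give-away: a plain Bool argument would erase the information the type checker needs, which is exactly the gap between this encoding and true dependent types.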

      1. 2

        Is there a runtime component to this, or is it entirely static?

        1. 1

          I believe it’s built on liquid-fixpoint, which is the same backend that liquid-haskell uses, and so my guess is that it’s similarly a compile time analysis only.

        1. 4

          I don’t think introducing lenses, prefacing the post with language extensions, and using TH for a multiline string amounts to “easy json parsing.” Here’s a stub using only base and aeson to implement an off-brand jq in 35 lines.

          $ cat json | stack run ?people _0 ?hobbies _0 ?name
          "bridge"
          

          import Data.Char (isDigit)
          import Data.Foldable (toList)
          import Data.String (fromString)
          import System.Environment (getArgs)
          
          import Data.Aeson
          import Data.Aeson.Types
          import Data.ByteString.Lazy as BS (ByteString, interact)
          
          main :: IO ()
          main = do
              args <- getArgs
              BS.interact (either error encode . decodeAndSearch args)
          
          decodeAndSearch :: [String] -> ByteString -> Either String Value
          decodeAndSearch args bytes = do
              val <- eitherDecode bytes
              parseEither (search args) val
          
          search :: [String] -> Value -> Parser Value
          search [] val = return val
          search (arg:args) val =
              case arg of
                  '_':ds | not (null ds), all isDigit ds ->
                      flip (withArray "Arr") val $ \arr -> do
                          let idx = read ds
                              xs = toList arr
                          if idx < length xs
                          then search args (xs !! idx)
                          else fail $ "Index out of bounds for array: " ++ show (idx, length xs)
                  '?':key ->
                      flip (withObject "Obj") val $ \obj -> do
                          fld <- parseField obj (fromString key)
                          search args fld
                  _ -> fail $ "Use _<digits> to index arrays or ?<key> to lookup in an object. Saw: " ++ show arg
          
          1. 2

            jq is off-brand lens!

          1. 4

            Forgive me if I missed it. What is Web Assembly doing here? It sounds like the runtime uses the web assembly output? What is the benefit of this instead of running the rust code for whatever hardware your server is running?

            Interesting project either way! The api seems nice at first glance.

            I see security listed as a web assembly benefit, but that is for running untrusted code right? I imagine your web server code is trusted.

            1. 18

              It replaces async/await. A WASM interpreter is able to suspend WASM code at any time, without involving OS threads or explicit futures.

              1. 4

                If you have any articles about that, please post them. I read the lunatic readme and it’s a little light on details. It mostly just described how it inherits the benefits of webassembly.

                1. 5

                  It definitely leverages a lot of WebAssembly’s features (e.g. very lightweight sandboxing, support for various languages, etc). The wasmtime documentation (the WebAssembly JIT which Lunatic uses) has a lot of information about how it executes WebAssembly, as well as the API it provides to manipulate WebAssembly programs and interface with them while running them.

                  In terms of concrete things Lunatic does (as I understand it)

                  • all IO operations are processed asynchronously on the Rust side (so other processes can continue to run even while one process is waiting on the result of a blocking IO operation)
                  • every process is only permitted to run for a certain number of instructions before it is paused so that a different process can run
                2. 4

                  By the time you’re selecting which interpreter to suspend and resume, you’ve reinvented a thread scheduler. However, your thread scheduler has a VM save/restore in the loop. You’d need to measure, but it may not be faster than just using a thread.

                3. 2

                  The wasm is because this is built on lunatic, which is a wasm runtime

                1. 2

                  This is great! I still have (and use) an HP Mini 210 with nixos. I unfortunately scratched the screen awhile back, and the battery life is down to 2 hours.

                  I can’t wait for mini laptops with e-ink screens!

                  1. 7

                    Since the root of the problem is that Ratio can become denormalized, this seems more like a bug than a wat.

                    1. 3

                      Yeah, it seems the core quirks are strictly incorrect behaviour, but those behaviours also stem from Ratio lacking signal values for undefined results. NaN exists in floating point not because such values are hard; it’s because some values are not definable on any number line. Ratio is exact rational arithmetic, so it can’t avoid values that aren’t representable as a finite number, and so needs +/-Infinity; it also can’t avoid non-values, so it should have NaN.

                      Lacking those makes its basic arithmetic incorrect, so also failing to ensure correct normalization isn’t too surprising :-/
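
                      The denormalization is easy to reproduce, since GHC.Real exposes the raw constructor that (%) normally hides; a small sketch:

                      ```haskell
                      import GHC.Real (Ratio ((:%)))
                      import Data.Ratio ((%))

                      -- (%) normalizes: it divides out the gcd and keeps the
                      -- denominator positive. The raw :% constructor does not.
                      half :: Rational
                      half = 1 % 2

                      -- Never produced by (%), but constructible via GHC.Real.
                      denormalizedHalf :: Rational
                      denormalizedHalf = 2 :% 4

                      -- Eq on Ratio is derived structurally, so two representations
                      -- of the same rational number compare as unequal.
                      sameValue :: Bool
                      sameValue = half == denormalizedHalf  -- False
                      ```

                      Everything in Data.Ratio assumes the invariant that (%) establishes, so once a denormalized value sneaks in, Eq, Ord, and arithmetic all quietly misbehave.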

                    1. 22

                      I find that, having written a lot of Haskell and done a lot of pure math, I am less and less interested in using category theory to describe what I am doing. The category theory seems like the implementation detail, and not the thing I care about. The way I have come to think of things like monads is more like model theory: the monad is a theory, and there are models of it. Those models happen to be isomorphic to categorical constructions.

                      This text seems to suffer from the same problem as every other category theory text: where are the theorems? There are a lot of definitions here, but definitions are cheap. They alter over time to keep theorems true as new corner cases are found. If someone knows of a category theory text that isn’t a giant collection of definitions, I would love a pointer to it.

                      1. 16

                        I feel that I am in the same boat. I enjoy Haskell and mathematics. It brings me intense joy to use mathematics to gain new insights into other areas, so I leapt at the opportunity. Just like your experience, the first few chapters were purely defining terms, but I kept pushing through. Finally, we started getting into some relations.

                        Given f : A → (B, C), ∃ g : A → B and h : A → C
                        

                        This was treated as some mind-blowing revelation on par with the Euler identity. Pages and pages of exercises were dedicated to showing the applicability of this formula across domains. There was an especially deep interest in showing real-world examples: that by asking a barber for a Shave and a Haircut, then telling them to skip the shave, I could get just a haircut.

                        The next chapter introduced symmetric monoidal categories with the earth-shattering property that I could take a value (B, C) and get a value (C, B). There was a brief mention that there existed braided categories without this property, and that they had deep connections to quantum mechanics (and my thesis!), but that they were outside the scope of this book. What was in scope was a collection of example problems about how, given waffles and chicken, we can construct a meal of chicken and waffles.
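
                        In Haskell terms (my sketch, not the book’s), both of those “revelations” are one-liners:

                        ```haskell
                        import Data.Tuple (swap)

                        -- The universal property of the product: from f : A -> (B, C)
                        -- we recover g : A -> B and h : A -> C by post-composing
                        -- the projections.
                        g :: (a -> (b, c)) -> a -> b
                        g f = fst . f

                        h :: (a -> (b, c)) -> a -> c
                        h f = snd . f

                        -- The symmetry of the (,) monoidal structure:
                        -- swap :: (b, c) -> (c, b).
                        meal :: (String, String)
                        meal = swap ("waffles", "chicken")  -- ("chicken", "waffles")
                        ```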

                        1. 3

                          Stellar comment, category theory (at least at my depth) tends to explain the obvious with the obscure.

                        2. 6

                          It makes sense to me why a theory of associative maps (to put it glibly) might be useful for someone designing high-level abstractions, since it can help to identify what the important invariants of those abstractions should be. What chafes a little for me in Haskell’s aesthetic embrace of category theory is precisely that a lot of its enthusiasts have the opposite inclination from you and seem to want to refer everything about what they’re doing to category theory. This feels deeply silly to me, because the vast majority of the interesting semantic properties of any given program are carried by concrete implementors of those category-theoretic abstractions. It’s nice that they’re there to provide the interfaces and invariants that allow me to productively compose concretions, but the generality that allows them to do that is also what prevents them from having much to say about what my program actually does. At the risk of seeming uncharitable, I wonder if a lot of the fetishism is down to Haskell being the most obvious place to go for people who belatedly realized that they’d have preferred to pursue pure math than be focused on software.

                          If someone knows of a category theory text that isn’t a giant collection of definitions, I would love a pointer to it.

                          I think Emily Riehl’s Category Theory in Context is one such text. It’s pretty typical of a math text targeted at mathematicians in its very terse Definition -> Theorem -> Worked Proof -> Exercises format, but the balance between those elements seems similar to anything else in the genre.

                          1. 3

                            I think that there’s computer science and there’s software engineering and Haskell happens to be a good tool for either. As a result you get a lot of writings out of scope for any given person’s interest.

                            1. 2

                              Absolutely! It’s just been my experience that a lot of the prominent writing about Haskell from the computer science perspective in particular tends to defer overwhelmingly to category theory in a way that feels reductive to me. It’s certainly possible that I’m just working with a non-representative sample.

                              1. 2

                                FWIW as someone who’s done a decent amount of higher math, I agree.

                        1. 6

                          Put it in the types:

                          The Haskell library dimensional goes a long way to solving this problem once and for all. https://hackage.haskell.org/package/dimensional

                          If this sort of thing were in widespread use in standard libraries, it would be wonderful.
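
                          For a flavour of the approach without pulling in the full library, here is a minimal phantom-type sketch (the Quantity type and unit labels are invented for illustration; dimensional itself tracks complete dimension vectors and handles unit conversions):

                          ```haskell
                          {-# LANGUAGE DataKinds, KindSignatures #-}

                          import GHC.TypeLits (Symbol)

                          -- A number tagged with a unit that exists only at the
                          -- type level.
                          newtype Quantity (unit :: Symbol) = Quantity Double
                            deriving (Eq, Show)

                          -- Addition is only defined when the units match; adding
                          -- metres to seconds is a compile-time error rather than
                          -- a runtime bug.
                          add :: Quantity u -> Quantity u -> Quantity u
                          add (Quantity a) (Quantity b) = Quantity (a + b)

                          distance :: Quantity "m"
                          distance = Quantity 100

                          time :: Quantity "s"
                          time = Quantity 9.58

                          -- add distance time   -- rejected by the compiler: "m" /= "s"
                          ```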

                          1. 4

                            I like using type systems to keep programmers on the right path, but we have to be careful not to over-do it either. The job of a programmer ought to be to efficiently transform data; with very invasive types, the job often changes to pleasing the type checker instead. A programmer should create a solution by thinking about what the data looks like, what its characteristics are, and how a computer can efficiently transform it. If a library provides extremely rigid types, the programmers start to think in terms of “what do the types allow me to do?”; if the library tries to address this rigidity by using more advanced features of the type system to make usage more flexible, the programmer’s job is now to deal with increased accidental complexity.

                            Looking at the doc for dimensional, I find that the cost-benefit is not one that I would make. The complexity is too high and that’s going to be way more of a problem in the short and long run.

                            1. 3

                              The job of a programmer ought to be to efficiently transform data; with very invasive types, the job often changes to pleasing the type checker instead.

                              I’ve always seen “pleasing/fighting the type checker” as a signal that my problem domain is necessarily complex. If I need to implement a stoichiometry calculator, for example, I’d much rather use dimensional or a similar library that allows me to say “we start with grams of propane, we end with grams of water” and let the type checker confirm that my intermediate equation actually makes sense. The alternative I could see is… what, extensive unit tests and review? Those rarely worked for me in chemistry class ;)

                              1. 1

                                Another upside of types is that they nudge you toward rendering them correctly as user-facing (or debugger-facing) strings.

                              2. 1

                                At work we have a custom units library that wraps the units library. We also provide a bunch of instances for the numhask hierarchy. This pair has felt like a sweet spot. We mostly work with geometric quantities (PlaneAngle and Length), and have had great success lifting this up to vector spaces (e.g., Point V2 Length represents a point, V2 Length is a vector, (+.) lets us combine points and vectors, etc.).

                            1. 2

                              I received a bill from Prgmr a few days ago, and confirmation of payment to TornadoVPS today. I can’t say the lack of a notification is comforting.

                              Apparently we have the rest of this month to fix references to prgmr.com in configurations and DNS.

                              1. 2

                                I appreciate your forbearance over the deadline. If you find yourself needing more time, reach out to support@tornadovps.com and I’ll forward the request to retain your DNS entries.

                                1. 1

                                  Oh! Was not expecting a reply of this sort. Thank you. I’m just griping.

                                  Your announcement email got caught by gmail’s spam filter for me. I’ve marked it “not spam”, but I wonder if other customers had a similar issue.

                                  1. 2

                                    I did, too.

                              1. 1

                                This strikes me as very similar to an IPython or Jupyter notebook, but for Coq. Perhaps there might be something gained by using that existing infrastructure with Coq?

                                1. 31

                                  Serve static html from the backend, and forego dynamic behavior on the front end. You’ll spend a lot less time doing things unrelated to your goal.

                                  1. 11

                                    This is the answer. Don’t mess around with SPAs, they will make your life much harder. Just write a traditional web app. Use express.js if you know JS, or sinatra if you know ruby, etc. These are tools you can actually start using in an hour or less, that can serve routes in a few lines of code, and whose features you can learn incrementally as needed. If you want to add a little slickness to your UI (transitions, fetches without page reloads, etc), while staying in the classic html/css paradigm you already know, have a look at https://htmx.org/.

                                    1. 2

                                      Use express.js if you know JS, or sinatra if you know ruby, etc

                                      It’s C++. There’s a lot of it already; the UI is just a layer on top. As I said I used a couple of libraries to give me a rudimentary routing system (based on path regexes) and Jinja-like templates.

                                      1. 2

                                        I have a lot of webdev experience with various stacks, but not C++. From my (limited) general experience with C++, I would imagine it would be a cumbersome web development experience, but if it’s working for you, go for it. I had assumed you were asking for something else. Either way, I think my original advice remains the same. Backend language you are familiar with, minimalist framework you can learn very quickly (or none at all, eg, in golang the std lib is enough), and htmx for UI effects/fanciness if needed.

                                        EDIT:

                                        From your other reply:

                                        And I wouldn’t be considering HTML for the UI at all if it couldn’t do dynamic behavior; that’s table stakes. It’s not 1995 anymore and my goal isn’t to rebuild Geocities :)

                                        It really depends how complex the dynamic parts need to be. At some point, if the frontend logic gets out of control, something like React may be warranted. But ime most sites that use React don’t need it. For a “social-networking”-esque site, I could see this going either way. But my instinct is still to start with a traditional architecture augmented by htmx until you find yourself needing more.

                                    2. 7

                                      The HTML has to be dynamically generated based on the (C++) model layer. This isn’t a static website. But I’m fairly satisfied with the C++ template engine I’m using.

                                      And I wouldn’t be considering HTML for the UI at all if it couldn’t do dynamic behavior; that’s table stakes. It’s not 1995 anymore and my goal isn’t to rebuild Geocities :)

                                      1. 6

                                        I have written… a significant amount of javascript in the past ~15 years.

                                        I would recommend you start by thinking carefully about where you want the state to live. Having some state in the javascript, some in the URL, some in HTTP cookies and some in the server (‘stateful connections’) is a recipe for trouble. IME, storing auth state in cookies and everything else in the URL has consistently been the most compatible with how browsers want pages to behave, at the cost of restricting how much state you can store (to a few kb).

                                        For most content, it’s easier to get a better result if you use <a href links rather than javascript (generating different HTML in the server process based on the URL).

                                        For parts which constantly receive new input from the server (eg a chat window), you can render an iframe to a URL which holds the connection open (using chunked encoding) and sends new content as it arrives. This requires very little code, is fast, and retains state correctly across page refreshes with no action on your part.

                                        Beyond that, a very small amount of javascript to toggle classes is typically enough.

                                        1. 1

                                          Thanks. Fortunately there is pretty much no UI-specific state I need to keep. It’s all running on a client device so there’s by definition only a single user.

                                        2. 5

                                          Here’s a few fairly powerful vanilla techniques:

                                          • Replacement for onclick: document.querySelectorAll(...).forEach(el => el.addEventListener('click', event => { const element = el; ... })) (querySelectorAll returns a NodeList, so each element needs its own listener)
                                          • Dynamically fetch HTML rendered by the server: fetch("/url...", { headers: { 'Accept': 'text/html' } }).then(resp => resp.text()).then(htmlText => { element.innerHTML = htmlText; })
                                      1. 15

                                        Try specifying infrastructure in a tool like Terraform and see how things work. It’s a way to “get your hands dirty” without ending up with a bunch of garbage resources messing up your mental model. You can tear everything down with one command. Also, the Terraform docs for AWS serve as a concise summary of all the types of resources and how they fit together.

                                        1. 8

                                          This is how I use AWS. Period. I have almost never touched the web console, except to manage IAM and Route53 zones that I treat as data and not resources in Terraform. Oh, and I think I’ve created a few S3 buckets by hand so that they’re treated similarly. That is, so that a terraform destroy doesn’t inadvertently cause me to, uh, activate my backup restoration procedures in a flurry of expletives.

                                          1. 2

                                            Do you have any thoughts on Pulumi? It’s been recommended to me, but at this point I barely understand the difference between it and Terraform (I haven’t used either). I’m trying to pick a tool to do DevOpsy stuff and was going to go with Terraform mostly on the basis that I’ve heard of it and it’s not Chef 😅

                                            1. 1

                                              I’ve not used Pulumi yet. It was in its infancy when I made a significant investment in Terraform.

                                              I came up with https://github.com/colindean/terraforming-with-types at about the same time I heard of Pulumi. I left the org wherein I was using Terraform heavily shortly thereafter and never picked it back up seriously. I just started using Terraform at my current job about three weeks ago to manage our GitHub sprawl.

                                              1. 1

                                                Interesting about Terraforming-with-types. One thing that appeals about Pulumi is its support for Typescript, which we’ll be using across the rest of the codebase.

                                          2. 4

                                            +1 for not using the UI

                                            I tried terraform superficially and was a bit surprised about its restrictions. Maybe what I tried was a bit too complicated.

                                            CDK allows you to create the declarative resources spec in a programming language.

                                            I am mostly using CDK now but ironically with quite simple projects which would have also worked with terraform, I guess.

                                            https://docs.aws.amazon.com/cdk/v2/guide/home.html

                                            1. 4

                                              We’re also using cdk at work. I quite like it. But is it possible to explore AWS with cdk as a beginner? I always take the opposite approach: I look at the service I want to use in the web interface. And when I understood the service, I write my cdk code.

                                              1. 1

                                                I think the documentation is often so focused on the UI but I pretty much always use cdk or something else in version control. Otherwise I quickly forget how I set it up.

                                                I like UIs for inspection but hate them for setting up things.

                                            2. 1

                                              To those wanting to go down this path: there is a very approachable series on YouTube on managing AWS with Terraform.

                                            1. 8

                                              So what happened next?

                                              1. 15

                                                Next GHC2021 happened. https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/exts/control.html#extension-GHC2021

                                                GHC2021 is a basket of language extensions which are useful and less controversial. New compilers could possibly target GHC2021 instead of falling prey to the author’s concern:

                                                As long as Haskell is defined implicitly by its implementation in GHC, no other implementation has a chance — they will always be forever playing catch-up.

                                                Again, whether it be Haskell98 or Haskell2010 or GHC2021, new compilers don’t have to implement every research extension explored by the GHC folks. I think the concern is overplayed.
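
                                                For instance, with GHC ≥ 9.2 a module opts into the whole bundle with a single pragma; TypeApplications (used below) is one of the many extensions GHC2021 enables, replacing a long per-extension pragma list (a minimal sketch):

                                                ```haskell
                                                {-# LANGUAGE GHC2021 #-}

                                                -- reads @Int relies on TypeApplications, which GHC2021 turns on.
                                                readInt :: String -> Maybe Int
                                                readInt s = case reads @Int s of
                                                  [(n, "")] -> Just n
                                                  _         -> Nothing
                                                ```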

                                                1. 10

                                                  Lacking a written standard, whatever GHC does has become the de facto definition of Haskell.

                                                  What that means depends on who you ask. Some people are happy with this situation. It allows them to make changes to the language that move it forward without a standard to get in the way.

                                                  Others are very unhappy. Haskell has been making constant breaking changes lately.

                                                  In my opinion the situation is disastrous and after using Haskell for over a decade, as of last year I no longer build any production system in Haskell. Keeping a system going is just a constant war against churn that wastes an incredible amount of time. Any time you might have saved by running Haskell, you will return because of the constant wastage.

                                                  What’s worse is that the changes being made are minor improvements that don’t address any of the issues people actually have with the language.

                                                  It’s not just the core libraries that have this total disregard for the pain they inflict on users with large code bases. This message that breaking everything is a good idea has proliferated throughout the community. Libraries break APIs with a kind of wild abandon that I don’t see in any other language. The API of the compiler changes constantly to the point where tooling is 2+ years out of date. The Haskell Language Server still doesn’t have full 9.0 support 2 years later!

                                                  Haskell is a great language, but it’s being driven into the ground by a core of contributors who just don’t care about the experience of a lot of users.

                                                  1. 11

                                                    HLS has supported 9.0 since July 2021, it recently gained support for 9.2.1 as well.

                                                    Keeping a system going is just a constant war against churn that wastes an incredible amount of time.

                                                    Are we really talking about the same language? I’m working full time on a >60k line Haskell codebase with dependencies on countless packages from Hackage, and none of the compiler version upgrades took me longer than half a day so far.

                                                    Now, don’t get me wrong, the churn is real. Library authors particularly get hit the worst and I hope the situation gets better for them ASAP, but I really think the impression you’re giving is exaggerated for an application developer hopping from stackage lts to stackage lts.

                                                    1. 9

                                                      HLS has supported 9.0 since July 2021, it recently gained support for 9.2.1 as well.

                                                      HLS had partial support for 9.0. And even that took more than half a year. The tactics plugin wasn’t supported until a few months ago. And stylish-haskell support still doesn’t exist for 9.0

                                                      Support for 9.2 is flaky at best. https://github.com/haskell/haskell-language-server/issues/2179 Even the eval plugin didn’t work until 2 weeks ago!

                                                      I’m working full time on a >60k line Haskell codebase with dependencies on countless packages from Hackage, and none of the compiler version upgrades took me longer than half a day so far.

                                                      60k is small. That’s one package. I have an entire ecosystem for ML/AI/robotics/neuroscience that’s 10x bigger. Upgrades and breakage are extremely expensive, they take days to resolve.

                                                      In Python, JS, or C++ I have no trouble maintaining large code bases. In Haskell, it’s an unmitigated disaster.

                                                      1. 7

                                                        Saying you have “no trouble maintaining larger codebases” with Python or JS seems a bit suspicious to me….

                                                        I am personally also a bit in the “things shouldn’t break every time” camp (half a day’s work for every compiler release seems like a lot!), but there are a lot of difficulties with Python and JS in particular because API changes can go completely unnoticed without proper testing. Though this is perhaps way less of an issue if you aren’t a heavy user of dependencies.

                                                        1. 3

                                                          py-redis released 2.0, the Pipfile had a “*” version, pipenv install auto-upgraded py-redis, and bam: incompatible API. The larger the codebase, the more frequently it happens.

                                                          Meanwhile, some C++/SDL code I committed to sf 20 years ago still compiles and runs fine.

                                                        2. 4

                                                          I’ve worked on quite large Haskell codebases too, and cannot say that I’ve had any of the experiences you have. I’m sure you have had them, but it’s not something the community is shouting from the rooftops as being a massive issue like you’re claiming, and it might have much more to do with the libraries you rely on than with GHC itself. This just comes across as FUD to me, and if someone told me Jon Harrop wrote it, I would believe it.

                                                          1. 7

                                                            it’s not something that the community is shouting from the rooftops as being a massive issue like you’re claiming …. This just comes across as FUD to me, and if someone told me Jon Harrop wrote it, I would believe it.

                                                            Well, any amount of googling will show you many complaints about this. But I’ll pick the most extreme example. The person who runs stack (one of the two package managers) put his Haskell work on maintenance mode and is moving on to Rust because of the constant churn. https://twitter.com/snoyberg/status/1459118086909476879

                                                            “I’ve essentially switched my Haskell work into maintenance mode, since that’s all I can be bothered with now. Each new thing I develop has an immeasurable future cost of maintaining compatibility with the arbitrary changes coming down the pipeline from upstream.”

                                                            “Today, working on Haskell is like running on a treadmill. You can run as fast as you’d like, but the ground is always moving out from under you. Haskell’s an amazing language that could be doing greater things, and is being held back by this.”

                                                            I can’t imagine a worse sign for the language when these changes drive away the person who has likely done more than anyone to promote Haskell adoption in industry.

                                                            1. 3

                                                              Don’t forget the creation of a working group specifically on this topic. It still remains to be seen if they have the right temperament to make the necessary changes.

                                                              1. 3

                                                                The person who runs stack (one of the two package managers) put his Haskell work on maintenance mode and is moving on to Rust because of the constant churn. https://twitter.com/snoyberg/status/1459118086909476879

The person who runs stack has been a constant source of really great libraries, but also of really painful conflict in the Haskell community. His choice to vocally leave the community (or something) is another example of his habit of sowing division. Whether it’s reflective of anything in the community or not is kind of pointless to ask: it feels like running on a treadmill to him because he maintains way too many libraries. That load would burn out any maintainer, regardless of the tools. I feel for him, and I’m grateful to him, but I’m also really tired of hearing him blame people in the Haskell community for not doing everything he says.

                                                                1. 1

Snoyman somewhat intentionally made a lot of people angry over many things, and chose not to work with the rest of the ecosystem. Stack had its place, but cabal has reached a state where it’s just as useful, barring the addition of LTSes, which have limited value if you are able to lock library versions in a project. While he may have done a lot to promote Haskell in industry, I know a lot of people using Haskell in industry, and very few of them actually use much of the Snoymanverse in production environments (conduit is probably the main exception, because http-conduit is the best package for working with streaming HTTP data, and as such many other packages, like Amazonka, rely on it). I don’t know any people using Yesod, and many people have been burned by persistent’s magic leading to difficulties in maintenance down the road. I say all this as someone who recommended stack over cabal quite strongly, because the workflow for developing apps (as opposed to libraries) was much more pleasant with stack; but this is no longer true.

                                                                  As someone who’s been using Haskell for well over a decade, the last few years have been fantastic in the pace of improvements in GHC. Yes some things are breaking, but this is the cost of paying off long held technical debt in the compiler. When GHC was stagnating, things were also not good, and I would prefer to see people attempting to fix important things while breaking some others than seeing no progress at all. The Haskell community is small, we don’t have large organisations providing financial backing to work on things like strong backwards compatibility, and this is the cost we have to pay because of that. It’s not ideal, but without others contributing resources, I’ll take positive progress in GHC over backwards compatibility any day (and even on that front, things have improved a lot, we used to never get point releases of previous compilers when a new major version had been released).

                                                          2. 6

I think the breaking changes in Haskell aren’t significant. Usually they don’t actually break anything unless you’re using some deep hacks.

                                                            1. 9

                                                              Maybe for you. For me, and many other people who are trying to raise the alarm about this, changes like this cause an endless list of problems that are simply crushing.

It’s easy to say “Oh, removing this method from Eq doesn’t matter.” Well, when you depend on a huge number of libraries, it matters. Even fixing small things like this across a large codebase takes time. But now I can’t downgrade compilers anymore unless I sprinkle ifdefs everywhere, so I need to CI against multiple compilers, which makes everything far slower (it’s not unusual for large Haskell projects to have to CI against 4+ versions of GHC; that’s absurd). And do you know how annoying it is to have a commit go through 3 different versions before you finally have the ifdefs right for all of the GHC variants you CI against?
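For readers who haven’t had to do this: the ifdef dance typically looks something like the sketch below, keyed off the `__GLASGOW_HASKELL__` macro that GHC defines for you. The version cutoff here is illustrative, not tied to any particular API change.

```haskell
{-# LANGUAGE CPP #-}
module Main where

-- __GLASGOW_HASKELL__ encodes the compiler version (e.g. 902 for
-- GHC 9.2), so one source file can target several compilers at once.
main :: IO ()
main =
#if __GLASGOW_HASKELL__ >= 902
  putStrLn "built with GHC 9.2 or later"
#else
  putStrLn "built with an older GHC"
#endif
```

Multiply this by every changed import, renamed function, and altered class method across a large codebase, and the cost of each “minor” breaking change adds up.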

Even basic things like git bisect are broken by these changes. I can’t easily look through the history of my project to figure out what’s going wrong. To top it all off, I now need my own branches of other libraries I depend on that haven’t upgraded yet. This amounts to dozens of libraries. It also means that I need to upgrade in lockstep, because I can’t mix GHC versions. That makes deployments far harder. It’s also unpleasant to spend 10 hours upgrading, just to discover that something fundamental prevents you from switching, like a bug in GHC (I have never seen a buggier compiler of a mainstream language) or, say, a tool like IHaskell not being updated or suffering from bugs on the latest GHC. I could go on.

Oh, and don’t forget that because of this disaster you need to perfectly match the version of your tools to your compiler. Have an HLS binary or IHaskell binary that wasn’t compiled with your particular compiler version, and you get an error. That’s an incredibly unfriendly UX.

                                                              Simply put, these breaking changes have ramifications well beyond just getting your code to compile each time.

Let’s be real though, that’s not the full list of changes at all. The Haskell community decided to define what counts as a breaking change very narrowly, to the point of absurdity. Breaking changes to cabal? Don’t count. Breaking changes to the GHC API? Don’t count, even though they break a lot of tooling. Even changes to the parts of the GHC API that you are supposed to use as a non-developer of the compiler, like the plugin API, don’t count. Breaking changes to TH? Don’t count. Etc.

                                                              Usually they don’t actually break anything unless you’re using some deep hacks.

Is having notebook support for Haskell a deep hack? Because GHC has broken IHaskell and caused bugs in it countless times. Notebook support is like table stakes for any non-toy language.

If even the most basic tools a programming language needs rely on “deep hacks”, so that apparently they deserve to be broken, well, that’s a perfect reflection of how incredibly dysfunctional Haskell has become.

                                                              1. 10

                                                                Notebook support is like table stakes for any non-toy language

                                                                In a sciencey kind of context only. Systems, embedded, backend, GUI, game, etc. worlds generally do not care about notebooks.

                                                                I have never, not even once, thought about installing Jupyter again after finishing a statistics class.

                                                                1. 1

                                                                  I have never, not even once, thought about installing Jupyter again after finishing a statistics class.

                                                                  Interesting. May I ask why?

                                                                  1. 3

                                                                    Because I have no need for it. That concept doesn’t fit into any of my workflows. I generally just don’t do exploratory stuff that requires rerunning pieces of code in arbitrary order and seeing the results inline. Pretty much all the things I do require running some kind of “system” or “server” as a whole.

                                                                    1. 1

Thank you for your answer. I always thought that work in a statistical setting (say, pharma or epidemics) requires a bit of an explorative process in order to understand the underlying case better. Tools like SAS kind of mirror the workflow in Jupyter.

                                                                      What kind of statistical processes do you work with, and what tools do you use?

                                                                      1. 2

                                                                        I don’t! I don’t do statistics! I hate numbers! :D

                                                                        I’m sorry if this wasn’t clear, but “finishing a statistics class” wasn’t meant to imply “going on to work with statistics”. It just was a mandatory class in university.

                                                                        The first thing I said,

                                                                        In a sciencey kind of context only. Systems, embedded, backend, GUI, game, etc. worlds generally do not care about notebooks.

                                                                        was very much a “not everybody does statistics and there’s much more of the other kinds of development” thing.

                                                                        1. 1

                                                                          Thanks!

                                                                2. 5

Is having notebook support for Haskell a deep hack? Because GHC has broken IHaskell and caused bugs in it countless times. Notebook support is like table stakes for any non-toy language.

                                                                  I’ve never used a notebook in my career, so…

                                                                  In any case, I think you’ve got a set of expectations for what haskell is, and that set of expectations may or may not match what the community at large needs, and you’re getting frustrated that haskell isn’t meeting your expectations. I think the best place to work that out is in the mailing lists.

                                                                  1. 1

                                                                    I’ve never used a notebook in my career, so…

                                                                    They are pretty nice. Kind of like a non-linear repl with great multi line input support. It can get messy (see also non-linear), but great for hacking all kinds of stuff together quickly.

                                                                  2. 5

Is having notebook support for Haskell a deep hack? Because GHC has broken IHaskell and caused bugs in it countless times.

The way that IHaskell is implemented, I would actually consider it a deep hack, since we poke at the internals of the GHC API in a way that amounts to a poor rewrite of ghci (source: am the current maintainer). I don’t know that it’s fair to point to this as some flaw in GHC. If we didn’t actually have to execute the code, we might be able to get away with using ghc-lib or ghc-lib-parser, which offer a smoother upgrade path with fewer C preprocessor travesties on our end.

                                                                    1. 4

                                                                      Sure! I’m very familiar as I’ve contributed to IHaskell, we’ve spoken through github issues.

                                                                      I wasn’t making a technical point about IHaskell. That was a response to the idea that some projects need to suffer because they’re considered “deep hacks”. Whatever that is. As if those projects aren’t worthy in some way.

                                                                      I really appreciate the maintenance of IHaskell. But if you take a step back and look at the logs, it’s shocking how much time is spent on churn. The vast majority of commits aren’t about adding features, more stability, etc. Making IHaskell as awesome as it can be. They’re about keeping up with arbitrary changes in GHC and the ecosystem. Frankly, upstream Haskell folks are just wasting the majority of the time of everyone below them.

                                                                      1. 3

                                                                        I can definitely relate to the exhaustion brought on by the upgrade treadmill, but nobody is forcing folks to use the latest and greatest versions of packages in the Haskell ecosystem and I also don’t think the GHC developers owe it to me to maintain backwards compatibility in the GHC API (although that would certainly make my life a little easier). A lot of the API changes are related to improvements in the codebase and new features, and I personally think the project is moving in a positive direction so I don’t agree that the Haskell folks are wasting my time.

                                                                        At my current job we were quite happily using GHC 8.4 for several years until last month, when I finally merged my PR switching us over to GHC 8.10. If I hadn’t decided this was something I wanted to do we probably would have continued on 8.4 for quite a while longer. I barely had any issues with the upgrade, and most of my time was spent figuring out the correct version bounds and wrestling with cabal.

                                                                      2. 2

                                                                        Could you not use something like the hint library which appears to abstract some of that GHC API into something a little more stable and less hacky?

                                                                        1. 3

                                                                          Great question! We wouldn’t be able to provide all the functionality that IHaskell does if we stuck to the hint API. To answer your question with another question: why doesn’t ghci use hint? As far as I can tell, it is because hint only provides minimal functionality around loading and executing code, whereas we want the ability to do things such as:

                                                                          • have common GHCi functionality like :type, :kind, :sprint, :load, etc., which are implemented in ghci but not exposed through the GHC API
                                                                          • transparently put code into a module, compile it, and load the compiled module for performance improvements
                                                                          • query Hoogle within a cell
                                                                          • lint the code using HLint
                                                                          • provide tab-completion

                                                                          Arguably there is room for another Haskell kernel that does use hint and omits these features (or implements them more simply), but that would be a different point on the design space.

                                                                          So far in practice updating IHaskell to support a newer version of GHC takes me about a weekend each time, which is fine by me. I even wrote a blog post about the process.

                                                                          1. 2

                                                                            Thanks for the thoughtful and detailed response! As someone who has used IHaskell in the past, I really want it to be as stable and easy to use as any other kernel is with Jupyter.

                                                                            1. 1

Me too! From my perspective most of the issues I see are around installing IHaskell (since Python packaging can be challenging to navigate, and Haskell packaging can also be challenging to navigate, so doing them together is especially frustrating), and after that is successfully accomplished not that many people have had problems with stability (that I am aware of, from keeping an eye on the issue tracker).

                                                                              1. 3

Python packaging is its own mess, so no matter what happens on the Haskell side there is likely always going to be a little pain and frustration. I was struck, reading your blog post, by how many of the changes you made were due to churn, things like functions being renamed. Why couldn’t the GHC people put a deprecation pragma on the old name, change its definition to be equal to the new name, and go from there? It would be nice if all you needed for 9.0 support was to update the cabal file.
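Such a deprecation shim is cheap to write; a minimal sketch (the names here are hypothetical, not from any real GHC change):

```haskell
module Main where

-- The renamed function, as it would appear in the new release.
newName :: Int -> Int
newName = (+ 1)

-- The old name survives as an alias. Callers get a compile-time
-- warning pointing them at the replacement, instead of a build failure.
{-# DEPRECATED oldName "Use newName instead" #-}
oldName :: Int -> Int
oldName = newName

main :: IO ()
main = print (oldName 41)  -- still works; prints 42, with a warning
```

Code using the old name keeps compiling for a release or two, giving maintainers a migration window rather than an immediate breakage.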

                                                                                With the way churn happens now I wouldn’t be surprised if in a few months there is a proposal to just rename fmap to map. After all this change should save many of us a character and be simple for all maintainers to make.

                                                                                1. 1

                                                                                  You’re right that it would be nice to just update the .cabal file, but when I think about the costs and benefits of having a bunch of compatibility shims (that probably aren’t tested and add bulk to the codebase without providing additional functionality) to save me a couple of hours of work every six months I don’t really think it makes sense. It’s rare that only the names change without any other functionality changing (and in that case it’s trivial for me to write the shims myself), so deprecating things really only has the effect of kicking the can down the road, since at some point the code will probably have to be deleted anyway.

                                                                                  I think the larger issue here is that it’s not clear who the consumers of the GHC API are, and what their desires and requirements are as GHC continues to evolve. The GHC developers implicitly assume that only GHC consumes the API, and although that’s not true it’s close enough for me. I harbour no illusions that IHaskell is a particularly important project, and if it disappeared tomorrow I doubt it would have much impact on the vast majority of Haskell users. As someone who has made minor contributions to GHC, I’m impatient enough with the current pace of development that I would rather see even greater churn if it meant a more robust and powerful compiler with a cleaner codebase than for them to slow down to accommodate my needs as an IHaskell maintainer. It seems like they’re slowly beginning to have that conversation anyway as more valuable projects (e.g. haskell-language-server) begin to run up against the limitations of the current system.

                                                                                  1. 2

I agree that the GHC API is generally treated as an internal detail. I also think that between template-haskell, IHaskell, and the inadequacies of hint, there is a need for a more stable public API. And I think having one is a great idea. Libraries that let you more easily manipulate the language, the compiler, and the runtime are likely to be highly valued by programming language enthusiasts. Maybe more so than by the average programmer.

                                                                      3. 1

                                                                        I don’t really want to engage with your rant here. I’m sorry you’re having so many issues with haskell, but it doesn’t reflect my experience of the ecosystem and tooling.

                                                                        [Edit: Corrected a word which folks objected to.]

                                                                        1. 9

I don’t know what to tell you. A person took the time to explain how the tooling and ecosystem make it very hard to keep packages functioning from version to version of the main compiler, and you just offer a curt dismissal. It’s an almost herculean effort to write libraries for Haskell that work across all the 8.x/9.x releases. This, combined with a culture of aggressive upper bounds on package dependencies, makes it very challenging to use any libraries that are not actively maintained.

And this churn leads not just to burnout of people in the ecosystem, but to the sense that less Haskell code works with every day that passes. Hell, you can’t even reliably get a binary install of the latest version of GHC, which has been out for several months. The Ubuntu PPA page hasn’t been updated in a year.

                                                                          Many essential projects in the Haskell ecosystem have a bus-factor of one, and it’s hard to find people to maintain these projects. The churn is rough.

                                                                          1. 2

                                                                            I’m sorry for being dismissive. Abarbu’s response to my very short comment was overwhelming, and so I didn’t want to engage.

For upper bounds on packages I typically use nix flakes to pin the world and doJailbreak to ignore version bounds. I believe you can do the same in stack.yaml with allow-newer: true. The ecosystem tools make dealing with many issues relatively painless.
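For reference, the stack.yaml knob mentioned above is a one-line config change:

```yaml
# stack.yaml -- build the project even when dependency upper bounds
# would otherwise reject the versions the resolver picked
allow-newer: true
```

This ignores all version bounds globally, so it trades “won’t build” errors for the risk of building against untested combinations.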

Getting a binary install of the latest version of GHC requires maintainers and people who care. But if, as abarbu says, “I have never seen a buggier compiler of a mainstream language,” then I would recommend not upgrading to the latest GHC until your own testing shows that it works. If there aren’t packages released, or the new version causes bugs, then why not stay on the current version?

                                                                            Breaking changes in the language haven’t ever burned me. If it’s causing people problems, writing about specific issues in the Haskell mailing lists is probably the best way to get help. It has the nice side effect of teaching the GHC developers how their practices might cause problems for the community.

                                                                            1. 3

A lot of us are saying this stuff as people who have used the language for many years. That you need nix plus assorted hacks for it to be usable reflects the sad state of the ecosystem. I’d go as far as to say it’s inadvisable to compile any non-trivial Haskell program outside its own dedicated nix environment. This further complicates using programs written in Haskell, never mind packaging them for external users. I have had ghci refuse to run because somehow I ended up with a piece of code that depended on multiple versions of some core library.

It’s a great language, but the culture has led to an ecosystem that is rough to work with. An ecosystem that requires lots of external tooling to use productively. I could complain about the bugginess of GHC and how the compiler has gotten slower with every release for as long as I can remember, but that misses the real pain point. The major pain point is that the GHC team doesn’t value backwards compatibility, proper deprecation capabilities, or even tooling to make upgrading less painful. Their indifference negatively affects everyone downstream who has to waste time on pointless maintenance tasks instead of making new features.

                                                                              1. 1

                                                                                For context, I started learning haskell about 11 years ago and have been using it extensively for about 7 years. I started when cabal hell was a constant threat, and if you lost your development environment you’d never compile that code again.

                                                                                From my perspective, everything is much better now. Nix pinning + overrides and Stack resolvers + extra-deps are great tools to construct and remember build plans, and I’m sure Cabal has grown some feature along with “new-build” commands to save build plans.

                                                                                That you need to use nix + assorted hacks for it to be usable reflects the sad state of the ecosystem.

                                                                                I think having three great tools to choose from is pretty great. The underlying problem is allowing version upper-bounds in the cabal-project file-format.

                                                                                This further complicates using programs written in Haskell, nevermind packaging them for external users.

After the binary is compiled, none of the compilation and dependency-resolution problems exist. Package the binary with its dynamic libs, or produce a statically linked binary.

It’s a great language, but the culture has led to an ecosystem that is rough to work with. An ecosystem that requires lots of external tooling to use productively.

I worked with Go for four years. When you work with Go you have go-tool, go-vet, counterfeiter, go-gen, go-mod, and at least two other dependency management tools (glide? glade? I can’t remember). Nobody is complaining about there being “too many external tools” in the Go ecosystem. Don’t get me started on Java tooling. Since when has the existence of multiple tools to deal with dependencies and compilation been a bad signal?

The major pain point is that the GHC team doesn’t value backwards compatibility, proper deprecation capabilities, or even tooling to make upgrading less painful. Their indifference negatively affects everyone downstream who has to waste time on pointless maintenance tasks instead of making new features.

This is biting the hand that feeds, or looking the gift horse in the mouth, or something. The voices in the community blaming their problems on the GHC team are not helping things, imo. Sorry. There’s a lot of work to be done, and the GHC team is doing a good job. That there is also active research going on in the compiler is unusual, but that’s not “the GHC team doesn’t value backwards compatibility” or “their indifference”; that’s them being overloaded and saying “sure, you can add that feature, just put it behind an extension because it’s not standard” and going back to fixing bugs or optimizing things.

                                                                                1. 4

                                                                                  This is an issue that has provoked the creation of a working group by the Haskell Foundation as well as this issue thread

                                                                                  https://github.com/haskell/core-libraries-committee/issues/12

Many of the people weighing in are not what I would call outsiders. I’ve contributed plenty to Haskell, and I complain out of a desire to see the language fix what is, in my opinion, one of its most glaring deficiencies. If I didn’t care, I’d just quietly leave the community like many already have. The thread linked above even offers some constructive solutions to the problem: migration tools so packages can be upgraded more seamlessly; perhaps some shim libraries full of CPP macros that let old code keep working for more than two releases; maybe a deprecation window for things like base that’s closer to two years instead of one.

Like, how wonderful would it be if there were a language extension like GHC2015 or GHC2020 and I could rest assured that the same code would still work in 10 years.

                                                                          2. 5

Calling that a Gish gallop is pretty dismissive. It’s not like abarbu went off on random stuff. It’s all about how breaking changes (or, worse, non-breaking semantic changes) make for unpleasant churn that damages a language.

                                                                            1. 3

                                                                              I understand the churn can be unpleasant. Abarbu’s response to me was overwhelming and I didn’t want to engage. I am sorry for being dismissive.

                                                                            2. 3

                                                                              …do you know what a gish gallop is?

                                                                              1. 1

                                                                                No, guess I do not, and I need to be corrected by lots of people on the internet. Thanks.

                                                                                1. 1

                                                                                  Ha, I didn’t know either, so I googled it, and it maybe sounded harsher than you meant. It was a gallop, for sure, if not a gish gallop.

                                                                        2. 2

                                                                          Ngl, that kind of sounds like Rust

                                                                          1. 7

Why exactly? The Rust editions stay stable, and many crates have settled on 1.0 versions.

                                                                        3. 6

                                                                          Perhaps this[1] was a lighter-weight solution to some of the problems the author mentions.

                                                                          [1] - https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0380-ghc2021.rst

                                                                          1. 4

                                                                            I’m thinking that the first result when DuckDuckGo’ing “haskell2020” being this article is not a good sign.

                                                                          1. 1

                                                                            Meta: Can we link to the pdf and not the dropbox? I can’t actually view this on mobile.

                                                                            1. 4

It is hard to answer objectively. Also, most programming style guides at least partially agree, by discouraging global state and advocating some measures to restrict mutation through visibility, and so on.

For me, checking what state a part of a program consumes and changes is often a good first step to understanding it. I have found that first loading all needed data, then performing the logic and calculating the necessary changes, works well for business logic. The middle part is then usually nicely testable.
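As a sketch of that load/compute/store split (all names here are hypothetical, just for illustration), the pure middle step is the part that ends up easily testable:

```haskell
-- Hypothetical example of the "load all data, then pure logic" split.
data Order = Order { amount :: Int }

-- Pure business logic: the testable middle step, no I/O anywhere.
applyDiscount :: Int -> [Order] -> [Order]
applyDiscount pct = map (\o -> o { amount = amount o * (100 - pct) `div` 100 })

main :: IO ()
main = do
  let orders = [Order 200, Order 1000]  -- stand-in for "load all needed data"
  print (map amount (applyDiscount 10 orders))  -- prints [180,900]
```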

I don’t think I am far off the programming mainstream with that opinion, so in the environments I worked in there was some awareness, and it wasn’t often a problem. But maybe that’s because it is a known problem that gets avoided.

                                                                              In UIs confusing mutable state seems to be much more common than in backend software, in my limited experience.

                                                                              1. 3

I see. Because backends are usually structured in a flow where you get data from the db -> run it through some business logic in the controller -> render it, you don’t see mutable state causing issues as much. That’s because any state that matters is coming from the DB, and most of the issues around writes are taken care of for you. In addition, in the request/response cycle of the web, most of the intermediate state can be thrown away.

I can also see UIs wrestling with mutable state. In fact, one of React’s main benefits is that it alleviates us from having to program in retained mode.

                                                                                1. 1

                                                                                  Also most programming style guides at least partially agree by discouraging global state

                                                                                  Have you worked in any golang codebases recently? Global state seems to be encouraged by patterns in the stdlib and commonly used third party libs: global logger struct, global “once” cells, etc.. Used judiciously it doesn’t cause problems, but many reach for it unthinkingly and go programs often end up write-once.

                                                                                  1. 1

No, I didn’t work on golang code bases to any meaningful extent, so that is interesting.

I wondered how larger code bases in golang turn out, since it superficially encourages a relatively simple but imperative coding style. I would be interested to read more about it, but it is such a hard topic because it is by nature pretty subjective which programs are easy to understand or extend.

                                                                                    1. 3

                                                                                      My experience is that larger golang codebases require cohesive teams with low turnover to maintain properly. A secondary helpful thing is to have agreed-upon conventions for how state is managed and isolated in the codebase.

We had several successful large scale projects at twitch. We also had several failed projects. The projects that failed usually failed because the person who best understood how the effects were managed was no longer with the company, and the project didn’t follow the conventions for effects which were already established. Those codebases became write-once. Nobody tried to understand them. Eventually they were rewritten from scratch.

                                                                                      Languages which encourage good patterns for managing and isolating effects will push authors toward conventions which encourage maintainability.

                                                                                1. 12

                                                                                  I’ve observed in industry that programs exist in only a couple of states with respect to maintainability:

                                                                                  1. New programs which are maintainable because the person who wrote them (or so) is around.
                                                                                  2. Old programs which are maintainable, despite the authors being absent, because they control and isolate side effects.
                                                                                  3. Old programs which aren’t maintainable because they use side effects instead of function calls to communicate across parts of the code base.

                                                                                  The third category is the most common because it is easier to add a pointer to two data structures than do a proper investigation and refactoring when a programmer is tasked with maintenance tasks but not given enough time. It’s the gravity of maintaining software at low cost to end up with badly written software. Programs in the first two categories fall into the third category over time.

A language which controls side effects will push against this trend by forcing maintainers to build a more explicit and maintainable control flow, with clearer ownership of data. OOTTP is in my opinion very accurate.

                                                                                  1. 1

A language which controls side effects will push against this trend by forcing maintainers to build a more explicit and maintainable control flow, with clearer ownership of data. OOTTP is in my opinion very accurate.

                                                                                    So my question is, if that’s the case how come more programmers in imperative languages don’t try to migrate to languages that make more explicit demands to have maintainable control flow or a clearer ownership of data?

                                                                                    1. 5

Learning the new model is difficult. It’s like relearning programming from scratch. I programmed in PHP, Java, Python, and C for about 7 years before learning Scheme, Lisp, and then Haskell and Rust. It wasn’t until Haskell that I had to do anything different. And it was very different. I have about three years of angry tweets complaining about Haskell, but I’d never go back. I’ve been using Haskell for 10 years now and it’s great. Learning Rust after Haskell was easy because it’s the best of both worlds.

                                                                                  1. 29

                                                                                    Yes, but usually not in the functional-programming sense of the term “side effects.”

                                                                                    On a lot of projects, I see easily an order of magnitude more problems caused by business-logic side effects than mutable-state side effects. The requirements get increasingly complex and as the code grows and evolves to meet them, different parts of it start to interact in ways that nobody anticipated.

                                                                                    In these kinds of cases the code is performing exactly according to spec, and there isn’t any inconsistency in internal state from a data-integrity point of view. Operation A triggers operation B which triggers operation C which triggers operation D which messes up the result of operation A. Each of those triggers is correct, but the system is huge and the people who specified and implemented operation A didn’t stop to think about the C->D relationship because it seemed to have nothing to do with A.

                                                                                    That’s not to say I never run into problems with “state mutated unexpectedly when I called this function” kinds of side effects. I do. But they are usually a very small percentage of the total number of problems I need to fix, and the cost of changing to a programming paradigm that makes them categorically impossible needs to be pretty close to zero to be a worthwhile tradeoff.

                                                                                    My hunch is that people’s answers to this question will vary dramatically based on what kinds of projects they work on. If you are working on massively-parallel data processing pipelines with a ton of cross-thread communication, you’re going to have a different perspective on this than if you’re working on an enterprise web app that mostly exists to translate HTTP requests into complex SQL queries.

                                                                                    1. 6

                                                                                      Out Of The Tar Pit categorizes complexity into accidental and essential.

                                                                                      On a lot of projects, I see easily an order of magnitude more problems caused by business-logic side effects than mutable-state side effects.

                                                                                      I believe that’s essential complexity in the ontology, and so it’s just part of the business you’re in, not a problem OOTTP seeks to address. Over-complex business practices notwithstanding, your “That’s not to say” paragraph makes your opinion clear. ;)

                                                                                      My hunch is that people’s answers to this question will vary dramatically based on what kinds of projects they work on

                                                                                      That has me nodding.

                                                                                      1. 1

                                                                                        I can see how in a pipeline of operators A -> B -> C -> D, that even if it was all functional, you get “cross-talk” if D somehow munges the result A came up with downstream, esp if you didn’t consider it.

Do you get a sense of why this happens? It sounds like it’s a coordination problem outside of code. Since there isn’t a clear separation of concerns between teams, or the work is necessarily coupled, it’s easy for one person to munge the results of another. And this happens because no one person understands the entire pipeline, and each only has a local view of what they’re working on?

                                                                                        Based on your answer, and pkolloch’s, it seems like the request/response cycle + database allows you not to worry much about side effects.

Also, thanks for the hunch, because I’m also beginning to think the view depends on the type of work you do and the shape of the problem space you work in; the design of the systems you have to interact with might dictate how much mutable side effects influence the problems you encounter.

                                                                                        1. 3

It’s the same problem as side effects in code, just at a higher level. It can be worse because the people “writing” the business logic “code” (rules?) are often not programmers, so they less frequently have the vocabulary to talk about, much less think about, this without stepping through it each time.

At some point your product becomes a virtual machine for a low-code/no-code runtime, sometimes with a poorly defined grammar and a language with sketchy logic rules.

                                                                                      1. 4
                                                                                        {-# LANGUAGE
                                                                                            TypeFamilies
                                                                                          , TypeOperators
                                                                                          , DataKinds
                                                                                          , UndecidableInstances
                                                                                          , NoStarIsType
                                                                                          , PolyKinds
                                                                                        #-}
                                                                                        

                                                                                        So, GHC’s type system, not Haskell’s

                                                                                        1. 3

                                                                                          Ah, a fellow jhc user, I presume?

                                                                                          1. 3

                                                                                            hugs hugs

                                                                                            1. 3

                                                                                              While you’re certainly correct, Haskell and GHC are synonymous for most people at this point. Whether or not that’s a good thing is another question, but it’s the state that we’re in regardless.

                                                                                            1. 6

                                                                                              This is a false equivalency. A better question would be: Is nix going to overtake docker, puppet, Linux distros, chef, ansible, terraform, and cloudfoundry?

                                                                                              Probably not, because there are a lot more people spending effort making all of those things better. Nix can do all of those things, however, which makes it a very powerful tool. These debates about which is better, for such vastly different tools, are unproductive.

                                                                                              Whereas Nix is primarily designed for building packages & environments in a reproducible way.

The author is, I guess, only referring to Nix the tool, to the exclusion of nixpkgs the package repo, NixOS the Linux distro, and NixOps the operations tool. Maybe the author shouldn’t do that.

                                                                                              1. 5

                                                                                                I haven’t used Haskell but it’s baffling to me that A) there are multiple proposals for breaking changes after all these years and B) they’re spread out across multiple releases! At least get all your breaking changes done at once so people can fix things and move on instead of this constant drip of breakage. Just looks really weird from the outside. Maybe it’s not so bad if you have more context.

                                                                                                1. 6

                                                                                                  As a Haskell user, none of these changes are a big deal.


                                                                                                  Ongoing: Word8#

Code will use less memory. This is probably a game changer for folks with a lot of FFI code to C, but otherwise this won’t affect most people.

                                                                                                  Upcoming: Remove Control.Monad.Trans.List

                                                                                                  I didn’t even know this existed. List is already a monad, so why would anybody ever import this?
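For context, a minimal sketch of why the transformer is rarely missed: the plain list Monad instance already gives you nondeterminism directly.

```haskell
-- The list monad on its own: each bind enumerates all combinations,
-- no ListT needed.
pairs :: [(Int, Char)]
pairs = do
  n <- [1, 2]
  c <- "ab"
  pure (n, c)

main :: IO ()
main = print pairs  -- prints [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```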

                                                                                                  Upcoming: Remove Control.Monad.Trans.Error

                                                                                                  This module has had a big deprecation warning on it for like 3 years telling you to use ExceptT.
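The migration is mostly mechanical; here is a sketch (the `parse` function is a hypothetical stand-in) of what moving to ExceptT looks like:

```haskell
-- ExceptT from transformers replaces the deprecated ErrorT; the shapes
-- are nearly identical (throwError becomes throwE, runErrorT becomes
-- runExceptT).
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

parse :: ExceptT String IO Int
parse = throwE "bad input"

main :: IO ()
main = runExceptT parse >>= print  -- prints Left "bad input"
```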

                                                                                                  Upcoming: Monomorphise Data.List

                                                                                                  Yeah, I thought it was weird these functions were polymorphic. This is a good change. Probably won’t break my code because I imported Foldable/Traversable when I wanted the polymorphic variants, but some people might have to change imports after this.
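The fix is usually a one-line import change; a sketch of code that keeps working by taking the polymorphic fold from Data.Foldable instead:

```haskell
-- Data.Foldable keeps the Foldable-polymorphic versions, so code over
-- non-list containers imports from there rather than Data.List.
import Data.Foldable (foldl')
import qualified Data.Set as Set  -- from containers, which ships with GHC

total :: Set.Set Int -> Int
total = foldl' (+) 0  -- works for any Foldable, not just lists

main :: IO ()
main = print (total (Set.fromList [1, 2, 3]))  -- prints 6
```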

                                                                                                  Planned: forall becomes a keyword

                                                                                                  If you named your variables “forall” then you are in a tiny group of people affected by this.

                                                                                                  Planned: remove return from Monad

                                                                                                  Presumably the top level definition is still in the Prelude module, so no big deal. This only affects people who override it incorrectly and break the laws it obeys.
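Concretely, any instance that leans on the class default already behaves identically; a sketch with a toy Identity-style type (`Box` is made up for illustration):

```haskell
-- A lawful Monad instance never needs to define return: the class
-- default is return = pure, which is exactly what removing return
-- from the class codifies.
newtype Box a = Box a deriving (Show, Eq)

instance Functor Box where
  fmap f (Box a) = Box (f a)

instance Applicative Box where
  pure = Box
  Box f <*> Box a = Box (f a)

instance Monad Box where
  Box a >>= f = f a  -- no return definition here; the default covers it

main :: IO ()
main = print (return 5 :: Box Int)  -- prints Box 5
```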

                                                                                                  Planned: remove mappend from Monoid

                                                                                                  Ditto. This only affects people who wanted to overload an optimized version, which probably isn’t common.

Planned: remove (/=) from the Eq class

                                                                                                  Ditto. This only affects people who override it incorrectly.
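A sketch of the common case: an Eq instance that only defines (==) never notices the change, because (/=) already defaults to the negation.

```haskell
data Color = Red | Green | Blue

instance Eq Color where
  Red   == Red   = True
  Green == Green = True
  Blue  == Blue  = True
  _     == _     = False
  -- no (/=) definition: the default not (x == y) applies

main :: IO ()
main = print (Red /= Blue)  -- prints True, via the default (/=)
```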

                                                                                                  Planned: disable StarIsType by default

                                                                                                  This disables a rarely used kind-level identifier which looks like an operator, but isn’t. Instead you use a readable identifier. That’s a good thing. It probably won’t affect much of my code.
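A sketch of the before/after: with NoStarIsType you write the Type identifier from Data.Kind where the * punctuation used to go in kind signatures.

```haskell
{-# LANGUAGE NoStarIsType, KindSignatures #-}
-- Previously this kind annotation would have been written (a :: *).
import Data.Kind (Type)

data Proxy (a :: Type) = Proxy

main :: IO ()
main = putStrLn "kind-checked"  -- the interesting part is that it compiles
```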

                                                                                                  1. 4

                                                                                                    This.

                                                                                                    I get more breaking changes from template-haskell than from anywhere else in The Language or base.

                                                                                                  2. 3

                                                                                                    I feel like it’s nicer when you just handle breaking changes one by one over time. Python 3 was the “break everything at once” thing, where you just get hit with a deluge of things all at once.

                                                                                                    1. 2

                                                                                                      It kind of fits Haskell’s slogan of “avoid success at all costs”. It seems that the people behind Haskell prioritize the inherent purity of the language over stability. I actually sympathize with this way of thinking, but that is easy for me to say, as I am not using Haskell in any serious way.

                                                                                                      1. 1

                                                                                                        One of Haskell’s claims to fame is its ease of refactoring so this is really just showboating.

                                                                                                        “Hey, look it’s so easy to refactor in Haskell, we are going to break the language all the time.”