Threads for danilafe

  1. 1

    Why would four people flag this as off-topic?

    1. 2

      Because it’s not very technical

      1. 2

        It’s not surprising to see a service having some trouble. They may or may not eventually resolve this, but either way, it’s unlikely they’ll share any insightful information about the incident.

        1. 2

          Product information is off topic on Lobsters?

          Lobsters is focused pretty narrowly on computing; tags like art don’t imply every piece of art is on-topic. Some rules of thumb for great stories to submit: Will this improve the reader’s next program? Will it deepen their understanding of their last program? Will it be more interesting in five or ten years?

          This post meets none of the criteria. “it’s relevant because it’s a JPEG compression bug” is the same as saying “any website is relevant because Nginx serves it”. There’s no technical content.

        1. 1

          In practice, are constructive tools simpler for verifying software?

          In @hwayne’s cool benchmark https://github.com/hwayne/lets-prove-leftpad, dependent-type-based systems (Lean, Agda, F* and Idris) look quite compact. Coq looks a bit more verbose, though.

          On the other hand, a classical approach like Isabelle looks pretty massive: https://github.com/hwayne/lets-prove-leftpad/blob/master/isabelle/Leftpad.thy.

          Separation logic approaches like Dafny, my favorites, are both short and simple as they are quite declarative.

          Obviously, this is just one problem and many factors might make it hard to extract general conclusions. Dafny has been used to verify IronFleet, which is a pretty large and complicated distributed system. There are similar success stories for Isabelle (seL4), Coq (CompCert) and F* (Everest).

          1. 2

            In practice, are constructive tools simpler for verifying software?

            In the general case, no. In a specific case, it depends. There are certainly things that dependent types make much simpler. There are also things that they do not. Check out this dependently typed quicksort in Idris vs. a proof of quicksort in Isabelle. The Idris version captures the correctness criteria in a dependent type, whereas the Isabelle version just implements a simple version of quicksort and proves it separately. The Isabelle version is much simpler.

            In Programming with Dependent Types: Passing Fad or Useful Tool?, Xavier Leroy has this to say:

            Dependent types are a “niche” feature with a couple of convenient uses: … In all other cases, I believe it’s just more effective to write plain ML/Haskell-style data structures and functions, and separately state and prove properties about them.

            This is generally how I feel, but I am not at the forefront of math academia. What I am most definitely opposed to, though, is the religion around constructivism, and the idea that it’s the only acceptable way to reason about programs. For example, take TLA+, which is based purely on untyped set theory. It’s been used to verify many real-world systems, and its utility and effectiveness are pretty inarguable at this point. It is based on a firmly classical logic, and doesn’t present many practical problems. In fact, that makes TLA+ easy to pick up, easy to use, and easy to get results with. Quickly.

            I dare you to try and prove something about a distributed algorithm in Coq.

            Constructive vs. classical logics will unfortunately always be a flame war, like Emacs vs. Vim, or static vs. dynamic types. They each really do have benefits, and at the end of the day it sometimes just boils down to which tool you know better. I currently know Isabelle better, and I very much appreciate its simplicity over something like Coq. I have proven things in Isabelle, and never thought to myself “this proof isn’t as valid, because it’s not constructive.”

            You get the proof, and you move on.

            1. 2

              I dare you to try and prove something about a distributed algorithm in Coq.

              Isn’t this, like, the point of Aneris and Iris? https://github.com/logsem/aneris

              1. 1

                I’m not a Coq hater, just to reiterate that. I was commenting on the steepness of Coq’s learning curve: if you don’t have any experience in theorem proving or verification, how long will it take you to get productive in Coq?

                1. 1

                  Huh, I thought Iris had LEM, but turns out it doesn’t (footnote 2). That’s pretty neat!

                2. 1

                  There are also things that they do not. Check out this dependently typed quicksort in Idris vs. a proof of quicksort in Isabelle. The Idris version captures the correctness criteria in a dependent type, whereas the Isabelle version just implements a simple version of quicksort and proves it separately. The Isabelle version is much simpler.

                  I don’t think that proves a dependently typed proof of quicksort is necessarily more complex than the Isabelle proof. Notice how the Idris proof proves element-equivalence by literally encoding the properties of a permutation, while the Isabelle proof instead proves it via multisets. IME the Isabelle approach is much smarter; maybe the Idris approach would be much simpler if it used that too.

                  For example, take TLA+, which is based purely on untyped set theory. It’s been used to verify many real-world systems, and its utility and effectiveness are pretty inarguable at this point. It is based on a firmly classical logic, and doesn’t present many practical problems. In fact, that makes TLA+ easy to pick up, easy to use, and easy to get results with. Quickly.

                  Industrial programmers don’t really use TLA+ to “prove” things in the same way you do Isabelle. TLA+ “proves” things by model checking a finite state space. So you can use it to prove that quicksort works on lists up to length N, with numbers in the set a..b, but not that all finite lists with any range of numbers will quicksort properly. You can do this with TLAPS, but it’s much harder and more in line with the difficulty of theorem provers. Apalache might also be able to do it, but I don’t know for certain. I think it’s still limited to a given list length bound, but can prove quicksort for any such list with that bound (a..b = ℤ).

                  1. 2

                    I don’t think that proves a dependently typed proof of quicksort is necessarily more complex than the Isabelle proof.

                    They’re definitely different approaches, so not really apples to apples, that’s fair. But the dependent-type machinery itself is what I feel is more complicated. If I were to sum up my feeling on dependent types it would be: “just because types are propositions doesn’t mean that all of your propositions should be types.”

                    It just seems easier to make a separate proposition about your program vs. encoding all of the propositions as types within your program. This may just be a syntactic thing today? Who knows.

                    Industrial programmers don’t really use TLA+ to “prove” things in the same way you do Isabelle.

                    It’s not so much about proving for me as much as formal reasoning. And TLA+ is amazing for formal reasoning! Which is where I agree with the post - treating constructivism as a goal in itself doesn’t make sense, because we have so many non-constructive tools out there that we get real results with. Those results shouldn’t be disqualified.

              1. 2

                Slowly but surely making my way through Gravity’s Rainbow. I read before bed, and I can go maybe 10 pages before it stops making any sense.

                Also reading Jacobson’s Basic Algebra and Armstrong’s Basic Topology to try to get more background for HoTT.

                1. 1

                  Does anyone know what the icons mean after the search? I’ve seen “eye”, “man with magnifying glass”, and, I think, “factory”? No way to glean their meaning on mobile from what I can tell.

                  1. 1

                    This is good timing - I’m working on my thesis right now. It’s ridiculous how little of Vimtex I’ve been using! It turns out almost all of the manual LaTeX work I’ve been doing can be done via the plugin; I got forward and backward search practically for free. The only thing is, the “real time” claim is a bit of a stretch - for documents like mine, there’s a 2-3 second delay between saving and re-rendering. I think that’s a limitation of LaTeX though, and one I’ll have to live with.

                    1. 2

                      I think that’s a limitation of LaTeX though, and one I’ll have to live with.

                      KeenWrite, my plain text editor, can preview 1,000 simple TeX formulas in about 500 milliseconds on modern hardware. The Knuth-Plass Line Breaking Algorithm, which drives most TeX implementations, slows down rendering. KeenWrite uses an HTML previewer and inlines the math as SVG so that the math pops up instantly:

                      https://github.com/DaveJarvis/keenwrite/blob/master/docs/screenshots.md#equations

                      KeenWrite typesets to PDF using ConTeXt, rather than LaTeX, because ConTeXt makes separating content from presentation much easier. The idea is to focus on the content, then muck with the formatting and layout when the content is finished.

                      https://wiki.contextgarden.net/Main_Page

                      Here’s an example of a Markdown document I wrote along with its bibliography. Both were typeset using ConTeXt:

                    1. 4

                      This existential encoding kind of reminds me of a Zipper. You basically split a piece of data into a “context” and a “focus”, and then reassemble it (unzip and zip?). A lens focusing on a particular element of a list might have a c ~ ([a], [a]), for “elements before and after”.

                      1. 4

                        Yeah, there is a strong connection between optics and zippers - you can almost think of them as the same thing.

                        I think the Store encoding of lenses makes that clearer. The post misses one useful intermediate step between the getter/setter and van Laarhoven encoding - if you apply some DRY to

                        data Lens s a = Lens (s -> a) (s -> a -> s)
                        

                        we can factor out the s -> to get

                        data Lens s a = Lens (s -> (a, a -> s))
                        

                        or: a lens is a function which, given an s, can give you an a, and a way to re-fill the hole made by removing that a from the s

                        This isn’t that different from what a zipper is - it’s the combination of some focused part of a structure and the rest of the structure, where “the rest” is whatever makes sense for that structure.
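                        To make the factored encoding concrete, here’s a minimal runnable sketch (the names view', set', and _1' are mine, not the lens package’s):

                        -- The factored encoding: one function that returns the focus
                        -- together with a way to re-fill the hole.
                        newtype FactoredLens s a = FactoredLens (s -> (a, a -> s))

                        -- The ordinary getter and setter fall out directly.
                        view' :: FactoredLens s a -> s -> a
                        view' (FactoredLens l) s = fst (l s)

                        set' :: FactoredLens s a -> s -> a -> s
                        set' (FactoredLens l) s a = snd (l s) a

                        -- A lens focusing on the first component of a pair.
                        _1' :: FactoredLens (a, b) a
                        _1' = FactoredLens (\(a, b) -> (a, \a' -> (a', b)))

                        main :: IO ()
                        main = do
                          print (view' _1' (1 :: Int, "hello"))   -- 1
                          print (set' _1' (1 :: Int, "hello") 42) -- (42,"hello")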

                        Because there is this strong link, the Haskell lens package used to include Data.Lens.Zipper (now moved into its own package https://hackage.haskell.org/package/zippers-0.3.2/docs/Control-Zipper.html). What this gives you is the ability to use lenses to make zippers into any structure, with arbitrary movement around it.

                      1. 6
                        • Haskell, for one, does not have maps in base either.
                        • Query-string parameter collections are multimaps. There are at least five sensible ways to implement them for this particular use case.
                        1. 13

                          Haskell doesn’t have hashmaps in base, but the tools that Haskell gives you to implement maps are so powerful that it makes them easy. With Hare you have to draw the rest of the owl. Some people do like drawing the rest of the owl, though.

                          1. 2

                            :shrugs: I cannot imagine using a language without generics. Even in C one frequently ends up emulating vtables or passing comparators manually and so on. But the arguments in the article were lacking.

                          2. 3
                            lookup :: Eq a => a -> [(a, b)] -> Maybe b
                            

                            Is in the Prelude, I think, and works well until you need performance. I mostly use Haskell for research code (read: not performance intensive) and you’d be surprised how far list-based maps take you.

                            1. 2

                              Is that like an alist (association list) in Lisp?

                              1. 3

                                Yes, it looks like it.

                                1. 2

                                  Yes, the tuple (a, b) represents a key paired with a value (equivalent to Lisp’s '(a . b)), and the [ ] brackets imply a cons list. The alist is linear, and insert/append/remove operations are still very much O(n).

                                  Data.Map implements a balanced tree that you can use much like a hash map, and it is simple to implement on your own (if you like writing trees in Haskell, that is).
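                                  A small sketch contrasting the two (the phones value is mine):

                                  import qualified Data.Map as Map

                                  -- An association list: just a list of key/value tuples.
                                  phones :: [(String, Int)]
                                  phones = [("alice", 1234), ("bob", 5678)]

                                  -- Prelude's lookup walks the list: O(n).
                                  fromAlist :: Maybe Int
                                  fromAlist = lookup "alice" phones

                                  -- Data.Map stores the same pairs in a balanced tree: O(log n).
                                  fromMap :: Maybe Int
                                  fromMap = Map.lookup "alice" (Map.fromList phones)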

                            1. 1

                              Reminds me of this paper, which uses Haskell to develop some physics. Very neat!

                              1. 3

                                But don’t you have to come up with a seed number … at random?

                                There ought to be a way to generate a seed by observing your environment … maybe adding up all the digits you can see at that moment, or counting the number of something…

                                1. 5

                                  Well, given how bad humans are at generating sequences of random numbers, perhaps even arbitrarily picking one seed is a lot better than arbitrarily picking 59 numbers without an algorithm. So you’re right that some human choice is required, but hopefully its impact is diminished.

                                  maybe adding up all the digits you can see at that moment, or counting the number of something…

                                  Counting the number of doors and windows, perhaps? Or just checking the time?

                                  1. 2

                                    If you have a method (or multiple methods) of generating random numbers that you don’t quite trust, you can add them up modulo the base number. This will take numbers sampled from biased distributions, and produce a much more uniform distribution.
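                                    A minimal sketch of the trick (the function name is mine). If at least one of the samples is uniform and independent of the rest, the sum modulo the base is uniform:

                                    -- Combine several not-quite-trusted samples, each in 0..m-1,
                                    -- by summing them modulo m.
                                    combine :: Int -> [Int] -> Int
                                    combine m samples = sum samples `mod` m

                                    -- combine 59 [12, 44, 7] == 4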

                                1. 12

                                  I’d like to mention Bartosz Ciechanowski, who explains complex technical topics with interactive, animated figures.

                                  1. 5

                                    These are so well crafted that I’m at a loss. I can’t imagine the amount of time it would take to come up with something like this.

                                  1. 7

                                    Thanks for sharing this! I was recently in a “work-on-blog” mood, and was at a loss as to what to add that would actually positively contribute to my site.

                                    While I don’t really want to adopt the ideas that display more personal information (what you’re listening to, where you are, that sort of thing), seeing this list is inspiration enough. I myself have recently added a content graph, which shows all the articles on my site and the links between them. I’m quite happy with it!

                                    1. 3

                                      Oh, this is neat. How did you get the within-site link set?

                                      1. 2

                                        I just wrote a (pretty dirty) Ruby script: https://dev.danilafe.com/Web-Projects/blog-static/src/branch/master/analyze.rb

                                        It reads the markdown files that make up the site and searches for the relref (within-site link) Hugo shortcode.

                                        1. 2

                                          Ah, cool - that’s roughly what I was guessing. I’ve done something similar with a tiny Rust tool to rewrite links when dealing with legacy content. Thanks!

                                    1. 4

                                      Like someone else has pointed out, there’s a glaring lack of Rust code to go with all of these descriptions. I’d like to see what writing “lunatic Rust” looks like, and I can’t find it in this post, nor anywhere on the website.

                                      1. 5

                                        This is beside the point, but the un-closable contact banner is very frustrating when reading.

                                        1. 2

                                          Good point! I’ll do something about that soon.

                                          1. 2

                                            Thanks! The article itself was great, by the way.

                                        1. 5

                                          Threads’ functionality in the Element apps aside, the addition of threads to the Matrix specification makes for a nice experience in non-chat clients. In Matrix Highlight, threads are used to keep track of comments on a single highlighted text snippet, and I dare say it works quite well!

                                          1. 5

                                            The new hotness is a single binary blog, with all the posts embedded inside.

                                            mutters kids these days <closes tab>

                                            Seriously though, why? Generating a static site from a bunch of files or some content in a DB is a Solved Problem™. I guess it’s the age-old truth that it’s way more fun to design and code a blog engine than to… blog.

                                            1. 13

                                              That’s what I’d expect: one is programming and the other is writing. Most of us find programming easier.

                                              1. 8

                                                to me, part of the idea is that the “blog” can be extended to be much more than a blog. if you use hugo or a similar tool, how would you implement a tool like my age encryption thing or even a public ip fetcher?

                                                you noted that building a blogging engine is fun - it is! a lot of fun! my take is that people should focus on making a website that’s not just a blog - it’s a fun, personal engine that happens to contain a blog. focusing on “blogs are a solved problem” is missing the point, imo.

                                                1. 6

                                                  I use Hugo, and to provide the tools you do I’d simply use a separate subdomain and program for each tool. Why should my blog be concerned with the details of age encryption or even echoing a user’s IP? A blog, in my mind, is a collection of articles. And a website can be backed by several programs.

                                                  In fact, I provide a variety of small sub-services to myself, and they’re simply separate programs. This has the added benefit that I could shuffle them between machines independently of one another.

                                                  1. 4

                                                    Right. It feels like we’re basically reinventing CGI scripts, but worse.

                                                    1. 1

                                                      why should my blog concern itself with…

                                                      if you enjoy what you’re currently doing, i’m not here to persuade you. i optimize my personal projects for fun, reliability, and maintainability. :3 building separate services and making subdomains for trivial functions isn’t a good time imho. i also don’t like that i’d have to think pretty hard if i wanted to integrate those services on the blog (say, dynamically loading a user’s IP in a post). with one binary, everything is already connected. but honestly, defending the “validity” of “my way” feels meh. i like parades - the more the merrier!

                                                    2. 1

                                                      I use software to generate HTML from Markdown for my blog, but that’s only part of my site. I have some other static content, some CGI service, a gemsite, etc.

                                                      As far as I can see from my limited understanding of the linked post, it’s basically akin to serving a single HTML page, albeit compressed, from a compiled executable. You still need to generate the HTML before zipping it. So you’ve just shifted one step of the integration pipeline.

                                                      By using tried and tested technology, I can focus on producing the stuff I want. I’ve already run into some issues with gemini server software and it reminded me why I don’t want to deal with that kind of stuff.

                                                      https://portal.mozz.us/gemini/gerikson.com/gemlog/tek/gemserv-update-woes.gmi

                                                      In summary and conclusion, serving my site as a single executable would give me nothing other than bragging rights, and like I stated above, I’m too old for that particular flavor of ordure.

                                                    3. 5

                                                      Same reason why it’s so much more fun to write a game engine than a game 😅

                                                    1. 3

                                                      Hey, that’s me! Thank you for sharing :)

                                                      1. 4

                                                        Np, I like the approach and kinda want it to work in Firefox; maybe I’ll fiddle with a pull request at some point.

                                                        1. 3

                                                          It more-or-less works on Firefox right now. It seems to crash occasionally, but I’ve been able to use it. The only thing is, I haven’t looked into packaging it.

                                                        2. 2

                                                          Hey, could you describe the format of the annotations? People have tried to do stuff like this before (in a more centralized way) and the hard — I would almost say “insoluble” — problem is how you refer to the text being annotated, when the page it’s on can change arbitrarily.

                                                          (IIRC, much of the complexity of Ted Nelson’s (in)famous Xanadu had to do with this. In Xanadu, links could point to any range within a document.)

                                                          1. 2

                                                            (IIRC, much of the complexity of Ted Nelson’s (in)famous Xanadu had to do with this. In Xanadu, links could point to any range within a document.)

                                                            This kind of reminds me of Chrome’s text fragment anchors - kind of a worse-is-better version of Xanadu’s.

                                                            1. 2

                                                              Right now, the annotations contain a pair of (path, offset) tuples. The first (path, offset) indicates where an annotation starts, and the second one where it ends. The path is for selecting a DOM node, and the offset is for the textual offset within that node. I’m aware that this is not how it’s always done in other systems. For instance, I’ve seen a pattern-matching approach (“text that starts with A, and ends with B”); it seems like this would malfunction on pages where similar text is often repeated. On the other hand, that approach would prevent structural changes (wrapping the content in a div, say) from mangling the annotation.
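                                                              For illustration only, the shape of that format might look something like this (hypothetical types, not the project’s actual code):

                                                              -- One end of an annotation: a DOM path plus a text offset.
                                                              data Anchor = Anchor
                                                                { domPath    :: [Int] -- child indices leading to the target node
                                                                , textOffset :: Int   -- character offset within that node's text
                                                                }

                                                              -- An annotation spans from one anchor to another.
                                                              data Annotation = Annotation
                                                                { annStart :: Anchor
                                                                , annEnd   :: Anchor
                                                                }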

                                                              You’re completely right that pages changing presents a serious problem. There was a writeup by hypothes.is (I’m struggling to find it now) about the sort of annotation model they use, which involves several different schemas (e.g., a combination of both formats I’ve described above). I am hoping to apply a similar strategy to Matrix Highlight. However, it’s currently in its early stages, so I’ve had to worry about other parts of the software.

                                                              Ultimately, no annotation format is resilient to a page being completely replaced or fundamentally altered. Another hope of mine is to integrate with a web archiving technology (e.g. WARC) and store an archive of the page being annotated at the time it was accessed. This way, even if the original changes, the old, annotated version will remain intact.

                                                          1. 2

                                                             My understanding was that this is still not the ideal data structure as far as Okasaki is concerned. In particular, code may still retain a reference to a queue before something was popped from it (sharing purely functional data structures is perfectly fine!); this code would once again pay the O(n) penalty if it decided to pop.

                                                            The secret ingredient is laziness; there are some clever tricks that can be done to make sure that the “reverse” operation amounts to forcing some kind of thunk; this thunk, once forced, remains computed, and thus (I believe), the pre-pop version of the data structure can be popped from a second time in O(1).
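                                                             For reference, here’s a minimal sketch of the simple (non-lazy) two-list queue being discussed - not Okasaki’s final, lazy structure:

                                                             -- Front list holds elements to pop; back list collects pushes.
                                                             data Queue a = Queue [a] [a]

                                                             push :: a -> Queue a -> Queue a
                                                             push x (Queue front back) = Queue front (x : back)

                                                             -- When the front runs out, reverse the back: O(n). With persistent
                                                             -- sharing, every holder of the old queue re-pays that reverse,
                                                             -- which is exactly the cost the lazy version memoizes away.
                                                             pop :: Queue a -> Maybe (a, Queue a)
                                                             pop (Queue [] [])            = Nothing
                                                             pop (Queue [] back)          = pop (Queue (reverse back) [])
                                                             pop (Queue (x : front) back) = Just (x, Queue front back)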

                                                            1. 5

                                                              There’s a convention in C – and even some other languages, I’ve heard – to call loop indices i. Would it be easier to read code that used variables named index instead?

                                                              I’d hazard that the answer might be yes, for a beginner: if it’s your first time seeing a for loop, having to pick up the local slang at the same time you’re wrapping your head around the concept of iteration might be distracting noise.

                                                              But you can have both: you can present non-idiomatic code to beginners, who are already having enough trouble parsing a brand new syntax and understanding brand new concepts, and then explain common conventions afterwards.

                                                              So to repeat: Haskell code is too short. In particular, a ton of the example code in various resources designed to teach Haskell to beginners is too short.

                                                              I agree with the second part of this, but the generalization sounds like “You should be writing for (int index = 0; ...).”

                                                              (But is the term “index” any more meaningful, to someone who is encountering index-based iteration for the first time? Perhaps that term is just as arbitrary as i, to a beginner.)

                                                              map :: (a -> b) -> [a] -> [b]
                                                              

                                                              Does this clarify that x above is a thing while xs is a list?

                                                               This is interesting to me: you know xs is a list because it appears on the right-hand side of the : constructor. You don’t need a type annotation for that; it can’t not be a list. But of course you can’t know that if you aren’t used to reading Haskell.

                                                               map f (element:list) = (f element) : map f list
                                                              

                                                              I don’t think this is really much better than x:xs. Sure, list is a list. But lots of things are lists. It doesn’t tell you anything about this list, except its type – which the type signature already tells you.

                                                               map f (head:tail) = (f head) : map f tail
                                                              

                                                              Do the terms “head” and “tail” make more sense? That’s probably another learned convention.

                                                               map f (car:cdr) = (f car) : map f cdr
                                                              

                                                              Crystal clear.

                                                               map f (first:rest) = (f first) : map f rest
                                                              

                                                              I think that’s how I would write it.

                                                               map f (firstElement:restOfTheElements) = (f firstElement) : map f restOfTheElements
                                                              

                                                              Is “element” a learned term, or an intuitive one for native English speakers? It’s hard to remember.

                                                               On single-letter module imports: this feels like a very different sort of argument than “element is more clear than x.” Single-letter module imports make code difficult to read precisely because they’re not a convention: if every file has its own set of custom module abbreviations, I have to learn what all of them are supposed to mean before I can read the file. (I think there’s an argument for consistent abbreviations used across a codebase, which have a one-time cost associated with learning them, but I don’t think that’s what the article is raging against.)

                                                              I was interested to see that the blog is generated in Hakyll. I wonder if that was the case when the article was posted (nearly a year ago). Seems not.

                                                              1. 8

                                                                 When I was being taught Haskell, everywhere I looked I would see functions named “sht” or “rp” or some other combination of up to 4 letters. I didn’t like this at all, even way past the “beginner” stage, because it was impossible to decipher what the heck a function or variable was. So I started writing code with descriptive variable names: withNoDecreasing, recordFixpoints, and so on, and I write code like that to this day.

                                                                But what I notice is that it becomes increasingly hard to see patterns in your code when the variable names get very long. Things that would look very familiar, like f <$> m1 <*> m2, would start getting really “stretched out”, to the point where maybe a part of the expression is on the next line, or put into a let expression to avoid making the line 200 characters long. Now it’s not so clear what’s going on; and now, too, you’re pushed to assign names to previously “intermediate” computations.

                                                                 My point is, I didn’t just learn conventions such as x and xs, a and b, or m and f, but I also learned the “shape” of common types of computations (like a function being applied to two arguments within an applicative functor). Renaming the variables doesn’t make the code more readable to me, partly for the reasons you described, but also because I can no longer recognize these “shapes”.

                                                                1. 2

                                                                  Now it’s not so clear what’s going on; and now, too, you’re pushed to assign names to previously “intermediate” computations.

                                                                  And this is usually a good thing. Decompose and abstract until it fits your screen and brain.

                                                                2. 5

                                                                  I like the way you’ve thought through this, kudos! Personally, I agree that first:rest is probably the most intuitive syntax, and has the benefit of being relatively short as well!

                                                                  For index-based iteration, I try to use names that have meaning within the problem domain. For example, rowIndex and columnIndex if I’m iterating a matrix generically, or personIndex for iterating a list of Person records (or whatever). Part of the value I see here, though, is that it is harder to mix the indexes up in a nested loop. It also facilitates moving code around as there’s less chance of accidentally shadowing i.

                                                                  1. 3

                                                                     Haskell conventionally abbreviates index to ix, which I rather like. If I have a list foos, then I might have an index into that list called fooIx. That reads much better to me than i and then j.

                                                                    1. 1

                                                                      I see an actual problem if you use both i and j in nested loops - those are so easily mixed up at a glance.

                                                                    1. 1

                                                                      Haskell is great, except for all the monad transformer stuff. That’s all an absolute nightmare. At this point, I just don’t really see a reason to use it over Rust for writing practical (i.e. non-research) software. Rust has the most important pieces from Haskell.

                                                                      1. 12

                                                                        My experience with monad transformers is that they can offer a lot of practical value. There’s a little bit of a learning curve, but I think that in practice it’s a one-time cost. Once you understand how they work, they don’t tend to add a lot of cognitive burden to understanding the code, and can often make it easier to work with.

                                                                        I do like some of the work people are doing with effects systems to address the down sides of monad transformers, and eventually we might move on, but for a lot of day to day work it’s just very common to end up doing a lot of related things that all need to, e.g. share some common information, might fail in the same way, and need to be able to do some particular set of side effects. A canonical example would be something like accessing a database, where you might have many functions that all need to access a connection pool, talk to the database, and report the same sorts of database related errors. Monad transformers give you a really practically effective way to describe those kinds of things and build tooling to work with them.
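                                                                         To make that concrete, here’s a minimal mtl-style sketch; all the names in it (Env, Pool, fetchUser, and so on) are invented for illustration:

                                                                         {-# LANGUAGE ConstraintKinds  #-}
                                                                         {-# LANGUAGE FlexibleContexts #-}
                                                                         import Control.Monad.Except (ExceptT, MonadError, runExceptT, throwError)
                                                                         import Control.Monad.IO.Class (MonadIO, liftIO)
                                                                         import Control.Monad.Reader (MonadReader, ReaderT, asks, runReaderT)

                                                                         data Pool    = Pool                  -- stand-in for a connection pool
                                                                         data DbError = UserNotFound deriving Show

                                                                         newtype Env = Env { envPool :: Pool }

                                                                         -- A function with this constraint can read the shared Env, fail
                                                                         -- with a DbError, and perform IO; the type says exactly that.
                                                                         type Db m = (MonadReader Env m, MonadError DbError m, MonadIO m)

                                                                         fetchUser :: Db m => Int -> m String
                                                                         fetchUser uid = do
                                                                           _pool <- asks envPool
                                                                           liftIO (putStrLn ("querying user " ++ show uid))
                                                                           if uid == 42 then pure "alice" else throwError UserNotFound

                                                                         -- One concrete stack that satisfies all three constraints.
                                                                         runDb :: Env -> ReaderT Env (ExceptT DbError IO) a -> IO (Either DbError a)
                                                                         runDb env action = runExceptT (runReaderT action env)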

                                                                        1. 8

                                                                          What’s wrong with “all the monad transformer stuff”?

                                                                          1. 3

                                                                             Monads are mostly complexity for the sake of being able to imagine that your functions are “pure”. I have not found any benefits to such an ability, besides purely philosophical ones, at least in the way most functional programming languages are built. There are better ways, ones that forgo the need for imagination, but the functional programming crowd doesn’t seem to find them.

                                                                            1. 15

                                                                              Monads are for code reuse, they’re absolutely, completely, not at all about purity.

                                                                              1. 3

                                                                                 I have not found them any good for that use case either. The code I’ve seen usually ends up as a recursive monad soup that you need to write even more code to untangle. They can work well in some limited contexts, but those contexts can often be served just as well by other programming constructs, in my opinion. Limited code reuse is, in general, a problem with many half-assed solutions that only work in limited contexts, for example inheritance, DSLs, composition (the OOP kind), etc. Monads are just another one of them, and honestly, they are just as easy to overuse as the others, if not more so.

                                                                                1. 9

                                                                                  I do not understand this perspective at all. traverse alone saves me an astonishing amount of work compared to reimplementing it for every data structure/applicative pair.
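                                                                                    For instance, a single traverse covers both pure validation and effectful iteration (a small sketch; readMaybe is from Text.Read):

                                                                                    import Text.Read (readMaybe)

                                                                                    -- With the Maybe applicative: parse every string, failing as a
                                                                                    -- whole if any single parse fails.
                                                                                    parseAll :: [String] -> Maybe [Int]
                                                                                    parseAll = traverse readMaybe

                                                                                    -- With the IO applicative: run an action per element, collecting
                                                                                    -- the results.
                                                                                    readLines :: Int -> IO [String]
                                                                                    readLines n = traverse (const getLine) [1 .. n]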

                                                                                  1. 2

                                                                                    The reason you need traverse at all is monads. It’s all complexity for the sake of complexity in my eyes.

                                                                                    1. 5

                                                                                      Not at all. traverse works for a far wider class of things than just monads. And even if a language didn’t have the notion of monad it would still benefit from a general interface to iterate over a collection. That’s traverse.

                                                                                      1. 3

                                                                                        general interface to iterate over a collection

                                                                                        So, a for loop? A map() in basically any language with first-class functions?

                                                                                        Anyways, my comment about needing traverse at all is in response to needing to reimplement it for many different data structures. The problem I see with that is that the reason you get all of those data structures is because of monads. There’s a lot less need for such a function when you don’t have monads.

                                                                                      2. 3

                                                                                        The reason you need traverse at all is monads. It’s all complexity for the sake of complexity in my eyes.

                                                                                        How would you write, say,

                                                                                        traverseMaybeList :: (a -> Maybe b) -> [a] -> Maybe [b]
                                                                                        traverseEitherBoolSet :: (a -> Either Bool b) -> Set a -> Either Bool (Set b)
                                                                                        

                                                                                        in a unified way in your language of choice?

                                                                                        1. 3

                                                                                            On a good day, I’d avoid the Maybe and Either types that are used for error handling, and just have good old exceptions and no need for any traversal. On a bad day, I’d probably have to use traverse, because Maybe and Either are monads, and they create this problem in the first place.

                                                                                          1. 1

                                                                                            I think if you prefer exceptions to Maybe/Either then you’re sort of fundamentally at odds with Haskell. Not saying this in a judgmental way, just that “exceptions are better than optional/error types” is not how Haskell thinks about things. Same with Rust.

                                                                                            Though, even in Python I typically write functions that may return None over functions that throw an exception.

                                                                                            1. 1

                                                                                              I think if you prefer exceptions to Maybe/Either then you’re sort of fundamentally at odds with Haskell.

                                                                                                I’m pretty sure that just by disliking monads I’m at odds with Haskell as it currently is. But do note that not all exceptions are crafted equally. Take Zig, for example, where errors functionally behave like traditional exceptions, but are really more similar to error types in implementation. A lot nicer than both usual exceptions and optional/error types, in my opinion.

                                                                                              Though, even in Python I typically write functions that may return None over functions that throw an exception.

                                                                                                It really depends on whether it makes sense for the function to return None. If you’re trying to get a key from a cache and the network fails, returning None is fine. If you’re trying to check whether a nonce has already been used and the network fails, returning None is probably the wrong thing to do. Exceptions are a great way to force corrective behavior from the caller. Optional types have none of that.

                                                                                              1. 1

                                                                                                I don’t understand why you say Zig error types “behave like traditional exceptions”. My understanding is that if I have a function that returns a !u32, I can’t pass that value into a function that takes a u32.

                                                                                                Similarly, I don’t understand the idea that exceptions force correctional behavior. If I have a function that throws an exception, then I can just… not handle it. If I have a function that returns an error type, then I have to specify how to handle the error to get the value.

                                                                                                1. 1

                                                                                                  Yes, but essentially, you are either handling each error at the call site, or, more often, you bubble the error upwards like an exception. You end up with what I would call forcibly handled exceptions.

                                                                                                  Not correcting some behavior leads to your program dying outright with exceptions. If you handle the exception, I’d say you are immediately encouraged to write code that corrects it, just because of how the handling is written. With functions that return an error type, it’s very easy to just endlessly bubble the error value upwards, without handling it.

                                                                                                  1. 1

                                                                                                    With functions that return an error type, it’s very easy to just endlessly bubble the error value upwards, without handling it.

                                                                                                      If I have an Optional Int and I want to pass it to a function that takes an Int, I have to handle it then and there. If I have an Optional Int and my function signature says I return an Int, I must handle it within that function. The optional type can’t escape, whereas exceptions can and do.

                                                                                          2. 2

                                                                                            I’d argue that these specific types are actually not very useful. If any error occurs, you don’t get any results? In my experience, it’s more likely that we need to partition the successful results and log warnings for the failures. The problem with these rigidly defined functions is that they don’t account for real-world scenarios, and you just end up writing something by hand.

                                                                                            1. 1

                                                                                              Haskell’s standard library is anything but rigid, in my opinion. Take the specific case of “something that contains a bunch of (Maybe item)”:

                                                                                              • If you want a list of all the items inside Just, but only if there is no Nothing anywhere, you write toList <$> sequenceA l.
                                                                                              • If you want a list of all items inside Just, you can write fold $ toList <$> l.
                                                                                              • If you want just the first item, if any, you write getFirst $ fold $ First <$> l
                                                                                              • If you want the last item, if any, you can write getLast $ fold $ Last <$> l

                                                                                              These are specific to Maybe, especially the First and Last, I’ll give you that. But functions from the stdlib can be snapped together in a huge number of ways to achieve a LOT of things succinctly and generally.
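                                                                                              To make those concrete, here are all four in runnable form (the items value is mine):

                                                                                              import Data.Foldable (fold, toList)
                                                                                              import Data.Monoid (First (..), Last (..))

                                                                                              items :: [Maybe Int]
                                                                                              items = [Just 1, Nothing, Just 3]

                                                                                              allOrNothing :: Maybe [Int]
                                                                                              allOrNothing = toList <$> sequenceA items      -- Nothing

                                                                                              justs :: [Int]
                                                                                              justs = fold (toList <$> items)                -- [1,3]

                                                                                              firstJust :: Maybe Int
                                                                                              firstJust = getFirst (fold (First <$> items))  -- Just 1

                                                                                              lastJust :: Maybe Int
                                                                                              lastJust = getLast (fold (Last <$> items))     -- Just 3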

                                                                                              1. 1

                                                                                                OK, this doesn’t actually answer my question. Say I have a stream of incoming data. What I really want to do is validate the data, log warnings for the ones that fail, and stream out the ones that succeed.

                                                                                                1. 2

                                                                                                  Then use an API that’s designed for streaming processing of data, for example https://hackage.haskell.org/package/streamly

                                                                                              2. 1

                                                                                                I wrote up a few different operations on “collections of Maybe” or “collections of Either” in Haskell. The total number of lines of code required to express these operations using the standard Haskell library was around 12, including some superfluous type signatures. They cover all the cases in my other comment, as well as the “partitioning” you mention in your post. Here’s a gist:

                                                                                                https://gist.github.com/DanilaFe/71677af85b8d0b712ba2d418259f31dd

                                                                                    2. 9

                                                                                      Monads are mostly complexity for the sake of being able to imagine that your functions are “pure”.

                                                                                      That’s not what they’re for. Monad transformers (well, the transformers in the mtl with the nice typeclasses) in particular let you clearly define what effects each piece of code has. This ends up being pretty useful: if you have some sort of monad for, say, SQL server access, you can then see from a given function’s type if it does any SQL transactions. If you attempt to do SQL where you’re not supposed to, you get a type error warning you about it. I think that’s pretty convenient. There are lots of examples of this. If you’re using the typeclasses, you can even change the effect! Instead of reading from an actual db, you could hand off mocked-up data if you use one monad and real db info with the other. This is pretty neat stuff, and it’s one of my favorite features of Haskell.
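                                                                                      A small sketch of that swapping idea; the class and both instances here are invented for illustration, not a real library:

                                                                                      -- A hypothetical effect class: code only states that it needs
                                                                                      -- *some* way to look up users.
                                                                                      class Monad m => MonadUserStore m where
                                                                                        lookupUser :: Int -> m (Maybe String)

                                                                                      -- "Real" instance: would talk to an actual database.
                                                                                      instance MonadUserStore IO where
                                                                                        lookupUser uid = do
                                                                                          putStrLn ("SELECT name FROM users WHERE id = " ++ show uid)
                                                                                          pure (Just "alice")

                                                                                      -- Test instance: pure, backed by a fixed table.
                                                                                      newtype Mock a = Mock { runMock :: a }

                                                                                      instance Functor Mock where
                                                                                        fmap f (Mock a) = Mock (f a)
                                                                                      instance Applicative Mock where
                                                                                        pure = Mock
                                                                                        Mock f <*> Mock a = Mock (f a)
                                                                                      instance Monad Mock where
                                                                                        Mock a >>= f = f a

                                                                                      instance MonadUserStore Mock where
                                                                                        lookupUser uid = Mock (lookup uid [(1, "test-user")])

                                                                                      -- The same code runs against either instance.
                                                                                      greet :: MonadUserStore m => Int -> m String
                                                                                      greet uid = maybe "who?" ("hello " ++) <$> lookupUser uid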

                                                                                      I agree that they might not always be super clear (and monad transformers start to have pretty shit perf), but they’re not just for intellectual pleasure.

                                                                                      1. 1

                                                                                        Monad transformers (well, the transformers in the mtl with the nice typeclasses) in particular let you clearly define what effects each piece of code has.

                                                                                        Slight correction: they usually let you define what classes of effects a piece of code has. This can of course range in abstraction, from a very specific SQLSelect to an overly broad, and not at all descriptive, IO. One problem often seen with this is that methods often combine several different effects to achieve their result, which leads to either having an obnoxiously large function signature, or having to merge all the effects under a more generic one (the more useful SQL, if you’re lucky and the method only touches SQL, or the frankly useless IO), in both cases losing a big part of the usefulness.

                                                                                        But the thing is, you don’t need monads to achieve any of that. If you represent external state (which the effects are meant to abstract away for you) as an input to a function, and the function outputs the same external state back, along with the commands it wants to run, a runtime can perform the IO and bring you back the results on the next function call. This might be somewhat counter-intuitive, as people are used to their main() function being run only once, but it leads to another way of thinking: one where you are more aware of what state you carry and what external systems each function can interface with, as it lives straight in the function signature, with only an opportunity to hide inside a type that groups several of them. This style would also naturally promote IO pipelining, since you easily can (and probably want to) submit more than one IO request at once. You can build the IO runtime on anything you want as well, be it io_uring or a weird concoction of cloud technologies, as long as you provide your program with the same interface. It also brings the same testing possibilities, even slightly more, as golden-data tests become ridiculously easy to write. More impressively, it brings the possibility of relatively easy time-travel debugging, as you only need to capture the inputs to the main function on every call to accurately replay the whole computation, and, in part, to check some fixes without even re-doing the IO. I think this is a better direction for functional programming to move in, but I myself don’t have the time, or the motivation in functional programming, to push it that way.

                                                                                        1. 2

                                                                                          Classes of effects instead of effects is a distinction without a difference, right? Instead of using StateT, I can define a monad typeclass that only puts things into a state and another that only takes things out (in fact they exist, and are called Writer and Reader), and I can get as granular as I’d like with it. The amount of specificity you want is entirely up to you.

                                                                                          I agree that the IO monad is pretty frustratingly broad. You’re also correct that you don’t need monads to do this sort of thing. I’m having a little bit of trouble understanding your replacement. You mean a function with external state a and pure inputs b with result c should have the type a -> b -> (c, a), right? What would you do when you want to chain this function with another one?

                                                                                          1. 1

                                                                                            No. Your main function’s signature looks like a -> a, and a runtime calls it again and again, taking the actions the function specified in the output type that contains the external state objects, performing them, and putting the results back into those same objects. Your other functions grow in a similar manner: for example, a function that takes an external resource a and a pure input b in order to submit a write request would look like a -> b -> a. An important thing to note is that it only submits the request; the request is only performed once the main function returns and the runtime takes over. As such, you couldn’t do reading as trivially as a -> b -> (a, c), because you cannot read the data while “your” code is running. This isn’t great for usability, but it can in large part be solved by using continuations.
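                                                                                            A rough sketch of what this might look like (all of the types and names are invented for illustration):

                                                                                            -- The "external state" that flows in and out of the program.
                                                                                            data World = World
                                                                                              { writeRequests :: [(FilePath, String)] -- IO we want performed
                                                                                              , readRequests  :: [FilePath]           -- reads we want started
                                                                                              , readResults   :: [(FilePath, String)] -- results from the previous turn
                                                                                              }

                                                                                            -- "Submit a write request": pure, merely records the intent.
                                                                                            submitWrite :: World -> (FilePath, String) -> World
                                                                                            submitWrite w req = w { writeRequests = req : writeRequests w }

                                                                                            -- The a -> a main function: the runtime calls it, performs the
                                                                                            -- requested IO, fills in readResults, and calls it again.
                                                                                            step :: World -> World
                                                                                            step w =
                                                                                              case lookup "config.txt" (readResults w) of
                                                                                                Nothing  -> w { readRequests = "config.txt" : readRequests w }
                                                                                                Just cfg -> submitWrite w ("log.txt", "config was: " ++ cfg)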

                                                                                            As a side note, I don’t particularly enjoy chaining. It’s another solution that is only needed because monads make it appear that the function isn’t performing IO, when it’s more useful for you to think that it does. With continuations, you could just make this look like several function calls in a row, with plain old exceptions to handle errors.

                                                                                            1. 2

                                                                                              This seems far more complex than using monads to me, but different people think in different ways. I don’t know what you mean when you say you don’t enjoy chaining - you don’t like sequencing code?

                                                                                              1. 1

                                                                                                I like sequencing code, but I don’t enjoy sequencing code with monads, since monads force the sequencing of code they touch to be different, just because they are monads.

                                                                                                1. 2

                                                                                                  Can you provide an example of monads changing the way you sequence code? That’s one of the major benefits of do-notation in my mind: you can write code that looks like it is executing sequentially.

                                                                                                  1. 2

                                                                                                    The do-notation is the problem. Why would sequencing functions that do IO need to be different from sequencing functions that don’t? IO is something normal that a program does, and functional programming just makes it weird, because it likes some concept of “purity”, and IO is explicitly removed from it when monads are the chosen solution.

                                                                                                    1. 2

                                                                                                      Because functions that do IO have to have an order in which they execute. The machinery of a monad lets you represent this. I don’t care which side of (2+2) + (2+2) executes first, but I do care that I read a file before I try to display its content on screen.
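
                                                                                                      A minimal illustration (standard Haskell): the ordering falls out of the data dependency that >>= introduces.

                                                                                                          -- putStrLn cannot run until readFile has produced its String
                                                                                                          main :: IO ()
                                                                                                          main = readFile "data.txt" >>= putStrLn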

                                                                                                      1. 1

                                                                                                        In the general case, you don’t care about the order in which the IO executes as long as there are no dependencies between the operations. multiply(add(2, 2), 2) will always perform the addition first and the multiplication second, just like displayData(readFile(file)) will always read the file first and display the data second. The compiler will understand this without needing to distinguish the functions that do IO from those that don’t. In the few cases where you have no direct data dependencies but still need to perform IO in a specific order, you may use specific barriers. And even with them, it would still feel more natural to me.

                                                                                                        1. 2

                                                                                                          In the general case, it’s impossible to determine which code might depend on which. A contrived counterexample would be writing to a socket of program a, which itself writes to a socket of program b, and then writing to the socket of program b directly. The order here matters, but no compiler would be able to determine that.

                                                                                                          In the few cases where you don’t have any fully direct data dependencies, but still need to perform IO in specific order, you then may use specific barriers.

                                                                                                          Yes, these specific barriers are exactly what monads provide.

                                                                                                          Can you provide an example of a monad restructuring how you want to sequence something? I’m very open to seeing how they fall short. I haven’t written Haskell in a long time (changed jobs), but I miss the structure monads give very often.

                                                                                                          1. 3

                                                                                                            Of course no compiler can determine all dependencies between IO actions. In other languages you don’t need to worry much about it, because in other languages the evaluation order is well defined. Haskell, though, forgoes such a definition, and along with the benefits that brings, it also brings its problems, namely the inability to easily order unrelated function evaluations. There are seq and pseq, but they are frowned upon because they break monads :). So the way the IO monad works is by introducing artificial data dependencies between actions. This feels quite hacky to me. But do note that this is mostly a problem with Haskell; many other functional programming languages that are full of monads could get rid of them without much change to the language semantics.
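
                                                                                                            You can see the artificial dependency in GHC itself; a compilable rendition of the representation GHC uses for IO (defined in GHC.Types) looks like this:

                                                                                                                {-# LANGUAGE MagicHash, UnboxedTuples #-}
                                                                                                                import GHC.Exts (State#, RealWorld)

                                                                                                                -- each action consumes and returns a world token; threading that
                                                                                                                -- token is the data dependency that fixes the execution order
                                                                                                                newtype MyIO a = MyIO (State# RealWorld -> (# State# RealWorld, a #))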

                                                                                                            Monads don’t change how I sequence something. But they do greatly annoy me by needing special handling. It’s like mixing async and non-async code in other languages: either you go fully one way, or fully the other. Mixing both does not work well.

                                                                                                            1. 2

                                                                                                              seq and pseq are definitely not frowned upon!

                                                                                                              1. 1

                                                                                                                Monads don’t change how I sequence something.

                                                                                                                Then why did you say:

                                                                                                                I like sequencing code, but I don’t enjoy sequencing code with monads, since monads force the sequencing of code they touch to be different, just because they are monads.

                                                                                                                They also don’t need special handling. Do-notation is syntactic sugar, but there’s nothing in Haskell that privileges monads beyond the standard library deciding to use them for certain things. They are just a typeclass, the same as any other.
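
                                                                                                                For reference, the class in the Prelude is just this (minor members and default methods elided):

                                                                                                                    import Prelude hiding (Monad, (>>=), return)

                                                                                                                    -- quoted (slightly simplified) from base: an ordinary
                                                                                                                    -- typeclass, nothing language-level about it
                                                                                                                    class Applicative m => Monad m where
                                                                                                                      (>>=)  :: m a -> (a -> m b) -> m b
                                                                                                                      return :: a -> m a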

                                                                                              2. 1

                                                                                                As a response to your edit: no, reading is still a class of actions. You can read a single byte, or you can read a million bytes, and those two are very different actions in my mind. Trying to represent such granularity in monads is difficult and, honestly, a waste of time, since you don’t need such granular control anyway. But this is mostly a disagreement over definitions at this point, so no need to discuss it further, I think.

                                                                                              3. 1

                                                                                                Yeah, linear IO is a major motivator for my work on Dawn.

                                                                                            2. 8

                                                                                              This does not match my experience using monads at all.

                                                                                              1. 1

                                                                                                Monads arise naturally from adjoint functors. Perhaps they are not obvious, but that does not mean that they are artificially complex.

                                                                                                It sounds like you vaguely disapprove of effects-oriented programming, but you need to offer concrete alternatives. Handwaves are not sufficient here, given that most discussion about monads comes from people who do not grok them.

                                                                                                1. 3

                                                                                                  Monads arise naturally from adjoint functors. Perhaps they are not obvious, but that does not mean that they are artificially complex.

                                                                                                  Such technobabble explanations are why I try to distance myself from functional programming. While technically correct, they offer no insight for people who do not already understand what monads are.

                                                                                                  It sounds like you vaguely disapprove of effects-oriented programming, but you need to offer concrete alternatives. Handwaves are not sufficient here, given that most discussion about monads comes from people who do not grok them.

                                                                                                  I do, in this comment. It might not be the most understandable, it might not have strong mathematical foundations, and it is definitely wildly different from how people usually think about programs. But I still think that it can offer a better understanding of the effects your program performs, besides giving a bunch of other advantages.

                                                                                                  Also, I don’t disapprove of effects-oriented programming; it’s just that monads are a terrible way of doing it. I feel like there are a lot of better ways of making sure effects are explicit, my suggestion being one of them, and effect handlers, which I learned about recently, being another (a rough sketch of that style follows).
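
                                                                                                  To make “effects as data plus a handler” concrete, a tiny Haskell sketch with made-up names; the program is a pure description, and a separate handler decides how the effects actually happen:

                                                                                                      data Prog = Print String Prog
                                                                                                                | Ask (String -> Prog)
                                                                                                                | Halt

                                                                                                      greet :: Prog
                                                                                                      greet = Print "name?" (Ask (\n -> Print ("hi " ++ n) Halt))

                                                                                                      -- one possible handler; swapping it out changes how effects are
                                                                                                      -- performed without touching the program above
                                                                                                      run :: Prog -> IO ()
                                                                                                      run (Print s k) = putStrLn s >> run k
                                                                                                      run (Ask k)     = getLine >>= run . k
                                                                                                      run Halt        = pure ()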

                                                                                                  1. 2

                                                                                                    I looked this up, and it seems that the claim that every monad arises from an adjunction only holds if you first define a category based on that monad. Isn’t this totally circular?

                                                                                                    1. 3

                                                                                                      In many cases, the algebras for a monad will be things we already cared about. In fact, that was sort of the original point of monads – a way of abstractly capturing and studying a wide variety of algebraic theories together.

                                                                                                      For example, if you’re familiar with the list monad, its algebras are simply monoids, and so its Eilenberg-Moore category is (at least equivalent to) the category of monoids.
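
                                                                                                      Concretely, in Haskell terms (alg is a made-up name): an algebra for the list monad is a structure map [a] -> a compatible with the monad operations, and for lists that is precisely folding with a monoid.

                                                                                                          import Data.Monoid (Sum(..))

                                                                                                          alg :: Monoid a => [a] -> a
                                                                                                          alg = mconcat

                                                                                                          example :: Sum Int
                                                                                                          example = alg [Sum 1, Sum 2, Sum 3]   -- Sum {getSum = 6}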

                                                                                                      There are other monads whose algebras would be groups, or rings, or vector spaces over a field K, or many others.

                                                                                                      But I think Corbin was probably not referring to the way in which every monad comes from at least one adjunction (or two, if you also throw in the one involving the Kleisli category), but rather that if you already have adjunctions hanging around, you get a monad (and a comonad) from each of them in a very natural way. If you’re familiar with order theory by any chance, this is a direct generalisation of how you get a closure operator from a Galois connection between partially ordered sets.

                                                                                                      This entire discussion is almost completely irrelevant to someone using monads to get programming tasks done though. As an abstraction of a common pattern that has been showing up in combinator libraries since the early days of functional programming, you can fully understand everything you need to know about it without any of this mathematical backstory.

                                                                                                      Why we recognise the monad structure in programming is mostly not really to be able to apply mathematical results – maybe occasionally there will be a spark of inspiration from that direction, but largely, it’s just to save writing some common code over and over for many libraries that happen to have the same structure. Maybe monad transformers take that an additional step, letting us build the combinator libraries a bit more quickly by composing together some building blocks, but these would all still be very natural ideas to have if you were just sitting down and writing functional programs and thinking about how to clean up some repetitive patterns. It would still be a good idea even if the mathematicians hadn’t got to it first.

                                                                                                2. 2

                                                                                                  It’s completely opaque to me how to get them to do what I need them to do. I found myself randomly trying things, hoping something would work. And this is coming from someone who found Rust lifetimes to be quite straightforward, even before NLL.

                                                                                              1. 6

                                                                                                The only thing I know about opal is this issue.

                                                                                                1. 2

                                                                                                  Yikes.