1. 2

    For coding, I love the craft, the theory, and all the pieces.

    To me it’s like rearranging magnetic fridge poetry to cast spells. I can affect the real world with my BRAIN!

    I have big piles of books (that I have read) about type theory, theoretical math, and everyday pragmatics. I have expensive keyboards I’ve assembled myself. I recently refactored my Emacs config down to 250 lines of code. I teach programming for free several times a week. I’m slowly picking up FPGA development, and I’ve been building wearable electronics widgets for years.

    I started doing programming because it was the most fun I could have by myself, and suddenly people started paying me money. I don’t expect programming to be this profitable forever, but it is nice to get paid to do what I love.

    1. 4

      My first thought: why write a PDF about developing for the web?! Why not show me instead of tell me? I clicked through the link by Mex in the comments, and discovered:

      Prolog IDE in a browser: https://swish.swi-prolog.org/example/examples.swinb

      1. 1

        Halfway down the page there’s a surprise discursion into Scala vs Haskell, with a pinch of “Are you sure you really need Apache Spark on AWS?” thrown in.

        This is definitely the most entertaining [ANNOUNCE] I’ve read in the past few months.

        1. 2

          Even better, Part 2 is already linked from Part 1!

          1. 1

            Wouldn’t it be easier to have one font for zero and a second for one? Easy binary encoding?

            1. 2

              This toolkit is a bit dated but still maintained (according to its website).

              I was linked to Cello in another thread on Lobsters, and it reminded me of Fudgets. I wanted to study its source to learn how to make GUI toolkits (not GUIs with toolkits). I still want to do that at some point, but I don’t know enough Haskell to make it easy.

              Other GUI toolkits with available source that I know of for higher-level and functional programming languages are wrappers around C/C++ and inherit the underlying model. Fudgets seems to redo things in Haskell and is pretty comprehensive in terms of GUI widgets implemented. Unfortunately (for me), they jump pretty much straight into “traditional” UI elements instead of considering components that could assemble them.

              I found the html version of the thesis linked from that page to be pretty good documentation.

              If someone here knows of a large project that uses this, or has a (more) language-agnostic description of how each GUI widget is implemented, I’d be interested to know.

              1. 1

                I’m pretty sure House used Fudgets for its UI. I’m not sure if the updated Lighthouse also used them. I also likely have a copy of both handy, if you want one.

                1. 1

                  I didn’t know what House was. Google says it’s an OS! Yes, I’d be interested in reading a bit of the source. Hopefully there are some more complex GUI programs in there.

                  Edit: Most of the links on that page are broken, as expected I suppose.

              1. 1

                Had me believing it for the first few paragraphs. Clearly I’ve been working on Enterprise software too long.

                There are still some good points: Trac isn’t friendly to new users, and I do wish GHC were using more recent open-source tools for coordination.

                1. 6

                  Just a note: I found the term Model-Based Testing a bit distracting - then again, I come from a Rails background. I think “Generative Testing in Rust with QuickCheck” would have been more helpful to someone with no prior knowledge of QuickCheck.

                  This also set me off into exploring QuickCheck. For those who don’t know, the most helpful thing I saw to help understand it was watching this video that showed off test.check, a QuickCheck implementation in Clojure: https://www.youtube.com/watch?v=u0TkAw8QqrQ

                  Basically, it’s a way to generate random data and data structures (within certain bounds that you define) to use as inputs when testing your application logic. Since I was also confused about this: it seems people run QuickCheck as a step separate from their non-generative specs to identify specific edge cases, and then add those edge cases as regression tests to their overall test suite. Some generative testing libraries I saw after poking around are even run as part of the test suite, though I’m not sure how I feel about that - couldn’t that result in missing a case locally that then fails on CI due to different inputs?
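                  To make the idea concrete, here’s a minimal, hand-rolled sketch of the generate-and-check loop in plain Python, using only the standard library. Real libraries like Hypothesis or QuickCheck add shrinking, smarter generators, and failure databases on top of this; the property and bounds here are just illustrative.

```python
import random

def prop_sort_idempotent(xs):
    # Property: sorting a list twice gives the same result as sorting once.
    return sorted(sorted(xs)) == sorted(xs)

def check_property(prop, trials=100, seed=0):
    """Generate random inputs within chosen bounds; return a counterexample or None."""
    rng = random.Random(seed)  # fixed seed keeps this run reproducible
    for _ in range(trials):
        xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 20))]
        if not prop(xs):
            return xs  # a failing input: pin it as a regression test
    return None

print(check_property(prop_sort_idempotent))  # no counterexample found -> None
```

                  Note how a failing input comes back as a concrete value, which is exactly what gets added to the regular test suite as a regression test.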

                  1. 5

                    In the past, it was called specification-based (the QuickCheck paper said “specifications”), model-based, or contract-based test generation, depending on which crowd you were listening to. Recently, most posts call it property-based testing. All are accurate, in that each is technically true and has prior work behind it. The superset would probably be specification-based, since what each uses is a specification. Formal specifications are also the oldest of these techniques.

                    Generative is ambiguous, since it sounds like it just means automated - all the test generators are automated to some degree. So we name them from where the process starts. As far as which name to use, even I started defaulting to property-based testing instead of specification-based testing, to go with the flow. I still use the others if the discussion is already using those words, though. For instance, I might say…

                    1. Spec-based if we’re talking formal specifications

                    2. Model-based if we’re talking Alloy models.

                    3. Contract-based if we’re talking Design-by-Contract, Eiffel, or Ada/SPARK since that’s their language.

                    4. Property-based if talking to people using popular languages or something since they’ll find helpful stuff if they Google with that.

                    1. 2

                      Thanks for the background information!

                    2. 2

                      There’s also been some work done to save failing inputs for later retest. I’ve used that to do test driven development with properties.

                      I know that’s supported in version 2 of the original QuickCheck; I’m almost certain Python’s Hypothesis supports it too, but I’m not sure about others.

                      1. 2

                        If you have a QuickCheck implementation that permits easy re-testing of a concrete test case, grab it and use it. Say that, once upon a time, QC found a bug. Keep that concrete test case and add it to your regression test suite. Randomized testing means you don’t really know when randomness will create that same concrete test case again. But if your regression suite includes the concrete test case, you are assured that your suite will always check that scenario.
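                        As a tiny sketch of what that pinning looks like (the function and the failing input here are made up for illustration), the counterexample just becomes an ordinary deterministic test:

```python
# Hypothetical counterexample that a QuickCheck-style run once produced.
FAILING_CASE = [2, 1, 1, 3]

def dedup_sorted(xs):
    # Function under test: sort and remove duplicates.
    return sorted(set(xs))

def test_regression_pinned_case():
    # Deterministic: this exact scenario is now checked on every run,
    # whether or not the random generator ever produces it again.
    assert dedup_sorted(FAILING_CASE) == [1, 2, 3]

test_regression_pinned_case()
print("pinned regression test passed")
```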

                        In the Erlang QuickCheck implementations (the commercial version from Quviq AB in Sweden and also the open “PropEr” package), there’s a subtlety in saving a concrete test case. I assume it’s the same feature/problem with Rust’s implementation of QC. The problem is: re-executing the test assumes that the test model doesn’t change. If you’re actively developing & changing the QC model today, then you may unwittingly change the behavior of re-executing a concrete test that was added to your regression test suite last year. If you’re aware of that feature/problem, then you can change your process/documentation/etc to cope with it.

                        1. 2

                          That’s probably because the first prototype for this required the random value as input to the value generator. I know that because I wrote it, and pushed for its inclusion in the second version of QuickCheck.

                          Nowadays there are libraries that will generate the actual value in such a way that you can copy and paste it into a source file.

                          I’ve heard that Hypothesis in Python keeps a database of failing inputs; I’m not sure if anything else has that feature.

                          1. 2

                            Randomness is only one place where things can go wrong with saved concrete test cases.

                            For example (not a very realistic one), let’s extend the Rust example of testing a tree data structure. The failing concrete test case was: ([Insert(0, 192), Insert(0, 200), Get(0)])

                            Let’s now assume that X months later, the Insert behavior of the tree changes so that existing keys will not be replaced. (Perhaps a new operation, Replace, was added.) It’s very likely that re-execution of our 3-step regression test case will fail. A slightly different failure would happen if yesterday’s Insert were removed from the API and replaced by InsertNew and Replace operations. I’m probably preaching to the choir, but … software drifts, and testing (in any form) needs to drift with it.
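                            A sketch of that drift, replaying the saved three-step case against both insert semantics (a toy dict-based model here, not the Rust tree from the article):

```python
def run(ops, insert_replaces):
    """Replay a saved operation sequence; return the result of the final Get."""
    tree, result = {}, None
    for op in ops:
        if op[0] == "Insert":
            _, k, v = op
            if insert_replaces or k not in tree:
                tree[k] = v  # old semantics replace; new semantics keep the existing key
        elif op[0] == "Get":
            result = tree.get(op[1])
    return result

saved_case = [("Insert", 0, 192), ("Insert", 0, 200), ("Get", 0)]

print(run(saved_case, insert_replaces=True))   # 200: old semantics, saved expectation holds
print(run(saved_case, insert_replaces=False))  # 192: new semantics break the saved expectation
```

                            The saved case itself never changed; only the meaning of Insert did, which is exactly the drift being described.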

                            1. 1

                              That’s an excellent point; I have no idea how to automate that. You’d have to somehow notice that the semantics changed and flush all the saved test inputs, which sounds like more work than gain.

                              This is great info, any other thoughts on how saved inputs could go wrong?

                              1. 2

                                Ouch, sorry I didn’t see your reply over the weekend. I can’t think of other, significantly different problems. I guess I’d merely add a caution that “semantics changed” drift between app/library code & test code isn’t the only type of drift to worry about.

                                If you change the implementation, and the property test is validating a property of the implementation, you have more opportunity for drift. Take, for example, a check at the end of a test case: when testing a hash table by deleting all elements, “all buckets in the hash have lists of length zero” could be a desirable property. The test actually peeks into the hash table data structure and checks all the buckets and their lists. The original implementation had a fixed number of buckets; a later version has a variable number of buckets. Some bit of test code may or may not actually be examining all possible buckets.
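                                A sketch of that contrived example (all names invented here): the property check hard-codes the original bucket count, so run against a later variable-bucket version it can report success while silently skipping buckets:

```python
class HashTable:
    def __init__(self, nbuckets=8):  # the original version always had 8
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, key, value):
        self.buckets[hash(key) % len(self.buckets)].append((key, value))

def all_buckets_empty(table):
    # Property check written against the original implementation:
    # it peeks inside and hard-codes the original 8 buckets.
    return all(len(table.buckets[i]) == 0 for i in range(8))

later = HashTable(nbuckets=16)   # later version: variable bucket count
later.insert(12, "x")            # key 12 lands in bucket 12, outside range(8)

print(all_buckets_empty(later))  # True: the stale check never looks at bucket 12
```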

                                It’s a contrived example, and one that doesn’t apply only to historical failing test cases, but it’s the best I can think of at the moment. ^_^

                                -Scott

                      2. 1

                        Some generative testing libraries I saw after poking around are even run as part of the test suite, though I’m not sure how I feel about that - couldn’t that result in missing a case locally that then fails on CI due to different inputs?

                        This is a potential problem with property-based testing, but to turn the question around - if you’re writing unit tests by hand, how do you know you didn’t miss a case?

                        That’s why you use them together.

                        1. 2

                          I understand using property-based testing to find edge cases, but including it in the test suite seems to introduce a lot of uncertainty as to whether your build will succeed, and potentially into how much time the tests will take to run. Granted, finding edge cases is important regardless of when you find them; I’d just be more comfortable running the property-based tests as a separate step, though I’d be happy to be convinced otherwise.

                          1. 1

                            Correct me if I’m misunderstanding you. If the testing is part of the build cycle, a build failure will likely indicate the software didn’t work as intended. You’ll also have a report waiting for you on what to fix. If it’s taking too much time, you can put a limit on how much time is spent per module, per project, or globally on test generation during a build. For instance, it’s common for folks using provers like SPARK Ada’s, or model checkers for the C language, to put a limit of 1-2 minutes per file so that the drawback of those tools (potentially unlimited runtime) doesn’t hold the work up. Also, if it takes lots of running time to verify their code, maybe they need to change their code or tooling to fix that.

                            1. 2

                              No, I think your understanding is correct, and that’s definitely part of the point of running specs in the build process. I guess I’m just operating from advice I got early on to keep specs as deterministic as possible. I don’t remember where I got this advice, but here’s a blog post: https://martinfowler.com/articles/nonDeterminism.html

                              He also recommends this, which is what I would instinctively want to do with property-based testing:

                              If you have non-deterministic tests keep them in a different test suite to your healthy tests.

                              Though the nondeterministic tests Fowler is talking about seem to be nondeterministic for different reasons than one would encounter when setting out to do property-based testing:

                              • Lack of Isolation
                              • Asynchronous Behavior
                              • Remote Services
                              • Time
                              1. 2

                                Just going by the problem in his intro: I remember that many people use property-based testing as a pass separate from regression tests, with some failures found by PBT becoming new regression tests. The regression tests themselves are static. I’d guess they were run before PBT as well, the logic being that one should knock out obvious, quick-to-test problems before running methods that spend lots of time looking for non-obvious problems. Again, I’m just guessing they’d do it in that order, since I don’t know people’s setups. It’s what I’d do.

                                1. 2

                                  Ah, okay, so separating regression tests from PBT does seem to be a common thing.

                      1. 10

                        The title is a little dramatic.

                        Despite the self-deprecation, it does sound like they were fixated on this one language and refused to consider others seriously. I’m sure the author could have picked up the other languages, and I’m sure they would have found trade-offs and things to like and not like when compared to LISP and been very good at programming in them too. It’s just that they did not want to.

                        It should more accurately be “I refused to re-tool when companies changed their tech stack”.

                        1. 12

                          He was fixated but did relent eventually. In his own words:

                          What’s more, once I got knocked off my high horse (they had to knock me more than once – if anyone from Google is reading this, I’m sorry) and actually bothered to really study some of these other languages I found myself suddenly becoming more productive in other languages than I was in Lisp. For example, my language of choice for doing Web development now is Python

                          1. 9

                            I agree about the title being dramatic, but Kenny Tilton’s proposal makes for a better headline ^_^

                            “How Lisp Made Me Look Good So I Could Get Hired By A Booming Company and Get Rich Off Options”?

                            http://coding.derkeiler.com/Archive/Lisp/comp.lang.lisp/2006-04/msg01691.html

                            Joking aside, your point makes sense in the abstract, but can you make it more concrete? Like, what trade-offs would they have found in the late ’90s?

                            1. 3

                              Hi, I think it’s there in the article. They found that C++ and Python had caught up, so I assume he took a look but did not commit, being too enamored of one language. I accept that different languages have trade-offs, but to refuse to use a language others are using productively is not a virtue.

                              1. 3

                                They found that C++ and Python had caught up

                                That is not how I read it: he saw people being productive in those languages in spite of them (in his impression).

                                Because I don’t have context/information about the state of Lisp implementations vs. Java or Python back then, I can’t really weigh in on the truth of those statements (from what I’ve heard, the GC of the free implementations was really slow, for example). Even today I struggle to see anything that Python, the language, does better than Lisp. There are other factors that may make it a more sensible option from an organization’s or even a developer’s perspective (wealth of libraries), but nothing about the language itself seems like a trade-off to me. It is less extensible, slower (CPython vs. SBCL), and less interactive.

                            2. 3

                              Seems like he tried, but knowing Lisp by heart makes you more aware of all that PLT stuff (I think), and he considered every other language inferior.

                              You clearly don’t want to work with the tools/languages/concepts which seem to be bad for you, even when companies weren’t enlightened by Holy Lisp paradigms and wanted to use Java instead.

                              1. 1

                                This seems like the attitude he had, but it also seems wrong.

                                Practical use has a very different set of goals from exploration & learning, and the two have values that conflict. So, one should go about them in different ways. (Of course, when you’re learning for work, this is difficult. I don’t recommend it, unless you can convince someone to pay you to learn things correctly!)

                                When you’re learning a new technology, you should focus on doing things the hard way – reinventing square wheels, etc. You should burn the bad parts into your memory. (That way, you know how to avoid them, and you know how to make the best of them when you need to.) The easiest way to do this is to specifically seek out the wrong job for the tool.

                                Once you’ve learned the technology, when you’re on the exploit side of the explore/exploit division (i.e., when somebody has given you a task and a budget), you should be using the right tool for the job.

                                General purpose languages are good-enough tools for many jobs, and lisp was clearly a slightly better tool for all the jobs that his peers were doing in other general purpose languages. But, it’s not like lisp doesn’t have areas where it falls down and requires awkward code. He could have honed his skills even in lisp by focusing on those areas (and then he’d have better tolerance for languages like C, where many more things are awkward.)

                                1. 2

                                  One should know what to avoid, but seeking out bad tools is an unnecessary extra step for an experienced dev. I don’t need to see 50 ways to experience buffer overflows to know that checking buffers by default should be in the language.

                                  Likewise, I know a few features, like easy metaprogramming and incremental compilation, that dramatically boost productivity. Using a language without something similar isn’t going to teach me anything other than that the language’s author didn’t add those features. Now, it might be worthwhile to try one that claims similar benefits with different techniques. That’s not what most job languages are offering, though.

                                  One still doesn’t have to learn them grudgingly all the way. They will have useful tools and libraries worth porting to better languages. Redoing them with those languages’ benefits can be a fun exercise.

                                  1. 5

                                    The way I see it, staying on the happy path while learning leaves you blissfully ignorant of unpublicized downsides and awkward corners. (It’s how you get people using Hadoop and a large cluster for problems that are 80x faster with a shell one-liner on one core – they don’t really know what their favorite technology is bad at.) The fact that most professional devs don’t know the ugly corners of their preferred technologies makes learning those corners even more valuable: when backed against a wall and locked into a poor-fitting tech, the person who has put in the effort will know how to get the job done in the cleanest and least effortful way, when all possibilities look equally ugly and painful to the casuals.

                                    This doesn’t mean learning these ugly corners has to be grudging. A good programmer is necessarily a masochist – the salary isn’t worth the work, otherwise. Exploring the worst parts of a language while challenging yourself to learn to be clever with them is lots of fun – when you don’t have a deadline!

                                    Facing the worst parts of a technology head-on also encourages you to figure out ways to fix them. That’s nice for the people who might follow you.

                                    I don’t need to see 50 ways to experience buffer overflows to know that checking buffers by default should be in the language.

                                    I concede this point, but I think it’s irrelevant. I don’t suggest we dive into uninteresting language flaws (or bother diving deep into languages that are shallow re-skins of ones we already know). But, writing a web server or graphics library in Prolog is a real learning experience; writing a befunge interpreter in brainfuck likewise.

                                    1. 5

                                      I like to learn very different languages so I have a new viewpoint. In Haskell space is function application, in Joy it’s “apply this to the stack”. Unification in Prolog was mind expanding, as well as definite clause grammars. When I find something that’s easy and powerful in one language, and not in another, I try to understand what underlying design and implementation decisions were involved. I’m still fighting with APL, but I’m starting to see the good sides.

                                      I enjoy solving programming katas or advent of code problems in these out of the way languages, there’s so much understanding to gain!

                                      1. 5

                                        A good programmer is necessarily a masochist – the salary isn’t worth the work, otherwise.

                                        I think there’s a delicate balance to be struck between knowing the crufty bits of your language (like JSON un/marshaling in Haskell) and spending so much time banging your head against these parts that you become a specialist in the language/environment and refuse to leave its confines. While I’ve definitely met professionals who choose their language based on flashy tours that reduce cruft and constantly reach for different languages without learning the details of one, I also think you shouldn’t outright reject new and different paradigms by getting turtled into a single paradigm.

                                        1. 3

                                          Definitely. If your familiarity with a language prevents you from using a better match, you’ve failed. I think it’s important to be deeply familiar with the ugly bits of a wide variety of very different languages.

                              1. 3

                                So many missed opportunities here. Why not draw a map and animate it over time… with little dots (yellow!) appearing at each intersection or address named in the JSON data, at the appointed date and time, then fading out…

                                1. 3

                                  Maybe add a little sunrise sunset indicator, and/or a bar-close indicator…

                                  1. 2

                                    I can’t tell if the entire article is a pisstake or not, but these suggestions are golden.

                                  1. 1

                                    The architecture section of this post is excellent if you’ve ever wondered how to structure a mid-sized application in Haskell.

                                    1. 2

                                      Regarding the IDE side of things, it seems there would be some value to having the parser depend on the previous parse. E.g. if the user is editing in one “block”, cut that block out and parse it individually in some way, so unbalanced parentheses or string literals don’t affect the whole file.

                                      (Why is this tagged haskell?)

                                      1. 1

                                        I’ve seen parsers implemented as monoids, where you cut the source at semicolons or matched braces or whatever your chosen chunk delimiter is. It seems to work pretty well.

                                        While your suggestion of depending on previous successful parses sounds like a bunch of work, I can see where it would be very powerful. You could find where the previous source is the same and incrementally parse changes and that kind of thing.
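                                        A toy sketch of the chunk-at-delimiters idea (the blank-line splitting and the paren check are invented for illustration): each top-level block is parsed on its own, so an unbalanced parenthesis only poisons its own block rather than the whole file:

```python
def split_blocks(src):
    # Cut the source at blank lines into independent top-level blocks.
    blocks, current = [], []
    for line in src.splitlines():
        if not line.strip():
            if current:
                blocks.append("\n".join(current))
                current = []
        else:
            current.append(line)
    if current:
        blocks.append("\n".join(current))
    return blocks

def parse_block(block):
    # Toy "parser": succeeds only if parentheses are balanced.
    depth = 0
    for ch in block:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return None
    return block if depth == 0 else None

source = "(defn a ())\n\n(defn broken (\n\n(defn b ())"
print([parse_block(b) is not None for b in split_blocks(source)])
# [True, False, True]: only the middle block fails to parse
```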

                                        Also, I think it’s tagged haskell because the blog post uses Haskell syntax? Probably not enough reason for the tag though.

                                      1. 2

                                        My company got three or four times larger, and I’m best at all the “important for a smaller company” parts. I guess I get to either change jobs or learn another facet of senior dev.

                                        1. 1

                                          Fascinating - ball lightning might be a Gordian knot of magnetic lines of force!

                                          1. 9

                                            This is a bold statement; I do quite a bit of ssh -X work, even thousands of miles distant from the server. I do very much wish ssh -X could forward sound somehow, but I certainly couldn’t live without X’s network transparency.

                                            1. 6

                                              Curious - what do you use it for? Every time I tried it, the experience was painfully slow.

                                              1. 7

                                                I find it okay for running things that aren’t fully interactive applications. For example I mainly run the terminal version of R on a remote server, but it’s nice that X’s network transparency means I can still do plot() and have a plot pop up.

                                                1. 5

                                                  Have you tried SSH compression? I normally use ssh -YC.

                                                  1. 4

                                                    Compression can’t do anything about latency, and latency impacts X11 a lot since it’s an extremely chatty protocol.

                                                    1. 4

                                                      There are some attempts to stick a caching proxy in the path to reduce the chattiness, since X11 is often chatty in pretty naive ways that ought to be fixable with a sufficiently protocol-aware caching server. I’ve heard good things about NX, but last time I tried to use it, the installation was messy.

                                                      1. 1

                                                        There’s a difference between latency (what you’re talking about) and speed (what I replied to). X11 mainly transfers an obscene number of bitmaps.

                                                        1. 1

                                                          Both latency and bandwidth impact perceived speed.

                                                  2. 6

                                                    Seconded. Decades after, it’s still the best “remote desktop” experience out there.

                                                    1. 3

                                                      I regularly use it when I am on a Mac and want to use some Linux-only software (primarily scientific software). Since the machines that I run it on are a few floors up or down, it works magnificently well. Of course, I could run a Linux desktop in a VM, but it is nicer having the applications directly on the Mac desktop.

                                                      Unfortunately, Apple does not seem to care at all about XQuartz anymore (can’t sell it to the animoji crowd) and XQuartz on HiDPI is just a PITA. Moreover, there is a bug in Sierra/High Sierra where the location menu (you can’t make this up) steals the focus of XQuartz all the time:

                                                      https://discussions.apple.com/thread/7964085

                                                      So regretfully, X11 is out for me soon.

                                                      1. 3

                                                        Second. I have a fibre connection at home. I’ve found X11 forwarding works great for a lot of simple GTK applications (EasyTag), file managers, etc.

                                                        Running my IntelliJ IDE or Firefox over X11/OpenVPN was pretty painfully slow, and IntelliJ became buggy, but that might have just been OpenVPN. Locally, within the same building, X11 forwarding worked fine.

                                                        I’ve given Wayland/Weston a shot on my home theater PC, with the xwayland module for backward compatibility. It works… all right. Almost all my games work (Humble/Steam), thankfully, but I have very few native Wayland applications. Kodi is still glitchy, and I know Weston is meant to be just a reference implementation, but it’s still kinda garbage. There also don’t appear to be any Wayland display managers on Void Linux, so if I want to display a login screen, it has to start X, then switch to Wayland.

                                                        I’ve seen the Wayland/X talk, and I agree: X has a lot of garbage in it and we should move forward. At the same time, it’s still not ready for prime time. You can’t say “Well, you can implement RDP” or some other type of remote composition and then hand-wave it away.

                                                        I’ll probably give Wayland/Sway a try when I get my new laptop to see if it works better on Gentoo.

                                                        1. 2

                                                          No hand waving necessary, Weston does implement RDP :)

                                                      1. 7

                                                        Laziness is neat, but just not worth it. It makes debugging harder and makes reasoning about code harder. It was the one change in Python 2->3 that I truly hate. I wish there were an eager-evaluating Haskell. At least in Haskell, due to monadic IO, laziness is tolerable and doesn’t leave you with tricky bugs (like trying to consume an iterator twice in Python).
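                                                        For anyone who hasn’t hit it, the Python iterator gotcha looks like this:

```python
it = iter([1, 2, 3])
print(list(it))  # [1, 2, 3]
print(list(it))  # []  (the iterator is exhausted; the second pass silently sees nothing)
```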

                                                        1. 6

                                                          I had a much longer reply written out but my browser crashed towards the end (get your shit together, Apple) so here’s the abridged version:

                                                          • Lazy debugging is only harder if your debugging approach is “printfs everywhere”. Haskell does actually allow this, but strongly discourages it to great societal benefit.

                                                          • Laziness by default forced Haskellers to never have used the strict-sequencing-as-IO hack that strict functional languages mostly fell victim to, again to great societal benefit. The result is code that’s almost always more referentially transparent, leading to vastly easier testing, easier composition, and fewer bugs in the first place.

                                                          • It’s impossible to appreciate laziness if your primary exposure to it is the piecemeal, inconsistent, and opaque laziness sprinkled in a few places in python3.

                                                          • You almost never need IO to deal with laziness and its effects. The fact that you are conflating the two suggests that you may have a bit of a wrong idea about how laziness works in practice.

                                                          • Haskell has the Strict language extension, which turns on strictness by default. It’s very rarely used, because most people experienced enough with Haskell to know about it prefer laziness by default. This is experimental evidence that laziness by default may actually be a good idea, once you’ve been forced to grok how it’s used in practice.
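                                                          The “piecemeal, inconsistent” laziness in Python 3 mentioned in the bullets above can be illustrated with a small sketch (example values are arbitrary):

```python
# Three "lazy-ish" constructs in Python 3, each with different semantics:
r = range(3)                       # lazy, but re-iterable
m = map(lambda x: x + 1, [0, 1])   # lazy, one-shot iterator
c = [x + 1 for x in [0, 1]]        # eager: the list is built immediately

assert list(r) == [0, 1, 2]
assert list(r) == [0, 1, 2]   # range can be iterated again
assert list(m) == [1, 2]
assert list(m) == []          # map is exhausted after one pass
assert c == [1, 2]            # a comprehension is just a plain value
```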

                                                          1. 1

                                                            Haskell has the Strict language extension, which turns on strictness by default. It’s very rarely used, because most people experienced enough with Haskell to know about it prefer laziness by default. This is experimental evidence that laziness by default may actually be a good idea, once you’ve been forced to grok how it’s used in practice.

                                                            I am not quite sure whether this is really evidence. I actually never tried to switch it on. I wonder whether that option plays nicely with existing libraries; I guess not many are tested for not depending on lazy evaluation for efficiency. If you use Haskell and Hackage, I guess you’re bound to roll with the default.

                                                            1. 2

                                                              It works on a per-module basis. All your modules will be compiled with strict semantics, and any libraries will be compiled with the semantics they chose.

                                                          2. 3

                                                            Idris has strict evaluation. It also has dependent types, which are amazing, but strict evaluation is a pretty good perk too.

                                                            1. 2

                                                              I thought there were annotations for strictness in Haskell.

                                                              1. 3

                                                                Yes, but I consider it to be the wrong default. I’d prefer having an annotation for lazy evaluation instead. I just remember too many cases where I have been bitten by lazy evaluation behaviour. It makes code so much more complicated to reason about.
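                                                                The “eager by default, lazy by annotation” preference can be sketched in Python with a hand-rolled thunk helper (the `lazy` function and `expensive` computation below are hypothetical illustrations, not a standard API):

```python
def lazy(thunk):
    """Wrap a zero-argument function so it is evaluated at most once (opt-in laziness)."""
    cache = []
    def force():
        if not cache:
            cache.append(thunk())
        return cache[0]
    return force

calls = 0
def expensive():
    global calls
    calls += 1
    return 42

value = lazy(expensive)   # nothing computed yet: laziness was requested explicitly
assert calls == 0
assert value() == 42      # forced on first use
assert value() == 42      # memoized: the computation ran only once
assert calls == 1
```

Everything not wrapped this way stays eager, so the default evaluation order remains easy to reason about.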

                                                                1. 1

                                                                  Do you happen to remember more detail? I enjoy writing Haskell, but I don’t have a strong opinion on laziness. I’ve seen some benefits and rarely been bitten, so I’d like to know more.

                                                                  1. 1

                                                                    I only have vague memories, to be honest. Pretty sure some were errors due to non-total functions, which I then started to avoid by using a prelude that only exports total ones. But when these occurred, it was hard to find the exact code path that provoked them. Or rather: harder than it should be.

                                                                    Then, from the tooling side, I started using Intero (or vim-intero); see https://github.com/commercialhaskell/intero/issues/84#issuecomment-353744900. I’m fairly certain that issue is hard to debug because of laziness. Several experienced Haskell devs report the same problem in that thread, so I’d consider this evidence that laziness is not only an issue for beginners who haven’t yet understood Haskell.

                                                                    PS: Side remark: although I enjoy Haskell, it is kind of tiring that the Haskell community seems to conveniently shift between “Anyone can understand monads and write Haskell” and “If it doesn’t work for you, you aren’t experienced enough”.

                                                              2. 2

                                                                Eager-evaluating Haskell? At a high level, OCaml is (more or less) an example of that.

                                                                It hits a sweet spot between high abstraction and high mechanical sympathy. That’s a big reason why OCaml has quite good performance despite a relatively simple optimizing compiler. As a side effect of that simple optimizing compiler (read: few transformations), it’s also easy to predict performance and do low-level debugging.

                                                                Haskell has paid a high price for default laziness.

                                                                1. 2

                                                                  As a side effect of that simple optimizing compiler (read: few transformations), it’s also easy to predict performance and do low-level debugging.

                                                                  That was used to good effect by Esterel when they did source-to-object code verification of their code generator for aerospace. I can’t find that paper right now for some reason. I did find this one on the overall project.

                                                                  1. 1

                                                                    Yes, but I would still like to have typeclasses and monads, I guess, and that’s not OCaml’s playing field.

                                                                    1. 1

                                                                      OCaml should Someday™ get modular implicits, which should provide some of the same niceties as typeclasses.

                                                                      1. 1

                                                                        OCaml has monads, so I’m really not sure what you mean by this. Typeclasses are a big convenience, but as F# has shown, they are by no means required for statically typed functional programming. You can get close by abusing a language feature or two, but you’re better off just using existing language features to accomplish the same ends that typeclasses provide. I do think F# is working on adding typeclasses, and I think the struggle is, of course, interoperability with .NET; here’s an abundantly long GitHub issue on the topic: https://github.com/fsharp/fslang-suggestions/issues/243

                                                                      2. 1

                                                                        F#, an open source (MIT) sister language, is currently beating or matching OCaml in the for-fun benchmarks :). Admittedly, that’s almost entirely due to the ease of parallelism in F#.
                                                                        https://benchmarksgame.alioth.debian.org/u64q/fsharp.html

                                                                      3. 1

                                                                        Doesn’t lazy io make your program even more inscrutable?

                                                                        1. 1

                                                                          Well, Haskell’s type system makes you aware of many side effects, so it is a better situation than in, for example, Python.

                                                                          Again, I still prefer eager evaluation as a default, and lazy evaluation as an opt-in.

                                                                        2. 1

                                                                          PureScript is very close to what you want, then: it’s basically “Haskell with fewer warts, and also strict”, strict mainly so that it can output clean JavaScript without a runtime.

                                                                        1. 1

                                                                          This is great, lots of cool tricks I can steal from this!

                                                                          Is there a version that uses cabal new-build instead of stack? Does this support make -j8 for faster compiles?

                                                                          I haven’t yet tried doctest and hlint/weeder, that should help my dev process.

                                                                          1. 7

                                                                            In the other direction, there’s the Grammatical Framework that uses dependent types to translate among multiple languages.

                                                                            I used GF to build a small webapp to improve my Swedish vocabulary while I lived there. The code would generate random sentences in both Swedish and English and I’d enter the translation and check my input against the GF translation.

                                                                            1. 1

                                                                              Wow, that’s neat. Did you open source it, by any chance?

                                                                              1. 1

                                                                                Please share this with us. Would love to see it

                                                                                1. 1

                                                                                  Sadly no, I think it’s gone forever. Probably wouldn’t be hard to recreate it though.

                                                                              1. 2

                                                                                As a counterpoint, there was recently an article on Lobsters about doing Advent of Code in Haskell. In it, @rbaron mentioned several challenges caused by lazy evaluation that he had to work around.

                                                                                1. 5

                                                                                  From my experience, that’s culture rather than difficulty. Most programmers expect certain behavior; we’re just accustomed to things being a certain way. Because of that, learning to code in a lazy language is a good brain-stretching exercise.

                                                                                  1. 1

                                                                                    I beg to differ. There may be some cases where an experienced Haskeller indeed pays no penalty for Haskell being lazy by default (compared to, say, OCaml). However, I have seen my fair share of Haskell programs with space leaks and other issues that were never fixed, because the root cause was not found.

                                                                                1. 5

                                                                                  This is a wonderful post, an adventurous read, and should be called something like hacking to satisfy the need for speed. I had a blast reading this!