1.  

    Cool game :-) I’ve managed to grow some plants in lava, it’s fun :-)

    1.  

      If you use a mix of the clone dots, seeds, and oil you can get a neat cycle of fire burning up plants and regrowth https://imgur.com/a/Qxmt6x9

      1.  

        how did you accomplish that? :0 upload an example?

        1.  

          The trick was to constantly water the surviving plant matter on the lava until it stabilized. Here’s an example: https://sandspiel.club/#uuL30AiL3VoU9roKgbwo

          1.  

            I think I had a similar effect of growing plants on lava if the lava had died down enough. Second time around I just put a small area of sand on top of the lava.

            One more: https://imgur.com/4WnEg4F

        1. 1

          Code is constantly changing, so the more code you put into your docs, the faster they’ll go stale.

          1. Link back and forward between code and the documentation for that code. Even better, use doc comments.
          2. Check documentation for staleness as part of the code review process.

          Producing documented, as-correct-as-possible software is our job. Don’t leave it to Slack.
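
          The doc-comment suggestion can be sketched roughly like this in JavaScript with a JSDoc-style comment (the function is just an invented example); because the documentation lives right next to the implementation, a reviewer sees stale docs in the same diff as the code change:

          ```javascript
          /**
           * Parse a "key=value" settings string into an object.
           *
           * Keeping this doc comment next to the implementation means a
           * reviewer sees stale documentation in the same diff as the
           * code change, instead of in a wiki nobody reopens.
           *
           * @param {string} text - lines of the form "key=value"
           * @returns {Object<string, string>} parsed settings
           */
          function parseSettings(text) {
            const settings = {};
            for (const line of text.split("\n")) {
              const [key, value] = line.split("=");
              if (key && value !== undefined) settings[key.trim()] = value.trim();
            }
            return settings;
          }
          ```

          With docs and code adjacent, the “check documentation for staleness” review step becomes a one-screen read.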

          1. 4

            You don’t have to write JavaScript, but you do have to write Elixir - which has a much smaller community around it than JavaScript does.

            This does look cool though, I just wish there were some live examples I could play with in my browser.

            1. 11

              On the other hand, the Elixir community is very friendly. :)

              Supposedly something like LiveView is coming to .NET - https://codedaze.io/introduction-to-server-side-blazor-aka-razor-components/ - but the post says:

              We don’t really know yet how well server-side Blazor (Razor Components) will scale with heavy use applications.

              In principle, people could take this approach in other languages as well. But I think Elixir / Erlang are uniquely positioned to do it well, as LiveView is built on Phoenix Channels, which (because they use lightweight BEAM processes) can easily scale to keep server-side state for every visitor on your site: https://phoenixframework.org/blog/the-road-to-2-million-websocket-connections

              1. 2

                On the other hand, the Elixir community is very friendly. :)

                Is that comment supposed to contrast the friendly Elixir community with the JS community? Is the JS community considered unfriendly? It’s way, way bigger than the Elixir community, so there are bound to be some/more unfriendly people. Maybe it’s so big that the concept of a “JS community” doesn’t even make sense. It’s probably more like “Typescript community”, “React community”, “Node community”, etc… But there are a lot of friendly people and helpful resources out there in JS-land, in my experience. I hope others have found the same thing.

                1. 10

                  The Elixir community is still in the “we’re small and must be as nice as possible to new people so they’ll drink the koolaid” phase. The “community”, such as it is, is also heavily drawn from job shops and the conference circuit, so that’s a big factor too.

                  Past the hype it’s a good and serviceable language, provided you don’t end up on a legacy codebase.

                  1. 3

                    Sounds like Rails, all over again.

                    Who hurt you @friendlysock?

                    1. 3

                      legacy codebase

                      How would you define ‘legacy codebase’? I’m assuming it’s something other than ‘code that is being used to turn a profit’..

                      1. 3

                        Ha, you’re not wrong! I like that definition.

                        From bitter experience, I’d say it would be an Elixir codebase, written in the past 4 or 5 years, spanning multiple major releases of Ecto and Phoenix and the core language, having survived multiple attempts at CI and deployment, as well as hosting platforms. Oh, and database drivers of varying quality as Ecto got up to speed. Oh oh, and a data model that grew “organically” (read: wasn’t designed) from both an early attempt at Ecto as well as being made to work with non-Ecto-supported DB backends, resulting in truly delightful idioms and code smells.

                        Oh, and because it is turning a profit, features are important and spending time doing things that might break the codebase are somewhat discouraged.

                        Elixir for green-field projects is absolutely a joy…brown-field Elixir lets devs just do really terrible heinous shit.

                        1. 2

                          So you’re saying that Elixir is just another programming language? It’s not the Second Coming or anything?

                          1. 1

                            I mean, it’s really quite good in a number of ways, and the tooling is really good. That said, there’s nothing by construction that will keep people from doing really unfortunate things.

                            So, um, I guess to answer your question: yep. :(

                          2. 2

                            Elixir for green-field projects is absolutely a joy…brown-field Elixir lets devs just do really terrible heinous shit.

                            Totally agree, but I would say that significantly more heinous shit is available to devs in Ruby or another dynamic imperative language. The Elixir compiler is generally stricter and more helpful, and most code is just structured as a series of function calls rather than as an agglomeration of assorted stateful objects.

                            The refactoring fear is real though. IMO the only effective salve for that sickness is strong typing (and no, Dialyzer doesn’t count).

                      2. 7

                        😊 I can see how it sounded that way, but I didn’t mean to imply anything about anyone else. The parent post said the Elixir community is small, so I was responding to that concern.

                        1. 4

                          Is the JS community considered unfriendly?

                          I feel you’re just trying to polemic on the subject… The author of this comment probably didn’t mean harm, don’t make it read like so.

                          1. 2

                              I’m not sure what you mean by “trying to polemic”, that doesn’t make sense to me as a phrase, but it was a genuine question about whether the JS community is considered to be unfriendly. I’d be happy to be told that such a question is off-topic for the thread, and I certainly don’t want to start a flame war, but I didn’t bring up the friendliness of the community. I’m sure the author didn’t mean harm, but I read (perhaps incorrectly) that part of their reply as part of an argument for using Elixir over JS to solve a problem.

                            1. 6

                              What I meant to say was: “If this looks like it could be a good fit for thing you want to do, but you’re daunted by the idea of learning Elixir, don’t worry! We are friendly.”

                              1. 3

                                I meant starting a controversy, sorry for my poor English! I’m sorry if it felt harsh, that wasn’t what I tried to share. I really thought your goal was to start this flame war.

                                Every community has good and bad actors. Some people praise a lot some communities, but I don’t think they mean the others aren’t nice either.

                                  The only thing that I could think of is that smaller communities have to be very careful with newcomers, because it helps to grow the community. JS people don’t need to be nice with each other, the community and the project are way past that need. So I guess you would find a colder welcome than with a tiny community.

                                1. 0

                                  Hey there, polemic is a legit English word, so don’t be sorry for someone else’s ignorance! :)

                                  1. -2

                                    I’m not ignorant (well I am, but not about this): polemic is indeed an English word, but it’s not a verb. The phrase “trying to polemic” doesn’t make sense in English, it requires interpretation, which makes the meaning unclear. I can think of two interpretations for “trying to polemic” (there may be others) in the context of the comment:

                                    1. My comment was polemic
                                    2. I was attempting to start a polemical comment thread, aka a flame war. With the later clarification that seems like what the author was thinking.
                                    1. 1

                                      The thing is that not everyone is at your level of English proficiency. You’re having a discussion here with people from around the world, you’ll need to make a couple of adjustments for expected quality of English and try to get the rough meaning of what they’re saying, otherwise you’ll be stuck pointing out grammatical errors all day.

                                      1. 1

                                        I wasn’t really trying to point out an English error, and perhaps I did a poor job of that. I stand by the claim that it is an English error though.

                                        I work with non-native English speakers all day, I’m aware of the need to try and understand other people and to make sure we’re on the same page. I’ll give a lot of slack to anyone, native or non-native, who’s trying to express themselves. The problem with the phrase “I feel you’re just trying to polemic on the subject’ is that at least some of the interpretations change the meaning. On the one hand, it could be saying that my comment was polemic, on the other it could be saying that my comment was trying to start a polemical thread. It’s not the same thing. And, for what it’s worth, if you’re going to throw an uncommon (and quite strong) English word like “polemic” out there it’s best if you correctly understand the usage. If the author had accused me of trolling, which is I think what they meant, that would have been both clearer and more accurate (though my intent was not to troll)

                      1. 7

                        One thing I would love to read more about is how to determine the cutoff between scaling horizontally and investing time/money in optimizing your software.

                        1. 2

                          Oooo, that’s a good one. I have a couple hand-wavey heuristics. I’ll think more about trying to turn that into something more real.

                          I have the next 2-3 posts vaguely planned out. But, I’ll definitely be thinking about your idea @hwayne.

                          1. 1

                            Doesn’t it boil down to, basically, “it’s a knack, spend 10 years doing it and you’ll get good at judgment”?

                            1. 2

                              I’d definitely take a more “business” driven approach to that problem. Optimizing your software for cost should only be done if the money it saves is greater than what it costs to optimize it.

                              You also have to take into account indirect costs: weird optimization tricks can make code less readable, which also has a cost for future development.

                              1. 1

                                On the other hand, scaling horizontally adds costs of coordinating multiple servers, which includes load balancing, orchestrating deployments, distributed system problems, etc.

                                1. 1

                                  The business-driven approach is not always the best for society as a whole. It doesn’t take into account negative externalities like the environmental cost of running inefficient programs and runtimes.

                                2. 2

                                  I had a few that helped a lot:

                                  1. Use the fastest components. Ex: Youtube used lighttpd over Apache.

                                  2. If caching can help, use it. Try different caching strategies.

                                  3. If it’s managed, use a system language and/or alternative GC.

                                  4. If the fast path is small, write the data-heavy part in an optimizable language using the best algorithms for that. Make sure it’s cache- and HD-layout friendly. A recent D submission is a good example.

                                  5. If it’s parallelizable, rewrite the fast path in a parallel programming language or using such a library. Previously, there was PVM, MPI, Cilk, and Chapel. The last one is designed for scale-up, scale-out, and easy expression simultaneously. Also, always use a simpler, lighter solution like that instead of something like Hadoop or Spark if possible.

                                  6. Whole-program, optimizing compilers (esp profile-guided) if possible. I used SGI’s for this at one point. I’m not sure if LLVM beats all of them or even has a profile-guided mode itself. Haven’t looked into that stuff in a while.

                                  Notice that most of this doesn’t take much brains. A few take little to no time either. They usually work, too, giving anything from a small to vast improvement. So, they’re some of my generic options. Via metaprogramming or just good compiler, I can also envision them all integrated into one language with compiler switches toggling the behavior. Well, except choosing fastest component or algorithm. Just the incidental stuff.
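
                                  As an example of heuristic 2 (caching is often the cheapest win), here’s a rough JavaScript sketch of the simplest strategy, memoization; the cache key and cost model are made up:

                                  ```javascript
                                  // Memoize an expensive function: repeat calls with the same
                                  // key are served from the cache instead of recomputing.
                                  function memoize(fn) {
                                    const cache = new Map();
                                    return (key) => {
                                      if (!cache.has(key)) cache.set(key, fn(key));
                                      return cache.get(key);
                                    };
                                  }

                                  let calls = 0;
                                  const slowLookup = memoize((key) => {
                                    calls += 1; // stand-in for a slow backend call
                                    return key.length * 2;
                                  });

                                  slowLookup("user:42"); // computes
                                  slowLookup("user:42"); // cache hit, no recompute
                                  ```

                                  Real caches also need an eviction strategy (LRU, TTL) once the keyspace grows, which is where trying different caching strategies pays off.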

                                1. 2

                                  I had to immediately look for April 1st in the date on the article. Disturbing that it wasn’t there.

                                  1. 3

                                    “We have a license for IBM Blockchain, what do we do with it?”

                                    1. 2

                                      And in related news, Bitcoin is down 1.8% after the announcement. Whereas, Walmart’s is up 0.37% due to increased confidence in their tech strategy by some investors who also have money in Bitcoin.

                                1. 5

                                   Somewhat contrary to the OCaml community at large, we tend to use Jane Street Core and, where relevant, Async. Core is an alternative, extensive and opinionated standard library to replace the default (“compiler”) standard library.

                                  The other alternatives in the standard library space are less exciting:

                                  • The compiler standard library is missing many useful things, the functions tend not to be tail-recursive and it defaults to throwing exceptions. In general this library is not ready for production use and leads to every project having a random module with more or less well implemented missing bits.
                                   • Extlib went a long time unmaintained, and keeping compatibility with the standard library makes it default to exceptions.
                                   • Batteries started as a maintained superset of Extlib, but is hardly maintained nowadays. […]
                                   • Containers is a good library, carefully designed. If not for Base, this would be the most interesting contender for a good standard library replacement.

                                  Would it be fair to say that there is no actual standard library for OCaml? In the ReasonML world (which I’m vaguely familiar with) there appears to be another one emerging, called Belt.

                                   From a Python perspective it’s kind of surprising that there are so many re-implementations of what is presumably core functionality. Are there things in one “standard” library that you wish you could have in another? Or is it that they all have the same features, but implemented differently? How much does the choice of a standard library affect you in terms of the other libraries you can use? Like, if some library is built on a standard library you don’t use, are you SOL or is it just a lot of fiddling to get it working?

                                  In JS land, where I’m also very comfortable, the complete lack of a standard library is also a problem, but because JS types are so basic, interop between libraries built on different foundations usually isn’t that much of a problem in practice. But I’d imagine the type system in OCaml makes things more complicated.

                                  Or am I just misunderstanding how OCaml works?

                                  1. 6

                                    OCaml ships with a standard library: https://caml.inria.fr/pub/docs/manual-ocaml/libref/index.html . This has a lot of stuff: lists, maps, arrays, I/O, file manipulation, networking, etc. But for industrial-strength usage you might need more. This is where Jane Street’s Core (or now the lighter-weight Base) comes in. It has issues like complexity, thin documentation, and somewhat closed development style. But for production use it can get the job done. In ReasonML you have Belt as you pointed out which is designed to compile down to really tight JavaScript.

                                     The upshot is that you will probably want some kind of a ‘standard’ library package for production use, but this is not a huge problem because of the strong package ecosystem. As for interop, the two main competing libraries are the concurrency libraries Lwt and Async, but serious packages that need concurrency support ship adapter packages for both of them, e.g. library foo will ship foo-lwt and foo-async.

                                    1. 1

                                       It is also true that there have been a few prerequisites needed to start refreshing the standard library. In addition to the strong backward compatibility story of the language itself (look how many years and releases it took to move to safe-strings, or to deprecate a few string functions), the flat namespacing was a potential blocker for many changes and reorganisations, to name a couple. The compatibility story in particular requires each change to be weighed and discussed very thoroughly: once it lands it has to stay for a very long time.

                                       Some of these issues have now been solved, and indeed the standard library in OCaml 4.08.0 will already be noticeably improved. The improvement work is ramping up, so I will not be surprised if OCaml 4.09.0 is even better.

                                      I think the biggest pain point is currently the lack of safe resource acquisition and cleanup in the standard library, but discussions on this front have already started.

                                    2. 5

                                      Author of the post here.

                                      Would it be fair to say that there is no actual standard library for OCaml? In the ReasonML world (which I’m vaguely familiar with) there appears to be another one emerging, called Belt.

                                      The compiler standard library is the “standard library” in a way. I call it that in the blog post because it often seems like it is mostly there to implement OCaml, but missing many useful bits that people actually need for other real-world projects, which is why many projects use alternative “standard libraries” or reimplement their own (most of the time, badly).

                                      It is sort of similar to Python, where you have all kinds of things included but for actual production software it is not great to use it (asyncore/asynchat, all the command line parsers, config file parsers, HTTP clients, HTTP servers).

                                      Reason is not helping much there, by introducing yet another library that you might potentially want to use in OCaml. I am not a big fan of the “reimplement everything that already exists in OCaml”, but I guess it is an entirely different community with different values. I think it might be a bit of a lost opportunity but what can you do.

                                       Are there things in one “standard” library that you wish you could have in another? Or is it that they all have the same features, but implemented differently?

                                      There are two approaches

                                      1. extend the existing library with things people reimplement on their own (like exceptionless variants of functions, some useful features like operators and combinators) which is what Extlib, Batteries and Containers are doing. This can be useful if you’re using the default standard library and just want some additional functionality.
                                      2. replace the existing library by vastly changing the API, to provide an API that is more aligned to how you would want the stdlib to look like if there were no backwards-compatibility concerns. This is what Base and Core are doing.

                                      Like, if some library is built on a standard library you don’t use, are you SOL or is it just a lot of fiddling to get it working?

                                      It is not a problem at all. You can use all code just fine, no matter which library it uses without having to care what it was implemented in. There is some resistance to Core, because it pulls in a lot of dependencies, which is why there are “smaller” variants like Core_kernel and Base with reduced amounts of dependencies, while preserving as much of the Core API as possible.

                                       On the other hand, if the library uses a different concurrency library than what you use, then you’re sort-of SOL, since you can’t really combine an Lwt.t (from Lwt) with a Deferred.t (from Async). This is e.g. similar to Twisted and Tornado in Python, where it is not really feasible to combine them. I heard some considerations to base them on a common foundation but not sure where this is going. The effect system will likely obsolete concurrency libraries in the future in any case.

                                      1. 3

                                        That was helpful - thanks for the detailed reply!

                                        1. 2

                                          wanted to note a thanks as well, really appreciate it when the author dives in to extend the conversation from an interesting piece.

                                      1. 9

                                        A function may succeed. Or it may, for example, fail because of disconnected backend. Or it may time out. And that’s it. There are only two failure modes and they are documented as a part of the API. Once you have that, the error handling becomes obvious.

                                         I must be missing something because it really feels like there are plenty of other ways for a function to fail. Is this limited to a specific context? If it’s only for infrastructure, it still seems woefully pigeonholed.

                                        As already mentioned, classic exceptions are the worst.

                                        I’m not clear on why they are “the worst”.

                                         The discussion does hit on something that makes sense to me: think about and document the error conditions. Frankly, if you have that, the methodology of reporting the error becomes less of a hassle. But still, error handling is plagued by the fact that it is often something non-local that is affecting the computation and there is rarely any useful information or language constructs that make dealing with it anything short of a massive chore. (Correcting it usually means interacting or “conversing” with some other entity to gain the knowledge to proceed.)

                                        1. 4

                                           I must be missing something because it really feels like there are plenty of other ways for a function to fail. Is this limited to a specific context? If it’s only for infrastructure, it still seems woefully pigeonholed.

                                           POSIX is quite a good example of how it could work. Every function can return a few possible error codes and that’s it. The idea is that the implementer of the function deals with the complexity and factors all possible error conditions into a small neat set of error codes that makes sense from the user’s point of view.

                                          The rule here should be: If you don’t know what to do with an error condition, don’t just pass it to the caller. The caller understands the problem domain even less than you do.

                                           But still, error handling is plagued by the fact that it is often something non-local that is affecting the computation and there is rarely any useful information or language constructs that make dealing with it anything short of a massive chore.

                                          The point is to use encapsulation for errors as well as for normal functionality. If something non-local causes an error somewhere down the stack, the layer that deals with the thing (and every layer above it) should convert it into an error that makes sense in the local context.
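
                                           A rough JavaScript sketch of that rule (the layer and error names are invented for illustration): the storage layer catches the low-level failure and re-raises an error that makes sense in its own domain.

                                           ```javascript
                                           // Low-level failure from a (hypothetical) database driver.
                                           class DatabaseError extends Error {}

                                           // The only error the user-store layer exposes to its callers.
                                           class UserStoreError extends Error {}

                                           function fetchRow(query) {
                                             // Stand-in for a real driver call that can fail.
                                             throw new DatabaseError("connection reset by peer");
                                           }

                                           function getUser(userId) {
                                             try {
                                               return fetchRow(`SELECT * FROM users WHERE id = ${userId}`);
                                             } catch (e) {
                                               if (e instanceof DatabaseError) {
                                                 // Factor the low-level condition into an error that makes
                                                 // sense in this layer's domain, instead of leaking the raw
                                                 // driver failure up the stack.
                                                 throw new UserStoreError(`could not load user ${userId}`);
                                               }
                                               throw e;
                                             }
                                           }
                                           ```

                                           Callers of getUser only ever see UserStoreError, a small documented set, rather than every failure mode of the driver underneath.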

                                          1. 1

                                            If something non-local causes an error somewhere down the stack, the layer that deals with the thing (and every layer above it) should convert it into an error that makes sense in the local context.

                                            When said this way, I understand the point better. I did not get that from the original post. I think that’s a reasonable way to deal with things, although I don’t think it precludes exceptions as the mechanism for doing it.

                                            1. 1

                                              True, but exceptions make it super easy to screw it up. Just forget a catch block in one function and the raw low-level exception escapes up the stack. In C/Golang style of error handling you have to at least pass it up manually which will, hopefully, make you consider whether it’s a good idea in the first place.
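
                                               Sketched in JavaScript (names invented), the manual style returns a [value, error] pair, so passing an error up is something you have to write, and review, explicitly:

                                               ```javascript
                                               // Go-style error handling: return [value, error] instead of
                                               // throwing, so every caller must handle the error or visibly
                                               // pass it upward.
                                               function parsePort(raw) {
                                                 const port = Number(raw);
                                                 if (!Number.isInteger(port) || port < 1 || port > 65535) {
                                                   return [null, new Error(`invalid port: ${raw}`)];
                                                 }
                                                 return [port, null];
                                               }

                                               function loadConfig(raw) {
                                                 const [port, err] = parsePort(raw);
                                                 if (err !== null) {
                                                   // Passing the error up is an explicit, reviewable decision.
                                                   return [null, err];
                                                 }
                                                 return [{ port }, null];
                                               }
                                               ```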

                                          2. 3

                                            (Correcting it usually means interacting or “conversing” with some other entity to gain the knowledge to proceed.)

                                            That’s why, even though it is relatively heavy-weight for an API, it seems that passing a callback to be called on error is one of the most versatile things you can do. The callback can correct the error and allow the call to proceed or just throw an exception. At deeper level, doing this allows you to interact with context at the point of detection not the point where you express your intention: the initial call that led to the error.

                                            I think this is the closest we can come to approximating Lisp’s condition system in languages without those constructs.
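
                                             A minimal JavaScript sketch of the idea (names invented for illustration): the caller supplies onError, which runs at the point of detection, with full context, and can either supply a corrected value or throw to abort.

                                             ```javascript
                                             // The onError callback runs at the point of detection and may
                                             // return a corrected value, or throw to abort the call.
                                             function parsePort(raw, onError) {
                                               const port = Number(raw);
                                               if (!Number.isInteger(port)) {
                                                 return onError(raw); // correct and proceed, or throw
                                               }
                                               return port;
                                             }

                                             // Policy lives with the caller: here, fall back to a default.
                                             parsePort("not-a-number", () => 8080);
                                             ```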

                                            1. 3

                                              Signals and restarts are wonderful things. It’s such a shame no other language or programming system (to my knowledge) has made a serious effort to emulate it, let alone build on it. Callbacks are the best we can do – or what we’re willing to abide – it seems.

                                              1. 3

                                                Have you heard of the Zen of Erlang? https://ferd.ca/the-zen-of-erlang.html

                                          1. 6

                                            Given that most bugs are transient, simply restarting processes back to a state known to be stable when encountering an error can be a surprisingly good strategy.

                                            ~ Fred Hebert, Erlang in Anger

                                            1. 3

                                              This of course is how a great many bots find themselves on my blacklist. If a request returns 404, continuing to pound the same URL does not resolve the error.

                                              1. 4

                                                Which is of course why error handling still needs some amount of contextual logic, e.g. for a 404 the resource is not there so stop trying. Or even a general-purpose retry logic like exponential backoff with a failure cutoff.
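
                                                 That logic might look roughly like this in JavaScript (the status codes, attempt counts and delays are just illustrative):

                                                 ```javascript
                                                 // Retry with exponential backoff and a failure cutoff;
                                                 // permanent errors (like a 404) stop immediately instead
                                                 // of being pointlessly retried.
                                                 async function fetchWithBackoff(fetchOnce, maxAttempts = 5, baseDelayMs = 100) {
                                                   let status;
                                                   for (let attempt = 0; attempt < maxAttempts; attempt++) {
                                                     status = await fetchOnce();
                                                     if (status === 200) return status; // success
                                                     if (status === 404) return status; // permanent: retrying won't help
                                                     // Transient (e.g. 503): wait 100ms, 200ms, 400ms, ... then retry.
                                                     await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
                                                   }
                                                   return status; // cutoff reached
                                                 }
                                                 ```

                                                 The contextual part is classifying errors as permanent versus transient before deciding to retry at all.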

                                            1. 2

                                              I’ve recently been working on a fun little project, a stub server written in Elixir and Phoenix: https://github.com/yawaramin/stubbex

                                               I think it’s pretty cool, but as I’ve been finding out talking to various people, stubbing is a very contentious topic and there’s a lot of disagreement about whether it’s a good way to write reliable tests.

                                              All that said, it’s been fun and educational for me … so I’ll keep hacking on it.

                                              1. 7

                                                In PureScript or OCaml, you can use open variant types to do this flawlessly

                                                Yes!! Polymorphic variants are incredible

                                                  1. 4

                                                     Note that open variants are different from polymorphic variants, though both are incredible.

                                                    1. 1

                                                      Oh good to know! Is open variant then the ability to define new exceptions, all of which are part of the exn variant?

                                                      1. 4

                                                        In fact, you can consider exceptions of exn to be a special case of the open, or extensible variant! See https://caml.inria.fr/pub/docs/manual-ocaml/extn.html#sec266 for more details.

                                                        1. 2

                                                          That’s awesome!

                                                    2. 2

                                                      Another thing I thought was nice to mention: row polymorphism.

                                                      In row polymorphism, variants are open by default. I’ve been toying with writing my own implementation but this one has a nice readme explaining pretty well how it works.

                                                      https://github.com/willtim/Expresso#variants

                                                       The reason usually given for variant literals being open by default is that it supports error handling which exposes only the errors that can actually occur.

                                                      1. 4

                                                        Polymorphic variants in OCaml work by row polymorphism - this paper introduced the algorithm and resulted in the ocaml implementation. Good luck with the implementation! It’s my favorite type system feature :)

                                                    1. 4

                                                       Peter Thiel has a philosophy degree from Stanford. Brett Kavanaugh has a cum laude history degree from Yale.

                                                      1. 13

                                                        This is an incredibly lazy form of argument. You can’t disprove an observation about a trend with a counter-example. Counter-examples disprove universal quantification, not statistical deltas.

                                                        I see this pattern a lot, and we shouldn’t treat it as if it’s a compelling refutation.

                                                        1. 1

                                                          by golly! all them big words and everything. So, if you need it spelled out: neither the original post nor my response had anything to do with either universal quantification or statistical deltas, whatever they may be in this context. The original argument, which I see too often, is based on the theory that there is some magical ingredient in humanities that is necessarily missing in a STEM education. As far as I can see, however, it is as easy to absorb an arrogant and dismissive attitude and a tendency to use fancy nomenclature in lieu of thinking and open discussion from humanities courses as from science classes. And I think the underlying problem could and should also be addressed within science/engineering education which is taught in a narrow way. You should learn critical thinking and how to collaborate in engineering school, just as you should in a philosophy department, but it’s not only possible, but the standard, not to learn those in either program. As an example: I really like what Olin college is trying to do http://www.olin.edu/discover-olin

                                                          1. 1

                                                            Don’t get caught up in the labels. I didn’t read the article as you need a liberal arts degree to address the problems identified. I read it as simply suggesting putting more emphasis on humanities. To quote the person of focus in the article, “Students of computer science go on to be the next leaders and creators in the world, and must understand how code intersects with human behaviour, privacy, safety, vulnerability, equality, and many other factors.”

                                                            1. 3

                                                              I agree with that and think it’s important, but I don’t believe that adding a generic humanities course or two, or 100, can do it or is necessarily even the right approach. Teaching people to be responsible citizens is a complex project. I plead guilty to assuming too much about what Baker meant. I have often seen the argument that a humanities program is key to deeper understanding, and I think that’s a superficial and maybe reductionist approach.

                                                        2. 3

                                                          I don’t think Brett Kavanaugh is in the audience being targeted by the message in the article.

                                                          1. 14

                                                            The point is that humanities graduates don’t magically fix the issues of misinformation - they can be just as flawed and politically biased as anyone else.

                                                            Really we need to optimise for hiring those with “moral backbone”, make them feel able to say “no”, and then listen to employees when they do. I feel part of this can be fixed by regulating and licensing employees similarly to how line engineers need to be licensed. When your personal license to work is on the line, you have a strong incentive to be rigorous in your work and to say no when your employer asks you to work inappropriately. When engineers say no, these decisions are often respected and engineers have a strong network of support where they will often be backed up in their decisions if they are made for the right reasons, even when that runs counter to the business arm’s aims.

                                                            1. 0

                                                              Those two were intended as counter-examples.

                                                              1. 4

                                                                The examples were also unnecessarily political, especially for lobste.rs. It distracts from whatever point you are trying to make.

                                                                1. 0

                                                                  How does it distract from the point?

                                                                  1. 3

                                                                    By using polarizing figures, you run the risk of the debate steering away from the actual point either party was trying to make, and right into the realm of partisanship. It becomes hard, then, to exit the “no u” dead end that the discussion becomes. It’s usually frustrating for all parties involved, except maybe the trolls.

                                                          1. 2

                                                            I mostly use WhatsApp. My friends and family are there (some helped by me), it’s free and reasonably secure (for now, despite Facebook’s best efforts), and bonus, it runs on Erlang which gives me a lot of confidence in its stability.

                                                            1. 7

                                                              I still owe folks a blog post about total vs partial math in Pony and how that relates to division by 0. Sadly I’ve had zero time for that as I’ve been spending all my time on Wallaroo Labs work.

                                                              Part of that post was going to be “and as a pre-1.0 language all this is going to change as we will be introducing partial integer math operators in the future”. Well, those operators are here. All the division by zero kerfuffle inspired someone to implement the RFC that had been open and waiting to be implemented.

                                                              Still, I owe folks a post on partial vs total integer math and eventually that will come. Maybe when I’m on vacation in November, although honestly, that sounds like an awful vacation.

                                                              1. 4

                                                                Incidentally, have you watched Evan Czaplicki’s recent talk, ‘The Hard Parts of Open Source’? It sounds like you’re in the same situation (and I’ve probably contributed to that, sorry!).

                                                                1. 2

                                                                  I just finished watching the talk. I enjoyed it. Thank you for the recommendation.

                                                                  1. 1

                                                                    I haven’t watched it. Evan was incredibly thoughtful and nice when I met him a few years ago at ICFP and we hung out. From that interaction and the title, I imagine it’s something I would enjoy.

                                                                    Care to summarize it?

                                                                    1. 4

                                                                      I think you would enjoy it. He examines patterns of behaviour in open source communities that seem hurtful, like ‘Why don’t you just do it like this?’, ‘What gives you the right to do this?’, and so on, and traces them back to the birth of hacker culture and other very interesting historical context that directly influence today’s online communities.

                                                                      1. 18

                                                                        That does sound interesting. It certainly expands past open source communities. Programmers in general are quite happy to critique the product of a series of tradeoffs without context.

                                                                        We do this when we look at other people’s systems and pick out one thing to critique outside the context of the other features. The Pony divide by zero kerfuffle was an example of that. Many people who knew nothing about Pony critiqued that single “feature” on the basis of its impact within systems they know rather than as a “feature” within Pony as a whole (which in the end is what my blog post needs to be about). In that Pony case, a series of decisions that were made to make Pony safer moved it towards being part programming language and part theorem prover. It’s in an interesting place right now where we have downsides from both that lead to “interesting” things like divide by zero being zero because of a series of other choices. All in all, Pony is safer than many languages, but we’ve come to find that a number of features are needed to address issues like divide by zero: for example, dependent types, or partial integer math as an option.

                                                                        I think this happens in every system. You make a number of well-intentioned decisions, each of which is the right decision, but inevitably they are going to lead to “wat” and “ugh” moments as they come together. I’ve never seen a language that doesn’t have those, and if you spend the time to understand the language and its choices, you can see how, when favoring certain values, you would end up there. No tool will ever be perfect.

                                                                        There’s a Bryan Cantrill talk on this that is really good: “Platform as a Reflection of Values”.

                                                                        Often times, we also see the results of constraints on the code. For example, perhaps there was an artificial but reasonable time limit. “This needs to be fixed but we only have a couple weeks to do it, what is the best we can make this in two weeks because other things are more pressing”.

                                                                        I had real problems with this earlier in my career. I was incredibly judgemental. Sean in his 20s would have been all over Pony for the “stupid divide by zero”. Why? Well, I wouldn’t have taken time to understand the problem. Everything I knew had divide by zero as an error so that would “obviously be the right thing to do”. And in general, I lacked empathy. I had no ability to try and understand why someone would do something that I couldn’t see a reason to do. Worse, I didn’t care to understand. I just loved to go “wat” and laugh at things. I was awful towards PHP, for example. Now, I recognize that PHP is an awesome tool for some tasks. I don’t really ever take on those tasks, but that doesn’t make PHP any less valuable for them.

                                                                        I had to work incredibly hard on empathy. It’s not something we do in my family. My mother, to this day, is still incredibly selfish and as one of her children, I picked that up. My stepfather hurt his back a couple years ago. My mother was somewhat concerned with his injury but mostly was annoyed with how it impacted her life and the extra work she had to do because he wasn’t capable.

                                                                        It wasn’t until I worked for an asshole CEO, and was a team leader trying to hold my team together, that I really started to get good at empathy. I realized that in order to deal with said asshole, I needed to try and understand why he did what he did. Without that understanding, there was no way I could get what my team needed from said CEO. There was no way I could put together an argument that would speak to his needs, desires, and concerns. I developed this empathy skill for purely selfish purposes but it’s turned out to be incredibly helpful in general. I have a much better appreciation and understanding of other people’s software. Where before I would judgementally dismiss things as crap, I now often take the time to understand why the software is the way it is, and I’ve learned a ton in the process.

                                                                        Anyway, I could write 5,000 more words on this topic and things tangential to it. Given the context, that seems like rambling to the extreme, so I’m going to stop now. Thanks for the talk recommendation. I’ll definitely check it out.

                                                                        I’d really advise anyone who read this and found that it resonated at all to check out that Cantrill talk. It’s really, really good. And also, if you don’t think empathy and understanding can be valuable as an engineer, I’d pass along the advice to give it a serious try for a couple years. If you are like me, you will be amazed and delighted with the results.

                                                                        1. 3

                                                                          Thanks a lot for your very open and honest text, it was really moving and I’m glad you’ve made empathy a priority, and that it has worked out for you.

                                                                          1. 2

                                                                            Thanks for the context. I too struggle with being empathetic. We’re all such a deep well of emotions and desires, that sometimes I feel like if I try to open that door of trying to understand people on a deeper level, I’ll spend all my emotional budget on it. But even on a superficial level–what I try to do nowadays (not always successfully!) is realize that people probably do things that make sense to them and it’s OK for it not to make sense to me, because it doesn’t affect my life.

                                                                            1. 2

                                                                              Empathy and doing stuff with other humans is about being at the point where it does affect your life and still being able to deal with other people as fully formed, totally broken but in a different way to you, feeling, incomprehensible beings.

                                                                              1. 2

                                                                                That sounds about right. I’m still learning, I guess!

                                                                            2. 2

                                                                              “All in all, Pony is safer than many languages, but we’ve come to find that a number of features are needed to address issues like divide by zero: for example, dependent types, or partial integer math as an option.”

                                                                              I suggest a translation to WhyML in Why3 platform that feeds verification conditions to automated provers. Why3 is the middle-end that Frama-C, SPARK, and the Java one all use. They prove absence of things like you describe. The automation in SPARK is often over 90%. The backends keep improving.

                                                                              So, my default recommendation for any type/verification of things like number ranges is either a static analyzer that’s extensible or a language-specific front-end for Why3. A side benefit is that SPARK-style annotations are easy for programmers to learn. And you can do property-based test generation if proof is too hard.

                                                                              1. 1

                                                                                We do this when we look at other people’s systems and pick out one thing to critique outside the context of the other features. The Pony divide by zero kerfuffle was an example of that. Many people who knew nothing about Pony critiqued that single “feature” on the basis of its impact within systems they know rather than as a “feature” within Pony as a whole (which in the end is what my blog post needs to be about). In that Pony case, a series of decisions that were made to make Pony safer moved it towards being part programming language and part theorem prover. It’s in an interesting place right now where we have downsides from both that lead to “interesting” things like divide by zero being zero because of a series of other choices.

                                                                                The main criticism I saw around here was of it being presented as somehow more ‘mathematically pure’, not criticism of Pony. The blog post claiming that 1/0 = 0 is actually consistent with mathematics was nonsense.

                                                                                1. 1

                                                                                  We never presented it as more mathematically pure. The language we used was that it was an unfortunate compromise. If you want to argue with @hwayne about his blog post, go for it.

                                                                                  1. 0

                                                                                    I never said you did.

                                                                      1. 20

                                                                        The “lacks” of Go in the article are highly opinionated and given without any context about what you’re trying to solve with the language.

                                                                        Garbage collection is something bad? Couldn’t disagree more.

                                                                        The article ends with a bunch of extreme opinions like “Rust will be better than Go in every possible task”.

                                                                        There’re use cases for Go, use cases for Rust, for both, and for none of them. Just pick the right tool for your job and stop bragging about yours.

                                                                        You love Rust, we get it.

                                                                        1. 2

                                                                          Yes, I would argue GC is something that’s inherently bad in this context. Actually, I’d go as far as to say that a GC is bad for any statically typed language. And Go is, essentially, statically typed.

                                                                          It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably invoked when no references to the resource are left. In other words, you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, when they are destroyed.

                                                                          That’s why Go has the “defer” statement, it’s there because of the GC. Otherwise, destructors could be used to defer cleanup tasks at the end of a scope.

                                                                          So that’s what makes a GC inherently bad.

                                                                          A GC, however, is also bad because it “implies” the language doesn’t have good resource management mechanisms.

                                                                          There was an article posted here, about how Rust essentially has a “static GC”, since manual deallocation is almost never needed. Same goes with well written C++, it behaves just like a garbage collected language, no manual deallocation required, all of it is figured out at compile time based on your code.
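
                                                                          To make the destructor point concrete, here is a minimal Rust sketch (a toy type of my own, not from the linked article) of scope-based cleanup via the Drop trait. The shared log exists only so the order of events is observable:

                                                                          ```rust
                                                                          use std::cell::RefCell;
                                                                          use std::rc::Rc;

                                                                          type Log = Rc<RefCell<Vec<&'static str>>>;

                                                                          // A toy resource whose Drop impl (Rust's destructor) records
                                                                          // exactly when cleanup runs.
                                                                          struct Resource {
                                                                              name: &'static str,
                                                                              log: Log,
                                                                          }

                                                                          impl Drop for Resource {
                                                                              fn drop(&mut self) {
                                                                                  self.log.borrow_mut().push(self.name);
                                                                              }
                                                                          }

                                                                          fn run() -> Vec<&'static str> {
                                                                              let log: Log = Rc::new(RefCell::new(Vec::new()));
                                                                              {
                                                                                  let _r = Resource { name: "releasing inner", log: Rc::clone(&log) };
                                                                                  log.borrow_mut().push("inner scope body");
                                                                              } // `_r` goes out of scope here, so its destructor runs now
                                                                              log.borrow_mut().push("after inner scope");
                                                                              let events = log.borrow().clone();
                                                                              events
                                                                          }

                                                                          fn main() {
                                                                              // Cleanup lands between the two pushes, at a point fixed
                                                                              // by scope structure at compile time, with no GC involved.
                                                                              println!("{:?}", run());
                                                                          }
                                                                          ```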

                                                                          So, essentially, a GC does what languages like C++ and Rust do at compile time… but it does it at runtime. Isn’t this inherently bad? Doing at runtime something that can be done at compile time? It’s bad from a performance perspective and also bad from a code validation perspective. And it has essentially no upsides, as far as I’ve been able to tell.

                                                                          As far as I can tell the main “support” for GC is that they’ve always been used. But that doesn’t automatically make them good. GCs seem to be closer to a hack for a language to be easier to implement rather than a feature for a user of the language.

                                                                          Feel free to convince me otherwise.

                                                                          1. 11

                                                                            It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably invoked when no references to the resource are left.

                                                                            Why do you think this would be the case? A language with GC can also have linear or affine types for enforcing that resources are always freed and not used after they’re freed. Most languages don’t go this route because they prefer to spend their complexity budgets elsewhere and defer/try-with-resources work well in practice, but it’s certainly possible. See ATS for an example. You can also use rank-N types to a similar effect, although you are limited to a stack discipline which is not the case with linear/affine types.

                                                                            So, essentially, a GC does what language like C++ and Rust do at compile time… but it does it at runtime. Isn’t this inherently bad ?

                                                                            No, not necessarily. Garbage collectors can move and compact data for better cache locality and elimination of fragmentation concerns. They also allow for much faster allocation than in a language where you’re calling the equivalent of malloc under the hood for anything that doesn’t follow a clean stack discipline. Reclamation of short-lived data is also essentially free with a generational collector. There are also garbage collectors with hard bounds on pause times which is not the case in C++ where a chain of frees can take an arbitrary amount of time.

                                                                            Beyond all of this, garbage collection allows for a language that is both simpler and more expressive. Certain idioms that can be awkward to express in Rust are quite easy in a language with garbage collection precisely because you do not need to explain to the compiler how memory will be managed. Pervasive use of persistent data structures also becomes a viable option when you have a GC that allows for effortless and efficient sharing.

                                                                            In short, garbage collection is more flexible than Rust-style memory management, can have great performance (especially for functional languages that perform a lot of small allocations), and does not preclude use of linear or affine types for managing resources. GC is hardly a hack, and its popularity is the result of a number of advantages over the alternatives for common use cases.

                                                                            1. 1

                                                                              What idioms are unavailable in Rust or in modern C++ because of their lack of GC, but are available in a statically typed GC language?

                                                                              I perfectly agree with GC allowing for more flexibility and more concise code as far as dynamic languages go, but that’s neither here nor there.

                                                                              As for the theoretical performance benefits and real-time capabilities of a GCed language… I think the word “theoretical” is what I’d focus my counter upon, because they don’t actually materialize. The GC overhead is too big, in practice, to make those benefits outshine languages without runtime memory management logic.

                                                                              1. 9

                                                                                I’m not sure about C++, but there are functions you can write in OCaml and Haskell (both statically typed) that cannot be written in Rust because they abstract over what is captured by the closure, and Rust makes these things explicit.

                                                                                The idea that all memory should be explicitly tracked and accounted for in the semantics of the language is perhaps important for a systems language, but to say that it should be true for all statically typed languages is preposterous. Languages should have the semantics that make sense for the language. Saying a priori that all languages must account for some particular feature just seems like a failure of the imagination. If it makes sense for the semantics to include explicit control over memory, then include it. If it makes sense for this not to be part of the semantics (and for a GC to be used so that the implementation of the language does not consume infinite memory), this is also a perfectly sensible decision.

                                                                                1. 2

                                                                                  there are functions you can write in OCaml and Haskell (both statically typed) that cannot be written in Rust because they abstract over what is captured by the closure

                                                                                  Could you give me an example of this?

                                                                                  1. 8

                                                                                    As far as I understand and have been told by people who understand Rust quite a bit better than me, it’s not possible to re-implement this code in Rust (if it is, I would be curious to see the implementation!)

                                                                                    https://gist.github.com/dbp/0c92ca0b4a235cae2f7e26abc14e29fe

                                                                                    Note that the polymorphic variables (a, b, c) get instantiated with different closures in different ways, depending on what the format string is, so giving a type to them is problematic because Rust is explicit about typing closures (they have to talk about lifetimes, etc).

                                                                                    1. 2

                                                                                      My God, that is some of the most opaque code I’ve ever seen. If it’s true Rust can’t express the same thing, then maybe it’s for the best.

                                                                                      1. 2

                                                                                        If you want to understand it (not sure if you do!), the approach is described in this paper: http://www.brics.dk/RS/98/12/BRICS-RS-98-12.pdf

                                                                                        And probably the reason why it seems so complex is because CPS (continuation-passing style) is, in general, quite hard to wrap your head around.

                                                                                        I do think that the restrictions present in this example will show up in simpler examples (anywhere where you are trying to quantify over different functions with sufficiently different memory usage, but the same type in a GC’d functional language), this is just a particular thing that I have on hand because I thought it would work in Rust but doesn’t seem to.

                                                                                        1. 2

                                                                                          FWIW, I spent ~10 minutes trying to convert your example to Rust. I ultimately failed, but I’m not sure if it’s an actual language limitation or not. In particular, you can write closure types in Rust with 'static bounds which will ensure that the closure’s environment never borrows anything that has a lifetime shorter than the lifetime of the program. For example, Box<FnOnce(String) + 'static> is one such type.

                                                                                          So what I mean to say is that I failed, but I’m not sure if it’s because I couldn’t wrap my head around your code in a few minutes or if there is some limitation of Rust that prevents it. I don’t think I buy your explanation, because you should technically be able to work around that by simply forbidding borrows in your closure’s environment. The thing I actually got hung up on was the automatic currying that Haskell has. In theory, that shouldn’t be a blocker because you can just introduce new closures, but I couldn’t make everything line up.

                                                                                          N.B. I attempted to get any Rust program working. There is probably the separate question of whether it’s a roughly equivalent program in terms of performance characteristics. It’s been a long time since I wrote Haskell in anger, so it’s hard for me to predict what kind of copying and/or heap allocations are present in the Haskell program. The Rust program I started to write did require heap allocating some of the closures.
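
                                                                                          For concreteness, here is the kind of building block I had in mind: a toy sketch of my own (not a port of the gist), with boxed 'static closures whose environments own their captures:

                                                                                          ```rust
                                                                                          // Boxed closures sharing one type, each owning a different
                                                                                          // environment. This does not reproduce the Haskell printf
                                                                                          // example; it only shows the `Box<dyn Fn … + 'static>` piece.
                                                                                          fn make_prefixer(prefix: String) -> Box<dyn Fn(&str) -> String + 'static> {
                                                                                              // `move` makes the closure own `prefix`, so its environment
                                                                                              // borrows nothing and the 'static bound is satisfied.
                                                                                              Box::new(move |s| format!("{}{}", prefix, s))
                                                                                          }

                                                                                          fn main() {
                                                                                              // Different environments, same type: they can sit in one Vec.
                                                                                              let fs: Vec<Box<dyn Fn(&str) -> String>> = vec![
                                                                                                  make_prefixer("int: ".to_string()),
                                                                                                  make_prefixer("str: ".to_string()),
                                                                                              ];
                                                                                              let out: Vec<String> = fs.iter().map(|f| f("x")).collect();
                                                                                              println!("{:?}", out); // ["int: x", "str: x"]
                                                                                          }
                                                                                          ```

                                                                                          Whether this building block is enough to express the full CPS formatting trick is exactly the part I couldn’t make line up.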

                                                                            2. 5

                                                                              It’s inherently bad since GC dictates the lack of destruction mechanisms that can be reliably invoked when no references to the resource are left. In other words, you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, when they are destroyed.

                                                                              Deterministic freeing of resources is not mutually exclusive with all forms of garbage collection. In fact, this is shown by Rust, where reference counting (Rc) does not exclude Drop. Of course, Drop may never be called when you create cycles.

                                                                              (Unless you do not count reference counting as a form of garbage collection.)
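
                                                                              A short Rust sketch of that point (my own toy example; the shared flag just makes the moment of destruction observable): Drop runs deterministically, exactly when the last Rc clone goes away:

                                                                              ```rust
                                                                              use std::cell::Cell;
                                                                              use std::rc::Rc;

                                                                              // A payload that flips a shared flag when its destructor runs.
                                                                              struct Payload {
                                                                                  dropped: Rc<Cell<bool>>,
                                                                              }

                                                                              impl Drop for Payload {
                                                                                  fn drop(&mut self) {
                                                                                      self.dropped.set(true);
                                                                                  }
                                                                              }

                                                                              fn demo() -> Vec<(usize, bool)> {
                                                                                  let dropped = Rc::new(Cell::new(false));
                                                                                  let a = Rc::new(Payload { dropped: Rc::clone(&dropped) });
                                                                                  let b = Rc::clone(&a);

                                                                                  let mut seen = Vec::new();
                                                                                  seen.push((Rc::strong_count(&a), dropped.get())); // two refs, alive
                                                                                  drop(a); // count 2 -> 1: the destructor does NOT run yet
                                                                                  seen.push((Rc::strong_count(&b), dropped.get()));
                                                                                  drop(b); // count 1 -> 0: Drop::drop runs right here
                                                                                  seen.push((0, dropped.get()));
                                                                                  seen
                                                                              }

                                                                              fn main() {
                                                                                  println!("{:?}", demo()); // [(2, false), (1, false), (0, true)]
                                                                              }
                                                                              ```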

                                                                              1. 2

                                                                                Well… I don’t count shared pointers (or RC pointers or w/e you wish to call them) as garbage collected.

                                                                                If, in your vocabulary, that is garbage collection then I guess my argument would be against the “JVM style” GC where the moment of destruction can’t be determined at compile time.

                                                                                1. 8

                                                                                  If, in your vocabulary, that is garbage collection

                                                                                  Reference counting is generally agreed to be a form of garbage collection.

                                                                                  I guess my argument would be against the “JVM style” GC where the moment of destruction can’t be determined at compile time.

                                                                                   In Rc or shared_ptr, the moment of the object’s destruction can also not be determined at compile time. Only the destruction of the Rc handle itself, or put differently, the point of the reference count decrement, can be determined at compile time.

                                                                                  I think your argument is against tracing garbage collectors. I agree that the lack of deterministic destruction is a large shortcoming of languages with tracing GCs. It effectively brings back a parallel to manual memory management through the backdoor — it requires manual resource management. You don’t have to convince me :). I once wrote a binding to Tensorflow for Go. Since Tensorflow wants memory aligned on 32-byte boundaries on amd64 and Go allocates (IIRC) on 16-byte boundaries, you have to allocate memory in C-land. However, since finalizers are not guaranteed to run, you end up managing memory objects with Close() functions. This was one of the reasons I rewrote some fairly large Tensorflow projects in Rust.

                                                                                  1. 2

                                                                                    However, since finalizers are not guaranteed to run, you end up managing memory objects with Close() functions.

                                                                                    Hmm. This seems a bit odd to me. As I understand it, Go code that binds to C libraries tend to use finalizers to free memory allocated by C. Despite the lack of a guarantee around finalizers, I think this has worked well enough in practice. What caused it to not work well in the Tensorflow environment?

                                                                                    1. 3

                                                                                      When doing prediction, you typically allocate large tensors relatively rapidly in succession. Since the wrapping Go objects are very small, the garbage collector kicks in relatively infrequently, while you are filling memory in C-land. There are definitely workarounds to put bounds on memory use, e.g. by using an object pool. But I realized that what I really want is just deterministic destruction ;). But that may be my C++ background.

                                                                                       I rewrote all that code around the Go 1.6–1.7 time frame, so maybe things have improved. Ideally, you’d be able to hint the Go GC about the actual object sizes, including C-allocated objects. Some runtimes provide support for tracking C objects; e.g., SICStus Prolog has its own malloc that counts allocations in C-land towards the SICStus heap (SICStus Prolog can raise a recoverable exception when you use up your heap).

                                                                                      1. 1

                                                                                        Interesting! Thanks for elaborating on that.

                                                                                  2. 3

                                                                                    So Python, Swift, Nim, and others all have RC memory management … according to you these are not GC languages?

                                                                                2. 5

                                                                                  One benefit of GC is that the language can be way simpler than a language with manual memory management (either explicitly like in C/C++ or implicitly like in Rust).

                                                                                  This simplicity then can either be preserved, keeping the language simple, or spent on other worthwhile things that require complexity.

                                                                                  I agree that Go is bad, Rust is good, but let’s be honest, Rust is approaching a C++-level of complexity very rapidly as it keeps adding features with almost every release.

                                                                                  1. 1

                                                                                    you can’t have basic features like the C++ file streams that “close themselves” at the end of the scope, then they are destroyed.

                                                                                    That is a terrible point. The result of closing the file stream should always be checked and reported or you will have buggy code that can’t handle edge cases.
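                                                                                     For what it’s worth, the same caveat exists in Rust: `File`’s `Drop` silently discards any error from the underlying close. A hedged sketch of the explicit-check style (the function name and path below are my own, purely illustrative):

```rust
use std::fs::File;
use std::io::{self, Write};

// Write data and surface flush/close errors explicitly, instead of
// letting File's Drop swallow them silently. (Function name and path
// are illustrative only.)
fn write_checked(path: &str, data: &[u8]) -> io::Result<()> {
    let mut f = File::create(path)?;
    f.write_all(data)?;
    // Dropping `f` would discard any error; sync_all reports it.
    f.sync_all()?;
    Ok(())
}

fn main() -> io::Result<()> {
    write_checked("/tmp/close_check_demo.txt", b"hello")?;
    Ok(())
}
```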

                                                                                    1. 0

                                                                                      You can turn off garbage collection in Go and manage memory manually, if you want.

                                                                                      It’s impractical, but possible.

                                                                                      1. 2

                                                                                         Is this actually used in any production code? To my knowledge it was meant more as a feature for debugging and for language developers, rather than a true GC-less option like the one a language such as D provides.

                                                                                        1. 1

                                                                                          Here is a shocking fact: For those of us who write programs in Go, the garbage collector is actually a wanted feature.

                                                                                          If you work on something where having a GC is a real problem, use another language.

                                                                                  1. 1

                                                                                     I found Nim to be actually awesome, combining the type safety and metaprogramming features of Rust, asynchronous and concurrent execution like Go, deep introspection from C (actually, you can transpile Nim to C instead of native compilation), and syntax ease from Python.

                                                                                     So, my question is: why didn’t it gain the audience and “hype” that other languages did in the late-2010s language bubble?

                                                                                    1. 3

                                                                                      Nim compiles to C, C++ or JS – it doesn’t have native compilation.

                                                                                       Early on – bad name (it used to be “Nimrod”), “all the features” (seriously, so many features) – which is a negative for learning, flexibility galore (again, hard to teach/learn), poor documentation, and rough builds/debugging/obscure errors. Oh, and the stupid, profoundly stupid Wikipedia fight. That was the story of early Nimrod.

                                                                                       From the start – Nim made bold promises and wanted to be different in a lot of useful ways. Since then, it has grown up quickly, and it is starting to live up to those promises. The documentation and standard library have improved dramatically. The community around it has started to take writing examples really seriously, and is very welcoming.

                                                                                      That said – that power – that flexibility isn’t free. It is training, it is maintenance, it is spooky-action-at-a-distance to the Nth degree. This type of “magic” is something that some developers revel in – live for, and others loathe.

                                                                                      1. 1

                                                                                        Can you provide an example of Nim’s ‘spooky action at a distance’ magic?

                                                                                        1. 1

                                                                                           Haven’t touched Nim in a bit, but I will try to write a little one up with modern Nim — templates are always fun; redefine != and such (!= is just a macro anyway).

                                                                                    1. 3

                                                                                      I’ve found the SQLite explanation of mutation testing to be very compelling: https://sqlite.org/testing.html#mutation_testing

                                                                                      1. 4

                                                                                        As someone who never used Rust I want to ask: does the section about crates imply that all third-party libraries are recompiled every time you rebuild the project?

                                                                                        1. 6

                                                                                          Good question! They are not; dependencies are only built on the first compilation, and they are cached in subsequent compilations unless you explicitly clean the cache.

                                                                                          1. 2

                                                                                            I would assume dependencies are still parsed and type checked though? Or is anything cached there in a similar way to precompiled headers in C++?

                                                                                            1. 10

                                                                                              A Rust library includes the actual compiled functions like you’d expect, but it also contains a serialized copy of the compiler’s metadata about that library, giving function prototypes and data structure layouts and generics and so forth. That way, Rust can provide all the benefits of precompiled headers without the hassle of having to write things twice.

                                                                                              Of course, the downside is that Rust’s ABI effectively depends on accidental details of the compiler’s internal data structures and serialization system, which is why Rust is not getting a stable ABI any time soon.

                                                                                              1. 4

                                                                                                Rust has a proper module system, so as far as I know it doesn’t need hacks like that. The price for this awesomeness is that the module system is a bit awkward/different when you’re starting out.

                                                                                              2. 1

                                                                                                Ok, then I can’t see why the article needs to mention it. Perhaps I should try it myself rather than just read about its type system.

                                                                                                It made me think it suffers from the same problem as MLton.

                                                                                                1. 4

                                                                                                   I should’ve been more clear: Rust will not recompile third-party crates most of the time. It will if you run cargo clean, if you change compile options (e.g., activate or deactivate LTO), or if you upgrade the compiler, but during regular development it won’t happen much. However, there is a build for cargo check, a build for cargo test, and yet another build for cargo build, so you might still end up compiling your project three times.

                                                                                                   I mentioned keeping crates under control because it takes our CI system at work ~20 minutes to build one of my projects. About 5 minutes is spent building the project a first time to run the unit tests, then another 10 minutes to compile the release build; the remaining 5 minutes are spent fetching, building, and uploading a Docker image for the application. The CI always starts from a clean slate, so I always pay the full compilation price, and it slows me down whenever I test a container in a staging environment, realize there’s a bug, fix the bug, and repeat.

                                                                                                   One way to make sure that your build doesn’t take longer than needed is to be selective in your choice of third-party crates (I have found that the quality of crates varies a lot) and to make sure that each crate pays for itself. serde and rayon are two great libraries that I’m happy to include in my project; on the other hand, env_logger pulls in a few transitive dependencies just for coloring the log it generates. Neither journalctl nor docker container logs shows colors, however, so I am paying a cost without getting any benefit.

                                                                                                  1. 2

                                                                                                    Compiling all of the code including dependencies, can make some types of optimizations and inlining possible, though.

                                                                                                    1. 4

                                                                                                      Definitely, this is why MLton is doing it, it’s a whole program optimizing compiler. The compilation speed tradeoff is so severe that its users usually resort to using another SML implementation for actual development and debugging and only use MLton for release builds. If we can figure out how to make whole program optimization detect which already compiled bits can be reused between builds, that may make the idea more viable.

                                                                                                      1. 2

                                                                                                         In the last discussion, I argued for a multi-staged process that improves developer productivity, especially keeping the mind flowing. The final result is as optimized as possible, but with no wait times along the way: you always have something to use.

                                                                                                        1. 1

                                                                                                          Exactly. I think developing with something like smlnj, then compiling the final result with mlton is a relatively good workflow. Testing individual functions is faster with Common Lisp and SLIME, and testing entire programs is faster with Go, though.

                                                                                                          1. 2

                                                                                                            Interesting you mentioned that; Chris Cannam has a build setup for this workflow: https://bitbucket.org/cannam/sml-buildscripts/

                                                                                                1. 8

                                                                                                  For those wanting the rationale, this is in the same Pony article:

                                                                                                  “From a practical perspective, having division as a partial function is awful. You end up with code littered with trys attempting to deal with the possibility of division by zero. Even if you had asserted that your denominator was not zero, you’d still need to protect against divide by zero because, at this time, the compiler can’t detect that value dependent typing. So, as of right now (ponyc v0.2), divide by zero in Pony does not result in error but rather 0.”
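                                                                                                   For contrast with Pony’s choice, Rust makes the partiality explicit in the type when you ask for it: plain integer `/` panics on a zero denominator, while `checked_div` returns an `Option`. A small illustrative sketch (the function name is mine):

```rust
// checked_div encodes the partiality of division in the return type:
// None signals a zero denominator, with no panic and no silent 0.
fn safe_ratio(num: u32, den: u32) -> Option<u32> {
    num.checked_div(den)
}

fn main() {
    assert_eq!(safe_ratio(10, 2), Some(5));
    assert_eq!(safe_ratio(10, 0), None); // the caller must decide what 10/0 means
}
```

                                                                                                   Whether pattern-matching an `Option` everywhere is nicer than Pony’s total `0` result is exactly the trade-off being debated here.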

                                                                                                  1. 5

                                                                                                    I’m going to be (when I have time) writing a longer and more detailed discussion of the issue.

                                                                                                    1. 7

                                                                                                       I’m sure many of us would find it interesting. I have a total mental block on divide by zero, given that it’s always a bug in my field. This thread is refreshingly different. :)

                                                                                                      1. 7

                                                                                                        I’ll post it on lobste.rs when its done and I’ve had several people review and give feedback.

                                                                                                        1. 3

                                                                                                          Thanks!

                                                                                                    2. 4

                                                                                                      This is very true. The fact that division by zero causes us to write so many guards can cause major issues.

                                                                                                       I wonder, though: wouldn’t explicit errors be better than the implicit, unexpected results this unusual behavior may cause?

                                                                                                      1. 1

                                                                                                        I guess if you write a test before writing code, it should be possible to spot the error either way?

                                                                                                        1. 2

                                                                                                          It would be good to push this to the type system exactly so that we don’t have to remember to test for it.

                                                                                                          1. 1

                                                                                                            Totally, but I am saying that there are specific cases where this may still throw people off and cause bugs - even when the typing is as expected here.

                                                                                                          2. 1

                                                                                                            Sure… if you write a test…

                                                                                                      1. 1

                                                                                                         I have an intuition that a Prolog program works like a SAT solver; is this an accurate view?

                                                                                                        1. 4

                                                                                                          Pretty close… the technical differences are mostly about using heuristics. Here’s a nice paper about implementing a reasonably-efficient toy SAT solver in Prolog:
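                                                                                                           The intuition holds insofar as both rely on backtracking search. A deliberately naive satisfiability checker in Rust (my own toy sketch, not from the linked paper) shows the shared skeleton: assign a variable, recurse, and undo the choice on failure, much like Prolog’s goal resolution with backtracking.

```rust
// A clause is a disjunction of literals; a positive literal i means
// variable i is true, a negative literal -i means variable i is false.
type Clause = Vec<i32>;

// A full assignment satisfies the formula if every clause contains
// at least one literal that evaluates to true.
fn satisfied(clauses: &[Clause], assignment: &[bool]) -> bool {
    clauses.iter().all(|c| {
        c.iter().any(|&lit| {
            let v = assignment[(lit.abs() - 1) as usize];
            if lit > 0 { v } else { !v }
        })
    })
}

// Backtracking search: pick the next unassigned variable, try both
// values, and undo the choice on failure -- the same shape as
// Prolog's depth-first goal resolution.
fn solve(clauses: &[Clause], assignment: &mut Vec<bool>, n_vars: usize) -> bool {
    if assignment.len() == n_vars {
        return satisfied(clauses, assignment);
    }
    for value in [true, false] {
        assignment.push(value);
        if solve(clauses, assignment, n_vars) {
            return true;
        }
        assignment.pop(); // backtrack
    }
    false
}

fn main() {
    // (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
    let clauses = vec![vec![1, 2], vec![-1, 2], vec![-2, 3]];
    let mut assignment = Vec::new();
    assert!(solve(&clauses, &mut assignment, 3));
    // x2 and x3 must be true in any model of these clauses.
    assert!(assignment[1] && assignment[2]);
}
```

                                                                                                           Real SAT solvers replace this blind enumeration with unit propagation, clause learning, and branching heuristics, which is where the “mostly about heuristics” difference comes in.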

                                                                                                          1. 1

                                                                                                             A while back, hwayne submitted this article on how SAT works, with code examples in Racket. It was one of the better ones I’ve seen. You might want to start with a primer on propositional logic first, though. Lots of them to Google/DuckDuckGo for.