1. 3

    Hey @brendan, too many tags! You really don’t need compsci or compilers for this one, and programming is intended to be a generic catch-all. Lobsters uses tags for filtering purposes, so more tags means fewer viewers.

    1. 1

      Woops, sorry! Not sure how to edit it to fix it?

      1. 2

        That’s the fun bit, and why I commented. You can’t edit your own story after a certain window (I think 1 hour?), but other people can use the “suggest” link to propose changes to the tags or title. If enough people make the same suggestion, the site makes the change automatically. That’s why you’ll sometimes see terse comments like “suggest foo” (or “suggest -bar” for tag removal).

        Thanks for posting, by the way!

    1. 8

      Interesting. I have been having similar thoughts about error handling in my own current language project. Algebraic result types are awesome, but there needs to be a good way to solve the ‘Hadouken Problem’.

      1. 7

        I love the name ‘Hadouken Problem’. I’m going to have to remember that one!

        If you’re interested in some prior art, when designing try we mostly looked at Haskell’s <-, OCaml’s rebindable let*, Scala’s for, Rust’s ?, Swift’s try, Zig’s catch, Elixir’s with, and some of the proposals for error handling in Go 2.

        1. 4

          Why not a more general monad sequencing syntax, like Scala for? The same issue arises with options.

          1. 5

            A great question, thank you.

            Scala’s for is made possible by interfaces, a feature that Gleam does not have. In future we may add some kind of interface-like polymorphism (such as traits) and then we can revisit try.

            Another reason is that Gleam aims to be very simple, predictable, and conventional. Having monomorphic constructs in the language encourages users to all write similar code, which makes the language easier to learn. We value approachability, productivity, and ease of adoption over conciseness or more exciting language features.

            1. 2

              All good reasons, and I applaud restraint in language features and syntax. I haven’t looked in detail at the docs, but presumably adding monadic interfaces similar to Scala’s option would be another way to solve the hadouken problem without new syntax.

              1. 5

                We do have a result.then function in the standard library (which is >>= in Haskell, but specialised to Result). It works nicely but can get quite verbose when depending on multiple previous results, so this syntax is a welcome addition.
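                The same shape exists in Rust, where Result’s and_then plays the role of result.then. A rough sketch (the parse helper here is invented for illustration):

                ```rust
                fn parse(s: &str) -> Result<i32, String> {
                    s.trim().parse::<i32>().map_err(|e| e.to_string())
                }

                // Callback style: a step that needs *both* earlier results has
                // to nest inside the previous closure, one level per result.
                fn add_nested(a: &str, b: &str) -> Result<i32, String> {
                    parse(a).and_then(|x| parse(b).and_then(|y| Ok(x + y)))
                }

                // Early-return style (Rust's `?`): the same logic stays flat,
                // which is the role Gleam's try syntax plays.
                fn add_flat(a: &str, b: &str) -> Result<i32, String> {
                    let x = parse(a)?;
                    let y = parse(b)?;
                    Ok(x + y)
                }
                ```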

                I was lucky enough to be able to test out try with a junior developer and they found it much more understandable than result.then, which I took as a good omen.

          2. 2

            Have you had a chance to look into the way Frank and Unison do algebraic effects and handlers (Unison uses the strange term ‘ability’ for this)? The nice thing is that you can have multiple effects for different error cases, without needing to build up the error conversion machinery that you would need in Rust. The plumbing is also done in a more automatic way, without the need for an explicit do/try/for*.

          3. 2

            just use goto ;)

            function register()
            {
            	$msg = '';
            
            	if (empty($_POST)) {
            		goto err;
            	} else if (!$_POST['user_name']) {
            		$msg = 'Empty Username';
            		goto err_msg;
            	} else if (!$_POST['user_password_new']) {
            		$msg = 'Empty Password';
            		goto err_msg;
            	} /* etc */
            
            	create_user();
            	$_SESSION['msg'] = 'You are now registered so please login';
            	header('Location: ' . $_SERVER['PHP_SELF']);
            	exit();
            		
            err_msg:
            	$_SESSION['msg'] = $msg;
            err:
            	register_form();
            }
            
            1. 1

              I agree. I’ve spent years using goto-less programming languages of all kinds and interestingly, I use goto rather often when I go back to C. Not only for the error handling pattern illustrated in your example but even for loops sometimes. Quite often, goto really makes sense where you would recurse in a functional programming language.

              1. 1

                Genuinely not the worst solution but I think 52 years after Dijkstra’s missive we can do a bit better :-)

            1. 3

              Woah! Didn’t know Soldat was made in Pascal!

              1. 3

                Nice writeup!

                One thing I don’t see mentioned here is error recovery, which is really important for parsers of human-facing programming languages! LALRPOP can do this, but afaik Nom can’t as of yet - not sure about Pest. You can also hook up a custom lexer to LALRPOP, like one implemented using Logos (or a hand-written one).

                1. 2

                  At my current company, we do basically all of our work on Github. I don’t think I’ve ever looked at the Github notification panel. A simple view of all PRs that my team is tagged in works well enough.

                  1. 1

                    It’s more important if you work on multiple repos across multiple orgs. Can get hard to manage fast.

                  1. 16

                    As a library maintainer, I really like the new design. In the old one I lived in constant fear of clicking on an update to an important issue and having it immediately disappear from my inbox. I’m sure there’ll be more polish done on it in the coming months.

                    1. 2

                      The new functionality is useful, but the list layout and dickbar design are still poorly thought out.

                      You can get the “unread” feature with the Refined GitHub extension. It integrates nicely with the sidebar, where it should be. It also has an immensely useful “open all in tabs” button that makes the dickbar unnecessary.

                      The design has been in beta for a while. I thought the bar was like that only because it was just a rough prototype and they didn’t want to touch the rest of the page for an experiment, but they’d properly redesign it for the final version. Apparently not!

                    1. 5

                      Really like the import cycle message! Reminds me of the one in Elm’s compiler!

                      1. 2

                        It was a direct inspiration :)

                      1. 3

                        I really dislike this visually noisy style of error message. It makes things much harder to scan visually through a list of them, and forces my eyes to seek around for the useful bit of information.

                        I understand that it’s trying to be helpful, but a clear, well phrased error with all the involved locations is more useful to me. Especially since I am already looking at the code in my editor.

                        1. 11

                          So far the feedback has been the inverse, and there’s a more general trend toward more verbose error messages (Elm, Haskell, Rust, OCaml, etc), so I feel confident in having this style by default.

                          I would be open to having a more conventional and concise style for errors behind a flag, and I intend to have a suitable LSP error format that works well in editors.

                          If you’ve any specific suggestions please do open an issue on GitHub! Thank you

                          1. 3

                            Yeah, codespan at least has a short diagnostic mode - I think we can do work to improve it though! Been doing a bunch of work on codespan recently after having attempted to use annotate-snippets-rs - hopefully Gleam will eventually be able to take advantage of it!

                            1. 2

                              Thanks for all your work, it’s really a great library. I also tried annotate-snippets-rs but decided codespan was much more to my liking.

                        1. 0

                          Why can’t we have a simple markdown renderer in a terminal with just regular bold, italic, etc? Someone, do it dammit! Do it in Rust! :)

                          1. 3

                            You can. Glow lets you define your own styles: https://github.com/charmbracelet/glamour/tree/master/styles

                            The notty and simple styles probably are a bit too raw for your taste, but it’d be easy to strip the colors from the default dark & light styles.

                            1. 2

                              For a markdown renderer for the terminal in Rust there is termimad. Not sure if it’s simple enough for you though!

                              1. 1

                                Cool, thanks! This led me to https://github.com/Canop/clima which uses termimad.

                              2. 2

                                bat does some minimal syntax highlighting from Markdown, and it tends to be good enough for my own uses:

                                https://github.com/sharkdp/bat

                                1. 5

                                  Is the syntax (as seen in the signature of generic functions) unique to Go? I can’t say I’ve seen it anywhere else. As with the method syntax, it seems strange and difficult to read, but maybe that’s just because I haven’t used Go enough. To an experienced Gopher, does this look logical and in keeping with the syntactic design of the rest of the language?

                                  1. 6

                                    I think it looks like D templates.

                                    1. 2

                                      it seems strange and difficult to read, but maybe that’s just because I haven’t used Go enough

                                      As someone who has written a bit of Go over the years, I also find it strange and not exactly straightforward to read. It is new though, so maybe I’ll get used to it?

                                      To me, it seems/feels/looks like some strange form of partial application. Perhaps that is what is happening under the hood with the compiler, and this is just a bit of a leaky abstraction? I’ll probably think of partial application in the back of my head every time I see this anyway, and wish Go had it as a core feature.

                                      1. 9

                                        To me, it seems/feels/looks like some strange form of partial application.

                                        It is! You can think of the Reverse function as desugaring to something like this:

                                        const Reverse tfunc (Element type) func(s []Element) =
                                            tfunc (Element) {
                                                return func(s []Element) {
                                                    ...
                                                }
                                            }
                                        

                                        Where tfunc is either:

                                        • a type that is parameterised by another type
                                        • a term that takes a type as an argument and returns a term specialized to that type argument

                                        …depending on whether we are in the type syntax or the value syntax.

                                        You can call Reverse like so:

                                        elements := []Int{1, 2, 3}
                                        Reverse(elements)
                                        

                                        For sanity’s sake, the type checker will infer the type that needs to be supplied to Reverse, but we can invent a syntax for ‘explicit type application’ like so (inspired by D’s template application syntax):

                                        elements := []Int{1, 2, 3}
                                        Reverse!(Int)(elements)

                                        // this shows how we might be able to partially apply the
                                        // type function to get a specialised function:
                                        const IntReverse func([]Int) = Reverse!(Int)
                                        

                                        Perhaps that is what is happening under the hood with the compiler, and this is just a bit of a leaky abstraction?

                                        I don’t think this is ‘leaky’ - it very clearly maps to something called System F, with the added semantics that type arguments can be inferred. We can elaborate our invented Go syntax quite straightforwardly into System F:

                                        • For types:
                                          tfunc(A type) B   ⟹   ∀ A : Type. B
                                          func (A) B        ⟹   A → B
                                          
                                        • For values:
                                          tfunc(a A) { b }  ⟹   Λ a : A. b
                                          func (a A) { b }  ⟹   λ a : A. b
                                          f!(A)             ⟹   fᴬ
                                          f(a)              ⟹   f a
                                          

                                        Please note that I am not suggesting that Go actually does this, I’m just making it clear that the intuition behind it being partial application is not a bad one!

                                        1. 1

                                          Super interesting. Thanks for commenting!

                                          1. 4

                                            No worries! There’s lots of really great theory that’s been developed around this stuff, which is why I really encourage language designers to learn it. It can make designing new features like this much easier, especially when it comes to figuring out the semantics that makes sense.

                                            Sadly it can be a little intimidating to break into - it took me a long time to learn that ‘forall’ was not a type level lambda - it was more akin to a more general function type, where the type of the body depends on the applied argument. I think this was because the only time I’d ever seen things that bound other things was in lambdas. If this stuff interests you, I’d highly recommend finding a copy of Types and Programming Languages by Benjamin Pierce.

                                      2. 1

                                        I imagine this will be the first of many attempts at the syntax for generics but it seemed pretty tough to grok by just reading it. I have been writing Go for four years and I couldn’t grasp it at first glance. There isn’t enough logical separation to see a generic type declaration in a complex example like this:

                                        type Graph (type Node, Edge G) struct { ... }
                                        

                                        It took me a while to read that as a new struct definition with two generic types Node and Edge. For all I know, I am still wrong. Why not use <T> or [T] to denote a generic type like many other languages have? The type keyword is overloaded here.
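                                        For comparison, here is roughly what that declaration could look like in an angle-bracket language like Rust, where the type parameters sit in their own bracketed list rather than reusing the type keyword (the field bodies here are invented):

                                        ```rust
                                        // Two type parameters, declared once in angle brackets,
                                        // visually separate from the field list.
                                        struct Graph<Node, Edge> {
                                            nodes: Vec<Node>,
                                            edges: Vec<Edge>,
                                        }

                                        impl<Node, Edge> Graph<Node, Edge> {
                                            fn new() -> Self {
                                                Graph { nodes: Vec::new(), edges: Vec::new() }
                                            }
                                        }
                                        ```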

                                        I also did not care for the contract use case here:

                                        contract Sequence(T) {
                                            T string, []byte
                                        }
                                        

                                        This seems like it will be reinvented many times over. I imagine many users will create a contract that defines a type that can be iterated over like this example given in the post. Maybe that’s too judgmental for a first cut though.

                                        I am confident that things will shape up with time and the Go team will settle on something productive.

                                        1. 8

                                          In their Draft Design they mentioned:

                                          Why not use F<T> like C++ and Java?

                                          When parsing code within a function, such as v := F<T>, at the point of seeing the < it’s ambiguous whether we are seeing a type instantiation or an expression using the < operator. Resolving that requires effectively unbounded lookahead. In general we strive to keep the Go parser simple.

                                          Personally, this reason doesn’t convince me.

                                          1. 7

                                            It does make sense. This is a serious problem in C++, to the point that you sometimes have to type x.template foo<T>() instead of x.foo<T>() just to give the parser a hint.

                                            1. 4

                                              It does indeed get surprisingly problematic, it’s why you have the adorable turbofish syntax in Rust for disambiguating generic types in function calls. Being able to have a function call that goes foo<bar, baz>(bop) is just too ambiguous to live with.
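                                              A tiny illustration of the turbofish in action; without the ::<> form, angle brackets in expression position would read as comparison operators:

                                              ```rust
                                              // `parse` can't know the target type on its own here,
                                              // so the turbofish supplies it in expression position.
                                              fn parsed() -> i32 {
                                                  "42".parse::<i32>().unwrap()
                                              }

                                              // The same disambiguation for `collect`:
                                              fn doubled() -> Vec<i32> {
                                                  (0..3).map(|x| x * 2).collect::<Vec<i32>>()
                                              }
                                              ```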

                                              Fortunately, nobody seems to have considered solving this by using non-standard symbols for the greater-than and less-than operators.

                                            2. 5

                                              C# bit that bullet (and it worked rather well), but that language is closer to C++ in complexity than anything else already.

                                              Sane languages use [T], but that isn’t an option in Go either, for the same parsing reasons (clashes with arrays).

                                              1. 5

                                                Or we could just use s-expressions and call it a day.

                                                1. 11

                                                  It seems unlikely Go will switch to s-expressions for Go 2.

                                                2. 2

                                                  D adds an exclamation mark for templates:

                                                  Foo(bar) // function call
                                                  Foo!(bar) // template instantiation
                                                  
                                                  1. 2

                                                    Pretty ugly – but I think that’s a property shared by all languages which bolted these things on without wanting to question their existing design.

                                                    Design by “what-already-used-symbol-can-be-made-to-carry-even-more-syntax”.

                                          1. 1

                                            My approach is to try to keep getting a better understanding of foundational stuff (including history), and lazily relate the new stuff back to that. I don’t go too deep unless I need to, but it really helps cut down the churn if you have a general idea where certain ideas and approaches come from, or can easily look it up if you need to. Not in a ‘hey look, that’s already been done before!’ kind of way, but more a ‘hey cool, somebody is doing that again - I wonder what is the same/different!’ kind of way. It’s not something you can learn all at once, but investing in it over time seems to make things easier.

                                            1. 1

                                              Yesterday I looked into InfluxDB 2.x and its Flux language, and it also uses the same |> syntax. I wondered why they chose it over the shell’s | - it looks ugly and is a character more to type so often…

                                              Anyway, this is a great direction for Ruby to go in order to be a viable alternative to PowerShell. There was one implementation, called rush, but it failed for an obvious reason (Ruby is not designed to be a shell language).

                                              1. 1

                                                I assume because they borrowed the concept from OCaml rather than shell (and | means something else in OCaml).

                                                1. 1

                                                  OCaml borrowed it from F# I think. F# also has <| for backward application, and also << and >> for backward and forward composition, which is actually quite nice.

                                                  1. 2

                                                    This made me curious - Apparently |> wasn’t always in the OCaml standard library; it was introduced in version 4.01 in 2013 (Edit: I originally said F# was released in 2015. It was actually released in 2005).

                                                    OCaml has @@ instead of <|, also from 4.01, but there’s no >> or << built in (I haven’t used it in long enough that I can’t remember if either has a different name).

                                              1. 1

                                                This was actually explained really well. I always found TCO a bit hard to understand!

                                                1. 2

                                                  Great to see people looking into this! I’ve always wondered how much Elm’s highly dynamic output affects run time performance. I’ve been much more impressed by the level of optimization done by BuckleScript, but not seen any comparisons made to see if it has much of a noticeable difference. It’d be super interesting to see the output of Elm and BuckleScript compared at some stage!

                                                  1. 3

                                                    Interesting post and kudos to the author for testing in Firefox, Safari, and Chrome. I fully expected it to be Chrome only at first. I’m surprised how much slower Firefox’s performance appears to be, especially in the first List benchmark.

                                                    1. 2

                                                      AFAIK, Firefox does more just in time compiling than Chrome, so I’d expect it to get better as things warm up. It’s still pretty slow though. :/

                                                    1. 4

                                                      If you haven’t already read “What Color is Your Function?”, I highly recommend it: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/

                                                      I wonder why so many languages opted for async/await rather than threads. I understand that granting untrusted code the power to create threads is a risk, so at least in JavaScript’s case it makes some sense. But I find it curious that languages like Go are the exception, not the norm. (My own language also uses threads.)

                                                      1. 18

                                                        Rust has threads. The standard library API is right here.
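                                                        A minimal sketch of that API, just to ground the discussion (the workload is an invented example):

                                                        ```rust
                                                        use std::thread;

                                                        // Each `spawn` creates a real OS thread via a syscall;
                                                        // `join` blocks the caller until that thread's closure
                                                        // returns.
                                                        fn parallel_sum(chunks: Vec<Vec<i64>>) -> i64 {
                                                            let handles: Vec<_> = chunks
                                                                .into_iter()
                                                                .map(|chunk| thread::spawn(move || chunk.iter().sum::<i64>()))
                                                                .collect();
                                                            handles.into_iter().map(|h| h.join().unwrap()).sum()
                                                        }
                                                        ```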

                                                        • Threads as they currently exist in Rust are just a wrapper on top of POSIX or Win32 threads, which are themselves implemented in the OS kernel. This means spawning a thread is a syscall, parking a thread is a syscall, jumping from one thread to another is a syscall*, and did I mention that the call stack in Rust can’t be resized, so every thread has a couple of pages of overhead? This isn’t a deal breaker if you wrap them in a thread pool library like rayon, but it means you can’t just use OS threads as a replacement for Erlang processes or Goroutines.

                                                        • Green threads introduce overhead when you want to call C code. For one thing, if you make a blocking call in C land, green threads mean you’re blocking a whole scheduler, not just one little task. For another thing, that stack size thing bites you again, since it means your green threads all have to have enough stack space to run normal C code, or, alternatively, you switch stack every time you make an FFI call. Rust used to have green threads, but the FFI overhead convinced them to drop it.

                                                        So, since green threads aren’t happening, and you can’t spawn enough OS threads to use them as the main task abstraction in a C10K server, Rust went with the annoying leaky zero-overhead abstraction. I don’t really like it, but between the three options, async/await seems like the least bad.

                                                        * Win32 user mode threads can allow you to avoid task switching overhead, but the rest of the downsides, especially the stack size one, are still problems.

                                                        1. 3

                                                          Great comment! Just want to nitpick about this:

                                                          Green threads introduce overhead when you want to call C code. For one thing, if you make a blocking call in C land, green threads mean you’re blocking a whole scheduler, not just one little task.

                                                          Regarding “blocking calls in C land”, using async/await with an event loop is not better than green threads: both will be blocked until C land yields control.

                                                        2. 11

                                                          I wonder why so many languages opted for async/await rather than threads

                                                          I think you have to understand that this isn’t an either-or question. Rust, of course, has had threads for ages – the async/await RFC revolves around providing an ergonomic means of interacting with Futures, which are of course just another abstraction built on top of the basic thread API.

                                                          The better question would be, “What are the ergonomic issues and engineering trade-offs involved in designing threading APIs, and why might abstractions like Futures, and an async/await API, be more appealing for some sorts of use-cases?”

                                                          1. 2

                                                            I’m much more of a fan of algebraic effects for this stuff. Multicore OCaml seems to be moving in the right direction here, in a way that can reduce the splitting of the language in two. I would have loved to have seen something like this in Rust, but I can understand that the more pragmatic choice is async/await + futures. We still need to figure out how to make algebraic effects zero cost.

                                                            1. 1

                                                              Yeah. The problem is the language needs some sort of programmable sequencing operators built in that async primitives can make use of, while users can write code that is agnostic to them.

                                                              1. 1

                                                                OCaml has let+ syntax now (in 4.08) which addresses this.

                                                                1. 1

                                                                  One example is how you can write:

                                                                  map : ('a ~> 'b) ->> 'a list ~> 'b list
                                                                  

                                                                  which is sugar for:

                                                                  map : ('a -[e]-> 'b) ->> 'a list -[e]-> 'b list
                                                                  

                                                                  where e is a type level set (a row) of effects. That way you can have a map function that works in pure situations, or for any other combination of effects. Super handy.

                                                                  1. 1

                                                                    That’s really, really nice!

                                                              2. 1

                                                                With the proper functions / operators (bind / lift) this is not much of an issue in practice.

                                                                1. 1

                                                                  There’s certainly value in the greenthread solution, as evidenced by the success of Go, but Rust’s approach makes much more control over the execution context possible, and therefore higher performance is possible. To achieve the absolute highest performance you have to minimize synchronization overhead, which means you need to distinguish between synchronous and asynchronous code. “What color is your function” provides an important observation, but we shouldn’t read it as “async functions are fundamentally worse”. It’s a trade-off.

                                                                  Of course, prior to Rust, async functions didn’t give much (if any) control over the execution context, and so the advantages of async functions over greenthreads were less clear or present.

                                                                  1. 1

                                                                    I’m not 100% sure if that’s a good intuition that I have, but I kinda think that in case of Go, it’s more like “every line is/can be async-await” in it — because of the “green threads” a.k.a. goroutines model (where goroutines are not really your OS’s threads: they’re multiplexed on them — as will happen with async/await functions, IIUC).

                                                                    1. 4

                                                                      Agreed. I’m very up on the idea of getting more languages to run on the BEAM. I miss static types and, frankly, I wish that Rust could compile down to run on the BEAM!

                                                                      1. 5

                                                                        Yeah, I love Elixir to death, but sometimes I find myself wishing for a real type system. Some folks swear by Dialyzer, but it feels a bit like a kludgy piece of typing duct tape.

                                                                        1. 12

                                                                          The dynamically-typed nature of Erlang and Elixir and BEAM comes from a design requirement: that the systems built in Erlang can be upgraded at runtime. Strong typing gets quite a bit more complicated when you need to be able to have multiple versions of a type coexist at runtime.

                                                                          Side note, this took me a while to absorb when beginning to write Elixir. My instinct was to use structs instead of generic maps for GenServer state, since better-defined types are better, right? But that imposes hard requirements on hot upgrades that wouldn’t have been there if I’d used untyped maps from the start; removing fields from a struct breaks upgrades. This knowledge was somewhere between “esoteric” and “esoteric to Ruby assholes who just showed up, well-known to wonks”. The Erlang Way is a lot more than “let it crash”. :)

                                                                          1. 3

                                                                            The dynamically-typed nature of Erlang and Elixir and BEAM comes from a design requirement: that the systems built in Erlang can be upgraded at runtime. Strong typing gets quite a bit more complicated when you need to be able to have multiple versions of a type coexist at runtime

                                                                            Yeah, I really wish there were more type system research going into figuring out how to use types effectively in upgradable, always-on systems, where you might have heterogeneous versions across a cluster. I actually think static types could be super helpful here, but as far as I’m aware there doesn’t seem to be much work put into it.

                                                                            1. 4

                                                                              It’s very difficult. It’s not like nobody tried — https://homepages.inf.ed.ac.uk/wadler/papers/erlang/erlang.pdf

                                                                              And when people talk about “I wish there was a type system” they probably don’t realise that Erlang is a very different animal (one that can do things other animals have no concepts for). Just bolting on types is not an option (if you want to know what happens if you do so, look at CloudHaskell — you have to have an exact binary for every node in the entire cluster, or else).

                                                                              1. 1

                                                                                Just bolting on types is not an option (if you want to know what happens if you do so, look at CloudHaskell — you have to have an exact binary for every node in the entire cluster, or else).

                                                                                That’s what I mean. I see Cloud Haskell as interesting, but really not the distributed type system I want. It would be super cool to see more new ideas here (or rediscovery of old ones, if they’re around). E.g. you may need some kind of runtime verification step to ensure that a deployment is valid based on the current state of the world. Perhaps some stuff from databases and consensus would help here. Doing that efficiently could be… interesting. But that’s why research is important!

                                                                              2. 3

                                                                                I think protocol buffers (and similar systems like Thrift / Avro) are pretty close to the state of the art (in terms of many large and widely deployed systems using them). When you write distributed systems using those technologies, you’re really using the protobuf type system and not the C++ / Java / Python type system. [1] It works well but it’s not perfect of course.

                                                                                I also would make a big distinction between distributed systems where you own both sides of the wire (e.g. Google’s), and distributed systems that have competing parties involved (e.g. HTTP, e-mail, IRC, DNS, etc.). The latter case is all untyped because there is a “meta problem” of agreeing on which type system to use, let alone the types :) This problem is REALLY hard, and I think it’s more of a social/technological issue than one that can be addressed by research.

                                                                                [1] This is a tangent, but I think it’s also useful to think of many programs as using the SQL type system. ORMs are a kludge to bridge SQL’s type system with that of many other languages. When the two type systems conflict, the SQL one is right, because it controls “reality” – what’s stored on disk.

                                                                                1. 2

                                                                                  I think protocol buffers ⟨…⟩ are pretty close to the state of the art

                                                                                  Seriously? PB, where you can’t even distinguish between (int)-1 and (uint)2^64-1, is state of the art?

                                                                                2. 2

                                                                                  Alice ML is a typed programming language designed to enable open extensions of systems. Objects can be serialized/deserialized and retain their types and it’s possible to dynamically load new code.

                                                                                3. 2

                                                                                  The Erlang Way is a lot more than “let it crash”. :)

                                                                                  I am so with you on this one, and I’ve got so much to learn!

                                                                                  1. 6

                                                                                    You might find ferd’s intro helpful. For historical perspective with some depth, you might like Armstrong’s thesis from 2003 that describes everything in deep detail.

                                                                                  2. 2

                                                                                    Yup, this is related to the point I was making about protobufs and static “maybe” vs. dynamic maps here. In non-trivial distributed systems, the presence of fields in a message has to be checked at RUNTIME, not compile-time (if there’s a type system at all).

                                                                                    https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey#c_povjwe

                                                                                    I think of protobufs/thrift as trying to “extend your type system over the network”. It works pretty well, but it’s also significantly different from a type system you would design when you “own the world”. Type systems inherently want a global view of your program and that conflicts with the nature of distributed systems.

                                                                                    edit: this followup comment was more precise: https://lobste.rs/s/zdvg9y/maybe_not_rich_hickey#c_jc0hxo
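
                                                                                    To make the runtime-presence point concrete, here’s a minimal Rust sketch (the message type and field names are made up for illustration): the static types force *some* check to exist, but which fields a peer actually sent can only be discovered when the message arrives.

                                                                                    ```rust
                                                                                    // Hypothetical decoded message: on the wire every field is
                                                                                    // effectively optional, so presence is a runtime question.
                                                                                    pub struct UserUpdate {
                                                                                        pub id: Option<u64>,
                                                                                        pub email: Option<String>,
                                                                                    }

                                                                                    // The Option types make forgetting the check a compile error,
                                                                                    // but the check itself still happens at runtime, per message.
                                                                                    pub fn apply_update(msg: &UserUpdate) -> Result<String, &'static str> {
                                                                                        let id = msg.id.ok_or("missing id")?;
                                                                                        match &msg.email {
                                                                                            Some(email) => Ok(format!("user {} -> {}", id, email)),
                                                                                            None => Ok(format!("user {} unchanged", id)),
                                                                                        }
                                                                                    }
                                                                                    ```

                                                                                    A generated protobuf binding looks much the same: the schema gives you typed accessors, but field presence is still data, not types.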

                                                                                  3. 2

                                                                                    So this is really interesting. I read the paper on success typing and it seems pretty cool. It still, however, doesn’t guarantee soundness. Then on the other hand, neither does TypeScript, so it’s hard for me to make up my mind about what I want.

                                                                                  4. 4

                                                                                    That would be cool. There’s at least Rustler.

                                                                                    1. 4

                                                                                      Static types per se — that’s easy. But think about the distributed system with different versions of VMs. Think about live upgrades.

                                                                                  1. 14

                                                                                    Yay! It’s been great to have you on the team!

                                                                                    In my second week I implemented a change to make a certain pattern more ergonomic. It was refreshing to be able to build the initial functionality and then make a project-wide change, confident that, since it compiled after the change, I probably hadn’t broken anything. I don’t think I would have had the confidence to make such a change so early on in the Ruby projects I’ve worked on previously.

                                                                                    Yeah, this is what made me consistently frustrated and annoyed at my previous work place (they used Ruby, Elixir, and JS). Static types are a bit of a trade off, and not perfect, but I’m thankful to finally be able to use them in my day job.

                                                                                    1. 11

                                                                                      I think the same problem exists even in statically typed languages. Java people talk about the open-closed principle all the time, but I think it terribly encourages over-engineering and bloat (due to append-only programming). What you really want is a language (and a codebase that isn’t trying hard to thwart it) that gives you the confidence to change anything as the requirements change.

                                                                                      1. 4

                                                                                        Sandi Metz hammers this home. I can’t recommend POODR and every talk she gives on YouTube enough. It’s all about the cost of maintaining the code.

                                                                                        1. 3

                                                                                          I hadn’t heard about Sandi Metz as a non-Rubyist, so I went ahead and listened to some of her talks. Her messages are exactly what I would’ve applauded before I met Haskell and functional programming. When you have a very powerful type system behind your back, you’re not afraid of writing code that mixes abstraction levels. You’re not afraid of hard-coding that configuration value, because when you need it, the time it takes to refactor it out to a configuration file is exactly as much as it is now, and the risk is zero in either case. So, in the long term, you end up with a simpler codebase, because you didn’t introduce needless complexity fearing that you might need it later (a very valid concern in Ruby or Java).

                                                                                          The principles I recommend to junior Haskellers are much simpler.

                                                                                          1. Stick to DRY as much as possible
                                                                                          2. Make sure your non-local assumptions are reflected in the types (purity is a side-effect of that principle)
                                                                                          3. Don’t work around abstractions, go in and refactor them
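
                                                                                          Principle 2 can be sketched in any typed language; here’s a loose Rust analogue (all names are made up): the non-local assumption “this string was validated” is encoded in a type, so no caller can forget it and no downstream function needs to re-check it.

                                                                                          ```rust
                                                                                          // The only way to obtain a ValidatedEmail is through validate(),
                                                                                          // so holding one *is* proof the check happened somewhere.
                                                                                          pub struct ValidatedEmail(String);

                                                                                          // Toy check; a real validator would do considerably more.
                                                                                          pub fn validate(raw: &str) -> Option<ValidatedEmail> {
                                                                                              if raw.contains('@') {
                                                                                                  Some(ValidatedEmail(raw.to_string()))
                                                                                              } else {
                                                                                                  None
                                                                                              }
                                                                                          }

                                                                                          // Only a ValidatedEmail can reach this function, so the
                                                                                          // assumption holds here without any runtime re-validation.
                                                                                          pub fn send_welcome(email: &ValidatedEmail) -> String {
                                                                                              format!("welcome mail queued for {}", email.0)
                                                                                          }
                                                                                          ```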
                                                                                          1. 2

                                                                                            Yeah, as I said in another comment, I think this is why Prince is such a nice code base, even after 16 years. Although Rust isn’t pure like Mercury or Haskell, it still seems to provide similar benefits as well.

                                                                                          2. 1

                                                                                            Yeah, Sandi Metz’s stuff is really great from what I’ve seen!

                                                                                          3. 2

                                                                                            Aka The Smalltalk Advantage

                                                                                            1. 1

                                                                                              I think it’s fine to have to change code when requirements change, so I’m less strict about following the open-closed stuff from the Java world. What I’m looking for is tooling that helps me know what to change and where; that’s something that can really help when requirements change underneath you. I definitely find that the type systems in Rust, Haskell, OCaml, etc. give me the power to refactor in a far more ruthless way than Java’s does, although I find Java a step up from, say, Ruby, where there tends to be much fear around changing things, even with a good test suite in place.

                                                                                            2. 1

                                                                                              Would unit tests help in this case? I’ve regularly had something compile and later segfault (not sure if that’s possible with Rust), whereas often the tests caught that.

                                                                                              1. 3

                                                                                                Barring compiler bugs, Rust’s memory model guarantees no segfaults in “safe Rust”. Use of the unsafe keyword allows dereferencing of raw pointers, so if you get that wrong you could get a segfault. Our code only uses unsafe for FFI with the existing application, so it’s possible to get that wrong, but when writing and testing the Rust in isolation we’ve written no unsafe (although the standard library does make use of it).
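
                                                                                                A tiny sketch of what “no segfaults in safe Rust” means in practice (helper names are mine): an out-of-bounds access is either an explicit `None` or a deterministic, catchable panic, never a read past the buffer.

                                                                                                ```rust
                                                                                                // Explicit bounds check: you get None instead of undefined behaviour.
                                                                                                pub fn checked_lookup(v: &[i32], i: usize) -> Option<i32> {
                                                                                                    v.get(i).copied()
                                                                                                }

                                                                                                // Even the indexing operator is bounds-checked: v[i] panics on an
                                                                                                // out-of-range index rather than touching memory outside the Vec.
                                                                                                pub fn index_panics(v: Vec<i32>, i: usize) -> bool {
                                                                                                    std::panic::catch_unwind(move || v[i]).is_err()
                                                                                                }
                                                                                                ```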

                                                                                                1. 3

                                                                                                  Barring compiler bugs, Rust’s memory model guarantees no segfaults in “safe Rust”.

                                                                                                  That’s not actually true: stack overflows, for instance, are not exactly hard to cause. Segfaults just don’t violate memory safety.

                                                                                                2. 2

                                                                                                  You can still get panics and stuff in safe Rust, and there are plenty of things that you definitely need tests for. You don’t need to worry about segfaults if you’re in safe code though (barring bugs in Rust and the standard library).

                                                                                                  The change in question was more an API change - something that would be tedious and error-prone to do a find and replace for when using Ruby. The type system can do a pretty good job of telling you where you need to change things, at the exact place where you need to make that change.
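
                                                                                                  A hypothetical sketch of that refactoring story (the enum and numbers are invented, not from our codebase): every `match` on an enum is exhaustiveness-checked, so adding a variant later turns each use site into a compile error pointing at exactly what to update.

                                                                                                  ```rust
                                                                                                  pub enum PageSize {
                                                                                                      A4,
                                                                                                      Letter,
                                                                                                  }

                                                                                                  pub fn width_mm(size: &PageSize) -> u32 {
                                                                                                      match size {
                                                                                                          PageSize::A4 => 210,
                                                                                                          // If a PageSize::Legal variant were added to the enum,
                                                                                                          // rustc would reject this match as non-exhaustive and
                                                                                                          // point right here - no find-and-replace required.
                                                                                                          PageSize::Letter => 216,
                                                                                                      }
                                                                                                  }
                                                                                                  ```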

                                                                                              1. 2

                                                                                                I noticed you submitted the Helix article as well. Do you use it in production?

                                                                                                1. 6

                                                                                                  No, we do use the FFI quite a bit though - most of the higher level code at YesLogic is implemented in Mercury, and we communicate with Rust via the C FFI.

                                                                                                  1. 5

                                                                                                    That might be worth a writeup itself on the ups and downs of Mercury in production.

                                                                                                    1. 6

                                                                                                      Disclaimer: I don’t work on the Mercury part of the code myself, so this is just what I’ve gleaned from outside:

                                                                                                      According to my boss it’s given us around a 10%-ish advantage (in units of ‘something’?). But yeah, it’s a 16-year-old code base and it looks extremely well maintained, with very little accumulated tech debt getting in the way of development. I’m sure that is largely due to the highly competent people behind it, but Mercury offers similar properties to ML, Haskell, Rust, etc., in that you can refactor it quite heavily, constantly simplifying things over time while preventing too much of it becoming brittle and hard to maintain.

                                                                                                      I think perhaps these days Haskell might have been a better choice given the fact that it has developed a larger community, but at the time Prince was started, Haskell wasn’t really in the position it is in today.

                                                                                                      In practice the logic programming side of things doesn’t seem to be used all that much. That’s kind of surprising given our domain - layout - which involves a bit of constraint solving. Most of our code tends to be functional/imperative rather than declarative, because it’s often better to implement your own domain-specific solving. The lack of meta-programming features in Mercury also tends to push us towards codegen scripts, which is a bit of a pain.

                                                                                                      The heavy use of persistent data structures can be a pain in tight loops, but generally it’s ok for our high level layout code. In the past we’ve used C for that, but we are now transitioning that low level code to Rust because we know how error prone C can be. It also means we can better take advantage of some of the ecosystem our friends at Mozilla are making, and contribute back as well!

                                                                                                      So yeah, in general Mercury seems to be serving us well, and we’ve got no plans to rewrite everything in another language!

                                                                                                      1. 1

                                                                                                        Very interesting! You said the logic programming side of things doesn’t get used much. Looking at the web site a while back made me think Mercury was primarily a logic language with some functional stuff in it. What do you mean it’s mostly functional/imperative? Is only a small part in Mercury, or does Mercury let you write code that way?

                                                                                                        1. 2

                                                                                                          Oh, it’s like in Haskell: you can thread IO ‘state’ through in order to do imperative, effectful stuff. E.g. just as Haskell is functional but you can write it in an imperative way, Mercury is declarative but you can also write it in a pretty imperative way. It’s a bit clunkier in Mercury though, because you’re explicitly passing that state around, rather than letting type class elaboration do it for you.

                                                                                                          I think the main feature that sees a bunch of use is being able to declare multiple combinations of argument modes for the same predicate, which kind of gets around Rust’s issue of not being able to abstract over the way you pass arguments. This is quite neat for code reuse, but not a massive win.
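
                                                                                                          The explicit state threading mentioned above can be caricatured in Rust (this is a very loose analogy, not real Mercury, and the names are invented): each effectful step consumes the current state token and returns the next one, so effect ordering is visible in the data flow.

                                                                                                          ```rust
                                                                                                          // Stand-in for Mercury's threaded !IO state.
                                                                                                          pub struct IoState {
                                                                                                              pub log: Vec<String>,
                                                                                                          }

                                                                                                          // Takes the state by value and returns the successor state,
                                                                                                          // the way Mercury predicates take `!IO` in/out arguments.
                                                                                                          pub fn write_line(msg: &str, mut io: IoState) -> IoState {
                                                                                                              io.log.push(msg.to_string()); // stand-in for a real effect
                                                                                                              io
                                                                                                          }
                                                                                                          ```

                                                                                                          Chaining `write_line("world", write_line("hello", io0))` fixes the order of effects explicitly, which is the clunkiness being described.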

                                                                                                  2. 2

                                                                                                    No, it was just an interesting article.

                                                                                                  1. 5

                                                                                                    Fantastic introduction to Idris, thanks for sharing. I really liked how you demonstrated the benefits of the language and tooling. The examples were incremental and approachable for someone like me who has never read or written Idris before.

                                                                                                    1. 3

                                                                                                      It kind of mirrors the approach I’ve seen in Edwin Brady’s book Type Driven Development in Idris. Lots of incremental build up, framing dependent types as a ‘natural extension of regular types’ rather than as something weird or difficult.