Threads for mk12

  1. 7

    I wish this had come out when I was in university. It looks so cool but I don’t really have a reason to use it right now.

    1. 7

      I didn’t think it was hard, but it definitely is different! I recall an “oh, duh” moment when I set up a function whose type signature listed all the errors it could return. Then I was told about the parent of all errors, anyerror, so my return type could be anyerror!T without me going through all the functions I called and assembling all the possible errors.

      Little things like that take a hot second to get used to. I’ll be thrilled to use it again for Advent of Code this year.

      1. 7

        “I didn’t think it was hard” feels like exactly the subjectivity that the post opens with.

        1. 3

          I feel like objective measures aren’t that interesting. We have very few facts to work with. Even a language’s mission statement or design goal is a claim. If we went with objective measures, I guess we’d all be using the fastest language that uses the least amount of memory or something. But that’s not what people want. I think people want subjective takes up front because they might invest 2-4 years learning whatever-lang. Zig is fast, we could probably prove it. That’s good to know, but not the “worth it” part.

          Maybe if we had a subjective recommendation engine at scale, we could smooth out the opinions? Recommendation engines, Metacritic, Amazon reviews, Glassdoor and others are all smoothed-out opinions, collected. I’m not disagreeing, I just wonder if we are missing some tool or site? What else is there? Random stories through search engine hits?

          1. 3

            You make good points.

            I think easy vs hard is largely a factor of your background and breadth of programming experience. Some things will be easy and obvious to me if they embed ideas from languages I’ve used before. Others will be extremely hard if they use ideas from languages I’ve never touched. As I gain more experience and broaden the types of languages I have used, the subset of new languages that are hard to learn begins to shrink.

            Quantifying that for a broad audience is the thing that feels like the hard part. Maybe what we need is a matching system that takes into account the languages you know. :-)

            1. 1

              That would be neat. We sort of already know how to quantify experience (flawed, maybe) and background. Maybe it would double as a job site or serve some other purpose. I guess then the hard part is the recommendations or tagging system (bleh, ontologies).

              If I could visualize or organize my boundaries in terms of a heat map or mind map, I’d love to see it. Maybe I just don’t know how to use them, but I don’t find value in these tools. The best one I’ve found came from the tests on Triplebyte (things may have changed, no endorsement). I didn’t directly make the map (and it was simplified), but when I saw the results, I thought it was pretty fair. It would be great to have this be an output of Exercism or something: “you learned the concept of closures from Erlang”, but as evidence, not self-assessment.

        2. 5

          Note that you can also just omit the error type and it will be inferred precisely. This is usually better than using anyerror, and works most of the time (not e.g. for recursive functions, or writing a function pointer type).
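          A quick sketch of the two styles (std.fmt.parseInt’s error set is error{Overflow, InvalidCharacter}):

          const std = @import("std");

          // Explicit error set: the signature lists every error a caller can see.
          fn parseExplicit(s: []const u8) error{ Overflow, InvalidCharacter }!u32 {
              return std.fmt.parseInt(u32, s, 10);
          }

          // Inferred error set: `!u32` makes the compiler collect the same set
          // from the body, with no need to fall back to anyerror.
          fn parseInferred(s: []const u8) !u32 {
              return std.fmt.parseInt(u32, s, 10);
          }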

          1. 1

            I don’t recall if this worked on 0.8 but that was the version I used at the time. Oh, I think I was using a callback or something, where I needed to write the type elsewhere?

        1. 33

          Blazingly Fast

          I think we should stop using this on everything.

          1. 26

            Or use it on literally everything. Blazingly fast coffee. Blazingly fast summer vacations. Blazingly fast third degree burns. Blazingly fast global warming. Blazingly fast sex. 🔥🔥🔥

            1. 14

              Blazingly fast spinlocks and deadlocks.

              1. 8

                Blazingly fast bankruptcy

                Blazingly fast federal investigation

                Blazingly fast PR rejection

                1. 1

                  “Blazingly fast” “federal” is an oxymoron.

                  1. 0

                    I was certainly surprised by the speed at which a certain Congressmember was indicted after being elected amid lots of lies. Might be a new record.

              2. 1

                While we’re at it let’s do supercharged too.

              1. 1

                Does anyone know how https://lobste.rs/s/zvrtsw/how_hype_will_turn_your_security_key_into relates to this? That post was arguing that “passkey” branding will lead to non-resident keys being useless. I have three hardware security keys (“Security Key NFC” by Yubico) and it’s a really important feature to me that I can use them on an unlimited number of websites.

                1. 3

                  I thought of that article as well, and it correctly points out that the issue is one of terminology and marketing hype: FIDO has settled on “a passkey is a resident key” when it should really be “a passkey is any possible authenticator that a user chooses to use (resident key, non-discoverable credential, etc.)”.

                  Couple that with marketing of certified keys claiming to support “unlimited keypair” storage (with that being true only with non-discoverable creds) and now we have an upcoming browser feature to autocomplete logins with ONLY resident keys.

                1. 4

                  Looking forward to watching this after work! I’ve enjoyed all the Systems Distributed talks so far.

                  1. 2

                    Thanks Mitchell! Encouraging to hear that!

                    1. 6

                      Just watched it! Also read TIGER_STYLE.md.

                      This style resonates with me, even though my work is not a distributed system and not mission critical. I think a lot of it is broadly applicable – on design, assertions, tech debt, naming, …

                      I like the framing of asserting on positive and negative space. I’d never thought of it that way. Also the idea of asserting a lower and upper bound on loop iterations makes a lot of sense.

                      Some questions:

                      • You mentioned asserting postconditions in functions. Does that mean you also enforce a single return path? Or use wrapper functions to do the asserting?

                      • It’s an interesting contrast with parts of the Go philosophy. You mentioned assertions in the talk. I also saw “Do not abbreviate variable names” in your doc, the opposite of what Go is known for. How far do you take that? Do you think Go’s approach has merits in other circumstances, or is it just subjective?

                      • What is Zig like for implementing your wire format encoding/decoding? Curious, as I imagine comptime would be useful there (and since I work on an IDL/wire format at my job, FIDL).

                      1. 7

                        (I work on TigerBeetle)

                        For 1), Zig’s defer assert works really well (minimal sketch at the end of this comment).

                        For 2), we take it pretty far, but we also spend a lot of time finding actually good, catchy, short names for the core concepts. Naming-wise, “big endian” naming felt more unusual to me, and it is freaking great; I immediately switched my personal style to it, so, e.g., I now use item_count rather than n_items, because it aligns with item_whatever so well.

                        For 3), Zig’s great, but not due to comptime. Our deserialization is std.mem.sliceAsBytes, which just casts raw bytes to whatever we want to deserialize (after verifying a strong u128 checksum, mind you). The two features of Zig we use here are:

                        • ability to specify pointer alignment and track it in the type system
                        • ability to precisely control layout of structs

                        The first one, pointer alignment, is really cool! Just the other day I was writing small “cast bytes to thing” code in both Rust and Zig, and I only realized that my Rust code was UB when the Zig version didn’t compile because of an alignment error.
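
                        Re 1), here’s a minimal sketch of the defer assert idiom (a made-up function, not TigerBeetle code):

                        const std = @import("std");
                        const assert = std.debug.assert;

                        fn increment(counter: *u32) void {
                            const before = counter.*;
                            // The postcondition is checked on every return path, however we exit.
                            defer assert(counter.* == before + 1);
                            counter.* += 1;
                        }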

                  1. 2

                    I remember reading somewhere a while ago an observation about lexical scoping. In general, code can be easier to understand if you structure it in a way that makes certain things impossible. E.g. the article uses let xs = xs; to make further mutation impossible. When deciding where to define a variable or function:

                    • With a narrow scope, you immediately know it can’t be used outside that scope.
                    • With a broad scope, you immediately know it can’t rely on anything in narrower scopes.

                    In other words, if you imagine a directed graph of definitions, you can reduce the mental burden by ruling out incoming edges at the expense of outgoing edges, or vice versa.
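
                    A small Rust sketch of both directions (the let xs = xs; trick is the article’s):

                    fn main() {
                        let mut xs = vec![1, 2, 3];
                        xs.push(4);

                        // Rebinding rules out any further mutation of `xs`.
                        let xs = xs;

                        {
                            // Narrow scope: `total` can't be used outside this block...
                            let total: i32 = xs.iter().sum();
                            println!("{total}");
                        }
                        // ...while `xs`, defined more broadly, can't rely on `total`.
                    }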

                    Does anyone know what I’m talking about/where I might have seen this?

                    1. 2

                      I’ve been having fun writing my own static site generator for my blog: https://github.com/mk12/blog/tree/new. Some interesting parts:

                      • Having make rebuild only the pages that actually changed. This was tricky because posts link to the next/previous post, but I don’t want editing one post to force rebuilding all of them.
                      • Wrote my own templating engine because I didn’t like any of the existing ones.
                      • Wrote a Unix stream socket client/server so that some parts can be implemented in different languages without having to spawn a new process for every call.

                      Like with the author’s blog, most of this is completely unnecessary to get words on the Internet. But I enjoy it.

                      1. 2

                        Learning BQN! It’s my first language from the array paradigm, so I need to give my brain time to process it :)

                        1. 1

                          I had lots of fun learning BQN with last year’s Advent of Code (repo). I only did up to day 11 but I got most of them to run in under 20ms. I hope to get back to it at some point. My takeaway from it was that there are massive gains to be had (in conciseness, simplicity, code reuse, performance) in exchange for modeling everything as arrays of raw data, rather than neatly layered, carefully named abstractions you might aim for in other languages. It reminds me of “place oriented programming” Rich Hickey derides in The Value of Values. If Clojure is anti-PLOP, then array languages like BQN are even more so, taking it to its logical extreme.

                        1. 3

                          This is so exciting! I’ve been searching for something exactly like this. Looking forward to trying it out.

                          One thing I use Make for is custom static site generators. Sometimes there is functionality I want that’s not available in the language I’m using, e.g. KaTeX math, which is a JS library. Rather than spawn a new node process for every piece of inline math, I run the JS script as a server that listens on a Unix domain socket. Using Make I can automatically start and stop the server as needed, by making the socket an “intermediate” file and having the server automatically stop when the file is removed (importantly, Make always does this, even on error). You can see how I do it here and here. It would be cool if this would work in Knit as well, or if it had more general support for “post-requisites” so that the server wouldn’t need to watch for removal.
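
                          The shape of it, as a rough sketch (katex-server.js and render are stand-in names; the linked files have the real version):

                          # The socket is an intermediate file, so Make removes it at the end
                          # of the run (even on error), and the server exits when it is gone.
                          .INTERMEDIATE: katex.sock

                          katex.sock:
                          	node katex-server.js $@ &

                          %.html: %.md katex.sock
                          	./render $< $@   # talks to the KaTeX server over katex.sock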

                          1. 3

                            I run into this all the time. I don’t have a good solution. A related problem is when you have a Car type that doesn’t need to be exported, so you rename it to car. In particular, if your package is a program rather than a library, all types should be lowercased like this. You are allowed to shadow it like car := car{}, but that’s fairly limiting because now if you write car2 := car{} you get “car (variable of type car) is not a type”.
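
                            A minimal reproduction of that dead end:

                            package main

                            type car struct{ wheels int }

                            func main() {
                            	car := car{wheels: 4} // shadowing the type with a variable: allowed
                            	_ = car
                            	// car2 := car{} // error: car (variable of type car) is not a type
                            }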

                            1. 1

                              Now I’m sad :(. I guess knowing I’m not alone helps though, thanks!

                            1. 6

                              I think, paradoxically, we’re going to see more ultra-terse languages, so that the AI can store more context and you can save money on tokens.

                              1. 3

                                That might not require changing the languages themselves!

                                For example, if you can have a $LANG <-> $L translation, where $L is a “compressed” version of $LANG optimized for model consumption, but which can be losslessly re-expanded into $LANG for human consumption, that might get you close enough to what you’d get from a semantically terser language that you’d rather continue to optimize for human consumption in $LANG.

                                1. 1

                                  So all those years of golfing in esolangs will pay off?? I’ve thought about this too, and you might be able to store more code in your context window if the embeddings are customized for your language, like a Python specific model compressing all Python keywords and common stdlib words to 1 or 2 bytes. TabNine says they made per-language models, so they may already exhibit this behavior.

                                  1. 3

                                    Or perhaps there will be huge investment in language models for popular languages like Python, and none for Clojure. I have a big fear of small languages going away - already it’s hard to find SDKs and standard tools for them.

                                    1. 1

                                      I don’t think small languages will be going anywhere. For one thing, they’re small, which means the level of effort to keep them up and running isn’t nearly as large as for popular ones. For another, FFI exists, which means you often have access to either the C ecosystem or the ecosystem of the host language, and a loooot of SDKs are open source these days, so you can peer into their guts and pull them into your libraries as needed.

                                      Small languages are rarely immediately more productive than more popular ones, but the sort of person that would build or use one (hi!) isn’t going to disappear because LLMs are making the larger language writing experience more automatic. Working in niche programming spaces does mean you have to sometimes bring or build your own tools for things. When I was writing a lot of Janet, I ended up building various things for myself.

                                  2. 1

                                    Timely that I’ve started learning https://mlochbaum.github.io/BQN :)

                                    1. 1

                                      Perl’s comeuppance!

                                    1. 3

                                      I like Zig’s “preference for low-dependency, self-contained processes” too, but I would balance it with the principle of “maintain it with Zig”. My interpretation of it is you don’t need to rewrite all your dependencies in Zig, rather you can adopt it incrementally and use it alongside your existing code where it provides benefit. Of course you can try that with Rust too but it seems culturally to encourage the RIIR mentality.

                                      I’m not so sure about the part at the end, wanting Zig to be your task runner as well as build system (sort of like npm scripts). If you really want to avoid bash / write your scripts in a low-level language, why not make them actual programs that you build and execute?

                                      1. 3

                                        If you really want to avoid bash / write your scripts in a low-level language, why not make them actual programs that you build and execute?

                                        That is what zig build <target> -- <args> does (along with executing that program with the provided <args>). I think what’s being argued for is nicer sugar for this, to encourage building and writing accompanying task tooling in Zig?

                                        1. 1

                                          Can you make them compile time programs?

                                        1. 4

                                          I believe shell can be so much better. I’m not convinced that needing arrays is a sign that shell doesn’t fit your problem anyway. Shell is about interacting with the operating system and gluing together processes with ease. I’m hopeful that www.oilshell.org will succeed and raise our expectations for shell programming.

                                          1. 2

                                            Arrays are nice for passing flag parameters.

                                            1. 2

                                              The argument list is an array in POSIX sh.

                                              1. 1

                                                Sorta? A lot of programs will interpret the value in --foo 1 2 3 as three values separated by spaces, but other programs interpret it as one value which contains spaces. There’s no easy way to indicate a string must be interpreted as an array, which also means things like no multidimensional arrays.

                                                1. 4

                                                  There is one and only one array in POSIX sh: $@, which is accessed by $*, $@, or $1, $2, …, and modified by shift and set.
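
                                                  For example, for the flag-passing case above (a tiny sketch; note that set -- replaces the current argument list):

                                                  #!/bin/sh
                                                  # The one POSIX array: build the flag list with set, expand it with "$@".
                                                  set -- -Wall -O2 -o "out dir/prog"
                                                  cc "$@" main.c   # "$@" keeps "out dir/prog" as a single argument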

                                              2. 1

                                                Also handy for reading a whole bunch of parameters out of something like sqlite in one go

                                                # bash: slurp the whole tab-separated query result into the array `arr`
                                                IFS=$'\t' read -r -a arr -d '' < <(sqlite3 -batch -tabs -newline "" data.db "query")
                                                
                                              3. 1

                                                Yeah. My experience with shell programming mostly falls into the “only if nothing else is available” mental category. Doing shell-like things in Python is tedious but much more readable.

                                              1. 16

                                                I don’t think I really understood rust’s ownership semantics until I figured out what all the variants of self meant:

                                                1. self
                                                2. mut self
                                                3. &self
                                                4. &mut self

                                                Specifically the first two, which ensure that if you call that method, it will be the last method you can call on that instance! That was when it really clicked for me, and I started thinking of methods or functions consuming their parameters. This can be very useful if you’re trying to make invalid state unrepresentable, or even invalid state transitions unrepresentable! Like if you have a method like .close() or something, you can make it fn close(self) so that if anyone tries to do something like:

                                                object.close();
                                                object.do_something_else();
                                                

                                                the borrow checker will block this at compile time! This transformed ownership semantics in my mind from an annoying bookkeeping thing I did to appease the compiler into a powerful tool I could use to keep myself from writing bugs.
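
                                                A self-contained version of that example (Connection is a made-up type):

                                                struct Connection;

                                                impl Connection {
                                                    // Taking `self` by value consumes the instance.
                                                    fn close(self) {}

                                                    fn do_something_else(&self) {}
                                                }

                                                fn main() {
                                                    let object = Connection;
                                                    object.close();
                                                    // object.do_something_else(); // error[E0382]: borrow of moved value
                                                }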

                                                1. 4

                                                  To expand on this I learned this technique (and more advanced variants) from this post on “the typestate pattern”: http://cliffle.com/blog/rust-typestate/

                                                  It wasn’t too long ago that I speculated, here on this very website, that programming design patterns were just a cope for the language not having sum types. Shortly thereafter I ran into an obvious shortfall with Rust’s sum types, which is that they can’t restrict which functions can be called depending on the enum variant! There is an RFC for this, but until then you have to use the design pattern in the blog post.

                                                  I’ve been making use of it while writing some scanner code, so scanner.advance() consumes self then returns a Result<Scanner<Advanceable>, Scanner<EOF>> instance. You can’t call scanner.advance() on a Scanner<EOF> instance. I will say it makes your code very verbose and kind of unreadable with tons of pattern matches, but it’s already uncovered some bugs that would have languished for a while.
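
                                                  A stripped-down sketch of that scanner shape (names and internals are guesses, following the typestate post):

                                                  use std::marker::PhantomData;

                                                  struct Advanceable;
                                                  struct Eof;

                                                  struct Scanner<State> {
                                                      pos: usize,
                                                      len: usize,
                                                      _state: PhantomData<State>,
                                                  }

                                                  impl Scanner<Advanceable> {
                                                      // Consumes self: you get back either a scanner that can still
                                                      // advance, or a Scanner<Eof>, which has no advance() at all.
                                                      fn advance(self) -> Result<Scanner<Advanceable>, Scanner<Eof>> {
                                                          if self.pos + 1 < self.len {
                                                              Ok(Scanner { pos: self.pos + 1, len: self.len, _state: PhantomData })
                                                          } else {
                                                              Err(Scanner { pos: self.pos, len: self.len, _state: PhantomData })
                                                          }
                                                      }
                                                  }

                                                  fn main() {
                                                      let s = Scanner::<Advanceable> { pos: 0, len: 2, _state: PhantomData };
                                                      assert!(s.advance().is_ok());
                                                  }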

                                                  1. 3

                                                    I speculated [..] that programming design patterns were just a cope for the language not having sum types.

                                                    This insight can be made broader: design patterns are a coping mechanism for missing language features in general.

                                                  2. 2

                                                    Note that 1/2 and 3/4 are not really analogous in the way that your list might suggest. (Maybe you already know this! Just thought I’d point it out.) The first two are identical from the caller’s perspective, and differ only by the callee giving itself permission to mutate self. The last two are really different types: (3) is a shared reference, and (4) is a unique reference.

                                                    You can see this if you try writing mut self in a trait:

                                                    trait Foo {
                                                        fn foo(mut self);
                                                    }
                                                    

                                                    Result:

                                                    error: patterns aren't allowed in functions without bodies
                                                     --> src/lib.rs:2:12
                                                      |
                                                    2 |     fn foo(mut self);
                                                      |            ^^^^^^^^ help: remove `mut` from the parameter: `self`
                                                      |
                                                      = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release!
                                                      = note: for more information, see issue #35203 <https://github.com/rust-lang/rust/issues/35203>
                                                      = note: `#[deny(patterns_in_fns_without_body)]` on by default
                                                    
                                                    1. 1

                                                      Oh yeah, that’s a good point. One thing that also confuses me about mut self is that it seemingly isn’t transitive in the way that const (or rather its absence) is in C++.

                                                      1. 2

                                                        Yeah, it’s &mut self which has that property and corresponds to the absence of const, not mut self.

                                                        Here is how I’d compare them:

                                                                 Rust            |          C++
                                                        -------------------------|-------------------------
                                                        impl Foo {               | struct Foo {
                                                            fn a(self) {}        |     void a() const&& {}
                                                            fn b(mut self) {}    |     void b() && {}
                                                            fn c(&self) {}       |     void c() const {}
                                                            fn d(&mut self) {}   |     void d() {}
                                                        }                        | };
                                                        

                                                        The syntax used in a and b has been available since C++11. It gives similar behavior to your object.close() Rust example: it forces you to instead write std::move(object).close(). Of course, the C++ compiler isn’t as helpful in making sure you don’t use object again after.

                                                  1. 1

                                                    I mostly used a heavily customized (Neo)vim for about a decade. I eventually came to the realization that I just don’t like modal editing. In my opinion the benefits of modal editing go away when you have all the modifier keys under your thumbs (I use the Kinesis Advantage 360). I still occasionally use Vim when I’m in a terminal, but my daily driver is VS Code. They’re knocking it out of the park with the monthly releases; it just keeps getting better.

                                                    1. 12

                                                      Very reasonable observations. I find more and more developers, even the “younger generation”, get burned by these hyper-specialized build systems and fall back to Make more and more often. I think it’s a good thing. Make is clunky but, as the poster notes, it does the job you ask it to do, and you know it will continue to do it ten years from now.

                                                      1. 10

                                                        Make also has a bunch of problematic things. The biggest one is that it has no way of expressing a rule that produces more than one output. It also has no way of tracking staleness other than modification times, and it can’t express things that may need more than a single run. You can’t build LaTeX reliably with Make, for example, because Make does a single pass, while LaTeX must be rerun until it reaches a fixed point. You often end up with fake files to express things that will run once, such as patching files.

                                                        The annoying thing is that many of the complex replacements don’t solve these problems.

                                                        1. 8

                                                          GNU make supports rules that produce more than one output. See “Rules with Grouped Targets” on this page.
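
                                                          A minimal example, with bison’s two outputs as the classic case (GNU Make 4.3+):

                                                          # &: marks grouped targets: the recipe runs once to produce both files.
                                                          parser.tab.c parser.tab.h &: parser.y
                                                          	bison -d parser.y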

                                                          1. 6

                                                            I’ve recently started using just which - as per their docs - “avoids much of make’s complexity and idiosyncrasies”. Based on my limited use it looks like a promising alternative.

                                                            1. 3

                                                              It’s a handy tool, but it has a major omission in my opinion: no freshness tracking. It always runs all the commands; it doesn’t track whether a task’s dependencies are up to date and running the command can be skipped.

                                                            2. 2

                                                            That’s why there’s latex-mk - it is a program that simply runs LaTeX the necessary number of times. It also comes with a large set of helpful Make definitions for TeX files, so you don’t even need to teach it how to build. It knows about all the LaTeX flavours and related tools like pdflatex, tex2page, bibtex etc. The simplest possible latex-mk file is simply

                                                              NAME = foo
                                                              include /usr/local/share/latex-mk/latex.gmk
                                                              

                                                              Then running make pdf, make ps etc would build foo.pdf, foo.ps etc from foo.tex, but it can be as complex as you want it to be.

                                                              1. 1

                                                              I use latex-mk, but it also has problems. For example, I was never able to work out how to hook it so that it can run steps to produce files that a TeX file includes if they don’t exist.

                                                                1. 1

                                                                  That’s a bit of an odd requirement. What kind of situation requires that? I guess you could run some nasty eval to expand to make targets based on awk or grep output from your LaTeX sources, in GNU Make at least.

                                                                  1. 1

                                                                    Basically every LaTeX document I write pulls in things from elsewhere. For example, most of the figures in my books were drawn with OmniGraffle and converted to pdf with a makefile. I want that to be driven from latex-mk so that it can run that rule if I actually include the PDF (and so I don’t have to maintain a build system that drives a build system with limited visibility). For papers, there’s usually a build step that consumes raw data and runs some statistics to produce something that can be included. Again, that ends up being driven from a wrapper build.

                                                              2. 2

                                                                It’s been a long time since I worked on a LaTeX-only codebase requiring multiple compilation passes. I’m spoiled by pandoc + markdown for most of the documents I must write. I’ve heard that pandoc is a competent layer for a (la)tex -> pdf compiler instead of using pdflatex or xelatex or whatever directly. Have you seen pandoc being used in that way, primarily to avoid the multiple compilation pass madness behind pandoc’s abstraction thereof? I’ve also used tectonic a bit for debugging more complex pandoc markdown->tex->pdf builds, and it abstracts away the need for multiple passes.

                                                                1. 4

                                                                  I’ve been able to use pandoc to compile markdown books, but I struggled to use it well with TikZ or Beamer. LaTeX just has too many dark corners.

                                                                  1. 1

                                                                    I use TeX primarily because most academic venues offer only LaTeX or Word templates and it’s the lesser of two evils. If I didn’t have to match an existing style, I’d use SILE.

                                                                  2. 1

                                                                    The annoying thing is that many of the complex replacements don’t solve these problems.

                                                                    I guess build2 would qualify as one of those complex replacements. Let’s see:

                                                                    The biggest one is that it has no way of expressing a rule that produces more than one output

                                                                Check: we have a notion of target groups. You can even have groups where the set of members is discovered dynamically.

                                                                    also has no way of tracking staleness other than modification times

                                                                    Check: a rule is free to use whatever method it sees fit. We also keep track of changes to options, set of inputs, environment variables that affect a tool, etc.

                                                                    For example, we have the venerable in rule which keeps track of changes to the variable values that it substitutes in the .in file.

                                                                    It also can’t express things that may need more than a single run.

                                                                    Hm, I don’t know, this feels like a questionable design choice in a tool, not in a build system. And IMO the sane way to deal with this is to just run the tool a sufficient number of times from a recipe, say, in a loop.

                                                                    Let me also throw some more problematic things in make off the top of my head:

                                                                    • Everything is a string, no types in the language.

                                                                    • No support for scoping/namespaces, everything is global (hurts especially badly in non-recursive setups).

                                                                    • Recipe language (shell) is not portable. In particular, it’s unusable on Windows without another “system” (MSYS2, Cygwin, etc).

                                                                    • Support for separate source/output directories is a hack (VPATH).

                                                                • No abstraction over file extensions (so you end up with hello$(EXE) all over the place).

                                                                    • Pattern rules do not support multiple stems (in build2 we have regex-based pattern rules which are a lot more flexible: https://build2.org/release/0.14.0.xhtml#adhoc-rules).

                                                                    1. 1

                                                                      Agreed on all of the other criticisms of Make. I’m a bit surprised that build2 can’t handle the dynamic dependency case, since I thought you needed that for your approach to handling C++ modules.

                                                                      I’d be interested in whether build2 can reproduce latex-mk’s behaviour. A few interesting things:

                                                                      • latex needs rerunning if it still has unresolved cross references, but not if the number doesn’t go down.
                                                                      • bibtex needs running if latex complained about a missing bbl file or before running latex if a bib file used by a prior run has changed.

                                                                      There are a few more subtleties. Doing the naive thing of always running latex bibtex latex latex takes build times from mildly annoying to an impediment to productive work, so is not an acceptable option. Latex-mk exists, so there’s no real need for build2 to be able to do this (though being able to build my experiments, the thing that analyses the result, the graphs, and the final paper from a single build system would be nice), but there are a lot of things such as caching and generated dependencies that can introduce this kind of pattern and I thought it was something build2 was designed to support.

                                                                      1. 1

                                                                        I’m a bit surprised that build2 can’t handle the dynamic dependency case, since I thought you needed that for your approach to handling C++ modules.

                                                                    It can’t? I thought I’d implemented that. And we do support C++20 modules somehow (at least with GCC). Unless we are talking about different “dynamic dependencies”. Here is what I am referring to: https://build2.org/release/0.15.0.xhtml#dyndep

                                                                        Doing the naive thing of always running latex bibtex latex latex […]

                                                                        I am probably missing something here, but why can’t all this be handled within a single recipe or a few cooperating recipes, something along these lines:

                                                                        latex
                                                                    if (latex_complained_about_missing_bbl)
                                                                          bibtex
                                                                          latex
                                                                        end
                                                                        while (number_of_unresolved_cross_references_is_not_zero_and_keeps_decreasing)
                                                                          latex
                                                                        end
                                                                        
                                                                  3. 3

                                                          Am a big fan of Make. It is clunky and hard to debug, but it sits at just the right level of abstraction. I’ve seen more and more posts from people realizing it’s useful beyond the original use case of compiling C. There is room, I think, for a successor that addresses its flaws (see David’s comment) and expands to cover modern use cases (distribution, reproducibility, scheduling, orchestration). The challenge is in finding a compact set of primitives to support that and keep it simple, i.e. not Bazel.

                                                                    1. 5

                                                                      Ninja, maybe? That’s my hope at least. I like its approach of “do exactly what it’s told, and use a higher level tool to tell it what to do”.

                                                                      1. 5

                                                              You may want to read Build Systems à la Carte or watch the talk about it by Simon Peyton Jones (audio warning: it’s quite bad). Shake seems to be what you’re looking for; unfortunately you have to write the Shakefile in Haskell and have GHC installed, which can be a bit steep a requirement.

                                                              Circling back to JS, I had a half-formed idea to use the Shake model described in that paper to implement it in JS so I could replace Jake, which is a good tool but shares many of the problems that Make has.

                                                                        1. 3

                                                                          remake has made a huge difference for me, in terms of making Makefiles far more debuggable.

                                                                          1. 1

                                                                            Oh, remake sounds amazing, it was not on my radar, thanks!

                                                                      1. 3

                                                                        It’s weird to see a Rust UI post that doesn’t at least mention Druid and related projects. Raph Levien and others have spent a long time thinking about why Rust GUI is “hard” and the right data model to tackle the problem.

                                                                        1. 1

                                                                          Raph’s “Ergonomic APIs for hard problems” keynote at RustLab 2022 is also worth a look if you haven’t seen it (recording, slides).

                                                                        1. 4

                                                                  If y’all haven’t checked out https://monolisa.dev, it’s a great paid face I’ve used for a year or two, with a lovely script italic that I use for comments.

                                                                          screenshot from @tbray’s fedi thread

                                                                          1. 5

                                                                            I don’t understand why people like these script styles for comments. The cursive “f”, “l”, “r”, and “s” look so jarring to me. But to each their own!

                                                                          1. 3

                                                                            Nothing can tempt me away from Iosevka and my custom build of it. I’m not sure why the author thinks it’s only for CJK-heavy use cases. The website doesn’t even mention CJK. Maybe it is good for that too, but to me it’s just a great programming font.

                                                                            1. 1

                                                                      Custom-built Iosevka is the way to go; I do this too. Get everything just the way you like it. The general shape is very good, too (and I like the thin aspect).

                                                                            1. 18

                                                                              I used Elm for a project and regret it now. It ended up being equal parts Elm and TypeScript because some things were just too hard to do in Elm. Apart from the native-whitelist.json fiasco everyone is familiar with, there are a couple other things I dislike about Elm:

                                                                              • “No runtime exceptions” does not work in practice. It’s a bit like removing panic! from Rust and claiming you’ve solved all runtime exceptions. How would that actually work out? Result and Option would proliferate in function signatures. Every time you access an array you’d have to handle or propagate the out-of-bounds error. If you wrote a data structure that maintains runtime invariants, instead of asserting them, you’d have to leak their failure into the public interface. There were cases in my Elm project where I wanted to assert that a map had a certain key, but instead the language shepherded me to handle the Nothing case with some code that typechecked but would do nothing useful at runtime.

                                                                              • The compiler is too opinionated. For example, tuples cannot contain more than 3 elements. Why? Because four elements is too confusing! Some more zealous contributors wanted to limit it to 2 elements, or to extend it to custom data types. In general, Elm seems more interested in limiting what you can do (cheerfully! with exclamation marks!!) than addressing real world needs.

                                                                              Elm hasn’t seen an update since 0.19.1, which came out in October 2019. That’s a pretty long time to go without updates, but if it works, what’s the point in doing releases for release sake? To some adopters, this can look pretty concerning - especially from the Javascript world where everything is being updated all the time. To other eyes, it might just simply represent stability.

                                                                              It’s hard to take that seriously when it’s still 0.x. There are plenty of open bugs.