Threads for toast

    1. 9

      Wow, this is exactly the reaction I kinda imagined when I saw paperwm.

      Before doing the switch from awesome to paperwm I found Niri and decided that I’ll move directly to wayland+Niri instead. Though I kinda haven’t had the guts to leave all my X helper scripts behind and actually move to wayland yet…

      I guess this is what I needed to actually seriously consider switching to wayland soon. Thanks!

      1.  

        I found paperwm first, and it just really didn’t work for me, leading to me ignoring niri for a fairly significant amount of time by association. This post gave me the nudge to actually try it out, and I’m very pleasantly surprised! Going to be switching to it fully for a while to really get a feel for it, and then decide if I’ll stay definitively.

        1.  

          Curious - what do you like about niri that PaperWM didn’t have – or PaperWM has that niri doesn’t, I guess.

          1.  

            Running smoothly! (+ decoration handling, scaling, shortcut management…) Using PaperWM felt like replacing my touchpad with a cheese grater comparatively (I’m typically a labwc user).

      2. 2

        How exactly is your sqlite data stored? I’d be interested in seeing it! (For example, I’ve been considering using the sqlar format to simulate a filesystem.)

        I’ve been too busy (procrastinating) to redo my personal website again, but this is exactly the type of approach I’m going to use. I think there’s a lot of resurging interest in making small things for personal or small-communal use, just for fun, “enough for what I need”, and I love seeing it. My previous attempt is an unholy POSIX sh script that’s a port of a similar plan9 rc script, and it definitely shows; the static site generation aspect makes it hard to update, as you mention, thus the interest in making something dynamic and simply “synced” rather than VCSd.
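
        (If you haven’t looked at sqlar before: from what I remember of the SQLite docs, the whole format is just one table, so “simulating a filesystem” really is a row per file. Worth double-checking the column comments against the docs, but I believe it’s this:)

        ```sql
        -- the sqlar schema, as I recall it from the SQLite documentation
        CREATE TABLE sqlar(
          name TEXT PRIMARY KEY,  -- full pathname of the file
          mode INT,               -- st_mode from stat()
          mtime INT,              -- st_mtime from stat()
          sz INT,                 -- original (uncompressed) file size
          data BLOB               -- file content, zlib-compressed when that is smaller
        );
        ```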

        1. 6

          Something people often tend to miss with tailscale is that it’s not an OpenVPN replacement, but rather an IPSec replacement (also see: wireguard). Back in ~2017 ish (before tailscale was founded) I was using wireguard to connect multiple DCs together (previously, if you didn’t want to do ipsec on the gateway, you’d just run a literal fiber line between the locations). I’m still doing this, but tailscale makes it way easier, and also much more transparent (my homegrown set of scripts to ensure dnsmasq knew where to ask for what weren’t all that clean).

          I should honestly write about this sometime because the kind of ergonomics you can achieve in the typical startup network situation (multiple kubernetes clusters and some dedicated servers thrown into the mix, so a classic “hybrid cloud” setup) are great.

          1. 3

            I mean, you are technically correct. But it’s also totally an OpenVPN replacement in that it solves a whole lot of the same problems, and it does so in a way that, compared to OpenVPN, is almost anger-inducingly easy. If you’ve ever had to manage or configure OpenVPN, you probably know what I mean.

            1. 3

              Oh I don’t disagree! I mostly mean that it’s a superset; i.e. it can do way more than that. It also does even more in that respect with the authentication suite via funnel, so it’s truly the “do everything” VPN.

          2. 21

            I feel like “simple” and JavaScript should never be put together in a sentence, and having different Git configurations can be easily handled using different directories, e.g.: ~/code/work, ~/code/personal

            ~/.gitconfig:

            [includeIf "gitdir:~/code/work/"]
                path = ~/code/work/.gitconfig
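
            The included file then just carries the work-specific identity (the email here is obviously a placeholder):

            ```ini
            # ~/code/work/.gitconfig
            [user]
                email = you@work.example
            ```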
            
            1. 1

              I find having to have an extra subfolder a bit onerous, and I have a tendency to clone to /tmp/tmp.…, so I actually configure it based on remotes, since when I want a specific profile, it’s usually due to a specific entity (e.g. work), meaning it’s predictable. It’s a bit more complex, but it’s fairly easy to reason about.

              It looks like this for me:

              # ~/.config/git/config
              # this way I don't configure work things on my personal laptop, for example
              [include]
                path = "local.config"
              
              # ~/.config/git/local.config
              [include]
                path = "entity1.config" # to bundle the URLs
              
              # ~/.config/git/entity1.config
              [url "git@github.com:entity1/"] # for example
              # optional, I just find myself doing temporary clones often
                insteadOf = "e1:"
              
              [includeIf "hasconfig:remote.*.url:e1:**"]
                path = "entity1-overrides.config"
              
              [includeIf "hasconfig:remote.*.url:https://github.com/entity1/**"]
                path = "entity1-overrides.config"
              
              [includeIf "hasconfig:remote.*.url:git@github.com:entity1/**"]
                path = "entity1-overrides.config"
              

              One additional advantage of this is you can do host-based overrides. For example, you could have an override for anything in GitHub to use a less-public email address.
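
              As a sketch of that GitHub override (the filename and the noreply address here are made up, and hasconfig needs git ≥ 2.36):

              ```ini
              # ~/.config/git/config
              [includeIf "hasconfig:remote.*.url:https://github.com/**"]
                path = "github-overrides.config"
              [includeIf "hasconfig:remote.*.url:git@github.com:**"]
                path = "github-overrides.config"

              # ~/.config/git/github-overrides.config
              [user]
                email = 1234567+someuser@users.noreply.github.com
              ```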

              1. 1

                Since I have both personal and work repos on GitHub, this fails immediately, and the directory approach makes the most sense.

                It’s amazing how many different tools exist for managing git profiles and ssh configurations. It seems like git should step up and handle this better natively.

            2. 27

              There does seem to be a bit of a chicken-and-egg problem. A bunch of people (myself included) are eager to work on EU alternatives to American services, but having things that are comparable requires a lot of initial funding and reasonable expectation of usage, which just isn’t there. In the meanwhile, public organizations don’t want to make a move until they see something that inspires confidence, which would require a player to already be set up.

              I do wonder how things would go if the article’s recommendation (divert a % of government funds towards setting these things up) were followed. The closest we have right now are the tech sovereignty funds / nlnet / similar, which are great, but much smaller than you’d need to compete with (say) AWS. (OVH and similar try, but if you’ve ever used their cloud offerings rather than just a couple of random dedicated servers, the limitations become apparent pretty quickly.)

              1. 13

                I think what’s needed is a mandate, rather than a subsidy. Given the level of regulatory capture in democracies around the world, including in Europe, it’s hard to see that happening, but without a mandate, using Google Workspace, AWS EU, or similar will always be the easiest route to follow for the large proportion of people who are less ideological.

                AWS itself is a product of a mandate: that all communication between teams should occur via API rather than email.

                1. 15

                  What AWS actually is in practice right now is a way to bypass a bunch of regulations. SOC 2 Type II? We’re on AWS, it’s there already. HIPAA? They have a document. And so on. It completely abstracts away company structure as well, leaving only the product, which means the software architects don’t need to learn about, say, network theory, and can just have a checklist that says, “we get our own VPC”, for example.

                  I recently won a public contract at work that ended up being worth multiple million euros (less than it was supposed to, but I had no part in the charge negotiations). It was originally planned to be hosted on OVH, but due to a bunch of extraneous requirements from them, it became strictly impossible to do (not in any reasonable amount of time, anyway), so it got moved over to AWS Paris instead. This is how it tends to go in my experience.

                2. 1

                  Agreed. And in 4 years the tides may shift again and everyone will have forgotten how close to the brink we were, and therefore the demand for purely EU services might never materialize.

                3. 13

                  Great news! I’ve been running the 4.0b1 since it became available and honestly, you mostly can’t tell the difference while using it (benchmarks vary, but in terms of UX it’s about the same as ever), in a good way. You can absolutely tell the difference for installation (the self-installing binary) when it’s not in a repo, and the ctrl-r glob is notable.

                  1. 4

                    I’m curious, how much has the binary size changed? In my experience, tools written in Rust such as rg are about 20× larger than their C counterparts.

                    1. 17

                      The static build of fish for linux-amd64 from the releases page is 14M. My local build of fish 3.7.0 is 1.7M, but it also includes /usr/share/fish which is 9.1M. Since the static build includes all of that data, the fair comparison is between the addition of those, which brings us to 14M vs 10.8M.

                      In summary: bigger, but only modestly (~30%).
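
                      The ratio is easy to sanity-check from those numbers:

                      ```shell
                      # 14M static vs (1.7M + 9.1M) = 10.8M installed
                      awk 'BEGIN { printf "%.0f%%\n", (14 / (1.7 + 9.1) - 1) * 100 }'
                      # prints 30%
                      ```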

                    2. 2

                      I think the default behavior for alt-backspace has changed to delete the full argument instead of the token. In “ls a/b/c/d |”, hitting alt-backspace used to delete back to /c, whereas now it deletes all the way back to ls.

                      Just to say: I’m working on retraining my muscle memory to use ctrl-w.

                    3. 2

                      I consider that I have learned something when my value judgments have changed, and I am capable of changing them through actions. For example, if I get a compiler error, I am able to make the judgment of whether the error is relevant or not, and I can change its relevance by modifying the code in ways that make it more or less relevant. If changes to me turn a previously relevant error irrelevant, or a previously irrelevant error relevant, it is because I have learned something.

                      In a way, this approach is actually epistemically vague because the “something” is generic. I cannot say whether I’ve learned [thing], but only that I’ve learned something. The definition for whether [thing] has been learned is most likely undecidable. I used to (~10 years ago) have a bunch written on that subject elsewhere, but couldn’t find it just now, and don’t want to talk out of my ass on the subject from memory.

                      1. 5

                        Reading this reminded me of Continued Fractions (I was surprised to see that talk was never posted on here, thus the addition). The comment-linked first-hand account mentions them, but not why they didn’t end up being used. Then again, the actual implementation there is a pain, and symbolics were probably much easier to coerce than a non-summed series.

                        1. 4

                          I found the Open Source section interesting because it both goes into more detail and less detail than I expected (n.b. I only read the slides, I did not try to hunt down a recording of the talk). Ultimately, what tends to happen is that different people will have different use-cases, and will thus try to add their own features to the code, but they will not consider the cost of maintenance (as they are not maintainers of the software). The problem Pike demonstrates is thus close to the previous section, only with a different attempt at resolving it.

                          What I find missing is that the view of the “true[sic] open source way” is (perhaps unsurprisingly) very enterprise-brained, as it strongly depends on the project. I would say that the true open source way for most software is actually to allow anything that already fits the high standards, help finalize or ask for improvements for anything that is close to fitting, and then strictly deny what doesn’t fit, with a recommendation to fork. For me, at least, the true open source ecosystem is one where forks are numerous, and synchronization between them is made trivial. There are two reasons this doesn’t typically happen:

                          1. The project is actually an important piece of infrastructure where the cost of utilizing a non-canonical upstream is unduly high. This is a topic of its own, but it’s not actually that common (usually kernels and programming languages fall into this category).
                          2. The maintainers of the project are actively interested in maintaining control over it for out-of-band reasons, and therefore are incentivized to punish / prevent forks. This actually closely fits what Pike is used to working on (virtually everything Google works on), so it does make sense he would hold this view of the true open source way.

                          Overall, I very much agree with his view, and I do wonder if the talk proper goes into this a bit more (relative to the other 4 top-level sections). I might try to dig up a recording (and will link it if I do and find one).

                          1. 52

                            Couldn’t agree more! I think I shared this on lobsters years ago, but my favorite thing in my ~/.zshrc is this little guy:

                            function t {
                              pushd "$(mktemp -d "/tmp/$1.XXXX")"
                            }
                            

                            If I run ‘t blah’, I’ll drop into a temporary directory with a name like /tmp/blah.1tyC. I can goof around without worrying about cleaning up the mess later. When I’m done I can popd back to wherever I was. On the off chance I like what I did, I can just move the folder somewhere permanent. I use this every day; my $HOME would be unnavigable without it.

                            1. 8

                              I like to automate the “popd && rm” part in the alias directly. This bash function enters a subshell inside the tmpdir, and when I exit or ^D, it pops back to the previous directory and deletes the tmpdir – with a little safeguard for when you’ve mounted something inside it! I once had a bad time when experimenting with mount namespaces and accidentally deleted my home directory because it was mounted inside this tmpdir …

                              tmp() {
                                history -w || true
                                t=$(mktemp --tmpdir -d tmpdir-XXXXXX) \
                                  && { $SHELL -c \
                                   "cd '$t' \
                                    && printf '\033[31m%s\033[0m\n' 'this directory will be removed upon exit' \
                                    && pwd \
                                    && exec $SHELL" \
                                   || true; \
                                  } \
                                  && if awk '{ print $2 }' /etc/mtab | grep "$t"; then
                                    echo -e "\033[31maborting removal due to mounts\033[0m" >&2
                                  else
                                    echo -e "\033[31mremoving temporary directory ...\033[0m" >&2
                                    rm -rf "$t"
                                  fi
                              }
                              

                              Here is a more recent one for fish as well.

                              2. 5

                                I have nearly the same function, but I’m using ~/tmp as the base, precisely because /tmp is often emptied on boot and I know that I sometimes want to go back to these experiments.

                                Using ~/tmp helps keep my home directory clean and makes it obvious that the stuff is easily removable, but if, on the off chance, I need one of those experiments again, it’s there, waiting for me in ~/tmp even though I might have rebooted in the meantime.

                                But in general, yes, this is the way to go. I’ve learned it here on lobsters years ago and I’m using it daily.
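
                                In case it’s useful to anyone, the ~/tmp variant is a tiny change to the function from upthread (defaulting the name to “scratch” is my own addition):

                                ```shell
                                t() {
                                  # keep experiments under ~/tmp so they survive reboots
                                  mkdir -p ~/tmp
                                  pushd "$(mktemp -d ~/tmp/"${1:-scratch}".XXXX)"
                                }
                                ```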

                                1. 1

                                  I like this idea. I came to the same conclusion as you, but instead of using ~/tmp, I added “tmp” to my personal monorepo’s .gitignore. Sometimes I would put random downloads in that folder too, lol. Thanks for the idea; I feel like having two of these makes sense, one for coding stuff and one for everything else

                                2. 4

                                  This is absolutely fantastic. I’ve been independently using this trick for years (through ^R instead of an alias) and I love it too. I didn’t know mktemp took an argument though, thank you!

                                  1. 3

                                    I like to do programming-language-specific versions of this. So like “newrust” creates a temp dir, puts a Cargo hello world project in there, cd’s in there, and opens my editor. Similarly “newgo”, “newcpp”, etc. Great for playing around.
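
                                    A rough sketch of what such a helper looks like for me, for the curious. The only real assumptions here are cargo init and $EDITOR; the other helpers follow the same shape:

                                    ```shell
                                    newrust() {
                                      # fresh temp dir with a hello-world Cargo project, then open the editor
                                      dir=$(mktemp -d /tmp/rust.XXXX) || return 1
                                      cd "$dir" || return 1
                                      cargo init --name scratch
                                      "${EDITOR:-vi}" src/main.rs
                                    }
                                    ```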

                                    1. 1

                                      I tend to have two use-cases for these, one that’s very very temporary (single shell instance), and one that should stick around a little longer (i.e. duration of the boot). This works out to something along the lines of

                                      t() {
                                      	mkdir -p /tmp/t
                                      	cd /tmp/t
                                      }
                                      tt() {
                                      	local dir=$(mktemp -d)
                                      	[ -d "$dir" ] || return 1
                                      	cd "$dir"
                                      	trap "rm -rf '$dir'" EXIT
                                      }
                                      

                                      Notably, I use cd because I rarely popd at the end, I usually just close the shell once I’m done with it. I probably should do it anyway though :)

                                      1. 1

                                        Oh man, thank you so much, this is the kind of “small purpose, great impact” tools that I love! Here’s my fish version where the argument is optional.

                                        function t
                                            if test -z $argv[1]
                                                set dirname xx
                                            else
                                                set dirname $argv[1]
                                            end
                                            pushd (mktemp -d -t $dirname.XXXX)
                                        end
                                        
                                        1. 1

                                          What kind of experiments are you doing with folders, exactly? I’m curious about the overall workflow in which you decide whether to keep a folder’s contents or not.

                                          1. 7

                                            Personally I often try out small experiments with different programming languages which need a project structure to be set up before they can work properly. For this a new folder is often needed.

                                            1. 2

                                              For me, I just do a lot of stuff directly in /tmp, but this seems nice to keep things organized in case I want to move a directory to somewhere more permanent.

                                              Scenario A: I’m trying to test something in C/Zig and don’t want to pollute my actual project directory. I just make a /tmp/test.{c, zig} and compile in /tmp. I think putting it in a temp dir would be nice, if unnecessary.

                                              Scenario B: I, semi-frequently, will clone git repos into /tmp if I’m just trying to quickly look at something. Occasionally, I clone multiple related repos at the same time. Having a temp dir to keep them together would be nice if I ended up wanting to move them out.

                                              1. 2

                                                For me it’s poking around in exploded jar files, or tarballs, or other archive formats mostly.

                                                Sometimes you don’t just want to list the contents, or maybe there are container formats in container formats, a zip inside a tar, inside a cpio for example.

                                                I want to unpack all these, look around, examine files, etc. without worrying about needing to clean up the mess after.

                                              2. 1

                                                I have the exact same function. It’s so freakin’ useful.

                                              3. 4

                                                My personal highlights:

                                                • the (very buggy and often problematic) built-in LetsEncrypt support is gone (in favor of better support and recommending a reverse proxy)
                                                • cron format is now the same as on typical linux cronds
                                                • unified secrets/environment (and it’s a map now)

                                                Also, while technically not part of the release, it seems it’s gotten easier to run with podman since the last time I tested it. I’ll shortly try setting up a new instance just to see (and will reply with how that went).

                                                1. 7

                                                  Update: indeed, it’s a lot easier to get working now, after some work on both the woodpecker and podman side. I have a full installation running well in ~35ish lines of quadlet unit and environment files. Not perfect, but definitely a lot nicer than it used to be.
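
                                                  For anyone wanting to try the same, the core of it is roughly this shape of quadlet unit (the image tag, port, and paths here are illustrative guesses, not copied from my actual unit):

                                                  ```ini
                                                  # ~/.config/containers/systemd/woodpecker-server.container
                                                  [Container]
                                                  Image=docker.io/woodpeckerci/woodpecker-server:latest
                                                  PublishPort=8000:8000
                                                  Volume=woodpecker-data:/var/lib/woodpecker
                                                  EnvironmentFile=%h/.config/woodpecker/server.env

                                                  [Install]
                                                  WantedBy=default.target
                                                  ```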

                                                2. 1

                                                  I’m making some progress on my numeric tower implementation for Janet.

                                                  IME, making something that works only takes 1 or 2 rewrites, while making something that’s nice takes about 8. I’m on the 4th rewrite right now, which is to say that it works, but I’m not happy enough with it yet.

                                                  1. 5

                                                    It interests me that compared to media where ornament, illusion, and excess are valued by some critics (architecture, painting, film, basically any given fine or liberal art) everyone in software seems to agree that simple code is better. It’s somewhere between an axiom and a thought-terminating cliche. Certainly it’s hard to find anyone arguing that code should be complex. And so Socrates asks, innocently: if everyone agrees code should be simple, then all code is simple, right? Because nobody would write any other kind.

                                                    1. 29

                                                      Everyone agrees code should be simple. Everyone agrees it should be as simple as possible, but no simpler. No one agrees on what ‘as simple as possible’ means.

                                                      1. 9

                                                        Writing simple code is difficult. Adding another special case to fix an edge case bug or add one new feature is the easy path. Simplifying code requires thinking hard about the problem domain until we understand the details well enough to see and understand the patterns in the requirements. The day to day work of programming is often about tweaking existing code to adapt to changing requirements, which typically adds complexity; reducing that complexity again might require taking a step back and recognizing that the existing architecture is no longer fit for purpose and re-architecting portions of the code base.

                                                        Simplicity is something we strive for; complexity is something it takes effort to fight against, as the natural evolution of a code base is to increase in complexity.

                                                        There’s also the problem that the value of simplicity isn’t innate, it has to be learned. People who haven’t had to work a lot with someone else’s code, or with code bases too big to fit in your head yet, naturally don’t understand the perils of complexity. They’re often quick to reach for things like unnecessary layers of abstraction (aka layers of obstruction). And most people, when given a task in a task tracking system, will focus on getting that task done; most people aren’t, or don’t perceive themselves to be, in a position to unilaterally decide, “implementing this feature in the current architecture would increase complexity too much, it is time to restructure this part of the code”.

                                                        1. 3

                                                          Some of us strive harder than others, though. And we’re certainly not all equally incentivized to restructure and refine and improve existing code. Too often we are paid only to implement features good enough to ship, and then move on to the next ticket. Only when things start to break do we return to that code, and then only long enough to apply the duct tape.

                                                        2. 6

                                                          I think you’re comparing one-person-or-small-group-art with programming-as-part-of-a-large-group.

                                                          The “simple code is better” mantra comes from the world of “professional software development”, where it’s a given that you’re paid to produce code that can be maintained by people coming after you, that you’re part of a larger-than-you software lifecycle.

                                                          Code that is written for fun, like the “advent of code” answers, a ray-tracer one makes over the weekend, a one-man-project video game… none of that has to be simple, people don’t push for simplicity there as heavily in my opinion.

                                                          In fact, the complex, “I solved advent of code in sql” type solutions seemed to garner far more praise than the “I ran ‘numpy.solve_graph’” solutions, so clearly there’s some appreciation for complexity.

                                                          Going back to art, there also is “art done as part of a bigger machine”. The people making hallmark cards, the people drawing keyframes for a disney movie… I have little doubt that those jobs, just like “software development”, value working in a way that fits into the larger company (i.e. simple processes, minimal individualism).

                                                          1. 5

                                                            I don’t know about this. The code that I’ve written for fun with the intent of open-sourcing it has tended to be a lot simpler than code that I’ve written professionally as part of a team. For solo projects where I don’t have a deadline, I can take the time to polish everything as much as I want, rewrite pieces to simplify them, rewrite bits to unify concepts, and so forth. Professional code tends to be more about writing it and moving on to the next task.

                                                            1. 3

                                                              I don’t think it’s a one person or team thing, the key bit in your post for the second category is that this is code no one is maintaining. You don’t take your advent of code project from this year and use it as a starting point next year. By Christmas, it’s done and you throw it away.

                                                              Even for projects where I’m the sole contributor, I value simple code if I’m going to come back to it later and need to understand it. Especially if it’s not something I’m spending much time on between hacking times, so I want to be able to get back up to speed quickly.

                                                            2. 5

                                                              The value of simple code is something you learn at a certain level of experience, though… I can’t say I knew simple code is good code 10 years ago when I was just starting out. Especially when most learning materials were focused on object orientation rather than simplicity.

                                                              everyone in software seems to agree that simple code is better.

                                                              I don’t know what “everyone” you’re referring to, but I’ve met plenty of people who were not on board with this, even despite their seniority.

                                                              1. 5

                                                                The term “simple” is just useless. It has two meanings: 1 = simple as easy to use or easy to understand, and 2 = simple as consisting of few parts, basic, primitive. They get mixed up all the time.

                                                                People often create something that is simple(basic), but imply it gives them simple(easy). And this often isn’t true. If you have something simple(basic) that doesn’t handle all the edge cases, you will breed complexity where the edge cases happen. Once you add support for all the edge cases, it may be more robust and simple(easy to use), but it’s not simple(basic) any more.

                                                                This is why people keep reinventing frameworks, CMSes, ORMs, game engines, GUI toolkits, build systems. Every new project starts out as simple(basic), and because it’s written exactly to the author’s needs, and hasn’t been battle-tested much, it also seems simple(easy) to the author. The author proclaims that their framework is the simple one, with none of the bloat and complexity of the other framework. Once the new framework matures, fixes “todo: hack”s, and handles more stuff to make more things simple(easy) with it, it stops being simple(basic), and someone starts the cycle again.

                                                                And if you’ve ever worked in a web agency, you’ll know that all the clients that ask for “just a simple website” mean they want a simple(easy) website, but only have a budget for simple(basic).

                                                                1. 5

                                                                  If you have something simple(basic) that doesn’t handle all the edge cases

                                                                  Then you have a big fat bug and failed to fulfil the requirements, of course working around it is going to be more complex than actually fixing the bug.

                                                                  That said, we often overestimate the complexity necessary to handle edge cases. Done well, many edge cases can be merged into the common case. At least if we avoid silly things like throwing an exception on empty lists, when we could instead just do nothing, or return an empty list.
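
                                                                  A trivial shell example of merging the edge case into the common case: a loop over the arguments already “handles” zero arguments by doing nothing, no guard needed:

                                                                  ```shell
                                                                  # sums its arguments; zero arguments is just the common
                                                                  # case (sum = 0), not an error to special-case
                                                                  sum() {
                                                                    total=0
                                                                    for n in "$@"; do
                                                                      total=$((total + n))
                                                                    done
                                                                    echo "$total"
                                                                  }
                                                                  sum 1 2 3   # prints 6
                                                                  sum         # prints 0
                                                                  ```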

                                                                  This is why people keep inventing frameworks, CMSes, ORMs, game engines, GUI frameworks.

                                                                  Perhaps the mistake is generalising the framework? It would make sense that a special purpose framework can be much simpler than one that is supposed to handle everyone’s use case, so perhaps we should keep home made frameworks in their intended niche?

                                                                  1. 3

                                                                    Done well, many edge cases can be merged into the common case.

                                                                    When you have many similar but not identical cases, merging them into something common becomes an abstraction layer. This itself can become a point of contention — it’s now simpler(easy) to handle all the cases, but every indirection and smart solution moves it further away from simple(basic).

                                                                    For example, golang is loved for being simple(basic), even though ad-hoc concurrency with channels can be difficult to get right (e.g. you can corrupt data if you fail to use atomics and locks where necessary). Languages that make this aspect simple(easy) are not simple(basic).
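                                                                     As a minimal illustration of that Go footgun (a hypothetical shared counter, not from the thread): plain increments from many goroutines are an unsynchronized read-modify-write and can lose updates, while sync/atomic keeps the count exact.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countConcurrently spins up n goroutines that each increment two counters:
// a plain int64 (a data race; increments can be lost) and an atomic.Int64.
// It returns the atomic count, which is always exactly n.
func countConcurrently(n int) int64 {
	var racy int64 // racy++ below is an unsynchronized read-modify-write
	var safe atomic.Int64

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			racy++      // data race: `go run -race` flags this; final value may be < n
			safe.Add(1) // atomic increment: no updates lost
		}()
	}
	wg.Wait()
	_ = racy // deliberately unused: its final value is unreliable
	return safe.Load()
}

func main() {
	fmt.Println(countConcurrently(1000)) // always prints 1000
}
```

                                                                     The simple(basic) language gives you both spellings and doesn’t stop you from picking the racy one; a language that rules the race out by construction is simple(easy) here, but no longer simple(basic).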

                                                                    Perhaps the mistake is generalising the framework?

                                                                    Probably. But you will be seeing limitations that your solution keeps bumping against, and seeing patterns of circumstances where it keeps failing, and it’s hard to consistently say “no” to fixing your biggest problem.

                                                                2. 3

                                                                  I’ve seen many instances of people not agreeing on this, but IME it’s typically a misunderstanding. They’ll presume that the simplest implementation is whatever comes into their heads first (the article’s point that this is not the case actually needs to be made), and indeed, that’s often not the best.

                                                                  In other cases, I’ve seen “simple” be conflated with “easy to implement”. For example, I’ve seen an authorization mechanism that’s essentially a stack based virtual method call machine, each one doing a database request. It’s “simple” because to utilize it, you just add a string to an array, and it chugs along from there, but in practice, it’s a disaster.

                                                                  Put differently, the problem we’re facing is indeed epistemic in nature, where the essence of simplicity as agreed upon by people that discuss it is non-transferable to those that are not predisposed to this type of meta-thought, and consequently, communication breaks down. A classic problem of misconception with extremely high cost.

                                                                  We can theorize about what would happen if this kind of information was transferable, but we can also wonder what would happen if pigs could fly, so in the end, this is just how it is. I think the kind of outreach the article tries to do, as well as explicit on-the-job training when possible, is one possible way out.

                                                                  1. 3

                                                                    In other cases, I’ve seen “simple” be conflated with “easy to implement”. For example, I’ve seen an authorization mechanism that’s essentially a stack based virtual method call machine, each one doing a database request. It’s “simple” because to utilize it, you just add a string to an array, and it chugs along from there, but in practice, it’s a disaster.

                                                                    This very article conflates simple with “easy to implement”:

                                                                    Simpler code is faster to write & maintain. That’s cheap.

                                                                    I’ve found many times in my career that the “simple” thing was harder to implement initially, and therefore not the cheapest way. It has probably proven to be the easiest to maintain in every case, but not so frequently “faster to write”.

                                                                    1. 2

                                                                      This very article conflates simple with “easy to implement”:

                                                                      No I do not. In fact, I’m saying exactly what you just said. From the article:

                                                                      See, the simplest solution to a problem is rarely the first one that comes to mind. In the short term it’s this solution, not the simplest, that is cheapest to implement.

                                                                      Though simple code and good tests are cheap in the long term, they significantly slow down the start of any project, and quite visibly so. Promises of speed up are always a couple weeks ahead, and actual return on investment can take even longer.

                                                                      1. 2

                                                                        Sorry I missed that! I think my eyes jumped over that after I read the bullets (that I quoted in my earlier reply) to read the testing section. The testing section (particularly property testing) was the part of the essay that I found most interesting, and I hurried through the rest.

                                                                  2. 3

                                                                    And so Socrates asks, innocently: if everyone agrees code should be simple, then all code is simple, right? Because nobody would write any other kind.

                                                                    There are too many forces in opposition to simple code. Skill, time and (by extension) money. But also the unstoppable march of feature creep. More requirements means more complexity usually. And the most elegant, simplest designs are often the hardest to integrate new features into, especially if the features need cross-cutting information.

                                                                    1. 2

                                                                      “if i had more time, i would have written a shorter letter” – wayne gretzky, probably

                                                                      1. 2

                                                                        I remember taking a class on civil engineering in college. One of the anonymous quips that was quoted was:

                                                                        Any idiot can build a bridge. It takes an engineer to build a bridge that barely stands.

                                                                        It’s one that I often think of while working on a software system.

                                                                        To me, simple code means that the code structure uses the minimum possible set of structural elements to handle all reasonable inputs – where it’s been reduced until the point that taking any single one of the remaining elements away will make the whole thing collapse and fail on the majority of inputs. That also means not special casing anything, since if you take away the special case handling it will only fail for some edge cases.

                                                                        1. 1

                                                                          Complicated code is the wrong tool but it often does the job.

                                                                        2. 1

                                                                          This is already on my TODO project list, albeit not as part of a C compiler, but as part of a project manager for Janet (I’m not particularly happy with jpm as it is currently). If that experiment goes well, I will most likely consider other integration scenarios though!

                                                                          1. 29

                                                                            I’m gonna learn how to (consistently) finish what I start.

                                                                            I’ll start with a non-exhaustive list to track all of the things that I’ve started and not finished, so I can explicitly prune the ones I don’t intend to work on, and at least have something to go back to whenever I do have the energy to work on something.

                                                                            1. 15

                                                                              If you figure out how, please let me know :’(

                                                                              1. 8

                                                                                I’ve already made some progress in 2024 on it, so I’ll share a little bit here to help out.

                                                                The core insight is that I’m much more likely to finish something if I see consistent progress being made as I work on it, while my “typical” approach has been to ruminate on something for ~3 years and then “draw the rest of the owl” in a single weekend (see: most bunker labs posts as one example). The reason said typical approach “works” is it essentially makes it so all of the potential barriers to finishing the thing are gone (as I’ve already “finished it in my head”), so I can just go straight to the final product. This obviously doesn’t work for a large subset of problems (i.e. anything but the kind of research PoC I “usually” tend to put out).

                                                                Once I got that insight out of the way, the question became “how do I reduce how much issue resolution I need to do ahead of time, so that I can take on longer projects?”. What I’ve tried in 2024 so far has been REPL-driven development (to a greater degree than I’ve been doing previously, full on Conjure integration, proper setup, etc) and it definitely helped a bit.

                                                                                In 2025, I’m going to try and “practice finishing”, in the sense of trying to actually either eliminate projects or finish them before moving on to new stuff. I’ve actually been trying to force myself to make a new blog post every month (even if it doesn’t measure up to my usual standards; it’s a different medium), and I also fully completed AoC with a time limit per day (minus the last day to mop up the couple of bits that I had left to do). It’s too early to tell for these yet, but I suspect at the end of next year I’ll see some progress there.

                                                                                Hope that helps at least a bit (it strongly depends on whether the source for your struggles resembles mine). Happy new year!

                                                                                1. 1

                                                                                  If either of you are interested in going nuclear, I can recommend Beeminder which I’ve been using for over a year now: https://www.beeminder.com

                                                                  In short, you set a goal, like, say, writing 1 blog post a month, and if you don’t reach the goal, you get charged $x. The aim here isn’t to never lose money but to reach a point where you’re sufficiently incentivised (financially) to do whatever your goal is.

                                                                  It’s pretty nutty at first but once you get into it, it becomes a pretty foolproof setup where I know if I stick something in Beeminder, it’ll get done whether I like it or not.

                                                                                  The founders have been around for over a decade as well and if you derail (don’t hit your goal) for a legitimate reason like you were sick, there are no questions asked around refunds.

                                                                                  It’s less of a traditional business and more of an economics experiment let loose but it works for a lot of people. There’s plenty of theorising on their blog as well: https://blog.beeminder.com/defail/

                                                                                2. 4

                                                                                  I had the same problem (that I would be working on too many things and would start new things before finishing existing things) so I instituted a system like OP in Things.app.

                                                                                  I only use projects for things that will require long and sustained effort and I put them into areas called Now/Next/Later. The idea is to work only on things in Now and to finish stuff before pulling in new things.

                                                                                  Each Things project has metadata and a slug that is also the tag for all mails relating to it and the subfolder in my projects directory etc.

                                                                                  Here’s how it looks: https://share.cleanshot.com/wHfd1shv

                                                                                  1. 2

                                                                                    One thing I’ve found helpful is I explicitly list out next steps of my project in Obsidian as a checklist.

                                                                                    What I realized is that planning the project requires executive function, which can be worn down by the time I get time to push a project forward at the end of the day. These steps can be very simple or more complex. The goal is to get them down somewhere so that your brain doesn’t have to carry them around anymore. I usually write these early in the day if needed, when I have the most clarity.

                                                                                    Essentially, I’m coaching myself.

                                                                                    1. 1

                                                                                      Oof - this is a (helpful!) tough observation to hear, because I know this (intellectually) from reading Getting Things Done, and now recognize that I’ve been dropping the ball in a way that I should have already known. Thank you for the prompt to pay more attention to my organizational systems and repair them to a more useful state!

                                                                                      1. 3

                                                                                        I only realized it when coaching some coworkers. I tried it for myself and it really helps me.

                                                                                        Just tried doing a little dev over winter break without it and I definitely missed it! Easy to get lost in mazes of my own creation inadvertently.

                                                                                  2. 4

                                                                                    Good luck, you’ll have beaten me to it :)

                                                                                    A mental trick I learned is to explicitly decide to be done with something, and to let that count as finished too.

                                                                                    1. 1

                                                                                      I think it all boils down to determination. If you believe in yourself and commit to finishing everything you start, you will quickly arrive at the conclusion that you shouldn’t start anything.

                                                                                    2. 60

                                                                                      Lua is overrated. I’m always disappointed by it. It reminds me of Go in having a cool underlying runtime/interop but a terrible language on top.

                                                                                      1. 29

                                                                                        Unfortunately, I have to agree. It’s got great runtime semantics: lexical scope, proper closures, real first-class functions and tail-calls, and the way it handles closures internally is quite elegant. Most importantly for an embedding language, it’s got a really minimal build system that makes it really easy to integrate and its C API is quite nice. But the language has a lot of footguns, some of which can be avoided with better surface syntax. I usually reach for fennel.

                                                                                        1. 14

                                                                                          You might be interested in Janet. The author is still Calvin Rose (the guy who wrote fennel), the VM and runtime is reminiscent of Lua’s (I find the C API to be much better, and the fiber concept is an interesting alternative to lua’s coroutine implementation), while being much less footgun-y as a language.

                                                                                          1. 2

                                                                                            I’m curious, what did you find to be better about the C API? In my experience it was a lot more difficult to drive Janet from C than driving Lua. It did seem to handle the other direction pretty well though.

                                                                                            1. 1

                                                                              My use-cases tend to be the other direction, indeed: I wrote Jurl (an http client; I’m actually quite proud of the high level API design, even if the internal implementation could use some work) and janet-date (proper datetime support).

                                                                                              Lua’s stack juggling when writing modules, lack of strong support for documentation and defining sources, and how you have to coerce the types to do what you want isn’t great. Janet’s abstract types are a really great way to encapsulate C objects in a way that feels native (in Janet), and the small but capable internals library means that you get to skip some of the hardest work in writing bindings (argument validation).

                                                                                              Anyway yeah, I admit I haven’t actually embedded Janet all that often (most cases thereof for me are jpm-built CLIs, so Janet is still the driver in a lot of ways) so I don’t have strong opinions in that direction.

                                                                                            2. 1

                                                                                              This looks really good!

                                                                                          2. 10

                                                                                            To each their own, I happen to really love it. Not gonna try to convince you otherwise, but would love to know what scripting/embeddable language you prefer. Not picking a fight or flamewar, just curious. Besides Lua, I enjoy Janet a lot.

                                                                                            1. 5

                                                                                              Common Lisp is an excellent language, and is easily embeddable using the ECL implementation:
                                                                                              https://ecl.common-lisp.dev/main.html

                                                                                            2. 5

                                                                              Lua is not perfect, but it’s not terrible either.

                                                                                              The fact that a neat little Lisp like Fennel maps so nicely onto Lua precludes it from being terrible. It’s also too small to be terrible. You need a larger language surface to have a chance at being truly terrible.

                                                                                              1. 5

                                                                                I’ve never been drawn to Lua, but I really like Go–in particular, its minimalism and simplicity make it really easy to get stuff done. If Lua is Go-like, I suppose I should give it a second look.

                                                                                                1. 3

                                                                                  For what it’s worth, I enjoy both Go and Lua, and for similar reasons: both are languages that fit in my head, without too much syntax or magic, and the code “does what it says” for the most part. Easy to read other people’s code.

                                                                                                2. 5

                                                                                                  Yeah… Another big wart: variables being global by default. The local variable attribute (local x <const>) in Lua 5.4 is kinda strange.

                                                                                  Using doubles as ints, then introducing 64-bit integer types in Lua 5.3, seems to introduce a lot of optimization hurdles. It’s not favored by the LuaJIT creator.

                                                                                                  1. 4

                                                                                                    a terrible language on top.

                                                                                                    just curious why do you think so? your github is full of Go projects, wat?

                                                                                                    1. 6

                                                                                                      You can have projects written in a language you don’t like. If anything, an opinion on whether a language is good is more believable coming from someone who has maintained many Go projects than from someone who hasn’t.

                                                                                                      1. 3

                                                                                                        Yes definitely. I’m very qualified to say Go is bad.

                                                                                                        1. 1

                                                                                                          Care to expand your “a terrible language on top” ? I’m really interested in your opinion.