Threads for axelsvensson

  1. 2

    I dislike that content needs to be escaped. It is not ergonomic, nor is it necessary. How about optional escaping for delimiters, and never any escaping for content? See

    1. 4

      This, along with yellow on white, is one of the big reasons why I wrote my own terminal emulator. My solution went beyond changing the palette: it uses different palettes for different combinations. So if you print blue on black, it will use a different blue than if the same code appears on a white background, in order to ensure both can easily be seen.

      Of course, I wrote it for me, so I just did what works for me without worrying about what palette others want or backward compatibility etc.

      1. 2

        Interesting, how are the effective colors calculated?

        1. 1

          For the basic 16 colors, I put in some hand checks, like if(foreground==x && background==y) foreground=z special cases. For 24-bit color, I quantize it back to a smaller palette and do an HSL contrast check, which isn’t perfect, but 24-bit color is useless to me, and any application that uses it probably doesn’t deliver value to me anyway, so I don’t really care if it doesn’t work quite right.
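
          A minimal shell sketch of that kind of per-combination override table (the slot numbers and substitutions here are made up for illustration, not the actual table used):

          ```shell
          # Hypothetical override table: given (foreground, background) ANSI
          # slot numbers, substitute a more legible foreground before rendering.
          adjust_fg() {
              case "$1,$2" in
                  4,0) echo 12 ;;   # blue on black   -> bright blue
                  3,7) echo 1  ;;   # yellow on white -> red (illustrative choice)
                  *)   echo "$1" ;; # everything else passes through unchanged
              esac
          }

          adjust_fg 4 0   # prints 12
          ```

          Doing this in the emulator rather than per application means every program benefits without reconfiguration.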

      1. 0

        It is fantastic, but not generic, and not fast.

        Why do it at the AST level? I would think it would be possible to produce a practically as good diff at the text level. Which would work on any text, possibly faster.

        1. 13

          Based on the first line of the post, it does not appear the author was attempting to solve your problem:

          I’ve always wanted a structural diff tool, so I built difftastic.

          I’m not sure if you’re attempting to provide constructive criticism. The author said he always wanted to design a more comfortable seat, and you told him he failed to build a faster car.

          1. 4

            Also, it does not seem like a well-posed problem – “non-line based diff tool and format for text in general (that could work on words or characters)”

            i.e. It’s one of those things where you’re probably imagining something that not only doesn’t, but can’t exist, and then if you actually tried to build it, you would realize that’s not the problem, and you would end up building something else

            1. 2

              I don’t know why a non-line based diff format is so hard to imagine. When you see red and green words in the terminal, you are looking at one form of it: Color escape codes. It works on words and characters, and the human readability is excellent. It’s probably the most universal diff format in terms of support, except it’s just an output format.
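
              git can already emit that form of word-level, colorized output directly; a throwaway-repo demo:

              ```shell
              # Demo in a throwaway repo: word-level diff rendered with color
              # escape codes (--word-diff=color) or with {+added+}/[-removed-]
              # text markers (--word-diff=plain), which survive copy-and-paste.
              cd "$(mktemp -d)"
              git init -q
              echo "the quick brown fox" > f.txt
              git add f.txt
              git -c user.name=demo -c user.email=demo@example.com commit -qm init
              echo "the quick red fox" > f.txt
              git diff --word-diff=color   # changed words in red/green, not whole lines
              git diff --word-diff=plain   # [-brown-]{+red+} markers
              ```

              The plain variant is arguably a step toward a format rather than just an output, though it is still lossy compared to a real patch format.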

              1. 2

                I agree, @anordal. I am working on a diff format that I hope will eventually facilitate a common language between diff tools like difftastic, and generic tools for patching and visualization. It’s explicitly not just an output format. The idea is simple but it still has difficult trade-offs. I’d be delighted if you want to help out:

                1. 1

                  Not the parent, but it seems to me that looking for sub-line diffs is more poorly specified than line-based. Once you allow sub-line diffs, the question becomes how you decide whether to prefer a partial line edit to treating the change as deleting a line? It seems to me that there are a lot of different metrics you could use, with wildly different results.

                  1. 3

                    The two problems are exactly equivalent: Find an edit that transforms one sequence to another. Whether that sequence consists of lines, lexer tokens or characters, you can get several possibilities where selecting the “best” one is difficult, especially since the minimum edit isn’t always the best one in practice.

              2. 3

                “Couldn’t I do this with a text based diff?” is a sufficiently common question that I’ve also discussed it in the FAQ:

                People have been building and optimising text-based diffs for decades, so there are plenty of great options for that use case already. I personally like git-diff with the patience algorithm.
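
                That combination is a flag (or a one-time config) away; a throwaway-repo sketch:

                ```shell
                # Patience diff in git: per-invocation flag or a repo default.
                cd "$(mktemp -d)"
                git init -q
                printf 'a\nb\nc\n' > f.txt
                git add f.txt
                git -c user.name=demo -c user.email=demo@example.com commit -qm init
                printf 'a\nx\nc\n' > f.txt
                git diff --diff-algorithm=patience   # one-off
                git config diff.algorithm patience   # repo default (add --global for all repos)
                ```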

              3. 3

                I think you’re casting the problem as something akin to sequence alignment, if I’m understanding correctly? My mental model is that, without lines, you treat the two strings you’re diffing as something akin to DNA sequences and you’re trying to find the optimal alignment that minimizes Levenshtein distance?

                If not, please elaborate. If so, though, cool—that’s how I used to think about diffing too! I, like OP, was pretty unhappy with my git diffs. But I think the barrier to diffs that happen at the “character level” is the algorithmic complexity. I think even the fast heuristic algorithms that do this without “word splitting” are something like O(m * n), where m and n are the lengths of the two strings. DNA databases do speed this up with “word splitting” heuristics, but that just means your algorithmic complexity is O(m * n) where m and n are now the numbers of words.

                In other words, I think doing this at the “line level” vs. doing this at the “AST level” vs. doing this at the “character level” comes down to a time vs. quality tradeoff. You probably can produce amazing diffs without considering lines or ASTs, but it probably takes a while.

              1. 2

                Indentation is one way to avoid escaping. Dynamic quote sequences is another way. Compared to indentation, the advantage of dynamic quote sequences is that you don’t even need indentation, and can therefore cut-and-paste freely with no special editor support for indentation-aware pasting. The disadvantage is that you need to come up with a quote sequence, which can feel less clean than indentation. I tried explaining it here:

                1. 1

                  Interesting. I think I use a version of “dynamic quote escaping”. For example, I often will save delimited data as “table |” or “table ,” to indicate that the following data is pipe or comma delimited.

                  Would you say it is equivalent to: encode(string) => fn(encodedString, escapeRules)

                  Where encode scans the string to be encoded, chooses a safe delimiter(s), and then returns the untouched string along with the delimiting rules?

                  1. 2

                    Not sure I understand the question well enough to answer; what is fn?

                    You could certainly create a string quoting function that takes an arbitrary string, scans it to choose a safe delimiter, and uses that to return a quoted form of the string without any escaping. Such a quoting function, implemented for example in plpgSQL, could behave as follows:

                    quote('abc')     => $$abc$$
                    quote('ab$$cd')  => $a$ab$$cd$a$
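
                    The same tag-picking idea can be sketched in portable shell (the function name and the tag-growing strategy, appending a's, are illustrative rather than PostgreSQL's actual algorithm):

                    ```shell
                    # Hypothetical sketch: pick a dollar-quote tag not occurring
                    # in the input, then emit the string with no escaping at all
                    # (PostgreSQL-style dollar quoting).
                    dollar_quote() {
                        s=$1
                        tag=''
                        # Lengthen the tag until "$tag$" no longer occurs in the input.
                        while case "$s" in *"\$$tag\$"*) true ;; *) false ;; esac
                        do
                            tag="${tag}a"
                        done
                        printf '$%s$%s$%s$\n' "$tag" "$s" "$tag"
                    }

                    dollar_quote 'abc'      # $$abc$$
                    dollar_quote 'ab$$cd'   # $a$ab$$cd$a$
                    ```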
                1. 30

                  aka “It’s time to make programming easier by changing reality”

                  I feel like, in this case, we could also make programming easier by changing programming. The root cause of this isn’t leap seconds per se, but the fact that the de-facto-standard computer timekeeping system doesn’t understand them, and we hacked it up in such a way that they completely break everything.

                  If UNIX time counted actual, rather than idealized, seconds, most things would become easier. (That is, for each tick of a naïve clock, the current UTC second is labelled with the numerically next integer). Converting the current time without current leap second data would be wrong. But clocks don’t need to care about this, only frontend systems do, and in 2022 those run lots of things that need to be updated more frequently than every six months.

                  1. 8

                    The article doesn’t bring it up, but the problems with leap seconds can’t all be solved by better programming. Since leap seconds are only announced about 6 months in advance, the number of seconds between now and a UTC timestamp more than 6 months in the future is unknown. Therefore, if Unix time counted actual seconds as you suggest, it would be impossible to convert UTC to Unix time for such future timestamps. That would mean that calendars and other applications that need to represent future timestamps couldn’t use Unix time.

                    As I see it, the root cause of this problem is that civil time, when used to express plans more than 6 months into the future, is undefined. Better programming can’t fix that.

                    1. 18

                      The date and time of an event in several years can’t be defined in terms of seconds from now, but you can easily define it in terms of date, time and timezone.

                      1. 1

                        You cannot easily define the time and date of an event in the future in terms of date, time, and timezone! This has nothing to do with UNIX timestamps being stored in seconds.

                        If you care about the elapsed time, you cannot count the amount of actual time that will pass from now to some date even a year from now. Not with precision at the level of seconds or less. X amount of time into the future doesn’t map to a fixed date, time, and timezone because we’re redefining time constantly with these leap seconds.

                        FB is right, kill the leap second.

                        1. 9

                          This goes beyond leap seconds. With a fixed date, time, and timezone, the timezone can change, and does with some regularity.

                          Unless we kill political control of timezones, this will still need to be taken into consideration.

                          1. 1

                            To some extent that’s true, but not generally.

                            The definition of UTC-5, modulo leap seconds, doesn’t change. In that sense removing leap seconds does allow you to compute future times just fine. If I have a device that I need to check 5 years from now, I know exactly what time that will be in UTC-5, modulo leap seconds.

                            Now if you mean timezone in the sense of EST/EDT, then plenty of time zones have not changed in well over a century and it’s hard to see them ever changing. Perhaps ET may change by fixing it to EST or EDT, but generally, as countries become more developed they stop making these changes because of the disruption to the economy. Check out

                            So yes, political control of timezones is actually being killed as the economic consequences of changing them become severe. Things are slowly freezing into place, aside from leap seconds.

                              1. 4

                                Basically, “18:30 on 2038-01-19 in the host system timezone” is the only more or less well-defined concept of a future date that is useful in practice. When that time comes, a system that is up to date with all natural and political changes can correctly detect that it came.

                                Applications that deal with arbitrary time intervals in the future like “2.34 * 10^9 seconds from now” should use a monotonic scale like TAI anyway, they don’t need UTC.

                                1. 2

                                  Eh, scheduling meetings a year or two in advance can happen, and it could be well defined and useful. But it’s important to note that the further into the future something is happening, the less the accuracy matters, unless it’s astronomy at which point you have TAI or UT1 depending on context.

                                  1. 1

                                    Except that there is no safe way to compute TAI on most systems.

                                    1. 1

                                      A GPS receiver costs $1000 at most. If you need precise timekeeping, it’s what you’re going to use, and it’s just GPS_time + 19 s to get to TAI. Big companies run their own NTP pools for reliability, and if you have your own pool, you can run it at TAI.

                                  2. 2

                                    I’ve seen that! It’s what I meant about ET changing its definition. It’s far from done, sadly :( The House seems to have abandoned the bill.

                                    In any case, the problem is with redefining time zones not dropping them.

                              2. 3

                                Can you elaborate on this, I’m really curious why is it so? I was under the impression that if we say a meeting will happen on August 1st, 2050, at 3.30pm CEST, in Bern, Switzerland, not many things can make this ambiguous. If Switzerland stops using CEST, I’ll probably just switch to the replacement timezone. The reason I’m confused is that I don’t see how leap seconds play any role.

                                1. 4

                                  It is ambiguous because extra seconds of time may be inserted between now and then. So no one can tell you how long from now that time is (in seconds).

                                2. 2

                                  In what situations do you need to know the exact number of seconds to a future (civil) time more than a year in the future?

                              3. 14

                                As I see it, the root cause of this problem is that civil time, when used to express plans more than 6 months into the future, is undefined.

                                Civil time is not “undefined”. Definitions of local civil time for various locations may change, but that’s not the same thing at all as “undefined”.

                                I also don’t generally agree with “better programming can’t fix” – the issue simply is programmers demanding that a messy human construct stop being messy and instead become perfectly neat and regular, since we can’t possibly cope with the complexity otherwise. You slip into this yourself: you assume that the only useful, perhaps the only possible, representation of a future date/time is an integer number of seconds that can be added to the present Unix timestamp. The tyranny of the Unix timestamp is the problem here, and trying to re-orient all human behavior to make Unix timestamps easier to use for this purpose is always going to be a losing proposition.

                                1. 7

                                  As I see it, the root cause of this problem is that civil time, when used to express plans more than 6 months into the future, is undefined. Better programming can’t fix that.

                                  This is true to an extent, but I think it’s true independently of leap seconds. The timezone, and even the calendar, that will apply to dates in the future are also undefined.

                                  I also think it’s not the whole story. It seems intuitively reasonable to me that “the moment an exact amount of real time from this other moment” is a different type from “the moment a past/future clock read/reads this time”, and that knowledge from the past or future is required to convert between the two. I think we’ve been taking a huge shortcut by using one representation for these two things, and that we’d probably be better off, regardless of the leap second debate, being clear which one we mean in any given instance.

                              1. 13

                                I’m quite skeptical of the real world value of 24bit color in a terminal at all, but the biggest problem I have with most terminal colors is they don’t know what the background is. So they really must be fully user configurable - not just turn on/off, but also select your own foreground/background pairs - and this is easier to do with a more limited palette anyway.

                                I kinda wish that instead of terminal emulators going down the 24 bit path, they instead actually defined some kind of more generic yet standardized semantic palette entries for applications to use and for users to configure once and done to get across all applications.


                                1. 4

                                  I’m quite skeptical of the real world value of 24bit color in a terminal at all

                                  I have similar misgivings, but I admit to liking the result of 24-bit colour. It’s useful! I just don’t like how it gets there.

                                  Something that is a never-ending source of problems with the addition of terminal colours in the output of utilities these days is that in almost every case they are optimized for dark mode. I don’t use, nor can I stand, dark mode. It is horrible to read. But as a result, the colour output from the tools is unreadable. bat is the most recent one I tried. I ran it on a small C file and I literally couldn’t read most of the output.

                                  Yes, you can configure them but when they are useless out-of-the-box, the incentive is very low to want to configure everything. And then, I could just… not configure them and use the standard ones that are still just fine.

                                  Terminal colours are really useful. I find 24-bit colour Emacs in a terminal pretty nice. It’s the exception. Most other modern terminal tools that produce colour output don’t work for me because they can’t take into account my current setup.

                                  Having standard colour palettes that the tools could access would be much better.

                                  1. 4

                                    I’ve started polling my small sample size of students, and they almost unanimously prefer dark mode. I suspect this is most people’s preference, which is why it’s the default of most tools.

                                    Personally I prefer dark because I have a lot of floaters in my eyes that are distracting with light backgrounds. For many years I had to change the defaults to dark.

                                    That said, I like to be able to toggle back and forth between light and dark. When I’m outside in the sun, or using a projector, light mode is critical. This is made difficult by every tool using their own color palette rather than the terminal’s. Some tools can be configured to do so, and maybe that should be their default.

                                    1. 5

                                      I suspect this is most people’s preferred which is why it’s the default of most tools.

                                      Back when I was in undergrad (~25 years ago), light mode was what everyone used. Then again, it was always on a CRT monitor and was the default for xterms everywhere. If you got a dark theme happening, it attracted some attention because you knew what you were doing. People did it to show off a bit. (I did it too!)

                                      Then I got older and found dark backgrounds remarkably difficult to read. I haven’t used them for well over 15 years. I simply cannot read comfortably with such colour schemes, which is why I have to use reader view or the zap colours bookmarklet all the time.

                                      I’m not saying dark mode is bad, but I am saying it’s probably trendy. I suspect things will swing in a different direction eventually, especially as the eyes of those who love it now get older. (They inevitably get worse! Be ready for it.) So the default will likely change. In which case, maybe we should really consider not hard-baking colour schemes into tools and move the colour schemes to somewhere else, as you mention. This is the better way to go. As I mention elsewhere in the thread, configuring bat, rg, exa, and all these modern tools individually is just obnoxious. Factor the colour schemes out of the tools somehow. It’s a better solution in the long run.

                                      1. 1

                                        I too find light displays easier to read.

                                        From memory, the first time I heard of TCO-approved screens was when Fujitsu(?) introduced a CRT screen with high resolution, a white screen, and crisp black text. This was considered more legible and more ergonomic.

                                        (TCO is Tjänstemännens Centralorganisation, the main coordinating body of Swedish white-collar unions. Ensuring a good working environment for their members is a core mission.)

                                        1. 2

                                          What I find helps the most is reducing the blue light levels - stuff like f.lux works well.

                                          I’m also looking into e-ink monitors, but damn, they’re pricey.

                                    2. 3

                                      Yeah, I’m a fan of light mode (specifically white backgrounds) on screen most of the time too, and I actually found colors so bad that’s a big reason why I wrote my own terminal emulator. Just changing the palette by itself wasn’t enough; I wanted it to adjust based on the dynamic background too. Say an application tries to print blue on black: my terminal will choose a different “blue” than if it were blue on white. Having the terminal emulator itself do this means it applies to all applications without reconfiguration, it applies if I do screen -d -r from a white screen to a black screen (since the emulator knows the background, unlike the applications!), and it applies even if the application specifically printed blue on black, since that just drives me nuts and I see no need to respect an application that doesn’t respect my eyes.

                                      A little thing, but it has brought me great joy. Even stock ls would print a green I found so hard to read on white. And now my thing adjusts green and yellow on white too!

                                      Whenever I see someone else advertising their new terminal emulator, I don’t look for yet another GPU renderer. I look to see what they did with colors and scrollback controls.

                                      1. 2

                                        I got fed up with this and decided to do something about it, so after what felt like endless fiddling and colorspace conversions, I have a color scheme that pretty much succeeds at making everything legible, in both light and dark mode. It achieves this by

                                        • Deriving color values from the L*C*h* color space to maximize the human-perceived color difference.
                                        • Assigning tuned color values as a function of logical color (0-15), whether it’s used for foreground or background, and whether it’s for dark or light mode.
                                        • Assigning the default fg/bg colors explicitly as a 17th logical color, distinguished from the 16 colors assignable by escape sequences.

                                        As a result, I can even read black-on-black and white-on-white text with some difficulty.
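
                                        Terminals following xterm’s control sequences support setting those defaults independently of palette slots 0-15, via OSC 10 (default foreground) and OSC 11 (default background); the colour values below are arbitrary examples:

                                        ```shell
                                        # Set the default foreground/background (the "17th logical
                                        # color") via OSC 10/11, separate from slots 0-15.
                                        printf '\033]10;#1a1a1a\033\\'   # default foreground
                                        printf '\033]11;#fdf6e3\033\\'   # default background
                                        ```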

                                        Here it is:

                                        1. 2

                                          I had the same problem with bat so I contributed 8-bit color schemes for it: ansi, base16, and base16-256. The ansi one is limited to the 8 basic ANSI colors (well really 6, since it uses the default foreground instead of black/white so that it works on dark and light terminals), while the base16 ones follow the base16 palette.

                                          Put export BAT_THEME=ansi in your .profile and bat should look okay in any terminal theme.

                                          1. 2

                                            As I said, I could set the theme, but my point was that I don’t want to be setting themes for all these things. That’s maintenance work I don’t need.

                                            1. 1

                                              I definitely agree that defaulting to 24 bit colour is a terrible choice for command line tools, but when it’s a single environment variable to fix, I do think some (bat) are worth the minor, one-off inconvenience.

                                        2. 3

                                          I agree 100%. I think the closest thing we have to a standardized semantic palette is the base16 palette. It’s a bit confusing because it’s designed for GUI software too, not just terminals, so there are two levels of indirection, e.g. base16 0x8 = ANSI 1 = red-ish. It works great for the first eight ANSI colors:

                                          base16  ANSI  meaning
                                          ======  ====  ==========
                                          0x0     0     background
                                          0x8     1     red-ish/error
                                          0xb     2     green-ish/success
                                          0xa     3     yellow-ish
                                          0xd     4     blue-ish
                                          0xe     5     violet-ish
                                          0xc     6     cyan-ish
                                          0x5     7     foreground

                                          The other 8 colors are mostly monochrome shades. You need these for lighter text (e.g. comments), background highlights (e.g. selections), and other things. The regular base16 themes place these in ANSI slots 8-15, which are supposed to be the bright colors, which breaks programs that assume those slots have the bright colors.

                                          The base16-256 variants copy slots 1-6 into 9-14 (i.e. bright colors look the same as non-bright, which is at least readable), and then puts the other base16 colors into 16-21. It recommends doing this maneuver with base16-shell, which IMO defeats the purpose of base16. base16-shell is just a hack to get around the fact that most terminal emulators don’t let you configure all the palette slots directly; kitty does, so I use my own base16-kitty theme to do that, and use base16-256 for vim, bat, fish, etc. without base16-shell.
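
                                          For terminals that do allow it, repointing an individual palette slot at runtime is a short escape sequence (OSC 4), which is roughly what base16-shell emits under the hood; a sketch, with arbitrary colour values:

                                          ```shell
                                          # Repoint individual ANSI palette slots at runtime via OSC 4
                                          # (colour values here are arbitrary examples).
                                          set_slot() { printf '\033]4;%d;%s\033\\' "$1" "$2"; }

                                          set_slot 1 '#cc241d'   # slot 1: red
                                          set_slot 9 '#fb4934'   # slot 9: bright red
                                          ```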

                                        1. 10

                                          After every command that takes more than 10 s, I display the elapsed and finish times. This requires bash 4.4 or later.

                                          timer_file=`mktemp -t bash-timer.$$.XXXXXXXXXX`
                                          begin_timer () {
                                              date +%s%3N > $timer_file
                                          }
                                          end_timer () {
                                              local begin=$(cat $timer_file)
                                              if [ -n "$begin" ]; then
                                                  local end=$(date +%s%3N)
                                                  local elapsed=$(($end-$begin))
                                                  if [ "$elapsed" -ge 10000 ]; then
                                                      local ms=$(($elapsed%1000))
                                                      local s=$((($elapsed/1000)%60))
                                                      local min=$((($elapsed/60000)%60))
                                                      local h=$(($elapsed/3600000))
                                                      printf '\e[35m%i:%0.2i:%0.2i\e[2m.%0.3i %s\e[0m\n' $h $min $s $ms "`date -d @$(($end/1000)) '+%F %k:%M:%S'`"
                                                  fi
                                              fi
                                              echo > $timer_file
                                          }
                                          # PS0 (new in bash 4.4) expands right before each command runs
                                          PS0='$(begin_timer)'
                                          PROMPT_COMMAND="$PROMPT_COMMAND end_timer;"
                                          before_exit () {
                                              rm $timer_file
                                          }
                                          trap before_exit EXIT
                                          1. 1

                                            Why do you store the timestamp in a file rather than an env var? Maybe relatedly: how does this not break if you do something like sleep 20 &; sleep 15 &? I’d expect them to share $timer_file so the second command overwrites the filename for the former.

                                            (Also, thanks for this and your exit code comment, I have been idly wanting both features in my shell for years.)

                                            1. 1

                                              PROMPT_COMMAND executes before printing the prompt, so background jobs will not be timed. I wanted to use an env var but had trouble getting it to work, so if you do, please let me know!

                                          1. 17

                                            A really useful thing is to always display the error code unless it’s 0. Got used to it real quick and don’t know how I lived without it.

                                            prompt_show_ec () {
                                             # Catch exit code
                                             ec=$?
                                             # Display exit code in red text unless zero
                                             if [ $ec -ne 0 ];then
                                              echo -e "\033[31;1m[$ec]\033[0m"
                                             fi
                                            }
                                            PROMPT_COMMAND="prompt_show_ec; $PROMPT_COMMAND"
                                            1. 4

                                              Good point! I also have a prompt function for it, in zsh (screenshot:

                                              ### EXIT
                                              ### prints last exit code
                                              ### but only if non-zero
                                              colored_exit_code() {
                                                echo "%(?..${nl}%F{8}exit %F{1}%?)%f"
                                              }
                                              1. 1

                                                Indispensable for prototyping and shell scripting.

                                                I also like some of the oh my zsh themes and the like that will color code this for you as well.

                                              1. 3
                                                alias g="git"
                                                alias la='ls -lA --color=auto'
                                                alias sc='screen -xRS'
                                                alias sl='screen -list'
                                                # With this function you can explore the filesystem,
                                                # and display contents of both directories and files
                                                # without going to the beginning of the line to
                                                # switch between ls and less.
                                                l() {
                                                    if [ -z "$2" -a -f "$1" ] ; then
                                                        less "$1"
                                                    else
                                                        ls -l --color=auto "$@"
                                                    fi
                                                }
                                                # Create and enter a directory
                                                function mkcd { mkdir -p "$1"; cd "$1"; }
                                                1. 2

                                                  extra niceties with git

                                                  ga -> git add

                                                  gap -> git add -p

                                                  gb -> git branch

                                                  gc -> git checkout

                                                  gp -> git push

                                                  gbb -> git for-each-ref --sort=committerdate refs/heads/ --format=%(committerdate) %(refname:short)

                                                  (the last one prints your branches sorted by last commit date, great for finding the “recent branches”)

                                                  1. 1

                                                    I have sl aliased to ls to catch typos lol

                                                    1. 1

                                                      If you install the sl package, you get a steam locomotive blocking your terminal for a few seconds.

                                                    2. 1

                                                      I happen to use basically the same l and mkcd functions. Tip: I would be tempted to put && between mkdir and cd.

                                                      Here is an extended mkcd function that also allows to carry files while changing directory. I use fish, so it’s in fish:

                                                      function mkcd --description 'create, move zero or more files into and enter directory'
                                                          set -l argc (count $argv)
                                                          if test $argc -eq 0
                                                              echo "Usage: $_ [carry files…] destdir/"
                                                              return 1
                                                          end
                                                          mkdir -p $argv[-1]
                                                          and if test $argc -gt 1
                                                              mv $argv
                                                          end
                                                          and cd $argv[-1]
                                                      end
                                                    1. 3

                                                      Nice write-up for a journey of abstractions! Reminds me of two things:

                                                      • How in Common Lisp CLOS, objects are updated lazily after a class redefinition. Ostensibly you can control this somehow, but right now I can’t find how.

                                                      • How Erlang, famous for hot-reloading, has an interesting way to control the upgrade in a performant way: Calls to functions that are qualified with their module name refer to the latest version of the code, whereas calls with just the function name refers to the same version of the code as the caller. The latter can be compiled with no upgrade check.

                                                      1. 3

                                                        About Common Lisp:

                                                        Common Lisp has hot-reloading baked into the standard. For example, there are two ways of defining a “global” variable: one that re-assigns the value when the definition is re-run (defparameter) and one that keeps the variable’s existing value (defvar).

                                                        1. 2

                                                          I worked a couple years side by side with the author of the Learn You Some Erlang book linked in the post, which is how I learned this technique! Erlang’s approach really is a lot more sophisticated, but some of it becomes unnecessary if you only ever have a single OS thread as in my IRC server.

                                                        1. 1

                                                          In case you haven’t seen this, my squeezebox keyboard has independent columns and lots of dimensions that can be tailored to a particular hand. Next iteration should be ready soon.

                                                          1. 1

                                                            I have seen it! I think we had a conversation last summer about it. I especially like the extreme curvature. I’ve added a mention of it in the document.

                                                          1. 2

                                                            I use at least half a dozen layers on a daily basis. Link below to my layout, most of those layers have been in use for about a decade. Having modifiers on home row has become a necessity.

                                                            One important thing that made this work, was unordered modifiers. That is, when you press a chord, it only matters what set of keys are included and what key was last, e.g. A+R+J is equivalent to R+A+J.

                                                            I also made it so that I can “roll” between chords, i.e. when you release a key, only the keys that were pressed before it are included in the set of modifiers. This way, you can touch type almost normally.
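                                                            As a toy model (my own naming, not the actual firmware logic), the unordered-modifier rule can be sketched like this:

                                                            ```python
                                                            def resolve_chord(presses):
                                                                """Resolve a chord from keys in press order: the last key is the
                                                                action key, the earlier keys form an unordered modifier set."""
                                                                *modifiers, action = presses
                                                                return frozenset(modifiers), action

                                                            # A+R+J and R+A+J are equivalent: same modifier set, same final key.
                                                            print(resolve_chord(["A", "R", "J"]) == resolve_chord(["R", "A", "J"]))  # True
                                                            ```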


                                                            1. 1

                                                              Once you’ve tried hot-pluggable switches, there’s no going back.

                                                              Honestly I’m not sold on hotswap. Nobody is changing switches every day – especially not with the low reliability of the hotswap sockets. If in two years there’s a new switch with such a dank thicc click that I would consider upgrading switches, it’s easy enough to desolder the old ones and solder new ones in.

                                                              adjustable column staggering

                                                              That does sound kinda cool I guess..

                                                              for example this great invention that turns ordinary key switches into pressure sensitive, analog keys. […] the only product is a development kit that at the time of writing is out of stock

                                                              Woah. That is really interesting!

                                                              The company doesn’t seem active since 2019 except for, uh, filing a US patent. Well, I guess for hobbyists outside of the US this just serves as documentation ;)

                                                              upd: looking at their reddit announcement from back then, turns out most of the magic is in Inductance to Digital Converters (TIL that’s a thing)

                                                              upd: also a competitor has already researched inductive sensing for mech switches back then lol

                                                              1. 1

                                                                The reason I see great value in hot-swappable switches, and similarly hot-swappable columns with PCBs, is that people don’t know what they want until they try it, so making it cheap to swap out components allows exploration. For example, I thought I wanted Cherry Brown, but also ordered Cherry Red Silent just in case. Turns out I wanted the red everywhere except on two keys. Exploration like that would be so tedious without hot-swappable switches.

                                                              1. 7

                                                                I suspect that the next step in keyboard customization is actually in software: a 3D, 3-degree-of-freedom layout system that has a library of existing parts and circuit requirements should be able to take a set of user specifications for an N-key keyboard and produce 1..N custom PCBs with circuits and connectors laid out. It could even reasonably take a stab at producing a 3D printable case.

                                                                Why 3D? Curvature, elevation and separation.

                                                                Why 3 DoF? Once we’ve specified a key’s location, we need to specify rotation on each axis.

                                                                Kailh produces hot-swap socket mounts for MX switches, rated at 100 swaps - low for development, reasonable for a 25 year keyboard.

                                                                PCB fabrication in single digit quantities is reasonably affordable already. Flex PCBs are easily available up to 4 layers deep. Why have a custom-modular keyboard when you can get a custom-integrated keyboard?

                                                                1. 4

                                                                  A custom-integrated keyboard has a lot of advantages, and would provide more flexibility at design-time than a custom-modular would at use-time. If you know what you want, your needs won’t change, and your needs can be met today, custom-integrated is the way to go.

                                                                  The advantage with custom-modular is greater flexibility at use-time. I don’t know what curvature, tenting angle, staggering, etc. are best for my hands, and even if I did it might be easier to learn by adjusting bit by bit. I also might not be able to find e.g. an easy way to integrate a TrackPoint with what’s available on the market today. Printing an entire new keyboard every time you want to change it will get expensive.

                                                                  So perhaps you start off with custom-modular, and when you’ve gone a few years without wanting to change anything you can get a custom-integrated version that looks sleeker.

                                                                1. 3

                                                                  While I like the idea behind columnar modules, with the advent of PCBs like the NeoKey Socket you can place keys at an individual level, allowing even greater flexibility in the design of the keyboard. If a flexible, smaller-footprint version of the NeoKey Socket were made, that would be even better.

                                                                  1. 3

                                                                    NeoKey Socket looks great! However, it does require soldering.

                                                                    I tried to envision something that could help manufacturers cater to users who don’t want to build their own, yet want a lot of customization options. I think column modules can offer that, while the NeoKey Socket seems to be targeted at those who do want to build their own.

                                                                  1. 1

                                                                    By mistake, I opened the document, only to realize it’s barely readable on small mobile screens. I say “by mistake” because the page is hosted on Google Docs, which I’m trying to avoid as much as possible. The topic is interesting, which is why I instinctively clicked on the link, but I couldn’t follow it due to the limitations I mentioned.

                                                                    1. 4

                                                                      Sorry, didn’t mean to “trick” you. Self hosted PDF for you:

                                                                      1. 4

                                                                        Thank you for sharing the PDF! Sorry for coming off as dismissive. As I said, I’m interested in the topic, but it was really hard, if not impossible, to read on my phone due to the wide margins and the amount of indenting (that is a problem with Google Docs; I’m not sure anyone can fix it).

                                                                    1. 17

                                                                      Interesting take. (context: I bought the original Ergodox kit (not EZ) in round 3 on Massdrop and eventually developed my own keyboard design based on my experiences with it.)

                                                                      Not enough keys

                                                                      This is especially funny to me since when I last remapped my Ergodox I found it had way more keys than necessary, and I didn’t even bother adding keycodes to the number row since I found the numpad on the fn layer to be dramatically faster and more accurate.

                                                                      Learn to use layering! I think some people are suspicious of the fn key because many laptops implement it in a completely useless way, putting it way off in the far corner and putting the fn numpad in an awkward, badly placed position. Using a well-designed fn layer is like night and day compared to that.

                                                                      Lack of labels

                                                                      Putting labels on a layered keyboard is pretty silly IMO–the labels will only ever tell you what’s on the base layer, and that’s the one that’s easiest to learn. The part that takes longer to learn is the other layers, and you need a separate cheat sheet for that anyway!

                                                                      You could theoretically produce keycaps which have legends for both the base layer and the fn layer, but IMO this is a really bad idea since the point of a reprogrammable keyboard is to allow you to move things around at your whim, so if your keycaps say that the arrow keys are on fn+WASD but you want them under ESDF where your hand naturally rests then you just have to put up with labels that are wrong; much worse than labels that are just not there to begin with.

                                                                      Basically it’s just fundamentally impossible to have all three of: reprogrammable, labeled, layered.

                                                                      Context shifting

                                                                      This one can be a pretty big problem if you move around a lot; like if you keep your Ergodox on your desk but still want to hack on your couch or something (or, in the pre-pandemic days, at a coffee shop). IMO the biggest flaw of the Ergodox is that it’s a pain to take with you when you’re not at your desk, which is why when I designed my own based on my experiences with the Ergodox, I made mine small enough to fit in a large pocket and able to be placed on top of my laptop’s internal keyboard when I’m on the couch.

                                                                      frankly, I’m not sure that a multi-week dip in productivity is going to be offset by whatever gains I might make by using it long-term.

                                                                      This is a common refrain you also hear when people talk about learning improved layouts like Colemak or Dvorak instead of Qwerty.

                                                                      IMO it’s quite misguided; the advantage of a better keyboard or better layout is not productivity, it’s comfort. If you were spending multiple weeks of relearning just in hopes that you’d get a bit faster in the end I’d agree for most people it’d be a waste of time, but if you’re doing it because you want to avoid potentially career-ending stress injury, that’s a completely different story.

                                                                      1. 5

                                                                        I’ll second the parts about layers and labels. When I started using the thumbs to shift layers I got a lot faster and my hands moved around a lot less. Which was the point of me getting an Ergodox. Layers just made using my keyboard so much more comfortable. I’m also considering removing my number keys for the same reason. On labeling, I went from a weird way of not quite hunting and pecking and not quite touch typing to a full touch typer in a pretty short time. I went with blank keycaps when I got my Ergodox EZ and it forced me to learn to type without looking at my keyboard. Plus I like the tiered keys you get with an EZ if you get blank caps.

                                                                        1. 3

                                                                          Right, like… I think a lot of people don’t understand that they already use layers on their conventional keyboard! It just happens to be a single layer for mostly capital letters, but some special punctuation as well. Turns out while having one layer shifting key is good, having two is even better! Three is a bit extreme, but it should be an option too; everyone has different needs.

                                                                        2. 3

                                                                          Learn to use layering! I think some people are suspicious of the fn key because many laptops implement it in a completely useless way, putting it way off in the far corner and putting the fn numpad in an awkward, badly placed position. Using a well-designed fn layer is like night and day compared to that.

                                                                          I’m using a Moonlander and I’m really struggling to get into using layers. So far, I have only two effective layers: the base one and one for window manipulation. Into the base one I managed to cram as much as possible, using the thumb cluster so that each key behaves as Cmd/Ctrl/Alt when held and as Enter/Space/Backspace/application launcher (Emacs, Terminal, Quicksilver) when tapped. But I feel I’m not reaching the right level of comfort and things could be done differently (especially since I have short fingers and reaching the corners/top row is a struggle). What are good examples of layers?

                                                                          1. 3

                                                                            The best is probably moving all movement-related keys (arrows, home, end, page up, page down) to a convenient layer. For me, holding “a” activates the motion layer, and “ijkl” are the arrow keys. This simultaneously makes the keys you use all the time the most convenient, and frees up a bunch of space in the base layer.

                                                                            1. 2

                                                                              It never occurred to me to use an existing key to switch to a layer. I’ll definitely incorporate this approach for both keyboard and mouse navigation.

                                                                              I have a layer with arrow keys in place of j, k, l, ; (as I didn’t want to move my hand), but I never use that layer much, since I also have arrow keys in the bottommost row. Maybe I’ll turn those off, forcing myself to use layers more…

                                                                            2. 2

                                                                              I use at least half a dozen layers on a daily basis. Link below to my layout, most of those layers have been in use for about a decade.


                                                                              1. 1

                                                                                Interesting! How do you switch layers here? Do you go with “hold key for the layer” or “tap for the layer” approach?

                                                                                1. 1

                                                                                  Hold. The “mods” layer is really the hold modifiers, rather than a layer. So pretty much every key is dual-function.

                                                                              2. 2

                                                                                What are good examples of layers?

                                                                                I’ve been using this layering with minor tweaks since about 2014:

                                                                                I use Emacs nearly exclusively, which informed the layout, with one concession to more conventional use (since this is also the default layout for the keyboards I sold): I put arrow keys on the fn layer. I would have omitted the arrow keys altogether if it were a layout just for myself.

                                                                                Just another example of how everyone’s got different needs and that you should expect to do a lot of tweaking to find what’s best for you. Another example is how I use shift-insert to paste, so insert is on the fn layer; it would definitely not be there for most people.

                                                                                Edit: for clarification, the final layer is not accessed with a modifier key; it’s modal and accessed by pressing and releasing fn+esc and disabled by tapping fn on its own.

                                                                                1. 2

                                                                                  I have a layer for playing games: WASD and the surrounding keys are intact but the right hand is a numpad and common modifier keys like space, left ctrl, left shift etc are moved closer. Many games use numpad keys for secondary controls assuming the player is using a full sized keyboard and changing the arrangement of the modifier keys reduces travel and discomfort. Perhaps there are workflows in the software you use that feel awkward to type? You can create layers to make that repetitive motion easier.

                                                                                2. 2

                                                                                  Not enough keys

                                                                                  it had way more keys than necessary

                                                                                  Both :< Some of my Ergodox keys are unmapped (almost all of the bottom layer, for example), but I don’t have enough keys. The problem is, the Russian language annoyingly has just enough more letters than English (33 vs 26) that it works ok without any kind of special input method on a full-sized keyboard (layout). While it feels ok to my brain that [ and { are in a separate layer, having a couple of Cyrillic letters in a layer feels very jarring.

                                                                                  1. 3

                                                                                    Russian language annoyingly has just enough more letters than English (33 vs 26) that it works ok without any kind of special input method on a full-sized keyboard

                                                                                    I had a similar problem when I started learning Thai; my 42-key Atreus layout had been designed around having precisely the right number of keys for typing English, and Thai has 44 consonants and 15 vowels, so I had to switch back to my Ergodox for that. Nowadays the Atreus has 44 keys, which makes it a better fit for latin languages which need AltGr/compose but it’ll never be a good fit for Thai.

                                                                                1. 2

                                                                                  Two things that struck me about this pattern after thinking about it for a while:

                                                                                  1. How would you compare this to Redux? It seems there are a lot of touch points, except that state in React/Redux is ephemeral.
                                                                                  2. Wouldn’t blocking I/O temporarily stop the processing of commands? This is something I’d worry about if I was making eg a bunch of calls to REST APIs. I guess async I/O is a way to handle this, but then you wouldn’t be able to maintain linearity across I/O boundaries. (I guess the part about reactors being able to emit commands sort of hinted at this?)
                                                                                  1. 2

                                                                                    Question 1:

                                                                                    The architecture is very similar to Redux. The reasons for using Redux for a UI vs using MIP for a server-side application might be partly different since the constraints and trade-offs are different, but a lot of the advantages are also similar.

                                                                                    I’ve actually found it advantageous to combine server-side MIP with a similar pattern client-side. Share the business logic code (the “reducers”) between client and server. When the UI initializes, get the latest state from the server. Send UI actions as commands to the server-side MIP application, and stream accepted commands to the client to update the UI state. Suddenly you have solved both persistence and realtime collaboration with one move.

                                                                                    If at any point you are dissatisfied with the server round-trip delay for UI updates, you already have the perfect architecture for solving that:

                                                                                    • Let the client calculate the latest state under the assumption that the commands sent to the server will be accepted, but also save the state implied by the latest command the server has accepted.
                                                                                    • Backtrack as needed on timeouts or when the server contradicts the assumption.
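                                                                                    A minimal sketch of that backtracking scheme (the class and its names are mine, assuming a pure reducer shared between client and server): the client keeps the last server-confirmed state plus a queue of pending commands, and derives the displayed state by replaying the pending commands.

                                                                                    ```python
                                                                                    class OptimisticClient:
                                                                                        """Client-side command sourcing with optimistic local updates.
                                                                                        `reduce` is the pure business-logic reducer shared with the server."""

                                                                                        def __init__(self, reduce, confirmed_state):
                                                                                            self.reduce = reduce
                                                                                            self.confirmed = confirmed_state  # state after the last accepted command
                                                                                            self.pending = []                 # commands sent but not yet accepted

                                                                                        def local_state(self):
                                                                                            # Displayed state: assume our pending commands will be accepted.
                                                                                            state = self.confirmed
                                                                                            for cmd in self.pending:
                                                                                                state = self.reduce(state, cmd)
                                                                                            return state

                                                                                        def send(self, cmd):
                                                                                            self.pending.append(cmd)          # ...and transmit it to the server here

                                                                                        def on_accepted(self, cmd):
                                                                                            # Server accepted a command (ours or another client's).
                                                                                            self.confirmed = self.reduce(self.confirmed, cmd)
                                                                                            if self.pending and self.pending[0] == cmd:
                                                                                                self.pending.pop(0)           # our own command, assumption confirmed

                                                                                        def on_rejected(self, cmd):
                                                                                            # Backtrack: drop the failed assumption; local_state() recomputes.
                                                                                            self.pending.remove(cmd)

                                                                                    # With a trivial counter reducer:
                                                                                    client = OptimisticClient(lambda state, cmd: state + cmd["delta"], 0)
                                                                                    client.send({"delta": 5})
                                                                                    print(client.local_state())  # 5, optimistic, before the server replies
                                                                                    ```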
                                                                                    1. 2

                                                                                      Question 2:

                                                                                      If you make a bunch of calls to REST APIs, you never wait for the replies before processing the next command. Commands need to be deterministic. Emitting commands from reactors is a way to make something indeterministic look deterministic:

                                                                                      Operations involving calling out to external services become less trivial. Since they cannot be considered deterministic, they can’t be implemented as a processor. Rather, they will have to become reactors, that may or may not produce more commands that are fed back to the application.

                                                                                      Example: A command is sent to the application to perform a ping against some server. The result of the ping is not deterministic and therefore cannot be used to calculate the next application state, so doing the ping inside a projector is useless. You do it in a reactor, and when you have the result, you emit a command back into the application that basically says “the result of the ping was X”. This result-declaration-command can be used by processors. When the application is restarted, the actual ping is not performed since reactors are ignored by the reprocessor, but the perceived result of the ping, saved in a command, is the same. Doing this asynchronously is perfectly fine; if the external service call takes longer, it just means that the result-declaration-command might arrive later in the command sequence, but determinism is still preserved.
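                                                                                      Here is a toy sketch of that ping example (the function names are my own): the reactor does the nondeterministic work and feeds the result back in as a command, and a replay that skips reactors still reaches the same state from the recorded log.

                                                                                      ```python
                                                                                      import random

                                                                                      def processor(state, cmd):
                                                                                          """Deterministic: only result-declaration commands touch the state."""
                                                                                          if cmd["type"] == "ping-result":
                                                                                              return {**state, "last_ping_ms": cmd["ms"]}
                                                                                          return state

                                                                                      def reactor(cmd, emit):
                                                                                          """Nondeterministic: performs the actual ping, then emits the result."""
                                                                                          if cmd["type"] == "ping":
                                                                                              ms = random.randint(1, 100)   # stand-in for a real network ping
                                                                                              emit({"type": "ping-result", "ms": ms})

                                                                                      def run(commands, with_reactors):
                                                                                          state, log = {}, []
                                                                                          queue = list(commands)
                                                                                          while queue:
                                                                                              cmd = queue.pop(0)
                                                                                              log.append(cmd)
                                                                                              state = processor(state, cmd)
                                                                                              if with_reactors:             # on replay, reactors are skipped
                                                                                                  reactor(cmd, queue.append)
                                                                                          return state, log

                                                                                      # Live run: the reactor turns the ping into a recorded result-command.
                                                                                      state, log = run([{"type": "ping"}], with_reactors=True)
                                                                                      # Replaying the log alone reproduces the same state, no network needed.
                                                                                      replayed, _ = run(log, with_reactors=False)
                                                                                      print(state == replayed)  # True
                                                                                      ```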

                                                                                      I believe calling external services was the only reason you’d worry about blocking I/O. Still, let’s address that part:

                                                                                      Decide what kind of durability guarantees you want, and make a fair comparison between MIP and another option. Processing commands in sequence does not mean that you need to do everything on a single thread. For example, each of the following tasks can run on its own thread:

                                                                                      • Assign an index number to incoming commands
                                                                                      • Serialize incoming commands into a log buffer
                                                                                      • Write the log buffer to file
                                                                                      • Run projectors/reducers to create the new state
                                                                                      • Run reactors, possibly on several threads
                                                                                      • Send replies to clients, possibly on several threads, optionally (depending on what durability guarantees you want) after making sure the corresponding command is persisted.

                                                                                      Note that the only operation that is I/O-dependent here is writing to disk sequentially. Decades of database performance tuning have had “minimize disk seek operations” as a mantra, and here we have approximately no seeks at all. It’ll run fast. In practice you’re unlikely to need more than one thread. The above multi-thread model is just a way out if you ever find yourself needing more compute-heavy operations that take time within the same order of magnitude as disk writes.
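                                                                                      For contrast, here is the whole pattern in its simplest single-threaded form (a sketch, with my own naming): append each command to the log, then fold it into the in-memory state; on restart, rebuild the state by replaying the log.

                                                                                      ```python
                                                                                      import json
                                                                                      import os

                                                                                      class MemoryImage:
                                                                                          """Single-threaded memory image: an append-only command log plus
                                                                                          an in-memory state rebuilt by replaying the log on startup."""

                                                                                          def __init__(self, reduce, initial_state, log_path):
                                                                                              self.reduce = reduce
                                                                                              self.state = initial_state
                                                                                              self.log_path = log_path
                                                                                              if os.path.exists(log_path):       # restart: replay the log
                                                                                                  with open(log_path) as f:
                                                                                                      for line in f:
                                                                                                          self.state = reduce(self.state, json.loads(line))

                                                                                          def execute(self, cmd):
                                                                                              # The only disk I/O: a sequential append, no seeks.
                                                                                              with open(self.log_path, "a") as f:
                                                                                                  f.write(json.dumps(cmd) + "\n")
                                                                                                  f.flush()
                                                                                                  os.fsync(f.fileno())           # durable before we acknowledge
                                                                                              self.state = self.reduce(self.state, cmd)
                                                                                              return self.state
                                                                                      ```

                                                                                      With a trivial counter reducer, restarting against the same log file reproduces the same state.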

                                                                                      1. 2

                                                                                        Thanks for elaborating! Good point about single-threading not being an explicit requirement, as long as commands are synchronous

                                                                                    1. 3

                                                                                      Would “Command sourcing” be a fitting name for it? It looks interesting. What kind of problem would this be a good solution for?

                                                                                      1. 2

                                                                                        I think “Command sourcing” is a great name, it brings “Event sourcing” to mind as both analogy and contrast. Perhaps it’s a better name than “Memory Image Pattern”.

                                                                                        The kind of problems MIP is a good solution for include:

                                                                                        • Complex business logic
                                                                                        • Tight deadlines
                                                                                        • Low latency or realtime requirements
                                                                                        • Frequently changing requirements that traditionally would result in laborious database schema changes in production
                                                                                        • Frequently changing requirements that makes it hard to know what kind of data is useful
                                                                                        • A need to handle historical data and metadata
                                                                                        • Any combination of the above

                                                                                        But it’s probably not a good solution if you have any of the following:

                                                                                        • A compute-intensive or very data-intensive application
                                                                                        • Requirements for very high throughput
                                                                                        • Requirements to purge/forget data
                                                                                        • Very complex integration interfaces