Threads for gallabytes

  1. 8

    Although I have known about Kakoune for a while, I only recently found out that it’s licensed under the Unlicense, which I find unsettling for legal reasons.

    Otherwise, I haven’t really used it much. How does it compare to vis? I have grown very fond of it for terminal editing, to the degree that I usually uninstall vim on all my machines to replace it with vis.

    1. 5

      I stopped off at vis (and sam) along the way from Vim to Kakoune. vis was fairly nice, but ultimately I found it really, really wanted you to move around and make selections with its structural-regular-expression language, and I never quite got the hang of it (quick, what’s the difference between +- and -+?).

      In contrast, Kakoune supports basically the same operations as SREs, but encourages you to move around and make selections with immediate, visual feedback — and it’s still easily scriptable, thanks to the “interactive keys are the scripting language” model the article describes.

      It’s a bit of a shame that Kakoune’s licensing civil disobedience excludes people who just want a nice text editor, but even if you can’t use Kakoune’s code I hope you (or other people) will steal its ideas and go on to make new, cool things.

      1. 6

        It’s a bit of a shame that Kakoune’s licensing civil disobedience excludes people who just want a nice text editor,

        Huh? I just looked at the UNLICENSE file; unless I’m missing something, it just drops Kakoune into the public domain. SQLite has the same thing going on.

        1. 3

          The issue is allegedly that it’s not possible to do that under every legal system. Germany seems to be an example where that could cause issues. CC0 handles this better by adding a “fallback” clause in case it’s not possible.

          1. 4

            Legal systems are not laws of nature. If no one would ever take you to court or fine you for violating a law, that law does not apply to you. Unlicense, WTFPL, etc. are great examples of this: extremely strong signals from the author that they will not take any action against you no matter what you do with the content under that license.

            1. 1

              Unlicense, WTFPL, and even CC0 are banned by Google due to opinions from their legal team. While I don’t trust Google for a lot of things, I think it’s safe to trust that their legal team thought about this and had its reasons.

              1. 4

                But Google’s risk appetite should be pretty different than yours. The legal system hits everybody different.

                1. 1

                  What do you mean by this? Google’s legal team is going to be playing liability games in a paranoid way that is obviously irrelevant for anyone not engaged in corporate LARP.

                  Like, actually, no appeals to authority, no vague paranoia, what would actually go wrong if you used WTFPL or CC0 in Germany for a personal project?

                  1. 1

                    CC0 is fine in Germany, UNLICENSE is the problem.

                    But otherwise, you’re right. In most cases, nobody cares what license is being used (other than for ideological reasons). A small hobby project might just as well have a self-contradictory license, and it wouldn’t be a practical problem. But depending on the scope of what is being done, there are always legal vultures, just like patent trolls or the people who blackmail torrent users, who might find a way to make some money from legal imperfections.

                    I’m not a legal expert, so I try not to bet on these kinds of things. If CC0 and UNLICENSE are functionally equivalent and signal the same message (“do what you want”), but one is less risky than the other, I’ll take the safer option.

          2. 2

            What does SRE stand for, in this case?

            1. 5

              “Structural regular expressions”, I’d wager?

              1. 5

                structural-regular-expressions

          1. 31

            Modern cars work … at 98% of what’s physically possible with the current engine design.

            Ignoring the fact that ICEs are thermodynamically constrained to something closer to 20% efficiency, “current engine design” is quite an escape-hatch. Computer software and hardware designs, similarly, are subject to “current designs”, and there’s no reason to think that SWEs are somehow less inclined to improve designs than mechanical engineers.

            Only in software, it’s fine if a program runs at 1% or even 0.01% of the possible performance

            There is no objective “possible performance” metric. There’s only “current implementation” vs “potential improvements gained by human attention”.

            Everything is unbearably slow

            No. kitty is fast. ripgrep (and old grep…) is fast. Mature projects like Vim and Emacs are getting faster. JITs and optimizing compilers produce faster code than ever. Codecs, DSP are faster than ever.

            Yes, tons of new software is also being created, and power-law effects guarantee that most of it will be low-effort and unoptimized. The fact that you can scoop your hand into an infinite stream and find particulates means nothing.

            Text editors! What can be simpler? On each keystroke, all you have to do is update a tiny rectangular region

            If that’s all you expect from a text editor then your computer can do it very quickly. But you chose to use a text editor that does much more than that per keystroke.

            Build systems are inherently unreliable and periodically require full clean, even though all info for invalidation is there

            “All info” is not there. Most builds have implicit state created by shell scripts, filesystem assumptions, etc. That’s why conforming to Bazel or NixOS is painful (but those projects are examples of people working to improve the situation).

            Machine learning and “AI” moved software to guessing in the times when most computers are not even reliable enough in the first place.

            :)

            Spin another AWS instance. … Write a watchdog that will restart your broken app … That is not engineering. That’s just lazy programming

            That’s how RAM ECC works; that’s how MOSFETs work. Failure is an inherent property of any physical system.

            I want state-of-the-art in software engineering to improve, … I don’t want to reinvent the same stuff over and over

            I agree with that. Clojure and Urbit are efforts in that direction. Taking code reuse and backwards compatibility seriously allows us to build instead of repeat.

            But dispense with the nostalgia for DOS and the good old days. CompSci artists ignore cost/benefit. Engineers consider economics (cost/benefit, not just “money”, all costs).

            The bulk of new activity in any vigorous market will be mostly trash, but the margins yield fruit. High-quality software is being built at the margins. The disposable trash is harmless, and serves a purpose, and will be flushed out as users adjust their relative priorities.

            1. 5

              Mature projects like Vim and Emacs are getting faster.

              Ignoring the fact that older software had to be optimized in order to run on older computers, a bit of this is survivorship bias. People don’t use the old bad programs anymore or they got fixed.

              Older software crashed a lot and it corrupted your documents. Today, even if it crashes you probably won’t lose anything. To an extent, this is the result of consciously trading performance for correctness.

              1. 1

                Ignoring the fact that older software had to be optimized in order to run on older computers,

                You’re making assumptions. Emacs and especially Vim have many unoptimized components. Vimscript, the core language of Vim, doesn’t even produce an AST; it re-parses every source line over and over (including the bodies of while-loops and for-loops). Only recently has this gotten attention. Fixing it takes human time, and that human time is now being spent because relative priorities arrived there.
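
                To make the cost concrete, here’s a toy model in Python (emphatically not Vim’s actual implementation) of re-tokenizing a loop body on every iteration versus parsing it once up front:

                ```python
                import re
                import time

                SRC = "total = total + i * i"      # stand-in for one line of Vimscript
                TOKEN = re.compile(r"\w+|\S")

                def run_reparsing(n):
                    total = 0
                    for i in range(n):
                        TOKEN.findall(SRC)         # the body is tokenized again on every pass...
                        total = total + i * i      # ...and then evaluated
                    return total

                def run_parsed_once(n):
                    TOKEN.findall(SRC)             # tokenized a single time up front
                    total = 0
                    for i in range(n):
                        total = total + i * i
                    return total

                for f in (run_reparsing, run_parsed_once):
                    start = time.perf_counter()
                    f(1_000_000)
                    print(f.__name__, time.perf_counter() - start)
                ```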

                survivorship bias. People don’t use the old bad programs anymore or they got fixed.

                The converse is that every program that touches a computer should be optimized before it reaches a user. That makes no sense.

                1. 1

                  You’re making assumptions.

                  No! You! :)

                  I didn’t mean to imply that they are perfectly optimized. I meant that they had to do more optimization to run on older computers than modern software has to do to run on newer computers.

                  The converse is that every program that touches a computer should be optimized before it reaches a user. That makes no sense.

                  I don’t think every program should be optimized. I don’t follow what you are saying here.

                  1. 1

                    I don’t think every program should be optimized.

                    Then it does not make sense to discount good software as mere “survivors”. It is a contradiction. Good software takes time, bad software gets filtered out over time. In the interval, there will be bad software, but that is because more time is needed for good software to take its place.

                    1. 1

                      Good software takes time, bad software gets filtered out over time. In the interval, there will be bad software, but that is because more time is needed for good software to take its place.

                      I agree. The comment about survivors is about how not every piece of software from an era is as good as the software from that era that we use today. I.e., the survivorship bias fallacy:

                      https://en.m.wikipedia.org/wiki/Survivorship_bias

              2. 3

                Actually, measured by latency (which is almost certainly the benchmark you care about in a terminal emulator), kitty is moderately fast and extremely jittery, just like alacritty. Both Konsole and Qterminal perform substantially better every time I benchmark them, especially if you have a discrete GPU instead of integrated graphics.

                1. 1

                  Fast software exists, that’s my point. You’ve provided more examples of fast software.

                2. 1

                  So, I think you may be misinformed about the cars thing. ICEs and turbines have a theoretical maximum efficiency of between 37 and 50-ish percent, and real-world engines get very close to that.

                  This is important when you look at the claimed efficiency of computers: a modern multi-GHz CPU should be capable of billions of operations a second, and for any given task it is pretty easy to make a back-of-the-envelope calculation about how close to that theoretical ideal we are; that’s one of the ways efficiency in traditional engineering fields is calculated.

                  We are seeing efficiencies that are many orders of magnitude smaller than seems reasonable. There are reasons for this, but it is inescapable that anybody saying “we really use computers inefficiently” is not wrong.
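
                  Here’s a rough sketch of the kind of calculation I mean; every number below is an assumption, not a measurement:

                  ```python
                  clock_hz       = 3.0e9                     # a mid-range modern core
                  ops_per_cycle  = 4                         # superscalar issue width, very roughly
                  peak_ops_per_s = clock_hz * ops_per_cycle  # ~1.2e10 simple ops/s

                  cpu_seconds_per_keystroke = 0.05           # suppose an editor burns 50 ms per keystroke
                  ops_spent  = peak_ops_per_s * cpu_seconds_per_keystroke   # ~6e8 ops
                  ops_needed = 1e5                           # generous guess for redrawing the changed region

                  print(f"efficiency ≈ {ops_needed / ops_spent:.4%}")       # ~0.0167%
                  ```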

                  Also, on the restarting-the-app bit: it is one thing to use ECC to compensate for cosmic ray bit flips, or to mirror data across multiple hard drives in case one or two die, as a way of doing reliability engineering. It is something else entirely to, say, restart your Rails application every night (or hour…) because it leaks memory and you can’t be bothered to track down why.

                  1. 3

                    It is something else entirely…

                    Is that ‘a difference in scale becomes a difference in kind’?

                    They seem like the same kind of thing to me, just at very different points on the effort/value continuum.

                    1. 2

                      Statistical process control is a tool we can use to answer that.

                      If the problem is predictable enough to count as routine variation inherent to the system, we should try to fix the system so it happens more rarely. (And I’d argue the memory leaks that force you to restart every hour belong to that category.)

                      If the problem is unpredictable and comes from special causes, we cannot generally adjust the system to get rid of them. Any adjustments to the system due to special causes only serve to calibrate the system to an incorrect reference and increase the problem. (This is where I’d argue cosmic radiation belongs.)

                      Another way of looking at it is through the expression “you cannot inspect quality into a product”: continually observing the system state and rejecting the system when it goes out of bounds ensures that only systems within limits keep running, but it is very expensive compared to ensuring the system stays within bounds to begin with. It is only acceptable for cosmic rays because we can’t work the cosmic rays out of the system, so we are regrettably forced to rely on inspection in that case.
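
                      For the curious, the usual SPC tool for this is an XmR (individuals) control chart. A minimal sketch in Python, with made-up numbers standing in for, say, hourly memory growth of a service in MB:

                      ```python
                      samples = [12, 14, 11, 13, 15, 12, 14, 13, 48, 12, 13, 14]   # fabricated measurements

                      mean = sum(samples) / len(samples)
                      moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
                      mr_bar = sum(moving_ranges) / len(moving_ranges)

                      upper = mean + 2.66 * mr_bar       # conventional XmR control limits
                      lower = mean - 2.66 * mr_bar

                      for i, x in enumerate(samples):
                          label = "routine variation" if lower <= x <= upper else "special cause?"
                          print(f"sample {i:2d}: {x:5.1f}  {label}")
                      ```

                      Points inside the limits are the routine variation you address by changing the system; points outside are the special causes you investigate individually.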

                      1. 2

                        Memory errors are not unpredictable; for a given stick of RAM, the rate of bit flips is not that hard to figure out (it takes quite a while to get good numbers for ECC sticks).

                        We adjust this by adding error correction codes.
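
                        As an illustration of “adding error correction codes”, here’s a toy Hamming(7,4) encoder/corrector in Python; real ECC DIMMs use wider SECDED codes, but the principle is the same:

                        ```python
                        def encode(d1, d2, d3, d4):
                            p1 = d1 ^ d2 ^ d4                      # parity over positions 1, 3, 5, 7
                            p2 = d1 ^ d3 ^ d4                      # parity over positions 2, 3, 6, 7
                            p3 = d2 ^ d3 ^ d4                      # parity over positions 4, 5, 6, 7
                            return [p1, p2, d1, p3, d2, d3, d4]    # codeword, positions 1..7

                        def correct(c):
                            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
                            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
                            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
                            syndrome = s1 + 2 * s2 + 4 * s3        # 1-based position of a flipped bit, 0 if none
                            if syndrome:
                                c[syndrome - 1] ^= 1               # flip it back
                            return [c[2], c[4], c[5], c[6]]        # recovered data bits

                        word = encode(1, 0, 1, 1)
                        word[5] ^= 1                               # simulate a cosmic-ray bit flip
                        assert correct(word) == [1, 0, 1, 1]       # the data survives
                        ```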

                        RE memory leaks: a memory leak isn’t worth chasing if it’s not causing trouble. I have inherited an app that uses the “restart every hundred requests” strategy and I cannot fathom that ever being on my top ten problems. My users don’t care, and it isn’t expensive. I dislike the untidiness, and would probably fix it if it was a side project.

                        1. 1

                          Indeed. Volatile RAM is a horrible hack (continually refresh capacitors, waste power) compared to NVM. The cost calculation is clear there, so few complain about it. But the cost calculation of “human attention” is less clear to puritans who think that lack of Discipline and Virtue is what prevents a utopia of uniformly better software.

                      2. 1

                        ICEs and turbines have a theoretical maximum efficiency of between 37 and 50-ish percent, and real-world engines get very close to that.

                        I said “closer to 20% [than 98%]”. I didn’t bother to look up the actual number. 50 is closer to 20 than 98.

                          a modern multi-GHz CPU should be capable of billions of operations a second, and for any given task it is pretty easy to make a back-of-the-envelope calculation about how close to that theoretical ideal

                        • Why do you assume that the current hardware design is the theoretical ideal?
                        • CPU saturation as a performance metric assumes that the instructions are meaningful, not to mention TFA is concerned about an over-abundance of instructions in the first place.
                        1. 1

                          I said “closer to 20% [than 98%]”. I didn’t bother to look up the actual number. 50 is closer to 20 than 98.

                          I am not sure that you are interpreting those numbers correctly. There are two numbers: the ~37-50% efficiency allowed by physics, and the 98% efficiency in achieving that theoretical efficiency. The former is a measure of how good an ICE can ever be at accomplishing the goal of turning combustion into usable mechanical energy; the latter is a measure of how well-engineered our engines are in attaining that ideal, and only the latter do we have any control over.
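
                          To spell out how the two ratios compose (the numbers below are illustrative, not looked up):

                          ```python
                          physics_limit   = 0.45   # fraction of fuel energy an ideal cycle could extract
                          engineering_eff = 0.98   # how close a real engine gets to that ideal
                          print(f"overall thermal efficiency ≈ {physics_limit * engineering_eff:.0%}")   # ~44%
                          ```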

                          Why do you assume that the current hardware design is the theoretical ideal?

                          There may well be a more efficient means of computation out there! In the meantime, it seems reasonable to look at the theoretical max performance of the real silicon we have on hand today.

                          1. 0

                            There are two numbers: the ~37-50% efficiency allowed by physics, and the 98% efficiency in achieving that theoretical efficiency.

                            That’s why I said “Ignoring…”. Also mentioning “thermodynamic limit” is a pretty clear signal that I’m aware of the difference between physical limits and engineering tradeoffs. OTOH combustion itself is a design choice, and that is a hint that the distinction isn’t so obvious.

                            You chose to comic-book-guy that part of the comment instead of focusing on the part that didn’t start with “Ignoring”.

                    1. 4

                      A very interesting dig into the details, but I wonder how specific it is to x86-family code? Do ARM or other targets also benefit from this?

                      1. 3

                        It should; it does better mostly by requiring fewer overflow checks. If anything, this probably makes less of a difference on most x86 chips due to smarter branch prediction algorithms.

                        It should even help on non-pipelined chips (where the relative cost of branching is smaller because everything is slower) just by having fewer instructions.
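
                        As a generic illustration (not code from the article) of what an overflow check costs: each checked operation carries an extra compare-and-branch, so eliminating checks removes instructions on any target and mispredictable branches on pipelined ones.

                        ```python
                        U64_MAX = 2**64 - 1

                        def checked_add(a, b):
                            s = (a + b) & U64_MAX
                            if s < a:                 # the overflow check: one extra branch per add
                                raise OverflowError
                            return s

                        def unchecked_add(a, b):      # what can be emitted when the algorithm
                            return (a + b) & U64_MAX  # already guarantees no overflow
                        ```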

                      1. 1

                        Are there any code samples for the project we can view?

                        1. 2

                          Uh, no. We’re not currently interested in developing it ourselves, and I’m not sure our codebase would be very helpful (i.e., it’s a giant pile of Haskell that we’re slightly ashamed of). Happy to advise anyone who does want to implement it, though, and to provide access to our code if they really want to see it.

                          1. 1

                            Fair enough, thanks for sharing anyway. 😄

                        1. 2

                          I know someone will burn me at the stake for asking this, but any chance of an emacs port?