1. 4

    While I agree with the ideas presented here, in particular the comments on IDEs (or as I like to call them, Interactive Computing Environments, to avoid confusion with “regular” IDEs), I do wonder why these ideas keep getting forgotten. We had Smalltalk, we had Lisp Machines, and we have Unix shells, but the tendency always seems to go towards a rigid, cookie-cutter style of programming. I don’t like the idea that people are “too stupid” to understand or use it, and I don’t know how much of it is just that people get used to whatever reached the market first, no matter how annoying it is and how much time they spend fighting it. One component is certainly external (often proprietary) dependencies. Or is it education that de-prioritizes this kind of thinking?

    1. 8

      It’s the insistence on doing everything via text.

      1. 5

        There are two issues, in my opinion, both shaped by my own experience using Smalltalk and trying to teach it to others.

        The first is that you can’t get a flow like the one in this article without learning new tooling on top of the language, and that ends up being a big issue. If I know Emacs (or Visual Studio Code, or Vim, or any of the even vaguely extensible editors), I can use that same tool to handle basically every language, focusing just on the new language. To get a good flow in Smalltalk (or, I believe, a Lisp machine, but notably not contemporary Common Lisp or Schemes), you have to learn the IDE. In Smalltalk, this is especially bad, because the traditional dev flow effectively uses a source database instead of source files, so (until recently) you couldn’t even use things like diff or git.

        The second thing is that this kind of dev flow, in my experience, thrives when you’re doing something novel. Nowadays, most dev work I do is “just” assembling tons of existing libraries in familiar patterns. That’s not a problem, and I don’t think it’s laziness; it’s about predictability and repeatability, and I mostly view it as a sign that the industry is maturing. It lets me do much more with much less effort and much lower risk than doing everything bespoke. But it does mean that if, for example, I want to write a Smalltalk backend for a website in 2021, I’m going to have to write a bunch of stuff (e.g., OAuth connectors, AWS APIs, possibly DB drivers, etc.) that I’d get for free in virtually any other language, which in turn are new places things can go wrong, where I won’t be able to ask or pay someone else for support, and which likely don’t have anything to do with making my software sell. This applies pretty intense brakes to using novel environments even if you believe you’d be happier in one. This is basically the same as your point on external dependencies, but I think looking at it one step back from a repeatability and reliability perspective makes it more obvious why it’s such an issue.

        1. 7

          As someone who has dabbled in Common Lisp w/ SLIME, another limitation of that development style I have noticed is keeping track of state and making sure things are reproducible from the code, and not from some unreachable state you have arrived at by mutating things in the REPL. There is a similar issue with Jupyter notebooks.
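
          A contrived shell-flavored sketch of the same hazard (hypothetical names; the REPL/notebook version is the same story with in-memory objects instead of environment variables):

              # state accumulated interactively...
              export API_URL=https://staging.example.invalid   # set by hand, never written down
              ./run-report.sh                                  # works in *this* shell
              # ...but a fresh shell (or a CI runner) has no API_URL,
              # so the "reproducible" steps fail once the session is gone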

          1. 4

            In Smalltalk, this is especially bad, because the traditional dev flow effectively uses a source database instead of source files, so (until recently) you couldn’t even use things like diff or git.

            While I certainly agree with the lamentations on using modern VCS tools – ten years ago, I spent four months writing a tool that could split a single multi-megabyte XML source database configuration file into multiple files for more atomic versioning and review and combine those files for deployments — I feel like the file paradigm is one that advanced IDE users may be OK abstracting away. I use IntelliJ and other JetBrains products, and Eclipse before them, that have “search by symbol” features, generally used to search by a class, object, trait, interface, etc. name. There are some projects I’ve worked on where the only time I really have to care about files is when I identify the need to create a new package or module necessitating a new directory. Otherwise, my IDE handles the files almost entirely as an abstraction.

            This was difficult to wrap my head around, but because of my experience with Smalltalk in college, I understood it more quickly than my peers, and it accelerated my development productivity a little bit. I’ll readily admit that I’m slower in file-based IDEs or text editors without some kind of fuzzy finder (I’ve been using Elementary Code in a VM for one project and dreadfully missing CtrlP or the like), but my preference is to treat encapsulated code as an object instead of as a file. I think if more people preferred this, and if Smalltalk had been more popular for other reasons, perhaps a solid VCS for source databases might have emerged; one that didn’t rely on disaggregating the database into the filesystem paradigm.

            1. 1

              I feel like the file paradigm is one that advanced IDE users may be OK abstracting away. I use IntelliJ and other JetBrains products, and Eclipse before them, that have “search by symbol” features, generally used to search by a class, object, trait, interface, etc. name. There are some projects I’ve worked on where the only time I really have to care about files is when I identify the need to create a new package or module necessitating a new directory.

              While it’s true that individual users might be OK with this, there are two factors to consider. One is that you operate with the knowledge that when your tools do stop working, you can always drop down a level to the “real” files to find out what’s actually going on. The second is that you can collaborate with others who use Vim and Emacs; your choice to use IntelliJ does not force your teammates to adopt your same tools.

            2. 2

              I’m going to have to write a bunch of stuff (e.g., OAuth connectors, AWS APIs, possibly DB drivers, etc.) that I’d get for free in virtually any other language

              Those seem to be largely available in Pharo via existing libraries, e.g.:

              1. http://forum.world.st/Zinc-amp-IdentityServer4-td5106594.html#a5106930
              2. http://forum.world.st/AWS-SDK-for-Pharo-td5096080.html
              3. http://forum.world.st/Databases-td5063151.html#a5063498
              1. 1

                Here’s a quick reality check, using two examples that have come up in my own work:

                1. Does PayPal have an official SDK for Smalltalk?

                2. Is there a Smalltalk version of the AWS Encryption SDK?

                Spoiler: The answer to both is no.

                1. 4

                  I don’t think that’s the right question. The right question is whether these things have an SDK that is easy to use from Smalltalk. Unfortunately the answer is still ‘no’. Even in Smalltalks that have a way of calling other languages, the integration is usually painful because the Smalltalk image abstraction doesn’t play nicely with the idea that some state exists outside of the image.

            3. 4

              We had Smalltalk, we had Lisp Machines and we have Unix shells

              One of these is not like the others.

              PowerShell is closer due to being object-based, but it’s still very clunky.

              1. 3

                I don’t think that being object-based is necessary – it just makes things cleaner and more efficient. Following this article, you do have a dialogue with the computer (even if it is rather simple), users can and do modify their environment (shell and PATH), and in the end, it is simple, perhaps too simple.

                1. 1

                  I claim that being object-based is “necessary” in the sense that you’re meaningfully far away from the Smalltalk ideal if your system is built around text. Obviously, there’s a gradient, not a discrete transition, but being object-oriented is one of the major factors.

                  Additionally, Unix (shells) is dis-integrated, both in ideals and in implementation. Another major design decision of Lisp/Smalltalk is integration between components - something the Unix philosophy explicitly spurns.

              2. 2

                I think different tools are just good at different jobs. I don’t write in-the-large network services in Smalltalk just like I don’t write tax filing products in spreadsheets.

                This is not to say that Smalltalk or spreadsheets are less – far from it! If I want to bang out a business projection I don’t reach for Rails or Haskell or Rust, I grab a spreadsheet. I think there are similarly many situations where something more Smalltalk-like is the ideal tool, but your day-job as a programmer in tech is not full of those situations and we haven’t given enough knowledge of what computers are capable of to those who would use computing as a tool for their own ends.

              1. 20

                This is something I try, over and over, to explain to people, and I’ve never, ever succeeded in doing it in print or a talk. I always get a response along the lines of, “oh yeah, I love TDD, that’s how I write [OCaml/Go/C#/whatever],” and that’s effectively the end of the conversation on their end: “neat, this guy likes Smalltalk because it has really good TDD”, is about all they hear, and the rest washes off like tears in rain.

                “Experiencing Smalltalk” is a great title for an article like this because you really need to actually experience it, ideally using it yourself, to get it. Smalltalk the language is…fine. It gets a lot right, it gets a lot wrong, languages like Self and Slate have tried to improve it, but at any rate, it gets the job done with minimal fuss. People who just look at its syntax and semantics are right in 2021 that many other languages deliver the same or better.

                But that misses the point. The thing that differentiates Smalltalk is its entire development flow, which is radically different from literally anything else I’ve ever used: write broken code, run it, repeatedly fix the code as you slowly walk through methods and whole classes that either didn’t work or didn’t even exist when you initiated the execution, and end up with a working spike that had its first successful run the second you’re done writing the last line of code. A very few languages, like Factor and Common Lisp, come very close, but as of 2021, Smalltalk is the only environment I’ve ever used that still delivers it.[1]

                I don’t write Smalltalk anymore, and I don’t see that changing (mostly just because I’m old and have kids and spend what little time I do coding for fun on things like Factor), but the experience of developing in it remains absolutely without peer.

                [1]: I’ve been told that the actual Lisp Machines of the 80s did have this same flow, but I’ve never used one, and I definitely don’t think SBCL in 2021 matches the dev flow of Pharo or Squeak Smalltalk.

                1. 1

                  The thing that differentiates Smalltalk is its entire development flow, which is radically different from literally anything else I’ve ever used: write broken code, run it, repeatedly fix the code as you slowly walk through methods and whole classes that either didn’t work or didn’t even exist when you initiated the execution, and end up with a working spike that had its first successful run the second you’re done writing the last line of code.

                  This describes my experience writing Emacs pretty closely. However, I know that many people who know Emacs intimately still say that Smalltalk is different, so I have to conclude that there’s more to it, and that it’s just very difficult to describe what exactly the difference is in words. I expect it has to do with a more seamlessly integrated debugger that reifies the call stack and things. I suppose there’s only one way to really find out.

                1. 2

                  Are there other examples of SQLite being used as a website backend database in production? What kind of scale could you reach with this approach? And what would be the limiting resource?

                  1. 9

                    Expensify was based exclusively on sqlite for a long time, then they created a whole distributed database thing on top of it.

                    1. 6

                      Clojars used SQLite for a good 10 years or so, only recently moving away to Postgres for ease of redeployment and disaster recovery. The asset serving was just static assets, but the website and deployments ran against SQLite pretty well.

                      1. 3

                        If I remember correctly, the trouble that Clojars ran into had more to do with the quality of the JVM-based bindings to SQLite than they did with SQLite itself, at least during the portion of time that I was involved with the project.

                        1. 2

                            Yeah, looking back at the issues, “pretty well” is maybe a little bit generous. There were definitely settings available later on which would have helped the issues we were facing around locking.

                      2. 4

                          I can’t remember which, but at least one of the well-funded DynamoDB-style distributed database products from the mid-2010s used it as the storage layer.

                        So all the novel stuff that was being done with data was the communication and synchronisation over the network, and then for persistence on individual nodes they used sqlite instead of reinventing the wheel.

                        1. 6

                            That was FoundationDB, purchased by Apple in 2015, then gutted, and then returned as open source in 2018. I’m a bit annoyed, because it was on track to be CockroachDB half a decade earlier, and was taken off the market with very little warning.

                          1. 1

                            Thanks!

                        2. 3

                            You’ll probably get really fast performance for read-only operations. The overhead of a client/server setup and the network stack can be more than 10x that of function calls within the same address space. The only real limitation might be being tied to a single server, since you can’t really scale sqlite efficiently beyond a single system. But by the time you reach that scale, you usually need much more than sqlite anyway.
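
                            A crude, unscientific way to feel that gap (hypothetical database/table names; assumes a local postgres to compare against):

                                time sqlite3 test.db 'SELECT count(*) FROM t;'               # in-process library call
                                time psql -h 127.0.0.1 -d test -c 'SELECT count(*) FROM t;'  # client/server round trip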

                          1. 2

                            The sqlite website claims to run entirely on sqlite.

                            They also have this page, though most of those aren’t websites: https://sqlite.com/mostdeployed.html

                          1. -3

                            It seems to be a common theme of prog-lang-started-by-child-prodigy projects that they adopt features where I simply can’t fathom how they are going to maintain and develop them in the mid-to-long-term.

                            Perhaps I’m the only one who is concerned by the complexity these party-trick features seem to involve?

                            (The other option is that this stuff is really that easy and all the hundreds of full-time C/C++ compiler engineers are just idiots for not doing it.)

                            1. 33

                              There are more details on why and how this works here: zig cc: a Powerful Drop-In Replacement for GCC/Clang

                              The other full time C/C++ compiler engineers are not idiots; they just have different goals since they work for companies trying to turn a profit by exploiting open source software rather than working for a non-profit just trying to make things nice for everyone.

                              1. 6

                                The other full time C/C++ compiler engineers are not idiots; they just have different goals since they work for companies trying to turn a profit by exploiting open source software rather than working for a non-profit just trying to make things nice for everyone.

                                This feels like a big statement, and that’s fine, but would you mind elaborating? Which companies do you mean? What goals do they have that are incompatible with something like zig cc?

                                1. 5

                                  I think the point there was just that e.g. Apple has no particular interest in making using clang to cross-compile Windows binaries easy. They wouldn’t necessarily be against it, but it’s not something that aligns with their business interests whatsoever, so they’re very unlikely to spend any money on it. (Microsoft actually does value cross-compilation very highly, and has been doing some stuff in that area with clang, and so is almost a counterexample. But even there, they focus on cross-compilation in the context of Visual Studio, in which case, improving the CLI UI of clang again does not actually do anything for them.)

                              2. 40

                                Am I the only one who is concerned by the complexity these party-trick features seem to involve?

                                (The other option is that this stuff is really that easy and all the hundreds of full-time C/C++ compiler engineers are just idiots for not doing it.)

                                    This mindset is one of the major reasons modern software sucks so much. The number of tools that could be improved is humongous, and this learned helplessness is why we keep getting +N-layer solutions to problems that would require re-thinking the existing toolchains.

                                I encourage you to read the Handmade Manifesto and to dive deeper into how Zig works. Maybe you’re right, maybe this is a party trick, but the reality is that you don’t know (otherwise you would take issue with specific approaches Zig employs) and you’re just choosing the safe approach of reinforcing your understanding of the status quo.

                                Yes, there are a lot of snake oil sellers out there, but software is not a solved problem and blanket statements like this one are frankly not helping anybody.

                                1. 1

                                  I think you are wrong and the exact opposite is the case:

                                  We can’t have nice things because people don’t learn from their predecessors.

                                  Instead they go out to reinvent flashy new stuff and make grandiose claims until it turns out they ignored the inconvenient last 20% of work that would make their new code reliable and complete – oh, and their stuff takes 200% more resources for no good reason.

                                      So yeah, if people don’t want to get suspected of selling snake oil, then they need to be straightforward and transparent, instead of having these self-congratulatory blog articles.

                                  Build trust by telling me what doesn’t work, and what will never work.

                                  1. 17

                                        Here’s what doesn’t work: https://github.com/ziglang/zig/labels/zig%20cc.

                                2. 7

                                  Clang could provide the same trivial cross compilation if it were a priority. Zig is mostly just using existing clang/llvm features and packaging them up in a way that is easier for the end user.
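
                                      To make the packaging concrete, the end-user experience is roughly this (the target triple is just an example):

                                          # cross-compile a C program for Windows from any host, with no separate toolchain setup
                                          zig cc -target x86_64-windows-gnu -o hello.exe hello.c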

                                  1. 21

                                    “just”

                                    1. 4

                                          Perhaps not obvious, but I meant the “just” to be restricted to “mostly just using existing clang/llvm features”. I’m in no way denigrating Andrew’s efforts or the value of good UX.

                                  2. 5

                                    Another option is that it’s easy if you build it in at the start and much more difficult to add it later. It’s like the python 2 to 3 migration. It wasn’t worth it for some projects, but creating a new python 3 project is easy. Path dependence is a thing.

                                    1. 2

                                      I think the hard part is adding these kinds of features after the fact. But assuming it’s already in place, I feel like this is actually not a very hard thing to maintain?

                                      I think a lot of complexity with existing tools is around “oh we’re going to have this be global/implicit” and that permeating everywhere, so then when you want to parametrize it you have to play a bunch of tricks or rewrite everything in the stack to get it to work.

                                      But if you get it right out of the door, so to speak, or do the legwork with some of the dependencies… then it might just become a parameter passed around at the top level (and the lower levels already had logic to handle this, so they don’t actually change that much).

                                          Case in point: if you have some translation framework relying on a global, your low-level will read that value and do a lookup, and the high level will not handle it. If you parameterize it, now your high-level stuff has to pass around a bunch of translation state, but the low-level (I guess the hard part, so to speak?) will stay basically the same. At least in theory.

                                      I do kinda share your skepticism with the whole “let’s rewrite LLVM” thing… but cross compilation? Having a build system that is “just zig code” instead of some separate config lang? These seem good and almost simpler to maintain. I don’t think C compiler engineers are idiots for not doing X, just like… less incentivised to do that, since CMake isn’t a problem for someone who has spent years doing it.

                                      1. 2

                                        I agree with you. This doesn’t make any sense for Zig to take on. Andrew shared it with me as he was working on it and I thought the same thing then: what? Why does a compiler for one language go to this much trouble to integrate a toolchain for another language? Besides being severely out of scope, the problem space is fraught with pitfalls, for example with managing sysroots and dependencies, maintaining patched forks of libcs, etc. What a huge time sink for a group who should ostensibly have their hands full with, you know, inventing an entire new programming language.

                                            The idea of making cross-compilation easier in C and C++ is quite meritorious. See Plan 9 for how this was done well back in the aughts. The idea that it should live in the zig build tool, however, is preposterous, and speaks rather ill of the language and its maintainers’ priorities. To invoke big corporate compiler engineers killing open source as the motivation is… what the fuck?

                                        Sorry Andrew. We don’t always see eye to eye, but this one is particularly egregious.

                                        1. 7

                                              No, this makes a lot of sense. Going back to the article, Go’s toolchain (like Plan 9’s) is good at cross-compilation, but “I recommend, if you need cgo, to compile natively”. This sort-of works for Go because cgo use is low. But Zig wants to encourage C interoperability. Then, Zig’s toolchain being good at cross-compilation is useless without solving C’s cross-compilation, because most of Zig will fail to cross-compile because of a C dependency somewhere. By the way, most of Rust fails to cross-compile because of a C dependency somewhere. This is a real problem.

                                              Once you have solved the problem, it is just good etiquette to expose it as a CLI, aka zig cc, so that others can use it. The article gives an example of Go using it, and mentions Rust using it in passing.
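
                                              The Go pattern from the article looks roughly like this (flags and target are illustrative):

                                                  # let cgo use zig as its C cross-compiler
                                                  CC="zig cc -target aarch64-linux-musl" \
                                                    CGO_ENABLED=1 GOOS=linux GOARCH=arm64 go build ./...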

                                          I mean, yes, zig cc should be a separate project collaboratively maintained by Go, Rust, and Zig developers. Humanity is bad at coordination. Big companies are part of that problem. Do you disagree?

                                          1. 2

                                            The best way, in my opinion, to achieve good C interop is by leveraging the tools of the C ecosystem correctly. Use the system linker, identify dependencies with pkg-config, link to system libraries, and so on. Be prepared to use sysroots for cross-compiling, and unafraid to meet the system where it’s at to do so. Pulling the concerns of the system into zig - libc, the C toolchain, statically building and linking to dependencies - is pulling a lot of scope into zig which really has no right to be there. Is the state of the art for cross-compiling C programs any good? Well, no, not really. But that doesn’t mean that those problems can jump domains into Zig’s scope.
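
                                                Concretely, that flow is the familiar one-liner (sqlite3 here is just an example dependency):

                                                    # ask pkg-config, hand the answers to the system toolchain
                                                    cc $(pkg-config --cflags sqlite3) -o app app.c $(pkg-config --libs sqlite3)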

                                                I am a believer that your dependency’s problems are your problems. But that definitely doesn’t mean that the solution should be implemented in your domain. If you don’t like the C ecosystem’s approach to cross compiling, and you want to interoperate with the C ecosystem, the correct solution involves going to the C ecosystem and improving it there, not pulling the responsibilities of the C ecosystem into your domain.

                                            Yes, other languages - Go, Rust, etc - should also be interested in this effort, and should work together. And yes, humanity is bad at cooperation, and yes, companies are part of that problem - but applying it here doesn’t make sense. It’s as if I were talking about poaching as contributing to mass extinction, and climate change for also contributing to mass extinction, and large corporations for contributing to climate change, and then conclude that large corporations are responsible for poaching.

                                            1. 3

                                                  There’s another path to achieving C interop, which is using whatever feels more convenient while staying true to the relevant ABI boundaries. In terms of Zig, this is achieved in a few ways: It uses its own linker (currently LLD), which is useful when you don’t have a local system linker (a pure linux/windows install) and still works with existing C code out there. It uses paths for dependencies, leaving it up to the user to specify how they’re found (e.g. pkg-config). It links to system libraries only if told to explicitly, but still works without them - this is also useful when building statically linked binaries which still work with existing C code.

                                                  For cross-compiling, sysroot is a GCC concept. This doesn’t apply to other environments like clang (the C compiler Zig uses), or the defaults of Mac/Windows. Zig instead uses LLVM to emit machine code for any supported target (something that requires building multiple separate compilers with GCC), bundles the build environment needed (lib files on windows, static libc on linux if specified, nothing if dynamically linking), and finally links them together into the appropriate output using LLD’s cross-linking ability.

                                              Having this all work seamlessly from whatever supported system is what makes it appealing. For example, andrew (creator of Zig) has showcased in the past cross-compiling the compiler on an x86 machine to aarch64, then using qemu to cross-compile the compiler again from the aarch64 vm back to x86, and it works. This applies also to other operating systems, which is a feature that isn’t present in current cross compiling tools, even clang.
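
                                                  A simplified version of that round trip, with a toy program instead of the compiler itself (illustrative; qemu-aarch64 is qemu’s user-mode emulation):

                                                      # build a static aarch64 binary on an x86_64 host...
                                                      zig cc -target aarch64-linux-musl -o hello-arm hello.c
                                                      # ...and run it without any ARM hardware
                                                      qemu-aarch64 ./hello-arm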

                                                  For the issue of problem domains, this is not something you could address by trying to fix existing C tools. Those already have a defined structure and, as andrew noted above, given they have different goals, they are unlikely to change it. This could be why Zig takes it upon itself to solve these problems locally, pulling in responsibility only for what it wishes to provide, not the entire C ecosystem. I believe it’s partially for similar reasons that Go has its own build system but also claims to compile to different environments.

                                                  I also agree that different ecosystems could pitch in for what seems to be a universally helpful tool, but as things stand today, maybe they have different design goals, where another path, such as using the existing C ecosystem (for various situational reasons), makes more sense than the idealistic one Zig has chosen to shoulder.

                                              1. 1

                                                It links to system libraries only if told explicitly but still works without them - this is also useful when building statically linked binaries which still work with existing C code.

                                                System libraries can also be static libraries, and there’s lots of reasons to link to them instead. We do build statically linked programs without the Zig tools, you know!

                                                For cross-compiling, sysroot is a GCC concept. This doesn’t apply to other environments like clang

                                                Clang definitely uses sysroots. Where does it find the static libs you were referring to? Or their headers? The answer is in a sysroot. Zig may manage the sysroot, but it’s a sysroot all the same.

                                                    There’s more to take apart here, but on the whole this is a pretty bad take which seems to come from a lack of understanding about how Linux distributions (and other Unices, save for macOS perhaps) work. That ignorance also, I think, drove the design of this tool in the first place, and imbued it with frustrating limitations which are nigh-unsolvable as a consequence of its design.

                                                1. 3

                                                      The point about explicitly provided system libraries is not about dynamic vs. static linking, it’s about linking them at all. Even if you have the option to statically link libc, you may not want to, given you can sometimes do its job better for your use case on platforms that don’t require it (e.g. linux). The closest alternative in C land seems to be -ffreestanding (correct me if I’m wrong)? This is also an option in zig, but it also gives the option to compile for platforms without having to link to any of the normal platform libraries.

                                                      Clang has the option to use sysroots, but it doesn’t seem to be required. In zig’s case, it uses whatever static libs you need by having you explicitly link to them, rather than assuming they exist in a given folder structure. Zig does at least provide some methods of finding where they are on the system if you don’t know where they reside, given the different configurations out there. I’d say this differs from a sysroot as it’s more modular than a “system library directory”.

                                                      Without a proper explanation, the ideas that this approach “stems from lack of understanding” or has “frustrating limitations which are nigh-unsolvable” don’t make much sense. As we’re both guilty of prejudice here, I’d characterize your response as willful ignorance of unfamiliar systems, and gatekeeping.

                                                  1. 1

                                                    Clang has the option to use sysroots, but it doesn’t seem to be required.

                                                    Your link does not support your statement. I don’t think you understand how cross-compiling or sysroots actually work.

                                                    Again, it’s the same with the rest of your comments. There are basic errors throughout. You have a deep ignorance or misunderstanding of how the C toolchain, linking, and Unix distributions work in practice.

                                                    1. 4

                                                      Given you haven’t actually rebutted any of my claims yet, nor looked into how clang supports using sysroots, we probably won’t be getting anywhere with this. Hope you’re able to do more than troll in the future.

                                                      1. 1

                                                        Clang totally uses sysroot, see here. (Long time ago, I wrote the beginning of Clang’s driver code.) I don’t know where to begin, but in fact, ddevault is correct about all technical points and you really are demonstrating your ignorance. Please think about it.

                                                        1. 3

                                                              Please re-read my post above, which literally says “clang supports using sysroots”, a claim that agrees with yours. My original point a few messages back was that clang doesn’t need a sysroot in order to cross-compile, which still stands to be disproved, as a sysroot is just shorthand for a bunch of include and library paths.

                                                              Once again, just like ddevault, you enjoy making claims about others without specifying why, in an attempt to prove some point or boost your ego. Either way, if this is your mindset, there’s no point in further discussion with you as well.

                                        2. 1

                                          What features in this case?

                                        1. 11

                                          The bit about copyright violations is particularly bad, too.

                                            1. 17

                                              That seems less an “other side” and more “so what?”, especially in his response to jwz’s response, but it’s indeed interesting to have more context.

                                              1. 6

                                                    Interesting, but I’m inclined to be on jwz’s side.

                                                1. 6

                                                  “Your security arguments turned out to be incorrect. So, stop?” Did they though? Did they REALLY?

                                              1. 7

                                                I really want to try kakoune, but the idea of starting over with a new editor and editing paradigm just seems like so much effort and time before I’m productive.

                                                1. 3

                                                            I feel the same way. I use NeoVim and have tried to keep its config as stock as possible, but I think I’ve already tweaked it enough to be different. So learning Kakoune would mean giving up both vim-everywhere muscle memory and my customizations.

                                                  1. 3

                                                    If you like the vi/vim experience but want some similar features to Kakoune then vis might be worth a shot. (Also see differences from Kakoune).
                                                              I use it as my main editor, and structural regular expressions, multi-cursor editing, etc. are all quite intuitive while not leaving the traditional vi-like modal editing world, IMO.

                                                    Plugins are also written in Lua, if that’s your thing.

                                                  2. 2

                                                    YMMV of course, but it only took ~2 weeks after switching from vim for me to become reasonably productive in kakoune.

                                                    1. 3

                                                      What was the biggest hurdle for you when acclimating to Kakoune?

                                                      1. 3

                                                        Not OP, but as someone else who went from Vim to Kakoune, I think the biggest shift for me was thinking in terms of repeatedly narrowing the selection and then doing one single command on the selection, rather than doing a command sequence and e.g. assigning to a macro or the like. The better I got at selection narrowing, the easier and more natural everything felt. Learning slightly different keystrokes was comparatively very easy.
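
                                                                  A tiny sketch of that narrowing flow (keystrokes from memory, so treat as approximate):

                                                                      %                # select the whole buffer
                                                                      s TODO<ret>      # narrow to every match of "TODO" (one cursor per match)
                                                                      c FIXME<esc>     # a single change command edits all selections at once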

                                                        1. 3

                                                                    On day 1 it was 100% unlearning vim muscle memory. After that my biggest challenge was adapting to kakoune’s selection-first editing model, which is what inspired me to switch in the first place. It was very worth it though; the incremental nature of the editing, in which intermediate results are instantly visible, makes complex tasks much more intuitive.

                                                        2. 1

                                                          I’m coming from emacs, which is probably going to be worse, but even two weeks sounds like an enormous amount of time to not be able to code. I can’t justify taking more than a day to switch at work, so I’d have to use both, too.

                                                          1. 2

                                                                      It’s not that I wasn’t able to code at all but that I was significantly slower than I was with vim. I quickly gained speed over the first week though, and after ~2 weeks I didn’t feel like my inexperience with the editor was holding me back for basic editing tasks. More advanced editing tasks weren’t intolerably slow either, they just took a bit more thought than they do now.

                                                      1. 14

                                                        Tired of Terminal UIs based on ncurses or blessed? With Fig, you can use modern web technologies like HTML, CSS and JavaScript instead.

                                                        I suppose I’m not in the target audience as I really don’t see using web technologies as a feature over TUIs.

                                                        1. 5

                                                          Eh, the idea seems brilliant to me, honestly; there are a few tools I just don’t use often enough to fully remember their CLIs, so having an ad hoc, simple GUI for those would be a huge boon, letting me stick with the CLI tool, but not (necessarily) have to read the man pages each time. Having that outside the terminal so I can see the command line being built also makes sense. But I’m with you that full-blown HTML for the UI seems a bit heavy to me.

                                                          1. 1

                                                            When I saw it, I thought of a co-worker who’s wondered about how to span the gulf between scripts/utilities we can readily write and run, and utilities that non-programmers doing video production can handle.

                                                        1. 44

                                                          Most IDEs create entire worlds where developers can create, but creating requires configuration. It takes time to adjust that world, to play god, to create shortcuts and hotkeys, to get used to different command structures and UI.

                                                          The authors seem to be describing Emacs.

                                                          With code completion, Git control, and even automatic deployment systems, modern IDEs are a Swiss Army Knife of features

                                                          Do people just not know what can be done in Emacs? Git control???

                                                          1. 6

                                                            Emacs is amazing, but has a horrible issue with discoverability. “Distros” like Doom or Spacemacs have tried to address that, but, regardless of my personal feelings on them, they haven’t really broken into the mainstream enough to be noticed. (And as someone who used Emacs for years, I think the discoverability bit is very real.)

                                                            1. 4

                                                              Spacemacs

                                                                                  I have used Spacemacs for a while, but as someone who frequently opens and closes the editor completely, it just boots way too slowly; even IntelliJ is faster.

                                                              1. 3

                                                                emacsclient --alternate-editor=''

                                                                                    Really, Emacs in daemon mode is faster than anything else. Even vim is slow to start in comparison. I don’t understand why Emacs is the only editor which does this. I guess you need to be reeeealy slow for someone to actually implement this feature :-)

                                                                1. 5

                                                                                      I’ve tried emacs daemon mode, but I never found it very pleasing. There were a number of papercuts I was running into… I need to figure out again what those were.

                                                                  1. 2

                                                                                        Yeah, I actually have the same experience; getting a useful daemon-mode setup took me several tries. Like, you need to pass an empty string as an argument to --alternate-editor to have it auto-spawn the daemon if it isn’t running yet. That’s right next to sending SIGUSR1 to dd in my chart of horrible CLI interfaces :)
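
                                                                                        For reference, a minimal working setup looks something like this (shell config; the alias name is arbitrary):

                                                                                            # connect instantly; the empty --alternate-editor spawns the daemon on first use
                                                                                            alias e="emacsclient -t --alternate-editor=''"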

                                                                2. 1

                                                                  People who aren’t used to having a fully-featured editor boot, over SSH, in the blink of an eye don’t really know what they’re missing. One less interruption in the flow of thought.

                                                                                      Never really got used to emacs, but boot time is one reason I switched to vim.

                                                                3. 2

                                                                  I think the discoverability bit is very real.

                                                                  Yes, this is the word I should have used. Good GUIs can really help with discoverability.

                                                                  1. 1

                                                                                      What do you mean by discoverability? Shouldn’t the help system address that? C-h m to see what commands are bound, C-h f to see what they do, M-x customize-group to see what you can change.

                                                                    1. 3

                                                                                        C-h and friends are great, but you kind of have to know what the options are to get the help you want. For example, you can do a lot of Git manipulation in dired, but you need to know that exists, or at least to think that maybe dired might have some stuff in that area, to even know what you should be asking for. For experienced users, sure, I think this is sufficiently discoverable, but for new users (and I mean that fairly broadly), it’s trickier.

                                                                    2. 1

                                                                      Emacs is amazing, but has a horrible issue with discoverability.

                                                                      I do want to specifically point out that the parent comment emphasized the mention of “Git control”, when one of the aspects of the Emacs ecosystem a bystander is most likely to have heard praise for is magit (another being org-mode). So it’s not so much a matter of whether it’s possible to discover magit’s existence in a vacuum – it’s that having access to one of the best interfaces to Git, period, is something often evangelized to non-Emacs-users as a point in Emacs’ favor.

                                                                  1. 9

                                                                                          Is it really that complex to write a Makefile for such a simple project? And in many projects it can become even simpler, as there are implicit rules in GNU Make for C/C++/TeX/etc.

                                                                    OUTPUTS = variables.pdf
                                                                    
                                                                    .SUFFIXES: .svg .pdf
                                                                                              .PHONY: all clean
                                                                    
                                                                    all: ${OUTPUTS}
                                                                    
                                                                    clean:
                                                                    	rm -rf *.pdf
                                                                    
                                                                    .svg.pdf:
                                                                    	inkscape "$<" --export-text-to-path --export-pdf="$@"
                                                                    
                                                                    1. 12

                                                                      The makefile is simple, but knowing which incantations to use to make it simple is harder.

                                                                      1. 3

                                                                        Yeah, I was a bit surprised at Julia writing off make as arcane when she’s covered things like ptrace and kernel hacking elsewhere. Poorly documented is a better descriptor for make, and other people have done a decent job of fixing that.

                                                                        Maybe she will see this and write another article on how make isn’t too arcane. :)

                                                                        1. 3

                                                                                                I’m (obviously) always interested in learning that things I thought were arcane and complicated are not actually that complicated! I’ve never read anything about make that I thought was approachable, but I haven’t tried too hard :)

                                                                                                I think the thing that throws me off about Make is that by default it seems to decide how to build something based on its file extension, and I find this pretty hard to reason about – what if I have 2 different ways to build PDF files? (which I do).

                                                                          1. 2

                                                                                                  I’m no Makefile expert, but the things with the file extensions are implicit rules and are basically there to save you from having to copy/paste the same basic rule template for each file. You can always override them with your own explicit recipe, since those take precedence:

                                                                            special.pdf: special.svg
                                                                            	do-this-command     # indent via TAB!
                                                                            	then-that-command   # ...
                                                                            

                                                                            For simple one-off uses, I usually ignore all the details of the implicit rules and just spell everything out explicitly like that. I usually don’t even bother with automatic variables like $^ and $@ since I can never remember which is which. I’ll just copy/paste and search and replace all the recipes, or write a quick Python script to generate them. It’s hard to get much simpler than that. Just remember that the first target in the Makefile is the default if you type make with no targets.

                                                                            1. 2

                                                                                                    I found the GNU Make manual to be very good at describing how make works.

                                                                              The two different ways you make PDF files—do both methods use the same input files? If not, implicit rules can work just as well if the inputs are different (at work, I have to convert *.lua files to *.o files, in addition to *.c files to *.o).
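
                                                                                                    For the two-recipes case, pattern rules cover it as long as the prerequisites differ; make picks whichever rule’s prerequisite actually exists. A sketch (the lua recipe is a hypothetical stand-in for whatever the real conversion step is):

                                                                                                        %.o: %.c
                                                                                                        	$(CC) $(CFLAGS) -c -o $@ $<    # indent via TAB, as above
                                                                                                        
                                                                                                        %.o: %.lua
                                                                                                        	./lua-to-object $< $@          # hypothetical converter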

                                                                            2. 3

                                                                              FWIW I wrote 3 substantial makefiles from scratch, and “gave up” on Make. I use those Makefiles almost daily, but they all have some minor deficiencies.

                                                                              I want to move to a “real language” that generates Ninja. (either Python or the Oil language itself :) )

                                                                              Some comments here but there are tons of other places where I’ve ranted about Make:

                                                                              https://www.oilshell.org/blog/2017/10/25.html#make-automatic-prequisites-and-language-design

                                                                              I think my summary is that the whole point of a build system is to get you correct and fast incremental and parallel builds. But Make helps you with neither thing. Your makefile will have bugs.

                                                                              I never condensed my criticisms in to a blog post, but I think that’s the one-liner.

                                                                              1. 1

                                                                                                  I hear you, and I know that make is the standard, so make-type things are a bit more legitimate than the article I’m referencing.

                                                                                                  That said, the amount of inexplicable boilerplate in something like this (from up in the thread) is why I’m super-empathetic to people who find Makefiles too complicated. And that’s ignoring things like the tab-based syntax or the fact that Makefiles effectively yoink in all of sh (or, honestly, these days, bash) as a casual comprehension dependency.

                                                                                Makefiles can be very simple; I absolutely grant you that. They generally aren’t. And even when they are, they require a certain amount of boilerplate that e.g. ninja does not have.

                                                                              2. 2

                                                                                That’s about how I’d write it for a good hand-written yet scalable Makefile, but it’s worth noting that you can go even simpler still here:

                                                                                pdfs/variables.pdf: variables.svg
                                                                                	inkscape variables.svg --export-text-to-path --export-pdf=pdfs/variables.pdf
                                                                                

                                                                                Easy enough that you can do it from memory. This is also the style that I go for when writing a quick script to generate a Makefile.

                                                                              1. 3

                                                                                                    PowerPC and POWER have the same instruction, lmw (load multiple word), but it interestingly only does the range variant—likely due to having twice the number of registers, but the same word size, meaning enumeration wasn’t viable.
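
                                                                                                    For the curious, the range form reads like this (illustrative):

                                                                                                        lmw  r29, 8(r1)    # load r29, r30, r31 from 8(r1), 12(r1), 16(r1)
                                                                                                        stmw r29, 8(r1)    # the matching store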

                                                                                1. 2

                                                                                  It’s also usually slower than just a whole bunch of lwzs specified manually. Ditto for stmw vs stw: I only use the multiple word variant when space is at a premium, which is to say almost never. Even in the 601 days the speed difference was warned about.

                                                                                1. 8

                                                                                  Although I have known about Kakoune for a while, I only recently found out that it’s licensed under Unlicense, which I find unsettling for legal reasons.

                                                                                  Otherwise, I haven’t really used it much. How does it compare to vis? I have grown very fond of it for terminal editing, to the degree that I usually uninstall vim on all my machines to replace it with vis.

                                                                                  1. 5

                                                                                                        I stopped off at vis (and sam) along the way from Vim to Kakoune. vis was fairly nice, but ultimately I found it really, really wanted you to move around and make selections with the structural-regular-expressions language, and I never quite got the hang of it (quick, what’s the difference between +- and -+?).

                                                                                    In contrast, Kakoune supports basically the same operations as SREs, but encourages you to move around and make selections with immediate, visual feedback — and it’s still easily scriptable, thanks to the “interactive keys are the scripting language” model the article describes.

                                                                                    It’s a bit of a shame that Kakoune’s licensing civil disobedience excludes people who just want a nice text editor, but even if you can’t use Kakoune’s code I hope you (or other people) will steal its ideas and go on to make new, cool things.

                                                                                    1. 6

                                                                                      It’s a bit of a shame that Kakoune’s licensing civil disobedience excludes people who just want a nice text editor,

                                                                                      Huh? I just looked at the UNLICENSE file; unless I’m missing something, it just drops Kakoune into the public domain. SQLite has the same thing going on.

                                                                                      1. 3

                                                                                                            The issue is allegedly that it’s not possible to do that under every legal system. Germany seems to be an example where that could cause issues. CC0 handles this better by adding a “fallback” clause in case it’s not possible.

                                                                                        1. 4

                                                                                                              Legal systems are not laws of nature. If no one would ever take you to court or fine you for violating a law, that law does not apply to you. Unlicense, WTFPL, etc. are great examples of this - extremely strong signals from the author that they will not take any action against you no matter what you do with the content under that license.

                                                                                          1. 1

                                                                                            The Unlicense, WTFPL, and even CC0 are banned by Google due to opinions from their legal team. While I don’t trust Google on a lot of things, I think it’s safe to assume their legal team thought about this and had its reasons.

                                                                                            1. 4

                                                                                              But Google’s risk appetite should be pretty different from yours. The legal system hits everybody differently.

                                                                                              1. 1

                                                                                                What do you mean by this? Google’s legal team is going to be playing liability games in a paranoid way that is obviously irrelevant for anyone not engaged in corporate LARP.

                                                                                                Like, actually: no appeals to authority, no vague paranoia. What would actually go wrong if you used the WTFPL or CC0 in Germany for a personal project?

                                                                                                1. 1

                                                                                                  CC0 is fine in Germany, UNLICENSE is the problem.

                                                                                                  But otherwise, you’re right. In most cases, nobody cares what license is being used (beyond ideological reasons). A small hobby project might just as well have a self-contradictory license, and it wouldn’t be a practical problem. But depending on the scope of what’s being done, there are always legal vultures, just like patent trolls or the people who blackmail torrent users, who might find a way to make money from legal imperfections.

                                                                                                  I’m not a legal expert, so I try not to bet on these kinds of things. If CC0 and the UNLICENSE are functionally equivalent and signal the same message (“do what you want”), but one is less risky than the other, I’ll take the safer option.

                                                                                        2. 2

                                                                                          What does SRE stand for, in this case?

                                                                                          1. 5

                                                                                            “Structural regular expressions”, I’d wager?

                                                                                            1. 5

                                                                                              structural-regular-expressions

                                                                                        1. 3

                                                                                          I like that this post uses just awk to extract the address for the DISPLAY variable. I’ve gotten frustrated with tutorials that use some combo of ‘cat’, ‘grep’, and then ‘awk’.

                                                                                          1. 3

                                                                                            Heh, that’s fair. I saw that in one single post aeons ago; my suspicion is that everyone since has just been copying it without thinking about it.

                                                                                            For fellow crustaceans: the line that’s usually suggested is

                                                                                            export DISPLAY=$(cat /etc/resolv.conf | grep nameserver | awk '{print $2; exit;}'):0.0
                                                                                            

                                                                                            But awk takes the file to read as an argument, the grep is just awk line filtering, and even the external :0.0 can be folded into the awk print statement, so you can also just do

                                                                                            export DISPLAY=$(awk '/nameserver/ {print $2 ":0.0"; exit;}' /etc/resolv.conf)
                                                                                            
                                                                                            1. 2

                                                                                              Yea, you caught me, I just copied it from someone else, but I also appreciate the simplicity of just using awk :)

                                                                                          1. 12

                                                                                            I’m an extremely heavy user of WSL2 (and previously WSL), and I have years of daily-driving Linux and Windows on native desktop installations, so please take this comment as coming from that vantage point:

                                                                                            I think that running a full-blown desktop environment in WSL2 misses the point. The main benefit of WSL2 over a virtual machine or a native installation of Linux, IMVHO, comes when you prefer or mostly need to work in Windows but have some Linux use-cases you’d like to integrate into your flow. If you don’t want the integration, you can get a more genuine experience just running a full-screen VM. If you are mostly in Linux, you’ll get better performance from a native installation. WSL2 really shines when you want to run a couple of Linux apps but mostly live in Windows.

                                                                                            To me, running a full desktop environment via WSL2 gets into a weird area where you’re paying WSL2 costs, but reaping very few benefits. Windows, ideally, already is the desktop manager; if you can use an X server that integrates with it (I prefer to use X410 for its turnkey HiDPI support and transparent clipboard integration, but I’ve also had good experiences with others in the past), you can have a truly Linux-and-Windows-living-together experience. If you instead run a full-blown desktop environment, it feels…maybe not quite as bad as in a full-on VM, but pretty close to, say, early Mac OS X running Classic: the splits between the two environments are painfully obvious, and I end up wanting to have an editor for Linux and an editor for Windows, and an email client for Linux and for Windows, etc., which puts me back into “why didn’t I just use a full-blown VM” land.

                                                                                            This is very much in the eye of the beholder, and if you like and want to run a Linux desktop via WSL2, then do it in good health. But if you’re just getting into WSL2 (and you should!), I’d encourage you not to start there. Try using WSL2 just for the genuine Linux stuff first, and turn to the DE only if you’re unhappy with that experience and can’t install Linux natively.

                                                                                            1. 3

                                                                                              The two big things from WSL (maybe also in WSL2?) that I find useful are wslpath, which converts paths between the two worlds so I can hand a file from my Linux home directory to a Windows program, and the ability to run .exes directly from bash. There are also some fun things in the WSL image activator that let you chain together shell scripts in the shebang line. I have a shell script saved as wps that looks like this:

                                                                                              #!/bin/sh
                                                                                              # Convert the script's WSL path to an absolute (-a) Windows (-w) path.
                                                                                              FILE=$(wslpath -a -w "$1")
                                                                                              shift
                                                                                              # Hand the remaining arguments through to Windows PowerShell.
                                                                                              exec powershell.exe -File "$FILE" "$@"
                                                                                              

                                                                                              I can then put #!/path/to/wps at the start of .ps1 files that I want to run with the Windows version of PowerShell and run them directly from WSL’s bash. This is very useful for things like starting and stopping Azure VMs. I’ve recently switched to using the Windows Terminal from Konsole in WSL + X11, but I still use vcXsrv so I can run things like gitg (locally or via X forwarding).

                                                                                              1. 1

                                                                                                I like the suggested use-case from @david_chisnail, but for the most part I agree with you, @gecko. My personal laptop was a Windows machine with WSL, then dual-booted Linux/Windows, and now it’s just Linux. It was an experiment to see if I could have a mostly-Linux work laptop even though I needed Windows for the Rhino3D CAD software.

                                                                                                I wanted to see if I could, for all intents and purposes, have a visually Linux laptop running on Windows. This tutorial covers the steps that I took, and I really enjoyed the results. One thing that’s not included in the tutorial (but that I found extremely useful) was calling Windows .exe files from .desktop files within WSL. You could then use rofi or dmenu to launch Windows applications from within a WSL2 desktop GUI (though obviously they still ran outside of VcXsrv).
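
                                                                                                For the curious, such a .desktop file can be tiny. A minimal sketch (notepad.exe is just a stand-in for whatever Windows application you want; it relies on WSL mounting C: at /mnt/c and being able to execute Windows binaries directly):

                                                                                                [Desktop Entry]
                                                                                                Type=Application
                                                                                                Name=Notepad (Windows)
                                                                                                Exec=/mnt/c/Windows/System32/notepad.exe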

                                                                                                This whole setup is really unsustainable, mainly because you don’t have a persistent session with VcXsrv: anything that isn’t recoverable in a tmux session gets trashed if your computer goes to sleep (disconnecting it from VcXsrv). It’s a total monstrosity, but it’s nice to have gnome-terminal and tiling, I think.

                                                                                                Excited for Microsoft to integrate a Wayland compositor and actually do this stuff by default!

                                                                                              1. 12

                                                                                                  topic drift… but has anyone tried https://pijul.org/, a new VCS that claims to solve the exponential merge-time problem?

                                                                                                1. 7

                                                                                                  Pijul is being rewritten: https://discourse.pijul.org/t/is-this-project-still-active-yes-it-is/451

                                                                                                    I only know because I was trying to send a patch to a project hosted on nest.pijul.com, but I think the Nest is also in a bit of a broken state at the moment. At least, I couldn’t make it work.

                                                                                                  Maybe Pijul works fine locally. It’s been in the rewrite state for quite a bit, though, so the current code is not really maintained.

                                                                                                  1. 4

                                                                                                    Yeah, I want to like pijul, but I was put off by the fact that they refuse to open-source Nest. That means I can’t self-host, and it was a warning sign that the development style in general is not as open as I’d like. That they’ve been in closed rewrite for over a year is another yellow flag.

                                                                                                    I’ll check back in with it in a couple years.

                                                                                                    1. 4

                                                                                                      I couldn’t care less about Nest (although I completely get those who do!), but the fact that they’re on their third rewrite, none of which have existed long enough to actually get stable or usable, is a much bigger issue IMVHO.

                                                                                                      1. 4

                                                                                                        I’d be fine if they rewrote it ten times, as long as the process was open! Or at least transparent.

                                                                                                        1. 2

                                                                                                          Haven’t they been rewriting it for over 2 years now?

                                                                                                        2. 1

                                                                                                          Third already? Where did you get that from?

                                                                                                          1. 1

                                                                                                            They had two older implementations in OCaml and Scala, and are currently rewriting in Rust. Am I missing something?

                                                                                                            1. 2

                                                                                                              Disclaimer: I’m one of the authors.

                                                                                                              We did indeed try different languages before settling on Rust, and we had preliminary versions in OCaml and Scala. But we’ve had a public version, written in Rust, for years now.

                                                                                                              This comment makes me think that there are actually three different classes of opinions:

                                                                                                              • those who think the development is “too opaque”, even though they never contributed anything when it was more open, and don’t know anything about the reasons why the new version is not public yet.
                                                                                                              • those who think there have been too many released versions (or “rewrites”), meaning that the development is actually too open for them.
                                                                                                              • I can also guess that some don’t really mind not having a half-broken prototype version to complain about, as long as they get a working, open source version in the end.
                                                                                                        3. 1

                                                                                                          Something I’ve read before about Pijul is that you can’t self-host without access to the Nest’s source code. This is actually false: you can totally set up another machine with Pijul installed and push/pull your patches between the two machines. No need for a web interface for that; SSH is enough.
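
                                                                                                          As a minimal sketch, with a made-up host and path, and with the caveat that exact commands and remote syntax have shifted between Pijul versions:

                                                                                                          # on the server: create an empty repository to push into
                                                                                                          ssh me@server.example 'pijul init repos/myproject'

                                                                                                          # locally: record a change, then push it over plain SSH
                                                                                                          pijul record -m "some change"
                                                                                                          pijul push me@server.example:repos/myproject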

                                                                                                          1. 1

                                                                                                            Sure, and fair enough. I like having a web interface for browsing my shelved projects and examining history (and, rarely, showing them to other people), though, and my concerns about openness remain.

                                                                                                            1. 3

                                                                                                              As the author of the Nest, I have written a number of times that I believe the Nest should and will be open in the long term, but Pijul is already a large open source project to manage, including its many dependencies (the Rust crates Sanakirja and Thrussh, for example). At the moment, opening the Nest would simply not be feasible in terms of work time and extra maintenance. In the future, when things settle down a little on the Pijul front, this situation is likely to change.

                                                                                                    1. -1

                                                                                                      I love the look of Haskell, but damned if I can ever use it for more than a week.

                                                                                                      1. 3

                                                                                                        I’ve generally found that practically minded Haskell-inspired languages and/or frameworks hit a really good balance for my sanity. On the .NET side, for example, F# can be excellent (and its trivial escape valve to C# is perfect when you need a more typical way to reason about a problem). And even in pure C#, there’s always language-ext to give you ad hoc Haskell-like purism. On the JVM side, it’s a bit trickier: Kotlin would fill in nicely for C#, but I don’t know a good F# equivalent, and the Kotlin variant of language-ext is Arrow, which I absolutely loathe. But I’m sure there’s something equivalent you could do there, too. And I could point to similar libs for things like Ruby (the dry libraries), JavaScript (Immutable.js and the like), and so on.

                                                                                                        I’m otherwise with you, though: I’ve tried to use Haskell in anger many, many times, and the closest I ever get is being angry I ever tried to use Haskell.

                                                                                                        1. 2

                                                                                                          Care to elaborate on why you loathe Arrow?

                                                                                                          1. 2

                                                                                                            Docs are poor, upgrades have been rough, the performance impact has been disproportionately high compared to something like language-ext, and the way they use Kotlin destructuring for bind borders on syntax abuse…it basically feels like someone tried to staple-gun Haskell onto Kotlin and ended up with something that’s alien to both. (The recent push towards using the IO monad everywhere also makes it comparatively harder to use Arrow for only parts of our app, IMHO.)

                                                                                                            1. 1

                                                                                                              Thanks for elaborating! I haven’t used Kotlin, but based on what I know about the language it does seem a bit odd to pair it with Haskell.

                                                                                                          2. 2

                                                                                                            On the JVM side, it’s a bit trickier:

                                                                                                            A baby step is to use Atlassian’s Fugue library.

                                                                                                            A “head-first dive” is to start using the Frege programming language.

                                                                                                            1. 1

                                                                                                              JVM has Scala

                                                                                                          1. 6

                                                                                                            Modern font rendering is so complicated I’m nostalgic for bitmap fonts.

                                                                                                            Intuitively there’s something very wrong with font formats being designed around running the fonts in a vm.

                                                                                                            1. 14

                                                                                                              Intuitively there’s something very wrong with font formats being designed around running the fonts in a vm.

                                                                                                              One thing I have to remind people of from time to time is that computers are used in more languages than just English. Languages that use the Arabic alphabet (Arabic, Farsi, Urdu, etc.), for example, are extremely hard—arguably impossible—to properly do armed with only bitmaps. Letters have different sizes and shapes depending on their position in the word and what comes before and after, and justification is done not by adjusting the spaces between words, but by adjusting individual letter widths. Yeah, you can do this as a bitmap with no VM, but then you need a huge number of glyphs, and each program has to natively reimplement Arabic rendering logic. Using a VM for such a script is a huge boon, saving tons of disk space, code points, and duplicated logic. And while Arabic script is extreme, scripts like Hudum, Chinese, and Devanagari have similar issues.

                                                                                                              1. 5

                                                                                                                One thing I find fascinating but somewhat under-studied is that Latin script has been typeset in print for something like 600 years, and typewriters have existed for over 100. To some extent, then, what Latin-alphabet text looks like has been influenced by the needs of machines for a long time. Contrast that with what pre-print formalized handwriting styles sometimes look like, and ponder trying to come up with bitmap fonts that look like that.

                                                                                                                Text is fundamentally shaped by the tools used to write it. Try writing with a fountain pen instead of a ballpoint and, after all the scuffs and smudges as you re-learn the process, your handwriting will come out different. Try writing with a Chinese/Japanese brush or a cuneiform stylus-in-mud and you’ll get different letter shapes again. Scripts throughout European history are often based on the Roman Latin script… or rather, the formal Latin script that people of the time had available, which was generally carved into stone.

                                                                                                                1. 5

                                                                                                                  You might be interested in this very deep dive into fonts developed in the Papal states in the 15th century:

                                                                                                                  https://ilovetypography.com/2016/04/18/the-first-roman-fonts/

                                                                                                                  1. 2

                                                                                                                    Whew that is WAY too deep for me, but it’s still absolutely fascinating. Thank you!

                                                                                                                2. 1

                                                                                                                  On a related but maybe less important note: what bitmap fonts (if any) support emoji?

                                                                                                                  1. 2

                                                                                                                    GNU Unifont

                                                                                                                    1. 1

                                                                                                                      Thanks for the link, I did not know about this!

                                                                                                                3. 11

                                                                                                                  I mean, I definitely get the same feeling, but it’s not at all clear to me that there’s a better method for encoding the ideal way to rasterize a given vector image at every possible raster size, without excessively bloating the filesize.

                                                                                                                  If you do naive interpolation, you either get jaggies or blur. Embedding explicit hinting data for every raster size is a non-starter for web fonts.

                                                                                                                  1. 5

                                                                                                                    Typography is a pretty complicated subject, so (no offense) your intuition doesn’t hold much water. (My creds: I was working in font digitization from 1988-90 during the “font wars”, have friends who’ve stayed in the field longer, and have had a pretty strong amateur interest ever since.)

                                                                                                                    The virtual machine for font hinting has been around since Apple developed TrueType circa 1988-90. One of the rationales was that there were several pre-existing proprietary hinting systems (such as Adobe’s “Type 1” fonts), and they wanted those vendors to be able to migrate to TrueType, so having lots of flexibility in the hinting system was necessary. Also, a VM allows for new innovations, instead of just supporting one hinting algorithm.

                                                                                                                    And I want to second what @gecko said — Roman-alphabet typography is dead easy compared to most other scripts. During some meetings between Apple and Sun about font technology in Java2D (circa 1998), after a lot of back and forth about ligatures, one of the Sun guys said “but really, how many people care about ligatures and contextual forms?” One of the Apple folks gave him a very sharp look and said “How about every literate person in India and Pakistan?”

                                                                                                                  1. 4

                                                                                                                    Not a crazy takeaway from such a thorough and detailed article, but, TIL Apple Notes uses protobufs.

                                                                                                                    1. 2

                                                                                                                      such a thorough and detailed article

                                                                                                                      Thank you!

                                                                                                                      TIL Apple Notes uses protobufs.

                                                                                                                      I found it odd that they opted not to go with a binary plist, which is also used in other columns of the same table in the Notes database. I have a draft post I need to finish some day that gets more into the weeds on exactly how the protobufs are used and on what it took to figure out the structure, since, obviously, Apple doesn’t make this public.

                                                                                                                      I believe Apple technically calls them NSKeyedArchiver objects, but hey, if it looks like a protobuf and parses like a protobuf…

                                                                                                                      Edited to fix run-on sentence.

                                                                                                                      1. 2

                                                                                                                        TIL Apple Notes uses protobufs.

                                                                                                                        Apple actually uses protobufs all over the freaking place; they just don’t advertise it. The newer Pages/Numbers/etc. formats are basically protobufs all the way down, for example. I have no idea why they ended up so protobuf-heavy, but seeing it pop up in Notes is unsurprising.

                                                                                                                      1. 2

                                                                                                                        Is it just me, or does an article on mainframe processor design (as cool as it is) feel out of place on a site called “Serve The Home”?

                                                                                                                        1. 2

                                                                                                                          I mean…

                                                                                                                          We are unlikely to review a z15 at STH.

                                                                                                                          So they know. But I think looking at mainframe architecture is worthwhile anyway, and I suspect the readers of that site agree: it’s a fascinating peek into a very different way to build a computer.

                                                                                                                        1. 6

                                                                                                                          This is a really neat write-up!

                                                                                                                          I’ll admit I’ve been rather avoiding Kubernetes and am just barely beginning to get cozy with docker-compose and the like, and this article is making me think I should reconsider that choice!

                                                                                                                          1. 6

                                                                                                                            I recommend looking into HashiCorp’s Nomad.

                                                                                                                            1. 1

                                                                                                                              I adore HashiCorp software, but it would depend upon the goal of working with k8s, wouldn’t it?

                                                                                                                              If the goal is to deploy a technology as a learning experience because it’s becoming an industry standard, then as awesome as I’m sure Nomad is, it’s not going to fit the bill, I’d think.

                                                                                                                              I’m still blown away all these years later by Terraform and Consul :) Those tools are just amazing. True infrastructure idempotence, the goal that so many systems have just given up on entirely.

                                                                                                                              1. 4

                                                                                                                                To be clear: if your goal is to learn k8s–which is fine; it’s a very marketable skill right now, and I’m 100% empathetic with wanting to learn it for that reason–then I think it makes sense. But for personal use, Nomad’s dramatically simpler architecture and clean integration with other HashiCorp projects is really hard to beat. I honestly even use it as a single-node instance on most of my boxes simply because it gives me a cross-platform cron/service worker that works identically on macOS, Windows, and Linux, so I don’t need to keep track of systemd v. launchd v. Services Manager.
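
                                                                                                                                To make the single-node cron use-case concrete, here is a rough sketch of what such a Nomad job file might look like (the job name, schedule, and script path are all made up, raw_exec has to be enabled on the client, and details vary by Nomad version):

                                                                                                                                job "nightly-backup" {
                                                                                                                                  datacenters = ["dc1"]
                                                                                                                                  type        = "batch"

                                                                                                                                  periodic {
                                                                                                                                    cron             = "0 3 * * *"   # made-up schedule: every night at 03:00
                                                                                                                                    prohibit_overlap = true
                                                                                                                                  }

                                                                                                                                  group "backup" {
                                                                                                                                    task "run" {
                                                                                                                                      driver = "raw_exec"            # run a plain command on the host
                                                                                                                                      config {
                                                                                                                                        command = "/usr/local/bin/backup.sh"   # placeholder script
                                                                                                                                      }
                                                                                                                                    }
                                                                                                                                  }
                                                                                                                                }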

                                                                                                                            2. 4

                                                                                                                              Don’t, just don’t… I am trying to avoid k8s in my homelab to reduce the overhead, since I don’t run a cluster or need any k8s feature that’s missing from a simple docker(-compose) setup.

                                                                                                                              1. 5

                                                                                                                                It depends on what you call your “lab”. A couple of years ago I realized that there’s only one way I master things: practice. If I don’t run something, I forget 90% of it in 6 months.

                                                                                                                                My take on the homelab is to use as much overhead as possible. I run a bunch of static sites, an S3-like server, dynamic DNS and not much else, yet I use more stuff/overhead to run it than obviously necessary.

                                                                                                                                The thing is, I’ve reached a point where more often than not, I’m using the knowledge from the lab at $WORK, even recycling some stuff such as Ansible roles or Kubernetes manifests.

                                                                                                                                1. 6

                                                                                                                                  I believe this is the difference between a homelab and “self-hosted services”. The purpose of a homelab is to learn how to do things; the purpose of self-hosted services is to host useful services outside of learning time. That is not to say that the two cannot intersect, but a homelab, in my opinion, is primarily for learning and for breaking things when it doesn’t affect anything.

                                                                                                                                  1. 2

                                                                                                                                    Yup, I think this is the key.

                                                                                                                                    I’m already using docker-compose for my actual self-hosted services because it’s simple and easy for me to reason about, back up the configuration of, etc.
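
                                                                                                                                    As an illustration of that simplicity (a generic sketch, not my actual stack; nginx here is just a placeholder service): a whole service is a handful of lines, and backing up the configuration means backing up this one file plus the mounted directories.

                                                                                                                                    version: "3"
                                                                                                                                    services:
                                                                                                                                      web:
                                                                                                                                        image: nginx:alpine
                                                                                                                                        ports:
                                                                                                                                          - "8080:80"                        # host port 8080 -> container port 80
                                                                                                                                        volumes:
                                                                                                                                          - ./site:/usr/share/nginx/html:ro  # static site content mounted read-only
                                                                                                                                        restart: unless-stopped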

                                                                                                                                  2. 3

                                                                                                                                    Agreed, it certainly comes with a rather large overhead. I use Kubernetes at work and rather enjoy it, so it’s great having a lab environment to try things out in and learn from; that’s why I bother hosting my own cluster.

                                                                                                                                  3. 3

                                                                                                                                    I started with docker-compose as I began to learn containerized tech, but transitioned to Kubernetes because the company wanted to use it for prod infrastructure. I actually found that K8s is more consistent and easier to reason about. There are a lot of concepts to learn, but they hang together.

                                                                                                                                    Except PersistentVolumeClaims.

                                                                                                                                    1. 2

                                                                                                                                      Thank you for reading. I’m glad you enjoyed it :)

                                                                                                                                      I’ll say, picking up Kubernetes at home is a good choice if it’s something you want to learn. It’s really useful to have a lab environment to try things out and build your knowledge with projects.

                                                                                                                                    1. 1

                                                                                                                                      It was painful. Even more so trying to get a compiler to do it. Which is why the Itanic sank, and the world went RISC rather than VLIW.

                                                                                                                                      1. 3

                                                                                                                                        The Itanium story is more nuanced than just a lack of compiler support. There were successful VLIW systems before and after Itanium, compilers included.

                                                                                                                                        1. 5

                                                                                                                                          What in your opinion caused Itanium to fail, if you don’t mind my asking? I remember at the time seeing a pile of consumer VLIW stuff floating about (Itanium and the i860 before it, MAJC, arguably the Emotion Engine, etc.), but then it felt like they all just disappeared. I was left assuming the entire concept was a no-go, rather than that some had actually been successful.

                                                                                                                                          1. 4

                                                                                                                                            I’m not qualified, from a technical perspective, to say why Itanium failed. I know someone who is, and I’ll ask them later today. My comment was in no way trying to argue that Itanium shouldn’t have failed, or to give it some mythology like LaserDisc or Betamax. Like you, I watched the slow-motion train wreck that was Itanium in the market and the press. It wasn’t until after Itanium eventually folded that I did some research around VLIW and found that the compiler was only part of the story.

                                                                                                                                            Intel was trying really hard to make an expensive, server-class chip. They wanted what Sun had with SPARC, but they were in the business of making VW Bugs while wanting to be Mercedes. It was quite customer-hostile to do what they did, and they could only attempt it from their monopoly position. Things were rushed, the target market didn’t get time to prepare, the compiler was immature, AMD64 was kicking their ass, and they were trying to steer the market for their own needs, not the customer’s.

                                                                                                                                            I don’t think the instruction-set wars are that interesting. Architectures will have to change, so we will see an increasing diversity of systems, and that is a great thing. RISC/CISC is much more fluid than it sounds; either one is just an encoding for computation. Microarchitecture is much more interesting, and that is where VLIW comes in, somewhat. The instruction set is how computation is communicated to the processor, and changes in its semantics let the processor make more (and smarter) decisions. VLIW can totally work and get great speedups, but it can’t handle branchy code, so VLIW designs are very application-specific. Most of the DSP holdouts that have succumbed to FPGAs are VLIW-based, as were a couple of generations of AMD GPUs.

                                                                                                                                            1. 1

                                                                                                                                              VLIW-based, as were a couple of generations of AMD GPUs.

                                                                                                                                              Remarkably, AMD replaced TeraScale (VLIW) with GCN (RISC), followed by RDNA (RISC).

                                                                                                                                              Like elsewhere, VLIW is being replaced with RISC.

                                                                                                                                        2. 1

                                                                                                                                          Maybe if compilers were designed for this kind of architecture, Itanium would be in a different place today.

                                                                                                                                          1. 1

                                                                                                                                            Maybe if compilers were designed for this kind of architecture

                                                                                                                                            But they are not, and quite a lot of money was wasted trying to make good compilers for VLIW. This is widely seen as proof that VLIW was a mistake, and RISC the way to go.

                                                                                                                                            1. 1

                                                                                                                                              VLIW compilers have been an area of research since the late ’70s, and Intel, HP, and others devoted the better part of a decade to Itanium compilers. The problem is that there isn’t enough instruction-level parallelism in most applications to justify the Itanium architecture.