1.  

    Getting ready to do Advent of Code in Lua, using my new environment. It’s not really intended to be like an IDE, but it needs all the features anyway, and this seems like a good way to force myself to find all the kinks before someone else does.

    1.  

      That sounds neat/cool! I have other advent coding plans around a weekly art project that will most likely be p5-based.

      I’ve been out of touch, how have you ended up on Lua for this environment?

      1.  

        Thanks! I’ve been introspecting for most of 2021 on where I wanted to go with Mu. I’d never had any expectation that it would ever be mainstream popular, but I had hoped to coalesce a small community around it, on the scale of, say, suckless. In 2021, after 5 years of prototypes I found myself asking the hard question: why are so few people making the leap from reading about it and starring/liking/boosting/upvoting it to actually trying it out and playing with it? Which led me to the question of, what should attract them to build things atop Mu? The tough answer to face up to was: nothing. Mu today can’t really do much. In particular, not having network drivers is a crippling limitation. All you can build with it are toys, and I didn’t set out to build toys.

        Though I do have an older variant of Mu that runs on a Linux kernel. That should in principle provide easy network access. I always felt ambivalent about relying on the kernel, though. What’s the point of being so draconian about avoiding C in Mu and building everything from scratch, if my stack includes C from the kernel? And then I learned about how the firmware sausage was made. I’d built Mu to advocate against the constant push to build languages atop other languages, but I started to realize that complexity grows not just upward but also downward. I’d built Mu with some idea of minimizing total complexity of a computing stack, but along the way I started to realize that reality has inextricable complexity that’s independent of what we’re trying to use it for. The goal shifted from minimizing total complexity to “finding a narrow waist of reality,” something that provides a consistent experience above it while relying on very little below it. In this new framing it stopped mattering how deep things extended below. Because really, we’re always going to depend on a deep stack going all the way down to electrons and quarks.

        Ok, so networking is hard to recreate and some C is ok but not too much. I started casting about for a minimal stack that’s built directly in C (because minimizing layers of abstraction is still just good engineering). Lua fits the bill quite nicely. Linux kernel + libc + 12kLoC for Lua is arguably as minimal an implementation of networking as I’m going to get. And Lua packs an impressive amount of technical depth in 12kLoC to make its programs go fast. Graphics is still too much, though. The Lua ecosystem already has https://love2d.org for anyone wanting to do graphics. I have nothing to offer there. But I hope to make progress on the original goal of Mu, before all the feature creep:

        • Apps are easy for anyone to run. The infrastructure they need is easy to build.
        • Once you start getting used to running an app, it’s easy to poke inside the hood. Small changes can be made in an afternoon.
        • When modifying an app, it’s difficult to end up in irrecoverable situations or bugs that are too hard to track down. (That’s really what all of Mu’s memory safety was aiming for.)

        So I’m going to spend the next 5 years wondering why nobody’s trying this out :D It’s quite possible that text mode is just a bridge too far for everyone else. We’ll see.

        1.  

          To compare Teliva against what seems like obvious competition:

          TIC-80 & Pico-8:

          Why would I go with Teliva vs either of those? Networking is a partial reason, but if I’m going to write networked software, I am likely taking on sufficient complexity that I also have access to a less constrained interpreter elsewhere.

          Janet:

          Janet is a rather nice language, and has somewhat of a head start on the large amount of cross-platform work needed to do things deeper in the stack. Granted, Janet is a harder sell than Lua for a small language, and it doesn’t enable bundling source in the same deep way, but in an age where git is everywhere, that can feel like less of a big deal?

          Further thoughts

          So, like, now that the bottom of the software in question isn’t in scope, it strikes me that you’re in much more of a marketing project now, as opposed to the heavily technical project that Mu was.

          I can see some benefits to Teliva, especially if you build it to have an in-browser sandbox, and builds that work on Windows as well (at least, if you want this to have penetration outside of the macOS/Linux world). Other things that I think would be handy would be widgets, and support for inline “picture” variables that represent escape-code-based strings (or sequences of curses commands).

          To be honest, it kinda reminds me of what QBasic was, in a way. Taking advantage of that slim middle and making sure that Teliva runs everywhere is what I see as a potential attraction point, along with the source editing.

          Also, for useful apps, many of them will need some sort of persistent state, unless they are strictly API connected. Do you have any thoughts there? (Persistent “variables” come to mind for me?)
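
          Something as simple as a table that rewrites itself to disk on every assignment might be enough. A rough sketch in plain Lua (nothing Teliva-specific; the filename and serialization format here are just placeholders):

              -- “Persistent variables”: a table that reloads its contents from a
              -- file on startup and rewrites the file on every assignment.
              local function persistent(filename)
                local data = {}
                local chunk = loadfile(filename)   -- the file contains "return { ... }"
                if chunk then data = chunk() or {} end
                local function save()
                  local f = assert(io.open(filename, "w"))
                  f:write("return {\n")
                  for k, v in pairs(data) do
                    local val = type(v) == "number" and tostring(v) or string.format("%q", tostring(v))
                    f:write(string.format("  [%q] = %s,\n", tostring(k), val))
                  end
                  f:write("}\n")
                  f:close()
                end
                return setmetatable({}, {
                  __index = data,
                  __newindex = function(_, k, v) data[k] = v; save() end,
                })
              end

              -- usage: state.high_score survives across runs
              local state = persistent("state.lua")
              state.high_score = math.max(state.high_score or 0, 42)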

          Do you plan on trying to make it easy to share .tlv files? If you can make them easy to host on itch.io (via some base interpreter compiled to WASM a la Love2d), you’ve got a distribution mechanism full of hobbyists that are into this sort of thing, and could have it used in Game Jams (which some people would definitely go for, retro looks are popular for those).

          1.  

            Oh, I totally forgot to mention the most important property of Teliva: sandboxing. It’s kinda implied by the idea of making apps easy to run. I want to be able to easily share my apps with others, and to be able to run apps from others without having to audit their code. There’s 0 sandboxing at the moment, but that’s going to be 90% of the work from here on out. And that’s the big difference with existing HLLs. What we call programming language runtimes today don’t provide any sandboxing primitives. Why not?

            You called the browser a sandbox above, so clearly you were thinking along similar directions :) I consider the browser to be a great example of a failed sandbox. Failed because I have zero confidence in the security of my computer today, and the browser is a big part of that experience. Even though browsers were designed with a well-defined sandboxing model, that fundamental model is obsolete. We used to consider the hard disk the crown jewels to be protected, and websites to be throwaway. The situation is almost entirely flipped today. Particularly if you’re on a Chromebook. Browsers try to keep up by patching that fundamental model, but at huge cost in implementation complexity. When I visit a website today I basically have no good sense of what that website can and cannot do (except that it cannot install software on my computer). Only recently did I realize that when I allow a website to send me notifications, it can do so even after I close my tab.

            The one segment that is perfectly sandboxed today is fantasy consoles like TIC-80 and PICO-8. But they’re sandboxed by the easy expedient of just not being able to do much. Screen, keyboard, mouse/touchpad, that’s about it.

            So, to answer your question, I want Teliva to be:

            • safer for non-technical people to run untrusted programs on than any HLL in existence
            • as fun as fantasy consoles in opening up untrusted programs and poking around at their internals
            • more capable than fantasy consoles in terms of the kinds of apps that can be built (access to local disk, facebook API, etc.)
            • easier to use and freer of footguns than any shell, with a well-defined set of things users might conceivably need to know about. (I have a local alias called stow. I recently discovered I couldn’t hit tab to autocomplete files after typing stow. Turns out there’s a GNU tool called stow that isn’t installed on my system. But zsh does install autocomplete rules for it.)

            In exchange for these, I’m currently setting aside pixel graphics. About the best I can manage is this Game of Life app using Braille characters.
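
            For the curious, the rendering trick is just to pack each 2x4 block of cells into one codepoint of the Unicode Braille range (U+2800..U+28FF). A minimal sketch of that step, not the app’s actual code, assuming Lua 5.3+ for utf8.char and the bitwise | operator:

                -- grid[y][x] is true for a live cell; returns the whole grid as one multi-line string
                local dot = {                 -- dot bits for (row, column) within a 2x4 block
                  {0x01, 0x08},               -- row 1: dots 1 and 4
                  {0x02, 0x10},               -- row 2: dots 2 and 5
                  {0x04, 0x20},               -- row 3: dots 3 and 6
                  {0x40, 0x80},               -- row 4: dots 7 and 8
                }

                local function render(grid, width, height)
                  local lines = {}
                  for y = 1, height, 4 do
                    local line = {}
                    for x = 1, width, 2 do
                      local bits = 0
                      for dy = 0, 3 do
                        for dx = 0, 1 do
                          local row = grid[y+dy]
                          if row and row[x+dx] then
                            bits = bits | dot[dy+1][dx+1]
                          end
                        end
                      end
                      line[#line+1] = utf8.char(0x2800 + bits)
                    end
                    lines[#lines+1] = table.concat(line)
                  end
                  return table.concat(lines, "\n")
                end

            Each generation then just rebuilds this string and redraws it.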

    1. 19

      After nodding along at assertions like this for years, lately I find myself growing impatient with them. The question I now ask is, “supposed to by whom?” It’s worth forgetting the academics (like me) for a minute, and following the money. Some facts:

      • Computers were created by countries warring against each other. The earliest digital computers were created with huge budgets, occupied rooms and were maintained by armies of attendants.
      • The very purpose of software engineering is delivering large-scale projects. We take it for granted – and teach every generation of programmers – that “requirements” are externally provided, and the programmer’s job is to meet the requirements.
      • Software development is a more lucrative career than most. Everyone knows this. Everyone knows that everyone knows this. There’s a huge selection bias in favor of treating software instrumentally. Very understandably, the first question for everyone involved is: What’s the payoff for learning it, teaching it, developing it, consulting on it?
      • The personal computer revolution occurred after 40 years of development. Even if everything that came after was at a human scale (false by a huge margin), we haven’t yet had 40 years of personal computers.

      Putting all this together, software education is almost entirely in the implicit context of building things others want, at scale and for profit. All the tools we rely on were built by industry for large-scale use cases. Is it any surprise that our education is suffused with problems involving bank accounts and customer/sales tables? That we have a hard time articulating more situated uses for computers? That it’s all about getting a job?

      I strongly believe software should be taught to everyone, at least to a basic level before they develop informed consent on whether to learn further. But let’s be clear-eyed about the structural forces opposing this.

      1. 1

        Is anybody excited about this using twtxt?

        1. 3

          I had trouble following all this (you’ve read the Common Lisp spec way more closely than I ever bothered to), but you might be interested in John Shutt’s Kernel language. To avoid unhygienic macros, Kernel basically outlaws quasiquote and unquote and constructs all macros out of list, cons and so on. Which has the same effect as unquoting everything. A hyperstatic system where symbols in macros always expand to their binding at definition time, never to be overridden. Implying among other things that you can never use functions before defining them.

          There’s a lot I love about Kernel (it provides a uniform theory integrating functions and macros and intermediate beasts) but the obsession with hygiene is not one of them. I took a lot of inspiration from Kernel in my Lisp with first-class macros, but I went all the way in the other direction and supported only macros with quasiquote and unquote. You can define symbols in any order in Wart, and override any symbols at any time, including things like if and cons. The only things you can’t override are things that look like punctuation. Parens, quote, quasiquote, unquote, unquote-splice, and a special symbol @ for apply analogous to unquote-splice. Wart is even smart enough to support apply on macros, something Kernel couldn’t do – as long as your macros are defined out of quasiquote and unquote. I find this to be a sort of indirect sign that it gets closer to the essence of macros by decoupling them into their component pieces like Kernel did, but without complecting them with concerns of hygiene.

          (Bel also doesn’t care about hygienic macros and claims to support fully first-class apply on macros. Though I don’t understand how Bel’s macroexpand works in spite of some effort in that direction.)

          1. 2

            To avoid unhygienic macros, Kernel basically outlaws quasiquote and unquote and constructs all macros out of list, cons and so on.

            It’s easy to write unhygienic macros without quasiquote. Does Kernel also outlaw constructing symbols?

            1. 3

              No, looks like page 165 of the Kernel spec does provide string->symbol.

              1. 1

                Doesn’t that seem like a big loophole that would make it easy to be unhygienic?

                1. 2

                  Depends on what you’re protecting against. Macros are fundamentally a convenience. As I understand the dialectic around hygienic macros, the goal is always just to add guardrails to the convenient path, not to make the guardrails mandatory. Most such systems deliberately provide escape hatches for things like anaphoric macros. So I don’t think I’ve ever heard someone say hygiene needs to be an ironclad guarantee.

                  1. 1

                    Honestly I agree with the inclusion of escape hatches if they are unlikely to be hit accidentally; I’m just surprised that the Kernel developers also agree, since they took such a severe move as to disallow quasiquote altogether.

                    So I don’t think I’ve ever heard someone say hygiene needs to be an ironclad guarantee.

                    I don’t want to put words in peoples’ mouths, but I’m pretty sure this is the stance of most Racket devs.

                    1. 3

                      Not true, because Scheme’s syntax-rules explicitly provides an escape hatch for literals, which can be used to violate hygiene in a deliberate manner. Racket implements syntax-rules.

                      On the other hand, you’re absolutely right that they don’t make it easy. I have no idea what to make of anaphoric macros like this one from the anaphoric package.

                      1. 3

                        Racket doesn’t forbid string->symbol either, it just provides it with some type-safe scaffolding called syntax objects. We can definitely agree that makes it more difficult to use. But the ‘loophole’ does continue to exist.

                        I’m not aware of any macro in Common Lisp that cannot be implemented in Racket (modulo differences in the runtimes like Lisp-1 vs Lisp-2, property lists, etc.) It just gets arbitrarily gnarly.

                        1. 2

                          Thanks for the clarification. I have attempted several times to understand Racket macros but never really succeeded because it’s just so much more complicated compared to the systems I’m familiar with.

                          1. 3

                            Yeah, I’m totally with you. They make it so hard that macros are used a lot less in the Scheme world. If you’re looking to understand macros, I’d recommend a Lisp that’s not a Scheme. I cut my teeth on them using Arc Lisp, which was a great experience even though Arc is a pretty thin veneer over Racket.

                            1. 2

                              Have you read Fear of Macros? Also there is Macros and Languages in Racket which takes a more exercise based approach.

                              1. 5

                                Have you read Fear of Macros?

                                At least twice.

                                Nowadays when I need a Racket macro I just show up in #racket and say “boy, this sure is easy to write using defmacro, too bad hygienic macros are so confusing” and someone will be like “they’re not confusing! all you have to do is $BLACK_MAGIC” and then boom; I have the macro I need.

                2. 1

                  To avoid unhygienic macros

                  Kernel does not avoid unhygienic macros, whereas Scheme R6RS syntax-case makes it more difficult (but still possible) to write unhygienic macros. It is possible to write unhygienic code with Kernel, such as defining define-macro, without using or needing quasiquote et al.

                  Kernel basically outlaws quasiquote and unquote

                  Kernel does not outlaw the semantics of quasiquote and unquote. There is $quote, and unquote is merely (eval symbol env), whereas quasiquote is just a reader trick inside Scheme (also see [0]).

                  and constructs all macros out of list, cons and so on.

                  Yes and no.

                  Scheme macros, and even CL macros, are meant as a) a hook into the compiler to speed things up, e.g. compose or Clojure’s =>, or b) a way to change the prefix-based evaluation strategy to build so-called Domain Specific Languages, such as records (e.g. SRFI-9).

                  Kernel eliminates the need to think “is this a macro or is this a procedure”: instead everything is an operative, and it is up to the interpreter or compiler to figure out what can be compiled (ahead-of-time) or not. This is slightly more general than “everything is a macro”, at least because an operative has access to the dynamic scope.

                  Based on your comment description, Wart is re-inventing Kernel or something like that (without a formal description, unlike John Shutt’s).

                  re apply for macros: read page 67 at https://ftp.cs.wpi.edu/pub/techreports/pdf/05-07.pdf

                  [0] https://github.com/cisco/ChezScheme/blob/main/s/syntax.ss#L7644

                  1. 1

                    Page 67 of the Kernel Report says macros don’t need apply because they don’t evaluate their arguments. I think that’s wrong because macros can evaluate their arguments when unquoted. Indeed, most macro args are evaluated eventually, using unquote. In the caller’s environment. Most of the value of macros lies in selectively turning off eval for just the odd arg. And macros are most of the use of fexprs, as far as I’ve been able to glean.

                    Kernel eliminates the need to think “is this a macro or is this a procedure”

                    Yes, that’s the goal. But it doesn’t happen for apply. I kept running into situations where I had to think about whether the variable was a macro. Often, within the body of a higher-order function/macro, I just didn’t know. So the apply restriction spread through my codebase until I figured this out.

                    I spent some time trying to find a clean example where I use @ on macros in Wart. Unfortunately this capability is baked into Wart so deeply (and Wart is so slow, suffering from the combinatorial explosion of every fexpr-based Lisp) that it’s hard to explain. But Wart provides the capability to cleanly extend even fundamental operations like if and def and mac, and all these use the higher-order functions on macros deep inside their implementations.

                    For example, here’s a definition where I override the pre-existing with macro to add new behavior when it’s called with (with table ...): https://github.com/akkartik/wart/blob/main/054table.wart#L54

                    The backtick syntax it uses there is defined in https://github.com/akkartik/wart/blob/main/047generic.wart, which defines these advanced forms for defining functions and macros:

                    def (_function_ ... _args_) :case _predicate_
                      _body_
                    
                    mac (_function_ ... _args_) :case _predicate_
                      _body_
                    
                    mac (_function_ `_literal_symbol_ ... _args_) :case _predicate_
                      _body_
                    

                    That file overrides this basic definition of mac: https://github.com/akkartik/wart/blob/main/040.wart#L30

                    Which is defined in terms of mac!: https://github.com/akkartik/wart/blob/main/040.wart#L1

                    When I remove apply for macros, this definition no longer runs, for reasons I can’t easily describe.

                    As a simpler example that doesn’t use apply for macros, here’s where I extend the primitive two-branch if to support multiple branches: https://github.com/akkartik/wart/blob/main/045check.wart#L1

                    Based on your comment description, Wart is re-inventing Kernel or something like that (without a formal description, unlike John Shutt’s).

                    I would like to think I reimplemented the core idea of Kernel ($vau) while decoupling it from considerations of hygiene. And fixed apply in the process. Because my solution to apply can’t work in hygienic Kernel.

                    I’m not making any claim of novelty here. I was very much inspired by the Kernel dissertation. But I found the rest of its language spec.. warty :D

                  2. 1

                    Promoting solely unhygienic macros is, as far as I understand, similar to promoting “formal proofs of code are useless”, or something similar about ACID or any other kind of guarantee a piece of software might provide.

                    Both Scheme and Kernel offer the ability to bypass the default hygienic behavior; hence they promote, first, a path of least surprise (and fewer hard-to-find bugs), while still allowing the second path (i.e. probably shooting yourself in the foot at some point).

                    1. 1

                      At least for me, the value of Lisp is in its late-bound nature during the prototyping phase. So usability is top priority. Compromising usability with more complicated macro syntax (resulting in far fewer people defining macros, as happens in the Scheme world) for better properties for mature programs seems a poor trade-off. And yes, I don’t use formal methods while prototyping either.

                      1. 1

                        syntax-rules is not much more complicated to use than define-macro; ref: https://www.gnu.org/software/guile/manual/html_node/Syntax-Rules.html

                        The only drawback of hygienic macros that I know about is that they are more difficult to implement than define-macro, but then again I don’t know everything about macros.

                        ref: https://gitlab.com/nieper/unsyntax/

                        1. 1

                          We’ll have to agree to disagree about syntax-rules. Just elsewhere in this thread there’s someone describing their various unsuccessful attempts to use macros in Scheme. I have had the same experience. It’s not just the syntax of syntax-rules. Scheme is pervasively designed (like Kernel) with hygiene in mind. It makes for a very rigid language, with things like the phase-separation rules, that is the antithesis of the sort of “sketching” I like to use Lisp for.

                  1. 4

                    Do any shells have a POSIX mode where they reject all extensions? I used to assume /bin/sh would be portable; that hasn’t gone well.

                    1. 1

                      On Linux at least, /bin/sh should invoke /bin/bash in POSIX compatibility mode.

                      1. 5

                        Isn’t that distro dependent? I think /bin/sh should be strictly POSIX. On Debian/Ubuntu it is dash.

                        1. 5

                          It is distro dependent.

                        2. 2

                          One counter-example I’m aware of there: /bin/sh on Linux supports local which is not in POSIX.

                      1. 13

                        This is a stream of consciousness/wall of text that’s mostly a political polemic that feels like it was written 25 years ago, but addressing specific points (there is a LOT to address, but a subset regardless):

                        • A mention of Xbill, but no M$Micro$oft. It could be getting off on a worse start, I suppose.

                        • Secure boot is required to allow you to enroll your own keys, and remove Microsoft’s. And of course, boot your own OS. If it doesn’t, the vendor fucked up, end of story. Having other OSes be signed with those keys is convenience. You’re missing out on a lot of security features otherwise, and it pisses me off when people fail to understand it.

                        • I no longer like ThinkPads, but for very different reasons (comment thread) from the author. His specific complaints feel like such small potatoes (i.e. the LEDs), to be honest.

                        • I have a dim view of “repairability” - just because it can be fixed doesn’t make it worth it, nor does the fact that it’s repairable make up for other sins (i.e. defective design that requires repair). Of course, the decisions made for repairability also add failure points (i.e. sockets and slots, which have mechanical failure), and the long march of VLSI has overall made things more reliable and integrated at the cost of modularity. I’d rather use my nearly 8-year-old MBA, which has required no maintenance and still has a good battery (overall far outliving the typical lifecycle), over my ThinkPads, which have required it and aged like milk left out in the sun. (Most of the time when opening a ThinkPad, it’s to clean out the fans… which, before the supposedly reviled Haswell-era models, was much more annoying to do.)

                        • Since we’re off-topic in the first place, a business prediction: Framework is either going out of business or committing a cardinal sin that pisses their fanbase off within 5 years.

                        edit:

                        • “That’s really the least of my concern, but since I paid more than 1k€ for this and it was sold for 2.5k€ new, I believe I have the right to be picky.” Wait, when did you pay 1k for this? You got scammed hard if you’re paying over a thousand for a ThinkPad from ~2014.

                        • “Ok, 3k was a bad idea, I hate not seeing individual pixels and my eyes are getting older so I had to force it to FullHD resolution.” But that’s the point of high-DPI?

                        1. 7

                          I have a dim view of “repairability” - just because it can be fixed doesn’t make it worth it, nor does the fact that it’s repairable make up for other sins (i.e. defective design that requires repair). Of course, the decisions made for repairability also add failure points (i.e. sockets and slots, which have mechanical failure), and the long march of VLSI has overall made things more reliable and integrated at the cost of modularity. I’d rather use my nearly 8-year-old MBA, which has required no maintenance and still has a good battery (overall far outliving the typical lifecycle), over my ThinkPads, which have required it and aged like milk left out in the sun. (Most of the time when opening a ThinkPad, it’s to clean out the fans… which, before the supposedly reviled Haswell-era models, was much more annoying to do.)

                          I care strongly about the repairability of the battery - which is really the replaceability, since lithium ion batteries are a perishable item. I really hate having any kind of electronic device where its effective lifespan is limited by how long the battery can keep being recharged, and I prefer to buy electronics with user-replaceable batteries over ones with non-user-replaceable batteries at any opportunity. Repairability of other items on the laptop is a secondary concern, although not an unimportant one in my view.

                          1. 1

                            No such thing as non-user-replaceable for a user with enough determination :)

                            Though something like the Microsoft Surface line, where even the Surface Book is actually a tablet rather than a laptop, so you have to unglue the screen to replace the battery, is quite horrible.

                            1. 1

                              When you hit a point where you need to replace cells, it gets awfully close to that. Here’s a decent description:

                              So checking the PCB again I found a curious small circuit which burns a non resettable fuse when problems are detected, trashing the battery.

                              That’s a different kind of fuckery than the glued-in-behind-the-screen thing.

                              1. 2

                                Lithium cell replacement is a massive annoyance, and one I’ve seen few devices handle well. (It’s one of the biggest misses with Framework - either standardize a prismatic form factor, or make it easy to pop in raw cylindrical cells.)

                                Regarding existing laptops: I don’t really care if it’s done with a screwdriver. The cells should last for years (cough unlike a ThinkPad) so it shouldn’t be a common operation.

                          2. 4

                            I started writing a response to this article, but the more I read the more I realized it was pointless. It has all the tired old tropes ranging from “firstly, it’s GNU/Linux” to “Microsoft wants to establish a fascist dictatorship with secure boot!”

                            I will say this: if this is intended to communicate something in the direction of Lenovo – as hinted at by the title – then he certainly succeeded in that goal, although I’m pretty sure what it communicated to Lenovo is quite different from what he intended to communicate, assuming someone from Lenovo even reads this (probably not – I hope not anyway).

                            I really hate this kind of stuff because it’s a bad look on the entire community.

                            “That’s really the least of my concern, but since I paid more than 1k€ for this and it was sold for 2.5k€ new, I believe I have the right to be picky.” Wait, when did you pay 1k for this? You got scammed hard if you’re paying over a thousand for a ThinkPad from ~2014.

                            That does seem a bit much, although ~€1k does seem to be the going rate if I search W541 on NewEgg. The W range are usually pretty darn expensive; comes with nVidia Quadro cards and fancy stuff like that.

                            1. 2

                              He really, really needed an editor here if he wanted to get his point across. It felt like I was reading three articles with their pages spliced into each other. With an editor it might have been the same rant, but more direct and to the point.

                              price

                              Hmmm, here W541s seem to be going from 350-700 loonies (the former a bit high but reasonable, the latter seems nuts to me), around 250-480€. Not including shipping or tax, FWIW. Electronics are fairly expensive here, so I think he absolutely overpaid.

                              1. 1

                                That does seem a bit much, although ~€1k does seem to be the going rate if I search W541 on NewEgg. The W range are usually pretty darn expensive; comes with nVidia Quadro cards and fancy stuff like that.

                                Yikes. The going rate on eBay US seems to be around $385 for W541s with Core i7-4600, 16GB, 512GB with Quadro K1100M/intel hybrid graphics. The €1k range feels like more than double what you’d spend for one of those + shipping from US + an appropriate power brick. I’d go so far as to call it “scammed hard”. Particularly since OP’s description reads more like an individual sale than a shop like NewEgg.

                              2. 2

                                I have a dim view of “repairability” - just because it can be fixed doesn’t make it worth it, nor does the fact that it’s repairable make up for other sins (i.e. defective design that requires repair).

                                This is a fairly nuanced viewpoint, and I’ll try to respond in similar vein. It doesn’t have to be either-or.

                                I think the strongest defense of the repairability movement is that it is leveling up our society by raising consumer sophistication. We want consumers to make purchasing decisions based not just on short-sighted feature lists but also on secondary effects like the holistic UX, total cost of ownership (durability), and commons effects.

                                In this context, repairability adds a new component to a product’s “fitness vector” without replacing any existing concerns. If the design is defective in obvious ways, companies already compete on that.

                                1. 4

                                  I mean, I bought my phone from the “I do not want to buy another phone for >5 years” angle. Repairability is a useful metric, but I think it’s as you say, nuanced.

                                  To be more clear, I feel the right-to-repair movement is picking the wrong battles, because engineering and economics are about trade-offs. Soldering things down actually can increase reliability and reduce the need for repair, and is usually a natural consequence of integration, and things like slots are mechanical points of failure. When was the last time you had defective L2 cache since it all went on-die? Likewise, waterproofing makes it harder to service but reduces the likelihood of needing a repair. (Per friends who work in the e-waste world: Macs after they started soldering things down tended to be far more reliable and almost never broken when they hit recycling than their predecessors or contemporaries. Unfortunately anecdata; would love harder statistics.)

                                  Another example: who cares if I need a screwdriver to replace the battery, if replacing the battery isn’t needed because proper power management increases its lifespan to 5-10 years? The interval between replacements is long enough that opening the thing up wouldn’t be a bad idea, and it may have exceeded its expected lifecycle after some replacements anyways. (that is, it’s moved beyond daily driver capacity, because the platform has aged too much and isn’t up to the task anymore/electrically incompatible with upgrades, parts no longer made, or the system has physically worn out and is no longer economical to service)

                                2. 1

                                  When I was in college, I recall being adjacent to a community of ThinkPad enthusiasts. One of the tropes of this community was an emphasis on reusability and swappable components. The ThinkPads were not just beloved like stuffed animals, but also hackable like stuffed animals.

                                1. 13

                                  Genuine comment (never used Nix before): is it as good as it seems? Or is it too good to be true?

                                  1. 51

                                    I feel like Nix/Guix vs Docker is like … do you want the right idea with not-enough-polish-applied, or do you want the wrong idea with way-too-much-polish-applied?

                                    1. 23

                                      Having gone somewhat deep on both this is the perfect description.

                                        Nix as a package manager is unquestionably the right idea. However, Nix the language itself made some choices that turn out to be regrettable in practice.

                                      Docker works and has a lot of polish but you eat a lot of overhead that is in theory unnecessary when you use it.

                                    2. 32

                                      It is really good, but it is also full of paper cuts. I wish I had this guide when learning to use nix for project dependencies, because what’s done here is exactly what I do, and it took me many frustrating attempts to get there.

                                      Once it’s in place, it’s great. I love being able to open a project and have my shell and Emacs have all the dependencies – including language servers, postgresql with extensions, etc. – in place, and have it isolated per project.

                                      1. 15

                                          The answer depends on what you are going to use Nix for. I use NixOS as my daily driver. I am running a boring Plasma desktop. I’ve been using it for about 6 years now. Before that, I used Windows 7, a bit of Ubuntu, a bit of macOS, and Arch. For me, NixOS is a better desktop than any of the others, by a large margin. Some specific perks I haven’t seen anywhere else:

                                          NixOS is unbreakable. When using Windows or Arch, I was re-installing the system from scratch a couple of times a year, because it inevitably got into a weird state. With NixOS, I never have to do that. On the contrary, the software system outlives the hardware. I’ve been using what feels like the same instance of NixOS on six different physical machines now.

                                          NixOS allows messing with things safely. That’s a subset of the previous point. In Arch, if I installed something temporarily, it inevitably left some residue on the system. With NixOS, I install random one-off software all the time, I often switch between stable, unstable, and head versions of packages together, and that just works and is easily rollbackable via an entry in the boot menu.

                                        NixOS is declarative. I store my config on GitHub, which allows me to hop physical systems while keeping the OS essentially the same.

                                        NixOS allows per-project configuration of environment. If some project needs a random C++ package, I don’t have to install it globally.

                                        Caveats:

                                        Learning curve. I am a huge fan of various weird languages, but “getting” NixOS took me several months.

                                        Not everything is managed by NixOS. I can use configuration.nix to say declaratively that I want Plasma and a bunch of applications. I can’t use NixOS to configure plasma global shortcuts.

                                        Running random binaries from the internet is hard. On the flip side, packaging software for NixOS is easy — unlike Arch, I was able to contribute updates to the packages I care about, and even added one new package.

                                        1. 1

                                            NixOS is unbreakable. When using Windows or Arch, I was re-installing the system from scratch a couple of times a year, because it inevitably got into a weird state. With NixOS, I never have to do that. On the contrary, the software system outlives the hardware. I’ve been using what feels like the same instance of NixOS on six different physical machines now.

                                          How do you deal with patches for security issues?

                                          1. 8

                                              I don’t do anything special, just run the “update all packages” command from time to time (I use the rolling-release version of NixOS, misnamed “unstable”). NixOS is unbreakable not because it is frozen, but because changes are safe.

                                            NixOS is like git: you create a mess of your workspace without fear, because you can always reset to known-good commit sha. User-friendliness is also on the git level though.

                                            1. 1

                                                Ah I see. That sounds cool. Have you ever found an issue after updating a package, rolled back, and then taken the trouble to sift through the changes to take the patch-level changes but not the minor or major versions, etc.? Or do you just try updating again after some time to see if somebody fixed it?

                                              1. 4

                                                  In case you are getting interested enough to start exploring Nix, I’d personally heartily recommend also exploring the Nix Flakes “new approach”. I believe it fixes most pain points of “original” Nix; the two exceptions not addressed by Flakes are secrets management (which will have to wait for a different time) and documentation quality (which for Flakes is at an even poorer level than that of “Nix proper”).

                                                1. 2

                                                  I didn’t do exactly that, but, when I was using non-rolling release, I combined the base system with older packages with a couple of packages I kept up-to-date manually.

                                          2. 9

                                            It does what it says on the box, but I don’t like it.

                                            1. 2

                                              I use NixOS, and I really like it, relative to how I feel about Unix in general, but it is warty. I would definitely try it, though.

                                            1. 1

                                              One thing I use pyenv/rbenv for is managing multiple versions of python or ruby. In this approach would that require installing and uninstalling software each time? What if I’m offline when I want to switch between versions?

                                              1. 3

                                                I’m doing the same with a shell.nix per project. Software is installed once, until garbage-collected. You can create a reference to the project, so Nix will not garbage-collect it. I suppose that if you’re offline, just copying shell.nix from one project on your machine to another would at least give you the exact same version of python/ruby, without needing to download anything. Of course, for anything on top, you’ll probably have to download stuff.

                                              1. 2

                                                I spent some time thinking about this 15 years ago, and ended up on a .name that my .com redirects to. I haven’t kept up[1] and I’m not sure if that’s still a good answer. But it’s worth considering.

                                                [1] After ICANN’s recent shenanigans I increasingly consider DNS to be damage to route around. I’m going to try very hard not to ever buy any more domain names.

                                                1. 13

                                                  If you provide a Turing machine, some idiot will implement a Turing machine on it.

                                                  “You were so preoccupied with whether or not you could, you didn’t stop to think if you should.” (with apologies to Ian Malcolm)

                                                  1. 21

                                                    I’d like a much smaller version of the web platform, something focused on documents rather than apps. I’m aware of a few projects in that direction but none of them are in quite the design space I’d personally aim for.

                                                    1. 6

                                                      Well, “we” tried that with PDF, and it still got infected with featuritis; Acrobat Reader is yet another web browser. Perhaps not surprising considering Adobe’s track record, but if you factor in their proprietary extensions (there’s JavaScript in there, 3D models, there used to be Flash and probably still is somewhere..) it followed the same general trajectory and timeline as the W3C soup. Luckily much of that failed to get traction (tooling, its proprietary nature, and the web’s network effect all spoke against it), and thus a PDF is still thought of more “as a document”.

                                                      1. 20

                                                        This is another example of “it’s not the tech, it’s the economy, stupid!” The modern web isn’t an adware-infested cesspool because of HTML5, CSS, and JavaScript; it’s a cesspool because (mis)using these tools makes people money.

                                                        1. 5

                                                          Yeah exactly, for some examples: Twitter stopped working without JS recently (what I assume must be a purposeful decision). Then I noticed Medium doesn’t – it no longer shows you the whole article without JS. And Reddit has absolutely awful JS that obscures the content.

                                                          All of this was done within the web platform. It could have been good, but they decided to make it bad on purpose. And at least in the case of Reddit, it used to be good!

                                                          Restricting or rewriting the platform doesn’t solve that problem – they are pushing people to use their mobile apps and sign in, etc. They will simply use a different platform.

                                                          (Also note that these platforms somehow make themselves available to crawlers, so I use https://archive.is/, ditto with the NYTimes and so forth. IMO search engines should not jump through special hoops to see this content; conversely, if they make their content visible to search engines, then it’s fair game for readers to see.)

                                                          1. 4

                                                            I’ll put it like this: I expect corporate interests to continue using the most full-featured platforms available, including the web platform as we know it today. After all, those features were mostly created for corporate interests.

                                                            That doesn’t mean everybody else has to build stuff the same way the corps do. I think we can and should aspire for something better - where by better in this case I mean less featureful.

                                                            1. 4

                                                              That doesn’t mean everybody else has to build stuff the same way the corps do. I think we can and should aspire for something better - where by better in this case I mean less featureful.

                                                              The trick here is to make sure people use it for a large value of people. I was pretty interested in Gemini from the beginning and wrote some stuff on the network (including an HN mirror) and I found that pushing back against markup languages, uploads, and some form of in-band signaling (compression etc) ends up creating a narrower community than I’d like. I fully acknowledge this might just be a “me thing” though.

                                                              EDIT: I also think you’ve touched upon something a lot of folks are interested in right now as evidenced by both the conversation here and the interest in Gemini as a whole.

                                                              1. 3

                                                                I appreciate those thoughts, for sure. Thank you.

                                                              2. 2

                                                                That doesn’t mean everybody else has to build stuff the same way the corps do.

                                                                I agree, and you can look at https://www.oilshell.org/ as a demonstration of that (both the site and the software). But all of that is perfectly possible with existing platforms and tools. In fact it’s greatly aided by many old and proven tools (shell, Python) and some new-ish ones (Ninja).

                                                                There is value in rebuilding alternatives to platforms for sure, but it can also be overestimated (e.g. fragmenting ecosystems, diluting efforts, what Jamie Zawinski calls CADT, etc.).


                                                                Similar to my “alternative shell challenges”, I thought of a “document publishing challenge” based on my comment today on a related story:

                                                                The challenge is if the platform can express a widely praised, commercial multimedia document:

                                                                https://ciechanow.ski/gears/

                                                                https://ciechanow.ski/js/gears.js (source code is instructive to look at)

                                                                https://news.ycombinator.com/item?id=22310813 (many appreciative comments)

                                                                1. 2

                                                                  Yeah, there are good reasons this is my answer to “if you could” and not “what are your current projects”. :)

                                                                  I like the idea of that challenge. I don’t actually know whether my ideal platform would make that possible or not, but situating it with respect to the challenge is definitely useful for thinking about it.

                                                                  1. 1

                                                                    Oops, I meant NON-commercial! that was of course the point

                                                                    There is non-commercial content that makes good use of recent features of the web

                                                              3. 4

                                                              Indeed - tech isn’t the blocker to fixing this problem. The tools get misused when economic incentives overpower the ones from the intended use. Sure, you can nudge development in a certain direction by providing references, templates, frameworks, documentation, what have you - but whatever replacement emerges needs to also provide enough economic incentives to minimise the appeal of abuse. Worse still, it needs to be deployed at a tipping point where the value added exceeds the inertia and network effect of the current Web.

                                                                1. 2

                                                                  I absolutely believe that the most important part of any effort at improving the situation has to be making the stuff you just said clear to everyone. It’s important to make it explicit from the start that the project’s view is that corporate interests shouldn’t have a say in the direction of development, because the default is that they do.

                                                                  1. 2

                                                                  I think the interests of a corporation should be expressible and considered through some representative, but given the natural advantage an aggregate has in terms of resources, influence, “network effect”, … they should also be subject to scrutiny and transparency that match their relative advantage over other participants. Since that rarely happens, the effect instead seems to be that the Pareto Principle sets in and the corporation becomes the authority in ‘appeal to authority’. They can then lean back and cash in with less effort than anyone else. Those points are moot, though, if the values of the intended tool/project/society aren’t even expressed, agreed upon, or enforced.

                                                                    1. 1

                                                                      Yes, I agree with most of that, and the parts I don’t agree with are quite defensible. Well said.

                                                              4. 2

                                                                Yes, I agree. I do think that this is largely a result of PDF being a corporate-driven project rather than a grassroots one. As somebody else said in the side discussion about Gemini, that’s not the only source of feature creep, but I do think it’s the most important factor.

                                                              5. 5

                                                              I’m curious what direction that is, too. I’ve been using and enjoying the Gemini protocol and I think it’s fantastic.

                                                              Even the TLS seems great, since it would allow some simple form of client authentication, but in a very anonymous way.

                                                                1. 7

                                                                  I do like the general idea of Gemini. I’m honestly still trying to put my thoughts together, but I’d like something where it’s guaranteed to be meaningful to interact with it offline, and ideally with an experience that looks, you know… more like 2005 than 1995 in terms of visual complexity, if you see what I mean. I don’t think we have to go all the way back to unformatted text, it just needs to be a stable target. The web as it exists right now seems like it’s on a path to keep growing in technical complexity forever, with no upper bound.

                                                                  1. 9

                                                                    I have some thoughts in this area:

                                                                    • TCP/IP/HTTP is fine (I disagree with Gemini there). It’s HTML/CSS/JS that are impossible to implement on a shoestring.

                                                                    • The web’s core value proposition is documents with inline hyperlinks. Load all resources atomically, without any privacy-leaking dependent loads.

                                                                    • Software delivery should be out of scope. It’s only needed because our computers are too complex to audit, and the programs we install keep exceeding their rights. Let’s solve that problem at the source.

                                                                    I’ve thought about this enough to make a little prototype.

                                                                    1. 5

                                                                      It’s of course totally fine to disagree, but I genuinely believe it will be impossible to ever avoid fingerprinting with HTTP. I’ve seen stuff, not all of which I’m at liberty to talk about. So from a privacy standpoint I am on board with a radically simpler protocol for that layer. TCP and IP are fine, of course.

                                                                      I agree wholeheartedly with your other points.

                                                                      That is a really cool project! Thank you for sharing it!

                                                                      1. 4

                                                                        Sorry, I neglected to expand on that bit. My understanding is that the bits of HTTP that can be used for fingerprinting require client (browser) support. I was implicitly assuming that we’d prune those bits from the browser while we’re reimplementing it from scratch anyway. Does that seem workable? I’m not an expert here.

                                                                        1. 6

                                                                          I’ve been involved with Gemini since the beginning (I wrote the very first Gemini server) and I was at first amazed at just how often people push to add HTTP features back into Gemini. A little feature here, a little feature there, and pretty soon it’s HTTP all over again. Prune all you want, but people will add those features back if it’s at all possible. I’m convinced of that.

                                                                          1. 4

                                                                            So you’re saying that a new protocol didn’t help either? :)

                                                                            1. 4

                                                                            Pretty much. At least Gemini drew a hard line in the sand and didn’t try to prune an existing protocol. But people like their uploads and markup languages.

                                                                              1. 2

                                                                                Huh. I guess the right thing to do, then, is design the header format with attention to minimizing how many distinguishing bits it leaks.

                                                                          2. 1

                                                                            Absolutely. There is nothing very fingerprintable in minimal valid http requests.
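
To make "minimal valid" concrete: HTTP/1.1 only requires the request line and a Host header, so a client that sends nothing beyond that exposes very few distinguishing bits at this layer. Here's a rough sketch (Python, with example.com as a placeholder host), not anyone's actual proposal:

```python
# Hedged sketch: a deliberately bare HTTP/1.1 request over a plain socket.
# No User-Agent, Accept, cookies, or other headers that commonly feed
# fingerprinting are sent; "example.com" is just a placeholder.
import socket

def minimal_get(host: str, path: str = "/") -> bytes:
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"       # the only header HTTP/1.1 requires
        "Connection: close\r\n"   # ask the server to end the response for us
        "\r\n"
    ).encode("ascii")
    with socket.create_connection((host, 80)) as sock:
        sock.sendall(request)
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

if __name__ == "__main__":
    print(minimal_get("example.com")[:200])
```

Whether a real client could stay that spartan is of course the question raised upthread; the fingerprinting surface comes from the headers and behaviors layered on top, not from the request format itself.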

                                                                      2. 5

                                                                        , but I’d like something where it’s guaranteed to be meaningful to interact with it offline

This is where my interest in store-and-forward networks lies. I find that a lot of what I do on the internet is pulling down content (reading threads, comments, articles, documentation), and that I push content (responding to things, uploading content, etc.) much less frequently. For that situation (which I realize is fairly particular to me), a store-and-forward network would make offline-first interaction a first-class citizen.

                                                                        I distinguish this from IM (like Matrix, IRC, Discord, etc) which is specifically about near instant interaction.

                                                                        1. 1

                                                                          I agree.

                                                                    2. 2

Have you looked at the Gemini protocol?

                                                                      1. 2

                                                                        I have, see my other reply.

                                                                    1. 3

                                                                      Between this and the recent Software Crisis 2.0 thread, I’m starting to think the “software crisis” framing is counter-productive. Its metric of choice is “percentage of projects that fail”, but the very notion of “project” feels burdensome to me in the 21st century. It’s been hard to find good definitions, but in these discourses the term seems to have the following properties:

                                                                      • Fixed start and end time
                                                                      • Externally defined requirements

                                                                      Neither has been true of the well run tech organizations I’ve been in, where programmers are above all maintaining the factory that does the work while we sleep, and where we have a strong say in what to build.

                                                                      Which isn’t to say there’s no crisis. These days the part of the elephant I start with is the need for intermediation and delegation. Why is there a divide between the person in need of automation and the “programmer” creating said automation, and how can we help people help themselves. A lot of what ails software seems due to misaligned incentives, and I think the problem gets much more tractable if you fix the incentives first.

                                                                      1. 3

                                                                        Why is there a divide between the person in need of automation and the “programmer” creating said automation,

                                                                        Because “real programmers” don’t like being put out of business by users just using Excel, Access, and FrontPage.

                                                                        1. 4

Having done a few rescue projects for people who have outgrown their homegrown solution, I'm confident that there's more paid work in giving them enough rope to get into trouble.

It turns out that using those (great) tools doesn't free you from caring about change management, testing, communicating with others who might contribute, and so on - and people who have developed those skills are usually able to use higher-level abstractions, and prefer to.

                                                                          1. 2

                                                                            My favourite (and last!) .NET gig was to help a company build a .NET development capability in-house, so they could evolve their software solution built in VBA and Excel.

                                                                            Far from getting themselves in trouble, they’d used that VBA + Excel solution to grow their business profitably to the order of 100 staff internationally. They weren’t in trouble, but they had hit the practical limits of their solution in terms of additional functionality (especially Web), and scale.

                                                                            Most of their VBA code had been written by one of the company co-founders, who was himself an expert in their vertical, but not at all a programmer by trade.

                                                                            I’m fairly certain that they’d never have been able to bootstrap their business in the way they did by starting with a ‘bells and whistles’ software development team, or with software that didn’t happen to already be running on all their machines.

                                                                        2. 3

                                                                          Why is there a divide between the person in need of automation and the “programmer” creating said automation, and how can we help people help themselves.

                                                                          Because automation is fucking hard and it helps to have someone experienced fix it before you make a mess of it. It will stay hard. Forever. Not because of tooling, not because of developer productivity, but simply because the world is hard.

Of course it is good to let people help themselves if that is possible. But this will always require some investment from those people. We all have a limited amount of time and energy, so for a lot of problems and situations, getting an automation expert is simply the best option.

                                                                          1. 1

                                                                            Totally agreed! Automation is fucking hard, as you so eloquently put it. This is why I said “more tractable” and not “solved”.

Our choice is between making a mess of it ourselves… or giving it to someone else who will also make a mess of it, whether by accidentally creating security holes, by giving us software that we make vulnerable by failing to patch on time, or by pivoting to a new market/strategy and causing us grief with increasingly user-hostile actions.

                                                                            It’s not clear to me that one is better than the other, outside of say cryptographic libraries. Even if you think you’re ahead right now, your exposure is still high over the long term. Particularly when you take epidemiological effects into account. In our connected world, other people’s irresponsibility with software has a great capacity to hurt us.

                                                                            A third way is to stop using so much software. That’s my preferred approach to the crisis. Software has eaten the world way too fast. You don’t have to let it eat you. Which isn’t to say you have to quit cold-turkey. There is value in being eaten slower than the next person.

                                                                            For people who have limited time and some money to spend, a fourth way is:

• Create personal relationships with programmers. If everyone did this, it would result in a higher programmer/user ratio, which moves the needle in the right direction. It would also counteract the power-law effects that let the Googles of the world get away with the greatest user abuse. Relationships work best when they're between approximate equals.
• Weigh the antifragility of new and bespoke stacks against the "efficiency" of getting pwned, ransomed and data-breached at the same time as everyone else.
                                                                            1. 2

                                                                              A third way is to stop using so much software. That’s my preferred approach to the crisis. Software has eaten the world way too fast. You don’t have to let it eat you. Which isn’t to say you have to quit cold-turkey. There is value in being eaten slower than the next person.

                                                                              I agree with this. It is my preferred option too. If I have to explain it to others, I refer to it as digital minimalism or digital frugality. In my mind, I draw comparisons with vegetarians/vegans, who also constrain their options because they think they and/or the world will be better off that way.

But however slowly my personal transition goes, I feel the gap getting wider and wider. I feel like a vegetarian in the 70s. People sort of understand what you are talking about on a rational level, but it is already way too far from their personal experience to relate to. It would cause them a disproportionate amount of discomfort to start moving in the same direction, relative to the benefit they perceive it would give. Society is not yet set up to handle such minimalists, so even the simplest things are very hard.

                                                                              Somewhere on my todo list is the item of starting a blog to show and explain how this minimalism works for me and how it might work for others. I first have to finish my todo app though.

                                                                          2. 2

                                                                            Why is there a divide between the person in need of automation and the “programmer” creating said automation

                                                                            Because that’s where a lot of money is extracted.

                                                                            Say you’re Apple or Google, and your users have a problem.

                                                                            Which gets you more money? Those users solving their own problem with a general purpose programming environment targeted at laypeople[1], or those users buying a single purpose app to solve it and paying you a cut?

                                                                            [1] I still think the combination of VBA + Excel is the greatest example of this that we’ve yet produced.

                                                                            1. 1

                                                                              You’re absolutely right. It was a rhetorical question, hence the absence of the question mark.

                                                                              The only way to keep one set of entities from doing what’s in their interest is for their opponents to push back. There’s a growing sense of just how much people have given up by letting a tiny slice of society write all the software for all of society. But I think this needs to bubble up a lot more before it will constrain Google or Apple’s behavior. In the form of more people taking an interest in hyperlocal, situated software. Which is hard to do, because the software we use to build software isn’t really intended for that use case. Because the software we use to build software was written by commercial entities for commercial “requirements”, all the way back to the dawn of computing.

                                                                          1. 24

Typesetting systems. It's interesting to think about the differences between TeX and HTML, one predating scrolling and one designed for screens rather than paper. What would a simple typesetting system look like that was built with a minimalist ethos, for scrolling, without perfect hyphenation and pixel-perfect page boundaries?

                                                                            1. 7

                                                                              I’ve been playing around with SILE recently. While it still has some rough edges, it has been refreshing coming from LaTeX. I don’t know if you’ve already looked into it.

                                                                              1. 2

Going the other way (an easier-to-deploy/run TeX system), have you seen Tectonic?

                                                                                I’m using it on and off with some existing documents and was pleasantly surprised.

                                                                                1. 2

I have seen it. I will admit that I haven't dug too deep into it. I respect the effort; however, the clean-slate implementation of SILE (as opposed to Tectonic's port of XeTeX to Rust) offers some advantages.

Documents can be written either in TeX-like syntax or in XML (meaning they can be generated by a program and still be valid). Also, the native support for SVG (instead of the convoluted TikZ) is a killer feature for me. But in general, SILE is more lightweight.

                                                                            1. 17

This appears to be pining for the "Old Times", and a hit piece against systemd.

The pining for the old times of Unix reminds me of the quotes attributed to Socrates that can be summed up as "kids these days".

As for systemd, that's a harder discussion topic, but I think https://www.youtube.com/watch?v=o_AIw9bGogo is a good way to approach it.

systemd enables dynamic hardware support and dynamic routing of data in software based on hardware events. As a whole concept, this is what's needed for anything that isn't a single-application server (for example: your desktop or laptop). There are glaring flaws in systemd, but we can do better. Removing it and going back to init.d runlevel scripts is not the way to do that!

                                                                              1. 14

I don't intend to argue whether systemd is good or bad, but there's a huge gap between init.d scripts and systemd. The combination of udev and dbus enabled dynamic routing of data in software based on hardware events for years before systemd was invented. That happened on systems that relied on init.d scripts. They are also core parts of systems that rely on systemd.

                                                                                1. 9

That detail is helpful, and I wish OP had more such nuggets. After reading OP I found myself thinking, "yes, Linux is different. But why is it worse?" Why should POSIX matter to more people? Are udev and dbus reinventing Unix poorly, or are they aiming for something new that the designers of UNIX never anticipated?

                                                                                  It’s common for people to have divergent priorities. In this case, are they misguided?

                                                                                2. 2

                                                                                  The author literally doesn’t mention systemd at all. So you’re now projecting that into the article.

I do like that "tragedy of systemd" talk by the BSD guy (I think I saw the BSD Canada conference version). It's good, and a system layer can be good. Even prior to systemd, Linux was slowly developing a system layer. Red Hat's HAL went away and we were left with dbus + NetworkManager + CUPS and a host of little tools that could be strung together.

Yes, systemd is modular, but you can't really replace any of the little modules. The target file structure is nice, but it's expanded way out from just process management to encompass other parts of the system, instead of just hooking into any of the previous system-layer tools.

But I digress; I don't think the author is really talking about systemd at all. If it's included, it's auxiliary. A lot of people quote the DOTADIW philosophy statement without the second part:

                                                                                  Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

                                                                                  Programs should work together. I think dbus goes in the right direction, and you can run it anywhere. But now look at some more modern Linux specific things: Docker.

Docker only runs on Linux; Docker for Mac and Windows run it in hypervisors. It's dependent on cgroups, namespaces and lots of specific Linux things. Can you implement a container system on BSD that's compatible with the Docker API, but uses ZFS for layers and jails for chroots instead? Absolutely, and there are some attempts at it being made. But I think it goes to the point of the article: certain things are now tied deeply into Linux and cannot be run on other UNIX-like systems (macOS/Darwin, FreeBSD, etc.).

                                                                                  1. 8

                                                                                    The author mentions systemd a couple of times. It seems to be an example he’s using to support his argument.

                                                                                    1. 2

                                                                                      oh huh, I did miss it, or maybe it didn’t register. It’s mentioned twice it seems.

                                                                                    2. 3

                                                                                      The author literally doesn’t mention systemd at all. So you’re now projecting that into the article.

                                                                                      from the article:

                                                                                      This leaves Linux, a system Dennis Ritchie described in 1999 as “draw[ing] so strongly on the basis that UNIX provided”. My worry is Linux and the community around it have strayed further from UNIX for a long time since, to the point where systemd discussions include comments like this with hundreds of upvotes:

                                                                                      (Bold is my addition)

                                                                                      1. 1

                                                                                        I know. I already addressed that in the above comment:

                                                                                        https://lobste.rs/s/ezqjv5/i_m_not_sure_unix_won#c_leook6

                                                                                  1. 6

                                                                                    This is really neat.

                                                                                    This is not quite what Knuth conceived of as Literate Programming (where tangling and weaving are the critical concepts that are missing here), but this is extremely similar to Literate Haskell.

                                                                                    1. 4

                                                                                      The one time I tried working on a project that did it the Knuth way (with cweb), it was miserable to refactor in. I’m not against literate programming, but it wasn’t a good first impression.

                                                                                      1. 11

WEB/CWEB has, in my opinion, an upper limit on the size of program it can comfortably express, and that limit is pretty small.

                                                                                        It also doesn’t work well with modularization, IMHO.

                                                                                        Then again, Donald Knuth wrote CWEB/TeX/TAOCP and I’m writing a comment on Lobste.rs so what do I know.

                                                                                        1. 3

                                                                                          Yeah, the program I was working with only had a single file under CWEB. I basically did what I could to avoid touching it.

                                                                                      2. 4

                                                                                        I think that’s an overly narrow definition of LP, which Knuth himself has defined like this:

                                                                                        In literate programming the emphasis is reversed. Instead of writing code containing documentation, the literate programmer writes documentation containing code.

By this definition OP is a purer form of LP than WEB, which hacked two languages together just to reduce implementation effort. Minimizing syntax allows the reader to focus on the content.

                                                                                      1. 4

                                                                                        The “pitfalls” are kind of a misnomer. A pipeline is a series of programs designed to deal with some kind of data. If you change the data, you gotta change the programs, and I don’t really see that as anything other than the nature of programming.

                                                                                        1. 5

                                                                                          Shell pipelines are often used to deal with poorly specified data, and it’s easy for bugs to creep in once you stop checking the output. This is a problem shared with spreadsheets.

                                                                                          1. 4

                                                                                            I agree, but that’s not a fault of the pipeline, it’s more a round-peg-square-hole scenario.

                                                                                            1. 2

It's not clear to me that there exist any square pegs in this context: well-specified data sources that don't use a recursive format like CSV or JSON. CSV is tempting to run through cut -d , until you realize the generator will insert quotes once in a very long while when there are commas in the data.

                                                                                              I’m pretty proficient with pipelines. But I always double-check my inputs and outputs. If the data scales beyond my ability to eyeball, I stop using pipelines.
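
To make the quoted-comma pitfall concrete, here's a small sketch (the sample row is invented): a naive comma split, which is effectively what cut -d , does, silently miscounts fields the first time a value contains a quoted comma, while a CSV-aware parser handles it.

```python
# Sketch of the quoted-comma pitfall; the sample row is made up.
import csv
import io

line = 'widget,"Bolt, stainless",4.20\n'

# Naive split on every comma (what cut -d , effectively does):
naive = line.rstrip("\n").split(",")
print(naive)   # ['widget', '"Bolt', ' stainless"', '4.20'] -- wrong field count

# A CSV-aware parser respects the quoting:
parsed = next(csv.reader(io.StringIO(line)))
print(parsed)  # ['widget', 'Bolt, stainless', '4.20']
```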

                                                                                              1. 2

                                                                                                But you can use pipelines without using cut -d ,! There are lots of CSV and TSV utils that do non-naive parsing.

                                                                                                https://github.com/oilshell/oil/wiki/Structured-Data-in-Oil#projects (csvkit, xsv, etc.)

                                                                                                Naive parsing is bad but nothing is forcing you to do it. So this is not a problem with pipelines per se, but the way people use them.

                                                                                                Although I think some support in the shell will help guide people toward non-naive parsing, so Oil should have a small upgrade over TSV which is called QTT (Quoted, Typed Tables): https://github.com/oilshell/oil/wiki/TSV2-Proposal

                                                                                                1. 1

                                                                                                  For sure. Taking it back, I think OP’s point stands that using shell pipelines has pitfalls.

                                                                                                  You can use a principled way to solve a problem. Or you can use pipelines. Choose.

                                                                                                  1. 5

By the words "(classic) unix pipeline" we usually mean not only the pipeline itself (passing a stream of bytes from the STDOUT of one process to the STDIN of another) but also the classic tools (grep, sed, cut). The rest of this project's website provides more context…

                                                                                                    1. 3

No, I disagree. What I'm saying is that you can use pipelines in either a principled or an unprincipled way, a correct or a naive way.

                                                                                                      I don’t see any argument that it’s either-or.

                                                                                                      1. 1

You don't seem to be disagreeing that using shell pipelines has pitfalls, so I assume we're disagreeing about what "principled" means. For me it means that when you make a mistake you get an error; you don't silently get bad data. I fail to see how pipelines ensure that. As you said, you could use the correct parser, but it's also easy/idiomatic to use the wrong one. So I'm curious what "principled" means to you.

                                                                                                        Hmm, I suppose you could argue that you can use shell pipelines as long as you’re principled and use the right parser for the data format at each pipe stage.
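
One way to read "principled" here: each pipe stage parses the format for real and dies loudly on malformed input instead of passing bad data downstream. A minimal sketch of such a filter (hypothetical, and the strictness rule, a fixed field count, is just an example):

```python
#!/usr/bin/env python3
# Hypothetical pipe stage: read CSV on stdin, write CSV on stdout, and exit
# non-zero the moment a row doesn't match the header's field count.
import csv
import sys

reader = csv.reader(sys.stdin)
writer = csv.writer(sys.stdout)

header = next(reader, None)
if header is None:
    sys.exit("empty input")
writer.writerow(header)

for lineno, row in enumerate(reader, start=2):
    if len(row) != len(header):
        sys.exit(f"line {lineno}: expected {len(header)} fields, got {len(row)}")
    writer.writerow(row)
```

Used as producer | this-filter | consumer, a mistake stops the pipeline with an error instead of silently flowing through, which is the property grep/sed/cut alone don't give you.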

                                                                                                        1. 1

                                                                                                          So I think the disagreement is what OP @franta pointed out in a sibling comment: “classic pipelines” using grep/sed/cut vs. “pipelines” as a general mechanism.

                                                                                                          “Classic pipelines” defined that way have pitfalls. You can approximate some transformations on structured data, but they’re not reliable.

                                                                                                          But other ways of using pipelines already exist and are not theoretical: csvkit, xsv, which I pointed to above.

                                                                                                          It’s up to those tools – not the shell or the kernel – to validate the data. Although I just tested csvkit and it doesn’t seem to complain about extra commas.

                                                                                                          I guess that validates me writing my own CSV-over-pipes utilities for Oil, which actually do check errors:

                                                                                                          https://github.com/oilshell/oil/tree/master/devtools

                                                                                                          https://github.com/oilshell/oil/tree/master/web/table

I'll concede that there's a culture of sloppiness with text in Unix, but that can be fixed, just as fixing sloppiness with memory safety is a cultural change.

So the point is that pipelines are a great mechanism, but the user-space tools need to be improved. I've been working on that for a while :)

                                                                                                          1. 2

                                                                                                            Thanks, yeah I’m persuaded that the issue is one of culture rather than anything technical.

                                                                                            2. 3

Here's another one that I ran into: if you use cut -c-80 to get the first 80 characters of a string, you will fail in subtle ways if your string starts including UTF-8 multibyte characters, since it will actually get the first 80 bytes. There's no way to fix this short of converting to a fixed-width representation and then back, which is very silly.

                                                                                              How did I find this out? When the command I was piping it to started failing on invalid UTF-8 input.

                                                                                              On the other hand, GNU awk will just do the right thing. But you have to know this pitfall exists, and it’s embarrassing that GNU cut breaks this way after decades of Unicode being a thing.
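
The bytes-versus-characters mismatch is easy to reproduce in any language that exposes both views of a string; here's a small illustration of why truncating at a byte count can hand invalid UTF-8 to the next program in the pipe:

```python
# Bytes vs. characters for a string containing a multibyte character.
s = "passé"                      # 'é' is one character but two UTF-8 bytes
print(len(s))                    # 5 characters
print(len(s.encode("utf-8")))    # 6 bytes

# Truncating by characters keeps the text valid:
print(s[:4])                     # 'pass'

# Truncating by bytes (what a byte-oriented tool does) can cut 'é' in half,
# leaving data that is no longer valid UTF-8:
chopped = s.encode("utf-8")[:5]  # b'pass\xc3'
try:
    chopped.decode("utf-8")
except UnicodeDecodeError as err:
    print("invalid UTF-8 after byte truncation:", err)
```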

                                                                                              1. 1

GNU cut has -b for bytes, and -c is meant for characters but isn't implemented that way yet:

                                                                                                The same as -b for now, but internationalization will change that.

Also, wc has an -m option for character counts, but head, tail, tr, etc. work on bytes only.

                                                                                                1. 1

This is a GNU cut issue, again not the fault of pipelines as we use them - you used an unsuitable program for that kind of input, simple as that.

                                                                                                  1. 1

                                                                                                    It’s certainly not the fault of pipelines, but if the ‘traditional’ pipeline tools are still stuck in a world where multibyte characters don’t exist, that’s pretty bad. It means that if you’re going to be processing text in German, or Japanese, or Russian, then cut -c has a chance to just silently break at some point. Even in English it can fail if you give it input with a word like “passé”!

                                                                                              1. 4

On a tangent, this just got me to learn about C++11 trailing return types. Whoa!

                                                                                                1. 3

At the top level like that, it's actually a C++14 feature; see "Return type deduction".

                                                                                                1. 2

I wish there were some examples. How often does this forking happen? If it's something rare that happens only when the stars align (the person knows how to program, has expertise in the source code but chose to outsource the service in the first place, and doesn't have a viable competitor to exit to for less effort than forking), does it matter?

                                                                                                  1. 3

I think private forking happens all the time. I keep patches for several programs on my system. What happens rarely is organizational forking, i.e. an attempt to set up a competing development team. A historical example would be the EGCS fork of GCC, which was so successful that it just became the official GCC project.

                                                                                                    A maintainer has to mess up pretty bad for that to happen, and most projects that attempt this just starve for lack of people caring about their version.

                                                                                                    That said, Wikipedia has a list (of course): https://en.wikipedia.org/wiki/List_of_software_forks

                                                                                                    1. 2

                                                                                                      I’m very curious to hear about such private forks! It seems to me that the current state of open source doesn’t make them easy or convenient. Creating a fork isn’t just a one-time expenditure of effort, it’s an ongoing effort thereafter of keeping up with upstream. I’ve personally tried to do this multiple times and invariably given up. So I’m curious to hear how long you’re able to continue maintaining a fork for, and any tricks you have to minimize the overheads involved.

                                                                                                      (I actually don’t care about organizational forking. Organizations are certainly more likely to be able to afford the ongoing opex of maintaining a fork. But the original Exit vs Voice concerns people in a civic organization, and that tends to be my focus as well.)

                                                                                                      1. 4

                                                                                                        It seems to me that the current state of open source doesn’t make them easy or convenient

It depends on what you're doing with a private fork. If your changes are relatively minor, it's just a matter of merging from mainline periodically, which modern VCSs are pretty good at.

As an example, I participate in an OSS project that includes a third-party library to provide a rich text editor in-browser. IME support is very important to us, but not as high a priority for the library. We maintain a "private" fork (it's publicly readable, but no one else really cares that it exists) that differs mostly in the form of disabling a couple of features that interfere with IME. Occasionally we'll merge code from a branch bound for release that hasn't made it to mainline yet because we need it sooner. Maintenance involves pulling from upstream and re-evaluating our patches every couple of months. The most inconvenient part of it is having to self-host the built packages, which all things considered really isn't bad.

                                                                                                        I mean it’s a kludge, we’d obviously rather not have to spend the small amount of effort required to maintain a “private fork”, but I’d much rather have the option than not.

                                                                                                        1. 1

                                                                                                          Oh certainly, it’s nice to have the option. Taking it back to OP, I just wonder if your example is worth considering on par with “exit and voice.” It seems rather the equivalent of putting dinner on a plate after purchasing it.

                                                                                                        2. 3

                                                                                                          IME, Gentoo makes this sort of thing pretty easy, at least for basic changes. You don’t have to maintain the repo, you can just stick your patches in /etc and have Gentoo autoapply them on rebuild. So you only need to do maintenance if they stop working, and in that case you will already have the repo checked out in the state Gentoo is trying to build from. So you just copy the build folder, reapply your patch, take a diff and stick it in /etc again.

                                                                                                    1. 2

                                                                                                      I think I’m going to start building a read-only non-web browser for my network-less computing stack. I already have true-color image rendering working, even though it really only has 256 colors.

                                                                                                      1. 5

                                                                                                        [sic] is a community about everything that piques your curiosity and interest. To quote others before us: “anything that gratifies one’s intellectual curiosity”.

This is a lobste.rs cousin site I started approximately two weeks ago, which follows the lobste.rs schema to some degree. The test instance is at https://sic.pm/

Development focus will be on grouping tags into collections called "aggregations", which are buckets of tags with certain rules. A user follows aggregations, and those make up their front page. That way there's no rigid split of the website into subcategories, and new tag discovery can happen organically through the hierarchical tag graph.
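
I don't know sic's actual implementation, but as a rough sketch of the idea (all names here are hypothetical): an aggregation is a bucket of tags, a user follows aggregations, and the front page is just the stories whose tags fall into any followed bucket.

```python
# Hypothetical sketch of the "aggregations" idea, not sic's real data model.
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    tags: set[str]

@dataclass
class Aggregation:
    name: str
    tags: set[str]  # the bucket of tags this aggregation collects

    def matches(self, story: Story) -> bool:
        return bool(self.tags & story.tags)

def frontpage(stories: list[Story], followed: list[Aggregation]) -> list[Story]:
    return [s for s in stories if any(a.matches(s) for a in followed)]

# Example: following one aggregation surfaces stories from any tag in its bucket.
retro = Aggregation("retrocomputing", {"plan9", "vintage", "bbs"})
stories = [Story("Plan 9 from user space", {"plan9"}),
           Story("New JS framework", {"javascript"})]
print([s.title for s in frontpage(stories, [retro])])  # ['Plan 9 from user space']
```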

                                                                                                        Contributions welcome. Invitations can be requested here or on IRC, #sic on libera chat. I’d love to have people from lobste.rs over :)

                                                                                                        1. 2

                                                                                                          Is “community” here referring to the open source project or to the “test instance”?

                                                                                                          1. 1

The latter, but really both. We also gather at the IRC channel.