1. 2

    I’m simultaneously educated and greatly confused.

    If α and β are arbitrary entities and ℝ is any relation defined on them, the relational statement αℝβ is a logical variable which is true if and only if α stands in the relation ℝ to β. For example, if x is any real number, then the function (x > 0) - (x < 0) assumes the values 1, 0 or -1 according as x is strictly positive, 0 or strictly negative.

    Both these sentences make sense in isolation, but I don’t understand how they connect. What are α, β and ℝ in the example?

    I understand most of the next 7 pages. But I don’t understand how the beginning connects up with them, and why Knuth keeps referring to this notation as “Iverson’s convention.”

    1. 3

      Kenneth Iverson, of APL and J fame, invented the convention of using 1 to represent true and 0 to represent false, and then allowing all the ordinary integer operations on them. This is still how both J and APL work today. The second sentence from your quote shows how this convention allows you to easily define the “signum” operator on any x as (x > 0) - (x < 0). In what follows, Knuth lists other (unexpected but happy) advantages of this notation.
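
      For instance, in Python, where booleans are literally the integers 0 and 1 (essentially the same convention), the definition works verbatim:

          def sign(x):
              # The comparisons evaluate to 0 or 1, so they can simply be subtracted.
              return (x > 0) - (x < 0)

          assert sign(7) == 1
          assert sign(0) == 0
          assert sign(-3.5) == -1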

      Does that answer your question?

      1. 3

        Ohh, I elided the “(equal to 1)” when transcribing, but it’s load-bearing. Thanks!

      2. 3

        The expression (x > 0) - (x < 0) actually contains two examples. In the subexpression x > 0, we have α = x, β = 0, and ℝ is the relation “greater than” on numbers. In the subexpression x < 0, we have α = x, β = 0, and ℝ is the relation “less than” on numbers.

        1. 2

          Thank you!

      1. 1

        Does Frink really disallow adding two timestamps?

        1. 5

          If it does, then it is right to do so!

          A timestamp cannot be added to a different timestamp, as there is no natural zero value for it (the Big Bang could theoretically be one, but it is not practical for obvious reasons). Think of it like a position in space. Two points (places) cannot be added, but their difference can be calculated, and that difference can be added to either of them. In a coordinate system you have an origin, and places are differences from that point, and those differences can be added! You are probably thinking of a similar operation.

          You cannot add London to Budapest, but you can add the vector pointing from the point where the equator meets the Greenwich meridian to London to the vector pointing from that same origin to Budapest.

          I guess what you mean by adding two timestamps is something like 1999-12-31 23:59:59 + 2000-01-01 00:00:01 = 4000-01-01 00:00:00 (obviously incorrect numerically, just a demonstration of the concept). In that case you are not really thinking about timestamps: you are actually adding two time intervals and getting a third time interval. These timestamps have an ‘origin’ in their coordinate system, which is an era in timekeeping, and the above timestamps are in the Christian calendar era.
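
          (If a concrete example helps: Python’s standard datetime library happens to enforce exactly this point/interval distinction.)

              from datetime import datetime

              a = datetime(1999, 12, 31, 23, 59, 59)
              b = datetime(2000, 1, 1, 0, 0, 1)

              print(b - a)        # timedelta of 2 seconds: point - point = interval
              print(a + (b - a))  # point + interval = point (gives b back)
              # a + b raises TypeError: two points in time cannot be added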

          I hope this clarifies why timestamps cannot be added, and why the use case you have in mind might still make sense; your intuition of the problem was just a bit rough (if I guessed correctly).

            1. 1

              I don’t get what interface you mean. It does not make sense to me in any way other than the one stated above.

              Maybe you could share some user story you have in your mind?

              1. 2

                Sorry, my previous comments were written on my phone, so I was terse. Let me back up.

                The first part of OP points out several complexities about units. Some (like timestamps) permit selective operations. Some with the same dimensions (like surface tension and rate of change of force) are not commensurable. I agree with the observations, but as someone who’s built languages in the past I have no idea how to support these patterns in a general way that permits new units to be defined.

                I went looking in Frink’s default unit definitions, but saw no mention of timestamps or surface tension or gravity wells. Hence my original question: was the initial list of complexities just a general list, or was it actually alluding to features Frink supports? If it does support them, I’m curious to know if there are limitations that are just hard-coded to specific units, or if they’re provided in a general way that allows me to say “two values of unit foo cannot be added together.”

                1. 1

                  Ah, thank you for the clarification!

          1. 1

            What value would it be? Subtracting two timestamps is well defined: it gives the time interval between them.

            1. 2

              No, it makes sense as an interface. I just don’t immediately understand how the implementation would work. Do arbitrary new units support arbitrary restrictions on operations and compatible units? How does it deal with the incompatible N/m units?

              I guess I’m wishing your article was longer :)

              1. 1

                I just don’t immediately understand how the implementation would work.

                Not sure either but really glad you highlighted this!

                So I guess maybe a generalization is that timestamps, like some other…things, are a difference from whatever we chose as the conventional zero point, vs. durations, lengths, etc. being differences that don’t need a conventional zero defined.

                Like, when you say it’s 2020, you mean it’s 2,020 years from the human-chosen year 0. Subtraction cancels out the constant, getting you a difference independent of when year 0 was. Finding a midpoint between two times gets you another timestamp relative to the same year zero. Adding or doubling gets you some number with the conventional-zero-point constant factor doubled (you move year 0 one year, result moves two years) and that doesn’t make sense anymore.

                We could apply “positions relative to a conventional zero aren’t the same thing as deltas” in some places we don’t, e.g. you wouldn’t want to add two Fahrenheit temperatures, as opposed to a temperature and a temp. difference, or two temp. differences. (But since we write both as degrees F, it’s not intuitive/trivial to apply.) I now recall my physics teacher scolding me for talking about a voltage at a point in a circuit, rather than between two points – guess we wanted a difference, not a position relative to ground.
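
                To make the temperature bit concrete, using nothing but the usual conversion formulas:

                    def f_to_c(f): return (f - 32) * 5 / 9

                    # "Adding" two identical temperatures gives different answers depending
                    # on which scale you do the addition in, so it isn't physically meaningful:
                    print(f_to_c(50) + f_to_c(50))   # 10 C + 10 C           -> 20.0
                    print(f_to_c(50 + 50))           # 50 F + 50 F = 100 F   -> ~37.8

                    # A temperature plus a temperature *difference* is fine, and the scales
                    # agree (a 9 F step is a 5 C step):
                    print(f_to_c(50 + 9))            # -> 15.0
                    print(f_to_c(50) + 5)            # -> 15.0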

                Then there are tricky things like lat/lon pairs (angles relative to the poles), where you sure can’t add, can maybe sort of subtract (getting a pair of angle differences?), and you have some mostly-well-defined operations like midpoint and distance.

                I’m getting in over my head and my language is all wrong, but the whole issue of arbitrary zero points (and the other examples in the post like angular force and energy) suggest there is depth to this beyond the dimensional analysis I was taught in school, which is pretty fun.

                1. 8

                  In terms of established language, timestamps form a one-dimensional affine space, with time deltas as the associated one-dimensional vector space.
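
                  A minimal sketch of that structure (hypothetical Timestamp/Delta types, not any particular library):

                      from dataclasses import dataclass

                      @dataclass(frozen=True)
                      class Delta:                      # the vector: supports + and scaling
                          seconds: float
                          def __add__(self, other: "Delta") -> "Delta":
                              return Delta(self.seconds + other.seconds)
                          def __mul__(self, k: float) -> "Delta":
                              return Delta(self.seconds * k)

                      @dataclass(frozen=True)
                      class Timestamp:                  # the point: no + between points
                          seconds_since_epoch: float
                          def __sub__(self, other: "Timestamp") -> Delta:
                              return Delta(self.seconds_since_epoch - other.seconds_since_epoch)
                          def __add__(self, delta):
                              if not isinstance(delta, Delta):
                                  return NotImplemented   # Timestamp + Timestamp is a TypeError
                              return Timestamp(self.seconds_since_epoch + delta.seconds)

                  Point minus point gives a Delta, point plus Delta gives a point, and the midpoint of two timestamps a and b falls out as a + (b - a) * 0.5, while a + b is rejected.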

                  1. 1

                    Oh most interesting, thank you.

                    1. 1

                      Thank you!

            1. 98

              I don’t write tests to avoid bugs. I write tests to avoid regressions.

              1. 13

                Exactly. I dislike writing tests, but I dislike fixing regressions even more.

                1. 6

                  And I’d go even further:

                  I write tests and use typed languages to avoid regressions, especially when refactoring.

                  A test that just fails when I refactor the internal workings of some subcomponent is not a helpful test – it just slows me down. 99% of my tests are on the level of treating a service or part of a service as a black box. For a web service this is:

                  test input (request) -> [black box] -> mocked database/services
                  

                  Where the black box is my main code.

                  For NodeJS the express/supertest combo is awesome for the front bit. I wish more web frameworks in Rust etc. also had this, i.e. ways to “fake run” requests through without having to faff around with servers/sockets (and still be confident it does what it should).
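
                  For comparison, Python has a similar facility in FastAPI/Starlette’s TestClient, which drives the app in-process (sketching from memory, so details may be off):

                      from fastapi import FastAPI
                      from fastapi.testclient import TestClient

                      app = FastAPI()

                      @app.get("/health")
                      def health():
                          return {"status": "ok"}

                      client = TestClient(app)  # talks to the app directly, no real server or socket

                      def test_health():
                          response = client.get("/health")
                          assert response.status_code == 200
                          assert response.json() == {"status": "ok"}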

                  1. 5

                    Now the impish question: what is the correct decision if the test is more annoying to write than the regression is to observe and fix?

                    1. 3

                      Indeed!

                      (I research ways[1] to avoid that. But of course they don’t apply when you’ve already chosen a stack and framework for development. In my day job we just make hard decisions about priority and ROI and fall back sometimes to code comments, documents or oral story-telling.)

                      [1] https://github.com/akkartik/mu1#readme (first section)

                      1. 2

                        Every project is different, but ideally you can invest time in the testing infrastructure such that writing a new test is no longer annoying. I.e., maybe you can write reusable helper functions and get to the point where a new test means adding an assertion, or copy/pasting an existing test and modifying it a bit. The tools used (test harness, mocking library, etc.) also play a huge role in whether tests are annoying or not; spending time ensuring you’re using the right ones (and learning how to properly use them) is another way to invest in testing.

                        The level of effort you should spend on testing infrastructure depends on the scope, scale and longevity of your project. There are definitely domains that will be a pain to test pretty much no matter what.
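
                        As a toy, self-contained illustration of the kind of helper I mean (the function under test and the helper are both made up), each new case then becomes one or two lines:

                            import json

                            def parse_config(text):            # code under test (toy example)
                                cfg = json.loads(text)
                                cfg.setdefault("retries", 3)
                                return cfg

                            def check(text, **expected):       # reusable test helper
                                cfg = parse_config(text)
                                for key, value in expected.items():
                                    assert cfg[key] == value, (key, cfg[key], value)

                            def test_defaults():
                                check('{"host": "db"}', host="db", retries=3)

                            def test_override():
                                check('{"host": "db", "retries": 5}', retries=5)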

                        1. 2

                          In my experience such testing frameworks tend to add to the problem, rather than solve it. Most testing frameworks I’ve seen are complex and can be tricky to work with and get things right. Especially when a test is broken it can be a pain to deal with.

                          Tests are hard because you essentially need to keep two functions in your head: the actual code, and the testing code. If you come back to a test after 3 years you don’t really know if the test is broken or the code is broken. It can be a real PITA if you’re using some super-clever DSL testing framework.

                          People trying to be “too clever” in code can lead to hard to maintain code, people trying to be “too clever” in tests often leads to hard to maintain tests.

                          Especially in tests I try to avoid needless abstractions and be as “dumb” as possible. I would rather copy/paste the same code 4 times (possibly with some slight modifications) than write a helper function for it. It’s just such a pain to backtrack when things inevitably break.

                          It really doesn’t need to be this hard IMHO; you can fix much of it by letting go of the True Unit Tests™ fixation.

                          1. 2

                            I don’t disagree, and I wasn’t trying to suggest using a “clever” testing framework will somehow make your tests less painful. Fwiw I even suggested the copy / paste method in my OP and use it all the time myself :p. My main point was using the right tool / methods for the job.

                            I will say that the right tool for the job is often the one that is the most well known for the language and domain you’re working in. Inventing a bespoke test harness and trying to force it on the 20 other developers who are already intimately familiar with the “clever” framework isn’t going to help.

                            1. 2

                              Fair enough :-)

                              I will say that the right tool for the job is often the one that is the most well known for the language and domain you’re working in. Inventing a bespoke test harness and trying to force it on the 20 other developers who are already intimately familiar with the “clever” framework isn’t going to help.

                              I kind of agree because there’s good value in standard tooling, but on the other hand I’ve seen rspec (the “standard tool” for Ruby/Rails testing) create more problems than it solves, IMHO.

                      2. 4

                        When fixing testable bugs you often need that “simplest possible test case” anyway, so you can identify the bug and satisfy yourself that you fixed it. A testing framework should be so effortless that you’d want to use it as the scaffold for executing that test case as you craft the fix. From there you should only be an assert() or two away from a shippable test case.

                        (While the sort of code I write rarely lends itself to traditional test cases, when I do write them, the challenge I find is avoiding my habit of writing code defensively. I have to remind myself that I should write the most brittle test case I can, and decide how robust it needs to be if and when it ever triggers a false positive.)

                        1. 3

                          +1

                          This, at the start of the second paragraph, is the greatest misconception about tests:

                          In order to be effective, a test needs to exist for some condition not handled by the code.

                          A lot of folks from the static typing and formal methods crowd treat tests as a poor man’s way of proving correctness or something… This is totally not what they’re for.

                          1. 1

                            umm…..aren’t regressions bugs?

                            1. 9

                              Yes, regressions are a class of bug. The unwritten inference akkartik made when saying “I don’t write tests to avoid bugs” is that it refers specifically to writing tests to pre-empt new bugs before they can be shipped.

                                Such defensive use of tests is great if you’re writing code for aircraft engines or financial transactions; whereas if you’re writing a Christmas tree light controller as a hobby it might be seen as somewhat obsessive-compulsive.

                              1. 0

                                I-I don’t understand. Tests are there to catch bugs. Why does it matter particularly at what specific point in time the bugs are caught?

                                1. 8

                                  Why does it matter particularly at what specific point in time the bugs are caught?

                                  Because human nature.

                                  Oftentimes a client experiencing a bug for the first time is quite lenient and forgiving of the situation. When it’s fixed and then the exact same thing happens again later, the political and financial consequences are often much, much worse. People are intensely frustrated by regressions.

                                  Sure, if we exhaustively tested everything up front, they might never have experienced the bug in the first place, but given the very limited time and budgets on which many business and enterprise projects operate, prioritizing letting the odd new bug slip through in favor of avoiding regressions often makes a hell of a lot of sense.

                                  1. 5

                                    Not sure if you are trolling …

                                    Out of 1000 bugs a codebase may have, users will never see or experience 950 of them.

                                    The 50 bugs the user does hit, though – you really want to make sure to write tests for those, because the user has already hit them once, and if they break again, the user will immediately know.

                                    That’s why regression tests give you a really good cost/benefit ratio.

                                    1. 3

                                      A bug caught by a test before the bad code even lands is much easier to deal with than a bug that is caught after it has already been shipped to millions of users. In general the further along in the CI pipeline it gets caught, the more of a hassle it becomes.

                                      1. 3

                                        The specific point in time matters because the risk-reward payoff calculus is wildly different. Avoiding coding errors (“new bugs”) by writing tests takes a lot of effort and generally only ever catches the bugs which you can predict, which can often be a small minority of actual bugs shipped. Whereas avoiding regressions (“old bugs”) by writing tests takes little to no incremental effort.

                                        People’s opinion of test writing is usually determined by the kind of code they write. Some types of programming are not suited to any kind of automated tests. Some types of programming are all but impossible to do if you’re not writing comprehensive tests for absolutely everything.

                                        1. 2

                                          The whole class of regression tests was omitted from the original article, which is why it’s relevant to bring them up here.

                                          1. 2

                                            The article says “look back after a bug is found”. That sounds like they mean bugs caught in later stages (like beta testing, or in production).

                                            If you define bugs as faults that made it to production, then faults caught by automated tests can’t be bugs, because they wouldn’t have made it to production. It’s just semantics; automated tests catch certain problems early no matter what you call them.

                                            1. 1

                                              I’m of the same opinion. It means the reason we write tests is not to catch bugs in general, but specifically to catch regressions. With this mindset, all other bug-catching is incidental.

                                      1. 2

                                        This is very hard to read for someone who has no context on the larger relational-pipes project. Can you link to the manpage of the new scripts, or some equivalent?

                                        You seem to be motivating it with some example involving fstab, but it’s not clear what the inputs and desired outputs are.

                                        You generate some input for the script, but it too involves some bespoke command (relpipe-in-cli). Can you just show numbers.rp and skip how it’s generated, since that seems irrelevant to the main commands?

                                        1. 3

                                          For context I recommend reading the front page, Principles and Classic pipeline example.

                                          Relational pipes are an open data format designed for streaming structured data between two processes.

                                          i.e. it is not about particular tools, rather:

                                          More generally, Relational pipes are a philosophical continuation of the classic *NIX pipelines and the relational model.

                                          This software is still in the development phase and its API is unstable (before v1.0 there might be incompatible changes; not many, it is relatively stable, however: you have been warned :-). There is still quite a long TODO list… and using the software is part of the process of crafting the specification. Feedback from potential users and practical experience are important to verify theoretical concepts (that look nice on paper) and to tune implementation details.

                                          The content of the *.rp files is not important for now. Before v1.0 the format specification will be published, and then it will make sense to work with the content of these files (however, it would still be much more convenient to use a reader or writer library – so the specification is mostly important to those who want to implement such libraries in languages that are not yet supported). In the current (pre-v1.0) versions, I recommend using another format like CSV, XML or Recfile – there are input and output filters for them, so if you serialize data into e.g. CSV or Recfile, you will be able to read it in future versions. The pre-v1.0 versions are not backward compatible, but there is a „Backward incompatible changes“ chapter in the release notes. The relational format and the API of the libraries and tools will be stable and semantically versioned from v1.0 onward.

                                          If you are interested, please share your use cases and comments (maybe the mailing list would be a better place for a more in-depth discussion). BTW: there were prolific discussions on Czech forums that led e.g. to implementing the AWK transformation module and other improvements – but here on Lobste.rs the feedback rate is very low.

                                        1. 8

                                          I’m building little prototypes with my language that’s implemented in machine code and mostly translates 1:1 to machine code (paper [pdf; 12 pages]; repo; blog post). It still has some rough edges, and does zero type-checking. But I’m sick of compiler-writing and the language is at some minimal level of ‘working’. Examples so far:

                                            Now I’m working on a more serious app: a multi-column paginator for reading text files. I’d now like to extend it with some basic markdown-like notation. Then add support for hyperlinks and voila, a simple browser for a subset of the web that’s bootstrapped out of machine code. And no images or javascript. I’ll claim that’s for privacy, yes.

                                          Then again, I may well just go back to Mu hacking. I’ve already found one bug in Mu so far. And a few instances where the lack of checks is likely to bite anyone building apps, particularly since you have to juggle registers as part of types.

                                          1. 2

                                            Your work continues to impress. Seriously love the factorial example.

                                            1. 1

                                              Then again, I may well just go back to Mu hacking. I’ve already found one bug in Mu so far. And a few instances where the lack of checks is likely to bite anyone building apps, particularly since you have to juggle registers as part of types.

                                              Thanks for this. Mu is being offered as a super tool for beginning/newer coders, so every rough edge we can file is good for everyone.

                                              1. 2

                                                That is still a distant goal, but you’re right that I’m aiming towards it.

                                                Right now programming in Mu requires a willingness to learn about the internals of processors, and the persistence to debug problems all through the still-immature tooling. I think this way has promise in time to be a better experience than conventional software, but for now it’s quite rough. Anybody trying it out should liberally pepper me with questions. I can’t yet guarantee a smooth experience, but I can guarantee I’ll be right there on the road beside you if you will ask questions. And together we’ll make things a little bit better for the next traveler.

                                                1. 2

                                                  This makes me wonder if we’re talking about the same Mu.

                                                  Mine is a Python editor for beginning programmers with excellent support for CircuitPython

                                                  1. 2

                                                    Oh jeez, I hadn’t seen this one before. I’ve clarified above which Mu I mean.

                                                    1. 2

                                                      Neat! I’d not heard of this before. Do you have an end use case in Mu in mind or is it just an educational experience for you?

                                                      1. 2

                                                        Not a use case, exactly, but I go into the motivations in the paper. The abstract/intro/conclusions in particular should be a quick read on the goals and belief system behind the project.

                                              1. 1

                                                Not sure what you’re referring to. I don’t expect the style guide to be 100% in line with Linus’s opinion, since that’s whimsy whereas the guide is a compromise. But this seems more or less compatible:

                                                Statements longer than 80 columns will be broken into sensible chunks, unless exceeding 80 columns significantly increases readability and does not hide information.

                                                1. 2

                                                  Here’s Linus pointing out that the style guide needs updating: https://lkml.org/lkml/2020/5/28/1245

                                                  If 80 columns isn’t reasonable anymore, then there’s no reason to mention 80 columns in the style guide.

                                              1. 3

                                                  Not really a new idea. Whitespace-based syntax has been done before in a Scheme SRFI, as well as in a number of Scheme variants (e.g. Wart).

                                                1. 4

                                                  There is also SRFI-119, which allows you to continue arguments to a procedure on one line.

                                                  1. 3

                                                    Others commented the same back in 2013 when this was published. It has not been done before in SRFI-49, SRFI-119, SRFI-110, nor Wart.

                                                    1. 3

                                                      Could you comment on the differences? They look superficially similar but I can imagine this kind of syntax lives or dies on small subtleties.

                                                      1. 3

                                                      The other projects attempt to reduce the number of parentheses used by introducing whitespace, and allow free mixing of indentation with s-expressions. This free mixing results in a variety of trade-offs and exceptions.

                                                      z-expressions retain the uncompromising regularity of s-expressions, replacing parentheses entirely with an indentation scheme capable of expressing any grouping that parentheses can. The new benefit is that input to macros (which are all reader macros in z) has no start/end delimiter tokens (such as () or {} or []), and therefore no escape sequences are needed. The indentation delimits the text input, just like a code block in markdown. Unlike a Common Lisp reader macro, you can parse the rest of the document before finding an ending delimiter.

                                                        1. 1

                                                          Unlike a Common Lisp reader macro, you can parse the rest of the document before finding an ending delimiter.

                                                          Ah, that’s really interesting. Thanks.

                                                      2. 3

                                                        I agree. Z is just damn cool. (Author of Wart here.)

                                                      1. 5

                                                        Not that paper again. I have a bite-sized rebuttal, fight me:

                                                        The authors’ “ideal world” is one where computation has no cost, but social structures remain unchanged, with “users” having “requirements”. But the users are all mathematical enough to want formal requirements. The authors don’t seem to notice that the arrow in “Informal requirements -> Formal requirements” may indicate that formal requirements are themselves accidental complexity. All this seems to illuminate the biases of the authors more than the problem.

                                                        (So I mostly agree with OP.)

                                                        1. 34

                                                            Every time I see this article, it causes me to go look up and read the one below, and it has yet to fail to improve my day:

                                                          https://www.usenix.org/system/files/1311_05-08_mickens.pdf

                                                          1. 6

                                                            This is an excellent rebuttal that I’d never considered, even though I’ve read and enjoyed both articles multiple times.

                                                              1. 1

                                                                I’m so excited, thank you for sharing this

                                                            1. 29

                                                              I’ve never worked on a project complicated enough to need something like Ninja, but I appreciated the humble, down-to-earth tone here. This part especially struck me, about being a maintainer:

                                                              A different source of sadness were the friendly and intelligent people who made reasonable-seeming contributions that conflicted with my design goals, where I wanted to repay their effort with a thorough explanation about why I was turning them down, and doing that was itself exhausting.

                                                              1. 15

                                                                I really liked the article too, and learned a bunch of things from it, but I would quibble with this one part:

                                                                People repeatedly threatened to fork the project when I didn’t agree to their demands, never once considering the possibility that I had more context on the design space than they did.

                                                                Forking is a feature, not a bug. Forks are experiments, and can be merged, and that has happened many times (XEmacs, etc.)

                                                                So if I were the maintainer, I would encourage all the dissenters to fork (as well as rewrite, which he mentioned several people did). Ninja is used by Chrome so it’s not like its core purpose will be diluted.

                                                                Of course acting like it’s a “threat” to fork is silly … the forkers will quickly find out that this just means they have a bunch more work to do that the maintainers previously did for them.

                                                                So in other words, if you don’t want users to treat the project as a product with support, then don’t look down upon forks? That is pretty much the way I think about Oil. It’s a fairly opinionated project, and I don’t discourage forks (but so far there’s been no reason to).

                                                                  I would use the forks as a sign that people actually care enough to put their own skin in the game… and figure out a way to merge the changes back compatibly after a period of research/development (or not, if it is deemed out of scope).


                                                                  Anyway, I think Ninja is a great project, and I hope to switch Oil to it in the near future for developer builds (distro builds can just use a non-incremental shell script). A couple of years ago, I wrote a whole bunch of GNU make from scratch and saw all its problems firsthand. Oil has many different configurations, many steps that could be executed in parallel, and many different styles of tools it invokes (caused by DSLs / codegen). So GNU make falls down pretty hard for this set of problems.

                                                                1. 10

                                                                    I think it really depends on the fork itself. Experimental forks are great, but often when someone “threatens” it, they also intend to pull part of the community off with them and may not have any intention of ever giving anything back. One example of this was ffmpeg vs libav, where the latter was a “hostile fork” that caused all sorts of general trouble (remember when running ffmpeg on Debian said the command was deprecated?), and even though it eventually died off in popularity, it didn’t happen soon enough to avoid all sorts of nasty drama.

                                                                  1. 5

                                                                    If you don’t plan to support the project, the prospect of others pulling part of the community off should feel like a relief.

                                                                    I agree with you that it is possible for forking to be a threat, but it’s usually just the way it’s said: do this or else. (And usually it’s an empty threat. Someone clueless enough to consider it a threat is usually not actually planning to follow through.)

                                                                    But a polite heads-up that one intends to fork a project should always be cause for relief, in working out some unresolved tension in the community. Taking part of the community away is the whole point of a fork. If the new fork doesn’t intend to support a part of the community they wouldn’t be talking about it. And if people didn’t try it out, it wouldn’t be an experiment.

                                                                    1. 2

                                                                      Yeah that is one fork I remember being surprised by since I’m a Debian/Ubuntu user… But I would still say the occasional fork is evidence of the system working as intended. There was a slight inconvenience to me, but that doesn’t outweigh improving the overall health of the ecosystem through a little competition.

                                                                      Some people may do it in bad faith, but that doesn’t change the principle. It’s hard to see any instance where forking made things worse permanently, whereas I can see many cases where the INABILITY to fork (e.g. closed source software) made things worse forever.

                                                                      e.g. when I used windows I used to use http://www.oldversion.com/

                                                                      e.g. Earlier versions of Winamp were much better. Same with a lot of Microsoft products. If it were open source then there would be no need for “oldversion.com” (and there is none AFAIK). The offending new features can be modularized / optimized. There are some open source releases that are bungled, but outsiders are often able to fix them with patches or complaints.

                                                                    2. 4

                                                                      [OP here] Thanks for this comment, it is very insightful.

                                                                      Upon reflecting after writing the post I came to the same conclusion as you, that I should have encouraged forks more as a way to offload responsibility. I think at the time I was more excited about “fame” or whatever for my project, and now that I’m on the other side of it I realize that it wasn’t worth it. I have a similar thing with my work at Google – a younger me wanted to share it all with the world, but these days I am relieved when only people within Google can ask me about it.

                                                                      Unfortunately forks are not as free as we’d like. Imagine someone makes a fork that adds some command-line flag they want, and then some random package starts depending on that; now users are confused about which fork to use, contributors are split, Debian has to decide whether to package both forks, and so on.

                                                                      I think I wouldn’t mind forks if they were about meaningful changes, like the person who wanted to make Ninja memory resident to make it scale to their problem space. Especially when you’re making an app that works on multiple platforms, I’ve sometimes wondered if the best way to maintain it is via mutually communicating forks (e.g. a new release on Linux means that the Windows fork can adapt which changes are relevant to it). It’s the forks about trivialities that are frustrating, and in particular because in the context of a trivial change the word “fork” is brought up not in the way you intend, but rather just as a rhetorical weapon.

                                                                      1. 2

                                                                        Yeah it’s a tough issue for sure. I think different names are important, so a publicly distributed fork of ninja shouldn’t be called ninja. That way it can support different flags, or even a different input language.

                                                                        That seems to be respected by most forkers: libav != ffmpeg, and emacs != xemacs, etc. The distro issue is also tricky, as Debian switched to libav and then back to ffmpeg. IMO they should try to make packages “append only”, but I understand that curation also adds some value.

                                                                        But I’d say the people who would actually follow through on a fork are the least of the problem. Of course it’s not easy to tell who those people are to begin with. Those are the people who have the technical expertise to help the project too!

                                                                        Another good example of a fork is Neovim. In this video one of the primary contributors to Neovim shows some pretty good evidence that it has met user demand, and also motivated vim author Bram Moolenaar to add features that he resisted for many years!

                                                                        https://vimconf.org/2019/slides/justin.pdf

                                                                        https://www.youtube.com/watch?v=Bt-vmPC_-Ho

                                                                        It may not have been pleasant for Bram to have his work criticized, but I think it’s a healthy criticism when someone puts in work, rather than low effort, uneducated flaming.

                                                                        (I’m still a Vim user, and haven’t tried Neovim, but I appreciate the experimentation. And honestly I learned a whole bunch of things about Vim internals from that talk, despite having used it for 15 years. It’s good to have new eyes on old code.)

                                                                        1. 2

                                                                          I should also mention that I think the Elm project could save themselves a lot of hassle and drama by respecting several open source norms:

                                                                          1. If you don’t want people to treat your project like a product, don’t engage in “marketing”!

                                                                          There’s too much marketing on their home page, in a way that appears to elevate the authors/maintainers above the users: https://elm-lang.org/

                                                                          It looks like a product. In contrast, peers don’t market to each other. Instead they tell people what they did in plain language. Sometimes that involves teaching and I’ve found that people respect that. But there is a lot of marketing that’s not teaching.

                                                                          2. Similarly, put the limitations front and center. WE BREAK CODE THAT WORKED. That is perfectly within your rights to do, as long as you clearly state that you violate that norm. I wrote a couple of years ago that they need a FAQ here: http://faq.elm-community.org/ about that, since it appears to literally be the #1 FAQ, but it’s not mentioned anywhere.

                                                                          3. Don’t be hostile to forking. This is mentioned here: https://lukeplant.me.uk/blog/posts/why-im-leaving-elm/#forkability

                                                                          The author of that post may have been unreasonable in other respects, but I do agree about forking.

                                                                          So while I think the talk you linked is thoughtful (I watched it a while ago), I think the project is suffering from some self-inflicted pain…

                                                                          1. 1

                                                                            I think the Elm project is trying to forge a new form of financing code production, midway between outright corporatization and open source “share cropping”.

                                                                            Leaving aside their internal politics, I think something like that is worth exploring and maybe required for the long term health of the space.

                                                                      2. 5

                                                                        Likewise!

                                                                        Although this is older now and some sections may be out of date, the author also has an essay on ninja in The Performance of Open Source Applications which is well worth reading.

                                                                        The tone is similarly down-to-earth and there’s a bit more in-depth technical content on how it was optimised. It made an impression on me during a recent re-read as one of the better essays in an overall excellent book.

                                                                      1. 2

                                                                        It’s been many years since I played with Haskell, but the last time I did I remember running into a major stumbling block when I realized not all Haskell code can be entered at the GHCi repl. Maybe definitions can’t be added? I’d appreciate an explanation of how you use the REPL without hitting your shins on that constraint all the time.

                                                                        1. 5

                                                                          It has been many years! Bindings have been supported for 4 years now.

                                                                          https://gitlab.haskell.org/ghc/ghc/issues/7253

                                                                          1. 3

                                                                            One of the key differences with the REPL is that you used to need to use the let keyword for definitions.

                                                                            1. 2

                                                                              Short definitions can be added in the repl, but it’s more common to write them in a file and reload it on each change.

                                                                            1. 9

                                                                              I’ve just started using Spacemacs over Vim so I could get access to Racket mode, after fellow Lobsters recommended I try it.

                                                                              Spacemacs is great! Evil mode works just as I expect, the status lines and such make sense and aren’t completely ugly, SPC-SPC gets you a fuzzy command search, and since Emacs commands are verbose, most of the time I can just guess what I want: SPC-SPC-wrap shows me toggle-word-wrap, which was indeed what I wanted. When I open a file whose format is recognized but whose plugin I don’t have, I’m immediately asked whether I want the plugin, and away it goes.

                                                                              I might have been able to hack such an environment together in Vim, but after 6 years of Vim, I never made it even half as usable.

                                                                              I think it’s 100% OK to treat things like Emacs as base foundations, and then build on top of them. Just like how Debian is a rock solid foundation, and Ubuntu gets built on top. Let the Emacs people be slow and make things work for them and not get all bent out of shape on use numbers. Have those things layered on top be the ones that grow users.

                                                                              1. 4

                                                                                Same-ish but with doom-emacs, which is lighter (I think?) and does all of the above.

                                                                                1. 3

                                                                                  I’ve just started using Spacemacs over Vim

                                                                                  I wonder how long you’ll stick with it. I started using Emacs with Spacemacs, but over time I encountered so many bugs and slowness that I quit.

                                                                                  Then I started just with vanilla Emacs + Evil and gradually added packages. I stole the idea from Spacemacs to bind commands to SPC-something with general.el and which-key. Now I have my own little Spacemacs which is fast and rarely breaks. [1]

                                                                                  Spacemacs is a good gateway drug though. I would definitely recommend that newcomers to Emacs start with Spacemacs, because it shows what is possible with Emacs.

                                                                                  [1] Most of the configuration, some language-specific parts are in different files: https://github.com/danieldk/nix-home/blob/master/cfg/emacs.nix

                                                                                  1. 3

                                                                                    Spacemacs, but over time I encountered so many bugs and slowness that I quit

                                                                                    I had that issue with the main branch, but the devel branch rarely gives me any problems and I experience none of the slowness that plagued the main branch (for me).

                                                                                    1. 1

                                                                                      I’ve been using spacemacs for about 4 years. WFM.

                                                                                    2. 1

                                                                                      What does hitting Tab do for you in your Spacemacs setup? Coming from Vim, having it often do nothing drives me batty.

                                                                                      1. 1

                                                                                        Let the Emacs people be slow and make things work for them and not get all bent out of shape on use numbers.

                                                                                        I wonder if those percentages include spacemacs and other layers on top of emacs. I would expect them to include them, as those people are also “using emacs”, maybe less directly, but it’s certainly emacs they’re running.

                                                                                      1. 3

                                                                                        I was recently reading the vi (not vim) manual page and found this note under “SET OPTIONS”:

                                                                                        lisp [off]
                                                                                             Vi  only.   Modify  various  search commands and options to work
                                                                                             with Lisp.  This option is not yet implemented.
                                                                                        

                                                                                        I don’t know if this is just an nvi thing, or if such a feature was generally never implemented. It would certainly be interesting to see what these “modifications” would be. Something like paredit, or just changing the definition of words to lisp symbols…

                                                                                        But related to this article, is there anything one does better with lisp for vim? Would you recommend it to a beginner in lisp with no editing experience/preference? I’ve always just used Emacs, and ironically never had a perfect lisp setup, but then again I also can’t compare.

                                                                                        1. 1

                                                                                          Interesting! FWIW, Vim does have set lisp:

                                                                                          https://vimhelp.org/options.txt.html#%27lisp%27

                                                                                          While most lispers seem to use Emacs, Vi is by no means rare. I remember reading that Paul Graham uses some sort of Vi. And I use Vim, to the extent that I can be considered a lisper.

                                                                                          1. 1

                                                                                            Another example is the author of Let over Lambda, Doug Hoyte, who explains why he uses vi in said book. Then again, because they were writing 15-25 years ago, their story is a totally different one from the slimv and vlime usage explained in this article.

                                                                                        1. 1

                                                                                          Attending the Convivial Computing Salon (talks open to all) and (procrastinating on) preparing my talk for it.

                                                                                          1. 34

                                                                                            Accidental complexity is just essential complexity that shows its age.

                                                                                            This is dangerously wrong, and I see the unimpeachable evidence that it’s wrong every day.

                                                                                            A ton of complexity comes from developers choosing suboptimal designs and overly complex tools. Two people writing the same software with the exact same set of requirements, edge cases, weird assumptions, etc, can arrive at designs with radically different complexity.

                                                                                            So while yes, there is essential complexity that has to live somewhere, the high-order bit is usually the complexity of the design, and the skill of the designer matters a lot.

                                                                                            From one of the author’s own posts:

                                                                                            It’s one small piece of data-centred functional design, where thinking a bit harder about the problem at hand greatly simplified what would have been a more straightforward and obvious implementation choice in a mutable language.

                                                                                             The other problem with the POV being argued for here is that, even while it contains some truth, as soon as it is accepted it gets used to justify shoddy work as “inevitable.”

                                                                                            1. 4

                                                                                              Would the two developers also have the same knowledge, same environment, context, and time pressures? Ignoring these is ignoring fundamental aspects of software design.

                                                                                              The idea is not that the shoddy work is inevitable, but that complexity is inevitable. You can’t control it, but you can manage it. The previous article of mine you referred to in terms of functional design is one example of doing work on an implementation that was slower (both to write and to execute), less simple/obvious in approach (it needed more documentation and training for newcomers), and was conceptually more complex and demanding, but ended up composing better to simplify some amounts of maintenance (the logical separation and how change sets would need to be applied in the future).

                                                                                              If we had been under intense time pressure to ship before losing major contracts? Taking the time to do it right the first time would have definitely been the wrong way to go about it. If we ever had the need or shifting requirements for logs “as soon as the request is received” instead of once it is over? We would have been retrospectively wrong. You put in the time sooner or later, and context matters. It’s why I say accidental complexity is just essential complexity that shows its age.

                                                                                              1. 11

                                                                                                but that complexity is inevitable.

                                                                                                 Unnecessary, accidental complexity isn’t inevitable, as illustrated by those who avoid it in their work. It’s a cultural and/or educational problem. An example is language compile times. C++ has a design that makes it really, really hard to compile. The Lisps and D can do either more than or as much as C++ in terms of features. Yet they compile way, way faster, since they were designed to be easy to compile. In different ways, they eliminated accidental complexity that would have hurt them on that metric.

                                                                                                 Another example is probably SystemD. I’ll say ahead of time that I’m not a zealot in any camp of that debate. I have just seen designs that keep only simple things running in privileged processes, with user-mode components for system management, and kernel hooks for components like that to use. The SystemD solution was a massive pile of code running with high privilege. We have decades of evidence that this is bad for maintenance, reliability and security. Yet they did it anyway, for who knows what reason. Meanwhile, the modular and isolating design of QNX kept those systems just running and running and running, with changes being easier to make and causing less breakage when they were made.

                                                                                                 On the usability side, I always thought it was strange how hard it was to set up most web servers. If one was easy, you’d have to worry about its security or something. Supporting new standards just added more complexity for both developers and users. Then two people decided to build a usable one, Caddy, in a memory-safe language, supporting modern standards. The advanced features are there if you need to put time into them. Otherwise, install, config, HTTPS, etc. are super easy compared to prior systems I saw. It’s super easy because the developers intentionally eliminated most of the accidental complexity in setup and configuration for common usage.

                                                                                                So, it just looks like much of the accidental complexity isn’t inevitable. Developers either don’t know how to avoid it or are intentionally adding it in for questionable reasons. If it’s inevitable, it’s for social reasons rather than anything inherent to complexity. Maybe the social sciences, not complexity science, need to be responsible for studying how to combat accidental complexity. Might get further than the tech people. ;)

                                                                                                1. 4

                                                                                                  Maybe the social sciences, not complexity science, need to be responsible for studying how to combat accidental complexity.

                                                                                                   I believe that this is much more true than commonly acknowledged. There is so much to gain by having insight into the basic rules of how people behave and work together. Currently I feel like we are completely in the dark. 30 years from now, I imagine that the main topic in software development will not be which language or IDE to use but how to optimize circumstances, incentives, and relationships for the most stable code.

                                                                                                  1. 1

                                                                                                    I think you’re right about the former, but in 30 years the incentives that produce the current obsessions will not have changed and neither will the willingness to ignore human factors while producing crap.

                                                                                                2. 8

                                                                                                  “People often don’t have time to do a good job” isn’t the same thing as “Complexity is unavoidable”.

                                                                                                  1. 4

                                                                                                    “Resources are finite and decisions must be made accordingly” is how I would frame it.

                                                                                                    1. 8

                                                                                                       Which implies that complexity is optional, and people often choose to invest elsewhere – and are then surprised when making changes is hard due to poor choices.

                                                                                                      1. 5

                                                                                                         Software isn’t written in a vacuum by robots with infinite time and energy, in an immutable environment that never evolves (nor gets impacted by the software that runs in it). There’s a reason most of us don’t use formal proofs to write blog engines. The complexity is an inherent part of development, the same way juggling between finite resources is not avoidable. You can’t de-couple the programs to be written from the people who will write them. That just means you’re working from a poorer model of things.

                                                                                                        1. 6

                                                                                                          You seem to have completely changed arguments. Capitalism and the resulting market pressure may breed complexity, but that’s a rather different conversation.

                                                                                                          1. 5

                                                                                                            No. Complexity can control complexity, as mentioned in the post via the law of requisite variety. To simplify software, you still have to add/shift complexity; a shift in perspective is asking the devs to gain better understanding, to absorb (and carry) that complexity in their heads to simplify the program. Someone, somewhere, whether in code or in their mind, needs to establish a more complete and complex model of the world.

                                                                                                             The only difference between an implementation that was done fast, as required to work, and an implementation that was cleaned up and made simpler is that, as an organization (or as an engineer), we have taken the time to gain expertise (transfer knowledge “into the head” to reuse terms from the post) to extract it from the program, clarify it, and re-inject a distilled form. In order to maintain the software effectively, we have to maintain that ability to control complexity, that fancier model which now resides in people. If you don’t do that, the next modifications will also be full of “accidental complexity”.

                                                                                                             If you write simple software without necessarily handling complex things (i.e. there is no gain in complexity in the entire development chain, whether in code or in people), the complexity gets passed on to the users, who now have to adjust to software that doesn’t handle complex cases. They learn to “use the software the right way”, to enter or format data in a certain way, to save before doing operations that often crash, and so on. They end up coping with what wasn’t handled anywhere before it made it to them.

                                                                                                            I maintain that complexity just shifts around.

                                                                                                            1. 3

                                                                                                              I think I understand your claim. However, it’s not clear to me, once you expand the term ‘complexity’ to encompass so much, what that buys you. Could you give an example where somebody thinking about reducing complexity would act differently from somebody “embracing it, giving it the place it deserves, focusing on adapting to it”?

                                                                                                              Edit: Ah, looks like @jonahx asked the same thing.

                                                                                                    2. 2

                                                                                                      I also think that “modern” software has a fetish for features, and, indirectly, for complexity.

                                                                                                      When I was programming in qbasic I could draw a graph in literally four lines of code.

                                                                                                      Now in any “modern” environment this will be really hard. I could probably work it out in C#.NET, but it will take me at least half an hour. Then sure, it will have a little more than the qbasic program (a window around it, which may be resizable, a lot more colors, and maybe double buffering), but none of that was a goal; I just wanted to draw a graph.

                                                                                                1. 4

                                                                                                  A lot of software is already like this through plugins/extensions. For example Firefox, Wordpress, Vim, etc.

                                                                                                  The problem with locally modifying software is that it’ll become rather hard to update. I run a modified version of dwm, and even though it’s pretty simple and doesn’t get many updates, updating it is time-consuming as I have to merge my local changes with upstream. I’m not entirely sure how practical truly editable software will be.

                                                                                                   I’ve been thinking about writing my own WM which just handles the basics, and everything else is an external program. You don’t need the WM to do tiling or to move the windows; something like xdotool can do that just as well. I’m not sure how practical it would be, or when exactly you’d run into limitations with this approach, but it would be an interesting experiment.
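
                                                                                                   As a rough sketch of the idea (assuming xdotool is installed; the hard-coded screen size and helper names here are made up for illustration), a tiny Python script can already do the “tiling” part without any help from the WM itself:

                                                                                                       import subprocess

                                                                                                       def active_window() -> str:
                                                                                                           # Ask X for the id of the currently focused window.
                                                                                                           out = subprocess.run(
                                                                                                               ["xdotool", "getactivewindow"],
                                                                                                               capture_output=True, text=True, check=True,
                                                                                                           )
                                                                                                           return out.stdout.strip()

                                                                                                       def snap_left(win: str, screen_w: int = 1920, screen_h: int = 1080) -> None:
                                                                                                           # "Tile" the window into the left half of the screen,
                                                                                                           # with no tiling logic in the window manager at all.
                                                                                                           subprocess.run(["xdotool", "windowmove", win, "0", "0"], check=True)
                                                                                                           subprocess.run(
                                                                                                               ["xdotool", "windowsize", win, str(screen_w // 2), str(screen_h)],
                                                                                                               check=True,
                                                                                                           )

                                                                                                       if __name__ == "__main__":
                                                                                                           snap_left(active_window())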

                                                                                                  1. 3

                                                                                                    The author contrasts plugins/extensions against editing software. Extensions require the original developer to try to anticipate all the places that a user might want to change the behavior and provide a fixed api that can accommodate everyone’s needs. As opposed to encouraging the user to just download the source and make the changes directly.

                                                                                                    Obviously that isn’t going to work with firefox today because you need some kind of supercomputer just to compile it from scratch. But maybe we can figure out how to design software to make that kind of direct editing easier.

                                                                                                    1. 1

                                                                                                      Have you thought of using a version control system? What I tend to do is download the source to a package, and if it’s via a tarball, before I do anything (./configure, etc.) I’ll create a git repo out of it, then immediately create a branch and check it out. Any changes I make are to that local branch. A new version comes out? I checkout master, update, then switch back to the local branch and merge.

                                                                                                      1. 1

                                                                                                        Yeah, dwm is in git, but it’s still manual work fixing merge conflicts, which only gets worse the more modifications you make.

                                                                                                        1. 1

                                                                                                          The truly tragic thing IMO: all the software that tries to be simple and hackable has no tests. All the software that has tests gets complex and baroque. As a result tests end up seeming just a convenience for maintainers, when they could be so much more.

                                                                                                          1. 1

                                                                                                            I think that for a lot of simple and hackable software, the need for tests is much smaller, whereas for complex and baroque software it’s pretty much required.

                                                                                                            It’s not that simple software couldn’t benefit from tests, it’s just that the ROI on it is much smaller. After all, if you can understand all of the program then you don’t need to unit test isolated parts.

                                                                                                            The same applies to some other “best practices” btw. The smaller and simpler your program, the more acceptable it becomes to just use global variables for example.

                                                                                                            1. 2

                                                                                                              Totally agreed on global variables and unit vs integration tests. But I disagree on total tests. I refuse to believe that dwm’s sources wouldn’t benefit from tests. Yes, the author may not need them. But they seem super useful to a user wanting to hack on them. Particularly if you just encountered a merge conflict. 2kLoC isn’t insignificant. And just look at the number of concepts needed to deal with X windows!

                                                                                                              1. 1

                                                                                                                I refuse to believe that dwm’s sources wouldn’t benefit from tests

                                                                                                                 Yeah sure, it would certainly benefit; it’s just that the benefit is smaller. How many bugs would be prevented by testing? Probably not that many. Like I said, it’s all about ROI; you could probably plot project size vs. benefit from testing quite nicely.

                                                                                                                1. 2

                                                                                                                  Your argument makes sense, but leaves one question open: benefit for whom? If you mean benefit for the main repo, you’re absolutely right. But I’m addressing your original point at the top of this thread:

                                                                                                                  The problem with locally modifying software is that it’ll become rather hard to update. I run a modified version of dwm, and even though it’s pretty simple and doesn’t get many updates, updating it is time-consuming as I have to merge my local changes with upstream. I’m not entirely sure how practical truly editable software will be.

                                                                                                                  It would be less time-consuming for you if dwm had tests. And it would make editable software seem more practical.

                                                                                                                  Of course, this isn’t in your control. I was thinking of that when I used the word ‘tragic’. But your subsequent defense of lack of tests strikes me as Stockholm Syndrome :)

                                                                                                    1. 4

                                                                                                      I think this is where emacs shines. The whole Lisp/Smalltalk world is sort of ideologically like this. High trust of users.

                                                                                                      1. 2

                                                                                                        I feel very cynical/fatalistic about Emacs/Lisp/Smalltalk lately. Yes, the core design choices exhibit a high trust for users. But then it seems to be inevitable that the layers on top start chipping away at this trust. Vim’s package managers introduce hurdles for anyone who wants to modify sources. (See my recent war story.) How does the Emacs eco-system compare? Is it really common for people to modify packages? Racket’s raco feels the same, additional complexity between me and the libraries they want me to ‘use’.

                                                                                                        1. 4

                                                                                                          Emacs is great for this, especially if you use straight. Straight makes a local clone of each package repo. If you want to edit it, edit it and commit (or don’t), and your emacs will use the local copy.

                                                                                                           There is a built-in facility called advice that lets you intercept functions and change how they behave.

                                                                                                          Finally there’s a very neat macro called el-patch that lets you make modifications to the source of other functions and will tell you if they’ve changed out from under your mods, if you wanna get dirty.

                                                                                                          1. 1

                                                                                                            Thank you, straight.el looks wonderful. This whole thread has also had me (finally!) reading up on Guix.

                                                                                                          2. 1

                                                                                                            I typically don’t modify the original source directly, but copy-paste the function into my own init and edit it there. I find it easier to have all my changes in one file with one version control history rather than spread out across many repos.

                                                                                                            But the answer to your general complaint about finding the source code is easy in emacs - xref-find-definitions works across packages.

                                                                                                          3. 1

                                                                                                            High trust of users

                                                                                                            That seems like a key point. A lot of the reasoning around locking down code and having pre-defined, well-controlled configuration points is to prevent breakage. But if you think of your users as responsible adults then maybe you can just say “here is the sharp edge, you know the risks”?

                                                                                                            This seems similar to access control in languages like python and julia where there are conventions around private interfaces but they aren’t enforced by the compiler. So you can reach in and call some private function if you need to do it, but you’re made aware that you’re taking on a higher risk that future versions of the library will break your code. There’s kind of a social aspect too - if you rely on a public interface and it breaks you can blame the library developer but if you rely on a private interface and it breaks then it’s clear that you took on that responsibility yourself.
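
                                                                                                             As a minimal Python sketch of that convention (the names here are invented for illustration): the leading underscore is the only “lock” on the door, so reaching past it works, but the risk is visibly yours.

                                                                                                                 # toy "library" code
                                                                                                                 def render(data):
                                                                                                                     """Public, documented entry point."""
                                                                                                                     return _layout(data, padding=2)

                                                                                                                 def _layout(data, padding):
                                                                                                                     # The leading underscore marks this as private by convention only;
                                                                                                                     # nothing in the language stops outside callers.
                                                                                                                     return [(" " * padding) + str(item) for item in data]

                                                                                                                 # toy "user" code
                                                                                                                 print(render(["a", "b"]))      # the supported interface
                                                                                                                 print(_layout(["a", "b"], 0))  # reaching in: works today, may break on upgrade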

                                                                                                            1. 2

                                                                                                              I think your reflection here picked out the most important point, which I didn’t really key on when I wrote the comment.

                                                                                                               In that vein, I want to gesture at the communities which produce different kinds of software, along with the broader erosion of trust in society, and suggest that perhaps the trust level of a software design is inherent in the community and time which form it.

                                                                                                          1. 9

                                                                                                            Something that I’ve been thinking about a lot is that the way that most software is distributed is really hostile to modifications - this post touches on that, but doesn’t go very deep into it. A question I’ve been asking recently is - what would a package manger that’s designed with end-users patching software as a first-class concern look like? And a somewhat more challenging version for some systems - what would it look like to allow end-users to patch their kernels as a first-class concern?

                                                                                                             NixOS has the start of an answer for this (fork the nixpkgs repo, make whatever changes you like), but it still doesn’t seem ideal. I guess maybe gentoo is a sort of answer to this as well, but it doesn’t seem like gentoo answers the question of “how do you keep track of the changes that you’ve made”, which is I think a really important part of systems like this (and the thing that’s missing in most current package managers’ support for patching)

                                                                                                            1. 13

                                                                                                               Note you don’t need to fork nixpkgs to edit individual packages; you can just add them to an overlay with an override. E.g. I was trying to debug sway recently, so I have:

                                                                                                              (sway.overrideDerivation (oldAttrs: {
                                                                                                                    src = fetchFromGitHub {
                                                                                                                      owner = "swaywm";
                                                                                                                      repo = "sway";
                                                                                                                      rev = "master";
                                                                                                                      sha256 = "00sf8fnbj8x2fwgfby6kcrxjxsc3m6w1yr63kpq6hv94k3p510nz";
                                                                                                                    };
                                                                                                                  }))
                                                                                                              

                                                                                                               One of the guix devs gave a talk called Practical Software Freedom (slides) where the core point was that it’s not enough to have foss licensing if editing code is so much of a pain that no one does it. It looks like guix has a pretty nice workflow for editing installed packages, and since the tooling is all scriptable I bet you could streamline it even more - e.g. add a command for “download the source, fork it to my personal repo, add the fork to my packages list”.

                                                                                                              1. 6

                                                                                                                I only very briefly used guix, but its kernel configuration system is also just scheme. It’s so much nicer than using any of the kernel configurators that come with the kernel, and I actually played around with different configurations rather than opting to play it safe which is what I normally do since it’s such a faff if I accidentally compile out support for something important.

                                                                                                                1. 2

                                                                                                                  How often do you find your overlays breaking something in strange ways? My impression is that most packages don’t come with tests or other guardrails to give early warning if I break something. Is that accurate?

                                                                                                                  1. 3

                                                                                                                    I haven’t had any problems so far. I guess if you upgrade a package and it changes the api then you might be in trouble, but for the most part it seems to just work.

                                                                                                                2. 6

                                                                                                                  One thing this brought to mind was @akkartik’s “layers” script: http://akkartik.name/post/wart-layers

                                                                                                                  The post is short, but the money quote is here:

                                                                                                                  We aren’t just reordering bits here. There’s a new constraint that has no counterpart in current programming practice — remove a feature, and everything before it should build and pass its tests

                                                                                                                  If we then built software such that reordering commits was less likely to cause merge conflicts, you could imagine users having a much easier time checking out a simpler version of the system and adding their functionality to that if the final version of the system was too complex to modify.

                                                                                                                  1. 4

                                                                                                                    I’ve gotten close with rpm and COPRs. I can fairly quickly download a source RPM, use rpmbuild -bp to get a prepped source tree where I can build a patch, track it, and add it back to the spec, then push it to a COPR which will build it for me and give me a yum repository I can add to any machines I want. Those pick up my changed version of the package instead of the upstream with minimal config fiddling.

                                                                                                                    It’s not quite “end users patching as a first class concern” but it is really nice and in that ballpark.

                                                                                                                    1. 3

                                                                                                                      At that point the package system would just be a distributed version control system, right?

                                                                                                                      1. 1

                                                                                                                        Interesting point! And I guess that’s kinda the approach that Go took from the start, with imports directly from GitHub. Except, ideally, you’d have a mutable layer on top. So something like IPFS, where you have the mutable IPNS namespace that points to the immutable IPFS content-addressed distributed filesystem.

                                                                                                                         Still, unlike direct links to someone else’s GitHub repo, you would want to be able to pin versions. So you would want to point to your own namespace, and then you could choose how and when to sync your namespace with another person’s.

                                                                                                                        1. 3

                                                                                                                           This is how https://www.unisonweb.org/ works. Functions are content-addressed, i.e. defined by the hash of their contents, with the name as useful metadata for the programmer. The compiler has a bunch of built-in refactoring tools to help you splice changes into an existing graph of functions.
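
                                                                                                                           To make the idea concrete, here is a toy sketch of content-addressing in Python - purely an illustration of the concept, not how Unison actually stores code:

                                                                                                                               import hashlib

                                                                                                                               # A toy content-addressed "codebase": definitions are keyed by the
                                                                                                                               # hash of their source text; human-readable names are separate metadata.
                                                                                                                               defs = {}    # hash -> source
                                                                                                                               names = {}   # name -> hash

                                                                                                                               def add(name, source):
                                                                                                                                   h = hashlib.sha256(source.encode()).hexdigest()
                                                                                                                                   defs[h] = source
                                                                                                                                   names[name] = h        # renaming is just editing this metadata
                                                                                                                                   return h

                                                                                                                               add("double", "lambda x: x * 2")
                                                                                                                               add("twice", "lambda x: x * 2")             # same content, same hash
                                                                                                                               print(names["double"] == names["twice"])    # True
                                                                                                                               print(len(defs))                            # 1 - stored only once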

                                                                                                                          1. 2

                                                                                                                            Just watched the 2019 strangeloop talk. Absolutely brilliant. Only drawback I see is that it doesn’t help with code written in other languages. So dependency hell is still a thing. But at least you’re not adding levels to that hell as you write more code (unless you add more outside dependencies).

                                                                                                                      2. 2

                                                                                                                        what would a package manger that’s designed with end-users patching software as a first-class concern look like?

                                                                                                                        If you just want to patch your instance of the software and run it locally, it is very straightforward in Debian and related distributions, using apt-src. I do it often, for minor things and pet peeves. Never tried to package these changes and share them with others, though.

                                                                                                                        1. 1

                                                                                                                          All the stuff package managers do doesn’t seem to help (and mostly introduces new obstacles for) the #1 issue in modifying software: “where are the sources?” It should be dead easy to figure out where the sources are for any program on my system, then to edit them, then to run their tests, then to install them. Maybe src as the analogue to which?

                                                                                                                          I appreciate npm and OpenBSD to a lesser extent for this. Does Gentoo also have a standard place for sources?

                                                                                                                          1. 1

                                                                                                                            I believe Debian is trying to streamline this with Salsa. That’s the first place I look when I’m looking for source code of any package on my system.

                                                                                                                          2. 1

                                                                                                                            what would a package manger that’s designed with end-users patching software as a first-class concern look like?

                                                                                                                            A DVCS. (It’s Christmas, “Away in a package manger, no .dpkg for a bed…”)

                                                                                                                            We drop in vendor branches of 3rd party libraries into our code as needed.

                                                                                                                            We hack and slash’em as needed.

                                                                                                                            We upstream patches that we believe upstream might want.

                                                                                                                            We upgrade when we want, merging upstream into our branch, and the DVCS (in our case mercurial) tells us what changed up stream vs what changed locally. Some of the stuff that changed upstream is our patches of improvements on them, so 99% of the time we take the upstream changes, and 0.9% of time take our changes and 0.1% of the time have to think hard. (ps: 99% of stats including this one are made up (from gut feel) on the spot. What makes these stats special is I admit it.)

                                                                                                                            A related topic is the plague of configuration knobs.

                                                                                                                             Every time a programmer asks, “What should this configuration value be?”, the answer is “dunno, make it an item (at best) in the .conf file or, at worst, in the UI.”

                                                                                                                             The net effect is that the probability of the exact configuration ever having been tested by anyone else on the planet is very, very, very low, and a UI that you’d either need a PhD to drive or (more likely) one that is full of wild-arsed guesses.

                                                                                                                             A good habit is, unless there is a screaming need, to put config items by default in a file hidden from the user, one that is not part of the user documentation.

                                                                                                                             If anybody starts howling that they need to alter that knob, you might let them in on the secret for that knob, and consider moving it into a more public place.
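
                                                                                                                             A small Python sketch of that habit (the file name and options are invented for illustration): the documented settings get sane defaults in code, and the extra knobs only exist for people who have been let in on the hidden file:

                                                                                                                                 import json
                                                                                                                                 import os

                                                                                                                                 # Documented, user-visible settings with sane defaults.
                                                                                                                                 config = {"port": 8080, "log_level": "info"}

                                                                                                                                 # Undocumented escape hatch: extra knobs live in a hidden file that
                                                                                                                                 # only people who asked (and were told) know to create.
                                                                                                                                 secret_path = os.path.expanduser("~/.myapp/advanced.json")
                                                                                                                                 if os.path.exists(secret_path):
                                                                                                                                     with open(secret_path) as f:
                                                                                                                                         config.update(json.load(f))

                                                                                                                                 print(config)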

                                                                                                                          1. 2

                                                                                                                            OP and everybody commenting on this thread feels like a kindred spirit. <3