Threads for PuercoPop

    1. 3

I only recently came across an interview with Arthur Whitney about the language k, and only then did array-based languages really start to draw me in. I now wish I knew more about them and am still looking for a nice intro to APL and the like.

      1. 4

The most learner-friendly one I’ve found is Uiua, but it’s a little different from the classic APLs.

        1. 3

          Wow that website is actually really neat! Thank you for pointing it out, I just lost a couple of hours digging into that.

        2. 3

This is a nice intro to APL imho. It is short (Mastering Dyalog APL is very good, but it is massive) and introduces a few operators at a time:

          https://xpqz.github.io/learnapl/intro.html

That said, the best way to learn APL is to write it. Dyalog has a contest that starts with 10 short exercises. There is also APL Quest, which you can solve on your own and then compare with Adám’s solution:

          https://m.youtube.com/playlist?list=PLYKQVqyrAEj9wDIUyLDGtDAFTKY38BUMN

          1. 3

            I think that kbook is a pretty good intro to k. There are also other resources on the k wiki.

            1. 1

Awesome! Is there an existing server I can point my NNTP client at to test it out?

              1. 3

                https://illuminant.asjo.org/

You can also see some screenshots on the fediverse; they can be found in the comments of https://koldfront.dk/just_call_me_mr_nntp_1871#comment1891

EDIT: Added a link to the screenshots: https://illuminant.asjo.org/user/asjo/object/71371

                1. 1

                  You’ll have to run it yourself - join the federated servers making up the fediverse! - I am not planning to host other people on my server :-)

                  1. 3

Oh, I have no problem running an ActivityPub instance, and I host my own – https://gopinath.org/. I just wanted to see what it is like.

                2. 9

I thought surely it just sent the file names in the repo when reading the post. I can’t believe it just straight up slurps the entire source code. That’s wild. This is sure to damage the “enterprise” perception of devenv and Nix as a whole, which really sucks for somebody like me who is pushing hard for it at my day job.

                  1. 4

                    devenv sure, but Nix as a whole? Highly doubtful.

                    1. 1

                      I understood the nixpkgs repo was involved to some degree?

                      1. 13

                        Nixpkgs is involved by essentially shipping the devenv program just like it ships tens of thousands of other packages. The devenv author’s behaviour on the nixpkgs PR adds to the shadiness of the whole episode, but I don’t think it has any bearing on nixpkgs itself (except the fact that the devenv author has privileges on the nixpkgs repo too and that his trustworthiness is under scrutiny).

                        1. 4

Yes, in that the PR that ‘disabled telemetry’ by default was reverted by devenv’s author, who is also a nixpkgs maintainer.

                          https://github.com/NixOS/nixpkgs/pull/381981

                    2. 61

Leadership (likely Linus) needs to step in here and say either “Rust is ok” or “Rust is not ok.” Otherwise the same rehashed arguments will keep going around and around on the LKML, and nothing will get done except people burning out.

                      1. 14

But it is neither; I don’t know why people seem to think that the adoption of Rust in the Linux kernel is a decision that was already made years ago. From: https://docs.kernel.org/rust/index.html#the-rust-experiment

                        The Rust experiment¶

                        The Rust support was merged in v6.1 into mainline in order to help in determining whether Rust as a language was suitable for the kernel, i.e. worth the tradeoffs.

That said, people like GregKH are already convinced that Rust for Linux has been a success (I am as well, but I’m just watching from the peanut gallery). But that doesn’t mean “rust is ok” without further discussion. Different developers/maintainers still have to be convinced.

                        1. 17

                          Watching from the outside it seems that some maintainers don’t want to be convinced in the first place.

                          Some seem ready to actually make an effort to see what the experiment will lead to, but that’s far from the majority.

                          1. 6

                            Watching from the outside it seems that some maintainers don’t want to be convinced in the first place.

From personal experience with some of those involved in this thread on the maintainers’ side, I disagree. It’s unsurprising that, after years and years of holding the kernel as a pure C project for good reasons (not even allowing the sister language C++ in), there are people who hold this as a value. Those people aren’t convinced easily, not on a short timeline and not via email. I know people wish for a strong word from Linus, but this is only partially in his hands - particularly as he is the person who previously held the “pure C” line.

I’m not a fan of the tone, but I was, for example, completely unsurprised by Christoph Hellwig holding that line - and not arbitrarily.

                            R4L is not just a substantial engineering effort, it is also a substantial social effort. And that means a ton of legwork, and maybe some beers at a bar.

                        2. 10

                          Leadership (likely Linus) needs to step in here

                          At this point what is happening there is not ‘leadership’.

                          1. 3

                            My naive and oversimplified take on this is that Rust would be allowed, but is expected to have backwards compatibility assurances, and so long as Rust constantly changes things and lacks those assurances, Rust won’t be added.

                            Is this correct? Somewhat correct? Not really the issue? Incorrect?

As someone who sees Rust getting rebuilt monthly or even more often because of new versions, and codebases like Firefox (and even Rust itself) that can’t use an ever-so-slightly older version of Rust, I get the impression that Rust has a long way to go before it settles down enough to be used for software with extremely long maintenance cycles. But that’s just my impression - I have no experience actually programming in Rust.

                            1. 24

The results of the annual survey have come out: 90% of users use the current stable version for development, and 7.8% use a specific stable version released within the past year.

                              These numbers are only so high because it is such a small hassle to update even large Rust codebases between releases.

Rust is stable in the sense of “is backwards compatible,” even if it isn’t in the sense of “has a slow release cadence.”

                              1. 21

                                A new version of Rust comes out every 6 weeks. Rust has many so-called unstable features, which are not available by default and have to be opted into. Those features can change at any moment. But if you don’t opt in to such features, the language is extremely stable. Every time a new Rust version is being prepared, they compile and run the test suites of pretty much all known publicly available Rust code, just to make sure nothing went wrong.

                                Rust for Linux does opt into a few unstable features, but they’re pretty close to being finished and making sure R4L can work without unstable features is a high priority goal for the Rust Project.
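For anyone curious what that opt-in looks like in practice, here’s a minimal sketch (using try_blocks as an arbitrary example of an unstable feature - not one RFL uses, just an illustration):

    // Unstable features need an explicit crate-level opt-in and a
    // nightly toolchain; stable rustc rejects this attribute outright.
    #![feature(try_blocks)]

    fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), std::num::ParseIntError> {
        // `try` blocks are the gated feature here: each `?` propagates
        // its error to the block itself rather than to the function.
        try { (a.parse()?, b.parse()?) }
    }

    fn main() {
        // Code that avoids feature gates compiles identically on
        // stable, beta, and nightly - that's the stability guarantee.
        println!("{:?}", parse_pair("1", "2"));
    }

Delete the #![feature] line and rewrite the body in the stable style, and the same crate builds on every channel.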

                                1. 16

Rust has had stability guarantees written down and heavily tested for nearly 10 years, specifically to allow evolution with very strong backwards-compatibility guarantees.

Language: https://rust-lang.github.io/rfcs/1122-language-semver.html

Libraries: https://rust-lang.github.io/rfcs/1105-api-evolution.html

Rust (ideally) never breaks backwards compatibility except in a very defined set of cases (e.g. bugs in the language frontend that let you bypass its guarantees are bugs, not features, and will be fixed, at the cost of breaking some notion of backwards compatibility). It has an extremely heavy-hammer test suite called crater to ensure no unwanted changes go in and/or to assess the impact of changes - it literally runs for days. https://github.com/rust-lang/crater?tab=readme-ov-file#crater

Recommendation: before you actually enter discussions about Rust’s backwards-compatibility guarantees (I know you haven’t; this is more for the reader), please read the two RFCs. They are relatively short, subtle, and good. I often see discussions go wrong where people haven’t read those docs and end up having discussions with no basis in reality.

                                  1. 15

                                    I don’t think that’s true. You get amazing backcompat by default and guaranteed compat by using Rust editions.

Firefox doesn’t have to upgrade every six weeks. We enjoy upgrading. We can do so because the benefits outweigh the costs.

                                    1. -2

                                      the benefits to whom? goodness gracious…

                                      news flash: your users don’t enjoy you upgrading.

                                      1. 12

                                        Are you compiling Firefox by yourself?

                                        1. 1

                                          no; it’s impractical to do so.

                                          1. 3

                                            Help me understand. Why would you want to compile it yourself?

                                            1. 3

                                              Being a BSD masochist^Wuser, I generally maintain my own package repo. Firefox is included in that repo. I love having full control over the packages installed on my system. I’m probably an outlier here, though.

                                              1. 1

                                                well now we’re changing the topic, but one reason is that it removes the need to trust external builds. another would be to patch it e.g. to restore FTP support. another is that it’s a necessary on-ramp to contributing, or generally to exercising the rights granted by the license.

                                                1. 1

These are all good and valid points. Thanks for sharing :-). I can see how that’s important to you, and the latter is really important for both of us.

                                      2. 5

                                        I don’t think this is correct. Rust would be experimental and it would be entirely on the RFL members to ensure it stays up to date and correct, without blocking others. None of this would have to do with the language.

                                        1. 3

                                          so if a casual linux user hears “linux runs on M1 macs,” then spends their whole paycheck on a computer with no ethernet or USB-A ports because of it, then too bad for them because they should have known that “it runs linux” no longer means it has the stability and longevity that could once be expected.

                                          1. 4

                                            Asahi linux and RFL are two different projects with totally unrelated stability guarantees, none of those guarantees have to do with the RFL code working.

                                            As for the user spending money or whatever, I have nothing to say on the matter. Are users responsible for purchasing decisions? I don’t know, I don’t think it’s relevant at all.

                                            1. 2

                                              Asahi linux and RFL are two different projects with totally unrelated stability guarantees, none of those guarantees have to do with the RFL code working.

                                              what I read is that a lot of the drivers for asahi linux are written in rust. are they not part of the main kernel tree?

                                              1. 3

                                                They’re in the Rust for Linux tree, but they’re not in Linus’s tree yet, because they’re blocked on other stuff getting merged first (like the DMA abstraction).

                                                1. 2

                                                  Asahi Linux is not upstream, that’s correct.

                                                  1. 1

                                                    is “upstream” synonymous with “the main kernel tree”?

                                                    are all of the rust drivers that an asahi linux system uses part of the asahi project and not in the main kernel tree?

                                                    I don’t know much about this so I want to make sure I understand.

                                                    1. 2

                                                      is “upstream” synonymous with “the main kernel tree”?

                                                      Yes. That is Linus’s tree, which is the ultimate upstream of the Linux project.

                                                      are all of the rust drivers that an asahi linux system uses part of the asahi project and not in the main kernel tree?

I think so. Most of their C drivers aren’t upstreamed yet either, I believe. I don’t think the C drivers are blocked by anything in particular; it’s just work that isn’t finished yet. The announcement blog post says that they are going to focus on upstreaming this year. The Rust drivers are blocked by needed Rust for Linux abstractions not being upstreamed yet.

                                                      1. 1

                                                        The Rust drivers are blocked by needed Rust for Linux abstractions not being upstreamed yet

                                                        so then none of the rust drivers are merged in the main tree. can we say that definitively?

                                                      2. 2

                                                        is “upstream” synonymous with “the main kernel tree”?

                                                        Yeah.

                                                        are all of the rust drivers that an asahi linux system uses part of the asahi project and not in the main kernel tree?

I’m not sure how much is actually merged; whatever is merged is experimental, though. I believe much of it is not merged, and it was naturally developed outside of the main tree (as anything would be). The GPU driver, often cited as a big success story for Asahi, is not merged, for example.

                                                        1. 1

                                                          Max reply depth reached. Perhaps blog about this and submit it?

                                                          (We have a max reply depth for technical reasons. But also, at this depth, previous discussions have always gone off the rails in topic or tone.)

                                                          a first?? continuing here…

                                                          presumably “experimental” code is more liable to have bugs.

                                                          Nope, not in this case. It just means it’s an experiment, if it isn’t successful it’ll get pushed back downstream.

                                                          I think I’ve been misunderstanding what is meant by “behavioral” or “API/ABI” stability. I thought this referred to the stability of the interface between the kernel and userland, and that being “experimental” would mean the external interface is not maintained in the same way as other parts of the kernel. but now I’m thinking that the API/ABI stability is between different components within the kernel, and they want to “experiment” with making those internal interfaces more stable. do I have that right now?

                                                          1. 2

                                                            That would be right, at least for the most relevant stuff like the filesystem discussions. It’s ensuring that when something changes on the kernel side (not userland facing) that the Rust code matches that.

                                                            1. 1

                                                              ok that helps. so to go back to what kicked off this discussion:

                                                              Rust would be experimental and it would be entirely on the RFL members to ensure it stays up to date and correct, without blocking others.

                                                              so this means that the C kernel devs could change things that break RFL code and aren’t obligated to fix it, whereas if it were any other part of the kernel, the C devs would be obligated to fix it or make sure it gets fixed. right?

                                                              1. 2

                                                                That’s about right, although there’s always some shared responsibility between the person changing the API and the callers (in RFL’s case it’s all on the Rust devs).

                                                                1. 3

                                                                  so if the reputation of linux is due to the C developers, then having a completely separate team develop code that people are told is linux could subvert the expectation that linux is developed by the same group that built its reputation.

                                                                  that’s what I was getting at when I said “‘it runs linux’ no longer means it has the stability and longevity that could once be expected.” of course it’s a matter of debate whether the rust effort will yield better or worse stability/longevity, but the development process for M1 macs is not time-tested in the same way as the development process for, say, the thinkpad x13s. is that fair to say?

                                                                  and do you think the recent debacle is a bad sign for the stability/longevity of linux on M1 macs? regardless of who you blame for it. I saw some other comments to that effect.

                                                                  1. 3

I’m not sure that the reputation argument is going to hold much water with me. Linux has a pretty bad reputation in my mind, but how it’s marketed is completely disconnected from that. I can’t tell you how things should or shouldn’t be marketed, or the complexities of laymen making technical purchasing decisions, or how the experimental branch might change things, or what is or is not ethical to say about support for a Mac on Linux. If you want to say that Asahi is somehow wrong for something they’re saying about support or stability, you can do so; I just don’t think it’s relevant to RFL. If you want to see their messaging, this seems to be the relevant page: https://asahilinux.org/about/

                                                                    but the development process for M1 macs is not time-tested in the same way as the development process for, say, the thinkpad x13s. is that fair to say?

                                                                    Maybe? It’s really hard to say. Many drivers are poorly supported or maintained by far fewer/ less motivated people than Asahi.

                                                                    and do you think the recent debacle is a bad sign for the stability/longevity of linux on M1 macs? regardless of who you blame for it. I saw some other comments to that effect.

It’s a bad sign for Linux in general, which, as I’ve stated elsewhere, is not going to be a dominant OS in the future because of upstream’s behavior and fundamental structure. In the short term, I think the major impact is that Asahi probably isn’t interested in staying out of tree forever - they likely had loftier goals and wanted this to be a larger project as part of RFL - so by pushing RFL away, upstream will push Asahi developers away or cause them to fail to gain as much traction. I think this will be bad for Asahi and bad for Linux users, sure.

                                                                    1. 1

                                                                      your answers have been duly noted. I take them as assurance that my original comment wasn’t completely off-base or reliant on a fundamental misunderstanding.

                                                                    2. 2

                                                                      so if the reputation of linux is due to the C developers, then having a completely separate team develop code that people are told is linux could subvert the expectation that linux is developed by the same group that built its reputation.

                                                                      There is no separate team. RfL is being done by regular kernel maintainers who happened to pick up an interest in Rust. Pretty much everyone (maybe literally everyone) contributing Rust code to Linux has contributed and continues to contribute C code.

                                                                      1. 1

                                                                        then why the need for a different policy on who is responsible for what in regards to RFL vs the rest of the kernel? seems like a separation to me.

                                                                        (please don’t mistake my use of “completely separate” as implying that there are no developers in both groups or that they are hermetically insulated from each other.)

                                                                        1. 2

                                                                          then why the need for a different policy on who is responsible for what in regards to RFL vs the rest of the kernel? seems like a separation to me.

                                                                          Because the vast majority of Linux maintainers do not know Rust, and therefore this experiment couldn’t be run at all if maintainers of C code were expected to fix Rust code when they make changes. And even if RfL is a complete success and Rust becomes a permanent part of the kernel, most likely there will need to continue to be a collaboration between Linux devs who know Rust and those who don’t, to keep Rust code up to date with changes in C interfaces.

                                                            2. 1

                                                              cool good to know. so can you clarify this part:

                                                              Asahi linux and RFL are two different projects with totally unrelated stability guarantees, none of those guarantees have to do with the RFL code working.

                                                              in general terms it seems like if any of the rust code used by asahi is merged with the main tree, then the stability of asahi linux does depend on the stability of that code.

                                                              1. 2

                                                                I’m wondering if we’re talking about stability in the same way? Are you talking about stability in terms of stable behaviors/ API/ ABI? Or do you mean in terms of crashing? The former is what’s being discussed with RFL, not the latter.

                                                                1. 1

                                                                  so if a casual linux user hears “linux runs on M1 macs,” then spends their whole paycheck on a computer with no ethernet or USB-A ports because of it, then too bad for them because they should have known that “it runs linux” no longer means it has the stability and longevity that could once be expected.

                                                                  here I was referring to “stability and longevity” in the sense that would be relevant to a casual linux user, who may have come to expect that if a system runs linux it will continue to run linux for a decade or more with fewer crashes and bugs than a proprietary OS.

                                                                  I am acutely aware that user concerns are conspicuously absent from the RFL discussion.

                                                                  1. 3

                                                                    I don’t really follow your point throughout this. Is your concern that Rust will be less stable in the sense of less reliable or something? Or that Asahi Linux will be? I don’t really know how that’s relevant to RFL or actually anything that’s been brought up, or why you’re bringing it up.

                                                                    1. 2

                                                                      as an uninformed casual linux user, over the past year I have gotten the impression that “linux runs on M1 macs,” and in another life this might have motivated me to buy one. with this news it seems like linux might not run on M1 macs for much longer, or might not gain the eventual reliability that I would normally expect if I hear that linux is getting ported to a platform.

                                                                      I was under the impression that Asahi Linux was hoping to eventually depend on RFL for the practical aspects of reliability, and if it’s “entirely on the RFL members to ensure it stays up to date and correct,” then that is qualitatively less assurance than if the code is supported by the kernel team as a whole, as was once implied if linux is said to run on a platform.

                                                                      1. 2

                                                                        I see. So yeah, I think it’s good to point out that the stability referred to in RFL conversations is in terms of behavioral stability, not things like bug fixes. Asahi is a downstream project, it is maintained separately, and its stability (in terms of not crashing) has nothing to do with RFL. Asahi users will continue to get patches for bugs as long as there are developers willing to do so, this is not related to RFL, although presumably Asahi wants RFL to happen to ease their development burdens etc.

                                                                        I was under the impression that Asahi Linux was hoping to eventually depend on RFL for the practical aspects of reliability, and if it’s “entirely on the RFL members to ensure it stays up to date and correct,” then that is qualitatively less assurance than if the code is supported by the kernel team as a whole,

                                                                        Right, so there are a few things here.

                                                                        1. With regards to bug fixes/ patching, it is not up to the RFL members at all, it’s up to the Asahi developers, who work downstream. This is totally separate from RFL.

                                                                        2. With regards to RFL, completely distinct from Asahi, the issue is not crashing it is behavioral and API stability. As RFL is “experimental” it is up for RFL to maintain interfaces that are up to date and not the C developers who maintain other code. Nothing to do with Asahi or bug fixes etc.

                                                                        3. You might be under the impression that the Linux kernel considers stability to be a shared responsibility (ie: “everyone is responsible for bug fixes”) but this is not the case at all. Typically it is one developer maintaining a suite of drivers, a filesystem, etc, and they are the only ones responsible. There isn’t a big difference in reliability between something upstream or downstream, what changes is primarily on the committers side in terms of maintaining patches, others contributing Rust, etc, which is what RFL is about. If one of those solo committers/ maintainers left users would be just as fucked as ever - and this happens.

                                                                        1. 1

                                                                          thanks for the overview.

                                                                          I see. So yeah, I think it’s good to point out that the stability referred to in RFL conversations is in terms of behavioral stability, not things like bug fixes. Asahi is a downstream project, it is maintained separately, and its stability (in terms of not crashing) has nothing to do with RFL. Asahi users will continue to get patches for bugs as long as there are developers willing to do so, this is not related to RFL, although presumably Asahi wants RFL to happen to ease their development burdens etc.

                                                                          I still don’t get this part. doesn’t a lack of behavioral stability risk creating bugs?

if asahi’s stability “in terms of not crashing” doesn’t depend on RFL, how would RFL ease their development burdens? if some code that is needed for asahi to run is taken up by RFL, then wouldn’t bugs in that RFL code affect the running of an asahi linux system?

                                                                          1. 2

                                                                            I still don’t get this part. doesn’t a lack of behavioral stability risk creating bugs?

                                                                            Maybe, but since Asahi is its own distro it picks its own kernel and userland, so the distro can just not upgrade userland components that would rely on some behavioral change in the kernel. They own that themselves so their users aren’t any worse off.

                                                                            if asahi’s stability “in terms of not crashing” doesn’t depend on RFL, how would RFL ease their development burdens?

                                                                            I think there’s a number of things that would play out longer term. One thing that RFL wants is to start to formalize some of the behaviors in the kernel so that their drivers can provide safe, stable interfaces, and reduce the burden for all developers in maintaining them. Asahi would then have less of a burden of development.

                                                                            if some code that is needed for asahi to run is taken up by RFL, then woudn’t bugs in that RFL code affect the running of an asahi linux system?

                                                                            Of course, but why would RFL change anything? The code has a bug or it doesn’t. The issue is things like relying on the behavior of the C driver vs the behavior of the Rust driver, which is why RFL wants to formalize those semantics.

                                                                            I think we’ll probably start hitting the limits of my knowledge if we get too into the details of why Asahi wants to do one thing or another, RFL has numerous motivations. The tl;dr is that the stability discussed is about ABI/API stability, RFL would help Asahi because it would help them formally adopt the behaviors of the C code (by stating those behaviors vs those behaviors just being assumed by one maintainer), and Asahi users aren’t going to experience a less stable OS by virtue of RFL being “experimental” since RFL’s impact would only mean that the code lives in one spot vs another.

                                                                            1. 1

                                                                              Maybe, but since Asahi is its own distro it picks its own kernel and userland, so the distro can just not upgrade userland components that would rely on some behavioral change in the kernel. They own that themselves so their users aren’t any worse off.

                                                                              all else being equal, a decrease in kernel ABI/API stability would mean more work to keep up with changes and identify which userland components are safe to upgrade. for the same amount of developer/maintainer time, the system would be either less reliable or less up to date and their users would be worse off.

if asahi’s stability “in terms of not crashing” doesn’t depend on RFL, how would RFL ease their development burdens?

I think there’s a number of things that would play out longer term. One thing that RFL wants is to start to formalize some of the behaviors in the kernel so that their drivers can provide safe, stable interfaces, and reduce the burden for all developers in maintaining them. Asahi would then have less of a burden of development.

                                                                              point taken.

                                                                              if some code that is needed for asahi to run is taken up by RFL, then woudn’t bugs in that RFL code affect the running of an asahi linux system?

                                                                              Of course, but why would RFL change anything? The code has a bug or it doesn’t.

                                                                              presumably “experimental” code is more liable to have bugs.

                                                                              Asahi users aren’t going to experience a less stable OS by virtue of RFL being “experimental” since RFL’s impact would only mean that the code lives in one spot vs another.

                                                                              are you saying that Asahi wouldn’t rely on RFL code if it weren’t experimental? or that all of the RFL code relied upon by Asahi will inevitably come from the Asahi project, and would be in Asahi if it weren’t in RFL?

                                                                              1. 2

                                                                                all else being equal, a decrease in kernel ABI/API stability would mean more work to keep up with changes and identify which userland components are safe to upgrade. for the same amount of developer/maintainer time, the system would be either less reliable or less up to date and their users would be worse off.

                                                                                Right, RFL would presumably help that, which is why they want RFL so that they can go upstream.

                                                                                presumably “experimental” code is more liable to have bugs.

                                                                                Nope, not in this case. It just means it’s an experiment, if it isn’t successful it’ll get pushed back downstream.

                                                                                are you saying that Asahi wouldn’t rely on RFL code if it weren’t experimental? or that all of the RFL code relied upon by Asahi will inevitably come from the Asahi project, and would be in Asahi if it weren’t in RFL?

                                                                                Hmmm, I’m not sure what you mean. Asahi relies on the code that it relies on, that’s all. RFL would allow their code to live upstream, and would imply some things that RFL is pushing for like more defined behaviors of existing interfaces. That’s what I’m saying. Asahi will rely on RFL code when their code makes it upstream as “experimental”, otherwise that same code would be downstream but otherwise the same. RFL code won’t necessarily come from Asahi, certainly, anyone is free to try to push their own Rust code to upstream.

                                                    2. 2

                                                      Why would you spend your whole paycheck on a computer? That’s irresponsible.

                                                      1. 2

                                                        if someone makes under $30k a year, buying a $1000 computer is irresponsible?

                                                        1. 2

                                                          If someone makes X a year, buying a X/12 computer is irresponsible.

                                                          1. 1

                                                            the paycheck I had in mind would be biweekly with taxes and benefits taken out.

but even your X/12 idea seems needlessly judgemental and paternalistic, and doesn’t account for savings and fluctuations in income. the average salaried Indian makes $200-300 a month; the median is probably much less. it would be irresponsible for them to spend that much on a computer?

                                                2. 1

                                                  the idea that you can have Rust in the kernel but it doesn’t have the same stability guarantees is just atrocious.

                                                  1. 11

                                                    what stability guarantees?

                                                    1. 7

                                                      The normal rules for intra-kernel interfaces are “you can break somebody else’s code, but you have to then fix it”. But if you break some Rust code, it’s the duty of the Rust for Linux people to fix it.
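To make the distinction concrete, here’s a contrived sketch - frob_device is a made-up name, not a real kernel API - of how a C-side signature change lands on the Rust side:

    // Suppose the C side once exported
    //     int frob_device(struct device *dev);
    // and a refactor adds a flags argument:
    //     int frob_device(struct device *dev, unsigned int flags);
    mod bindings {
        #[repr(C)]
        pub struct Device { _private: [u8; 0] }
        extern "C" {
            // Regenerated binding after the C-side change: the Rust
            // side no longer compiles until its callers are updated.
            pub fn frob_device(dev: *mut Device, flags: u32) -> i32;
        }
    }

    // Safe wrapper maintained on the Rust side. Under the normal rules,
    // whoever changed the C signature would also fix this call site;
    // under the experiment's rules, that work falls to the RFL folks.
    pub fn frob(dev: *mut bindings::Device) -> Result<(), i32> {
        let ret = unsafe { bindings::frob_device(dev, 0) };
        if ret == 0 { Ok(()) } else { Err(ret) }
    }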

                                                3. 5

                                                  J is a great tragedy to me; Iverson dedicated his life to more elegant, compact tools of thought until he betrayed his life’s work…

K, on the other hand, carries no such baggage! Unfortunately, symbol overloading is quite excessive (as APL already was in a few notable cases, like geometry, due to technical limitations at the time). The example from Wikipedia sees index, modulo, and rotate (⍳, |, ⌽ in APL) all rendered with an exclamation mark: 2!!7!4! Nevertheless, true higher-level functions (er, operations) are truly liberating and eye-opening; does it matter what symbols they’re rendered as?

Am I just shallow? I’ve spilled enough ink on the importance of semantics, while (minor differences in such expressive) syntax are mere aliases (though I too feel pangs of woe at every new language without s-expressions). Yet there’s something missing… Anyway, “the killer app” is the 7-figure salary. (N.b. J is the only one I’ve used in production.)

I’m rather curious about the various companies selling these products. Dyalog has been flourishing of late; it has hired more people and e.g. has a sales event in NY on April 7th to learn more about “migrating from other APL platforms”. Wikipedia states that KX Systems has 14 “offices” (but First Derivatives bought a controlling stake for $40mm)… Shakti has some high-priced office space; who and how are the other players doing?

                                                  N.b. view Shakti.com’s source, it’s wildly gorgeous!

                                                  1. 2

                                                    …until he betrayed his life’s work…

                                                    Mind if I ask for the story here? I read through his bio on Wikipedia but didn’t find anything that seems like this.

                                                    1. 3

                                                      There are good things and bad things about J.

                                                      J is a powerful upgrade to pre-J APL, and introduced many valuable ideas that have since been taken up by other array languages, such as Dyalog APL. I’m not a J expert, but AFAIK this would include trains, leading axis theory, the rank operator, and apparently there are many new powerful operators that first appeared in J. J threw out backward compatibility with legacy APL and redesigned APL to make it more consistent and powerful.

                                                      On the downside, I find that the syntax of J is so hideous that I have no plans to learn or use it. Of course not everybody feels this way. I’ve mostly used APL and K, but BQN looks like it is worth investigating.

                                                      Innovation has also occurred outside of the J language. The “direct function” (dfn) syntax of Dyalog APL was game changing, and I wouldn’t use an APL without it. K and BQN have it, J doesn’t. (Another reason I won’t use J.)

                                                      1. 2

                                                        Could you explain how dfn differs from J direct definitions? In J you can write 1 {{ x + y }} 2, though this was only added circa 2020.

                                                        1. 2

                                                          Thank you! This wasn’t in the J language the last time I looked. Also, the APL wiki hasn’t been updated to state that J has the feature, and that’s a resource I’ve been relying on. (https://aplwiki.com/wiki/Dfn)

Dyalog direct functions have a rich feature set, which supports a powerful set of idioms known from functional programming: https://www.dyalog.com/uploads/documents/Papers/dfns.pdf (error handling has since been added). These idioms allow you to write terse code without boilerplate. It looks like J falls short.

                                                          Dfns are lexically scoped. If dfns are nested, then the inner dfn can capture local bindings from the outer dfn. I’m not sure if J works this way.

You can specify a default value for the left argument (⍺) by assigning to it. As a special case, this assignment has no effect if a left argument was given in the function call. I don’t see that in J.

                                                          A dfn may contain guards, which terminate function evaluation early if the guard condition is true. I don’t see that in the J documentation.

                                                          In Dyalog, the ∇ symbol is bound to the smallest enclosing dfn, and is used for specifying recursive functions when the dfn is anonymous (not bound to a name). Also, tail calls do not grow the stack, so you can use recursion to express iterative algorithms without blowing up the stack. In the functional programming literature, this feature is called “proper tail calls”, but some people call it “tail call optimization”. I don’t see any of this in the J documentation.

                                                          1. 1

                                                            dfns are like kinda dynamically scoped idk scoping is super weird and fucked up in dyalog. i’d like straight up proper lexical closures but it doesn’t have them. (or maybe it was like. downwards but not upwards funargs?) do concede it is nicer to use in some cases though. (i had a nice proposal for lexical closures in j but henry didn’t like it and didn’t see the need :c if i ever come back to apl i’ll probably do my own j-ish thing w/ glyphs closer to apl syntax)

                                                            j $: is dyalog ∇

                                                            j was classically more focused on tacit code, so where it falls short for explicit code sometimes, it has better tacit facilities than dyalog. e.g. @. instead of guards for branching, :: for error handling, : for ambivalent verbs (a&$: : v is a straightforward idiom for a verb with a default left argument of a). these can still of course be used inside of tacit verbs (and vice versa)

                                                      2. 2

J is the betrayal. His life’s work was notation as a tool of thought. J abandons ‘notation’, i.e. using custom glyphs, in hopes of making adoption easier.

                                                        1. 3

                                                          I don’t think this is why @veqq thinks J is a “betrayal”, because K also is ASCII-only.

I also don’t think that’s a “betrayal”, because notation is only a tiny part of “tools of thought”, and custom glyphs are only a tiny part of notation. J’s &. operator, where f&.g x is equivalent to g⁻¹ f g x, was eye-opening to me. I don’t think J loses anything by using &. over a glyph like ⊕. An example of a non-notational APL “tool of thought” is thinking of filters as bitmasks and sorting as graded permutations. I don’t think it matters what APL you use, you still get that idea.
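To make the shape of &. concrete, here’s a toy sketch in Rust rather than J (g_inv stands in for the inverse that J works out itself):

    // Toy model of J's "under": f&.g x means g⁻¹(f(g(x))).
    fn under(
        f: impl Fn(f64) -> f64,
        g: impl Fn(f64) -> f64,
        g_inv: impl Fn(f64) -> f64,
    ) -> impl Fn(f64) -> f64 {
        move |x| g_inv(f(g(x)))
    }

    fn main() {
        // "add 1 under ln" multiplies by e: exp(ln(x) + 1) = x * e.
        let add1_under_ln = under(|y| y + 1.0, f64::ln, f64::exp);
        println!("{}", add1_under_ln(2.0)); // ≈ 5.4366
    }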

The more interesting question to me is “does J contribute any new tools of thought?” I can’t answer this because I only know J and a tiny bit of Uiua, so I don’t know what is or is not present in APL or K. I couldn’t find an equivalent to &. on the Dyalog wiki page or in the rosetta, but I might just not know where to find it.

                                                          1. 2

                                                            K also is ASCII-only

                                                            But you see, Iverson didn’t work on K!

                                                            1. 2

I think you took my explanation as a dig at J and replied to defend J, with points that, although I mostly agree with them, have no relation to the point at hand, which is about Iverson.

notation is only a tiny part of “tools of thought”

It is not. Iverson’s Turing Award lecture is titled “Notation as a Tool of Thought”. It was about the importance of notation, not about “tools of thought”. Remember that when Iverson published “A Programming Language” there wasn’t any implementation of APL. The book is about using the notation to think about algorithms.

And it is not only Iverson who thought notation was important, which is why BQN uses its own notation and why Dyalog still introduces glyphs.

                                                              It is that notation that led people to useful ideas such as

                                                              filters as bitmasks and sorting as graded permutations

Yes, I think those things are useful on their own, even outside of array languages.

Regarding the question of whether J improved on APL, my understanding is that it did, and some of the improvements were adopted by Dyalog, but not all. I don’t know J, so I don’t know the specifics.

                                                              1. 3

I still don’t agree that the Iversonian notation needs to be in the form of custom glyphs. His opening example in the lecture is +/⍳5 for “sum 1 to 5”. ⍳ is new notation that’s also a glyph, but +/ for “reduce plus” is also new notation that is not a new glyph. Similarly, f&.g is new notation, but it doesn’t use new glyphs. And one of Iverson’s biggest impacts on broader mathematics, the Iverson bracket, is just written [P].

To make my own preference explicit, I’ve coded maybe 40 or 50 custom glyphs into my keyboard, because I much prefer writing ∀x: □◇P to “forall x: necessarily possibly P”. But I often wonder if we’re conflating “terse, compact, powerful notation” with the specifics of Iverson’s initial choices: 1/2-arity glyphs. I like how J and K tried to explore the former without committing to the latter, and I wouldn’t consider doing so a “betrayal”.

                                                            2. 2

                                                              Iverson’s Wiki page is a mess (I’ve never seen a page as disorganized as that without prominent banners), but if it is to be believed, here’s his statement on the creation of J:


                                                              When I retired from paid employment, I turned my attention back to this matter [the use of APL for teaching] and soon concluded that the essential tool required was a dialect of APL that:

                                                              • Is available as “shareware”, and is inexpensive enough to be acquired by students as well as by schools
                                                              • Can be printed on standard printers
                                                              • Runs on a wide variety of computers
                                                              • Provides the simplicity and the generality of the latest thinking in APL

                                                              The result has been J, first reported in [the APL 90 Conference Proceedings].


You can call that a betrayal; I see it as an admirable attempt at letting more people know about the power of APL.

Note that Iverson himself never finished high school as a teenager. He graduated after self-studying in the Air Force and later got a university education via Canada’s version of the GI Bill.

                                                              Of course, APL elitism has a long history. From the same source:

                                                              In one school the students became so eager that they broke into the school after hours to get more APL computer time; in another the APL enthusiasts steered newbies to BASIC so as to maximize their own APL time.

                                                              1. 2

You can call that a betrayal; I see it as an admirable attempt at letting more people know about the power of APL.

Note that I didn’t call it a betrayal. I was merely explaining what the OP meant.

I do think J is giving up on Iverson’s life work, but it did so for good reasons; I acknowledged the motivation for doing so, and at the time the decision was made, it made sense. FWIU, in the 80s you needed a custom terminal to be able to input APL glyphs into the computer, priced at ~1000 USD IIRC. In the 90s one could buy custom keyboards that would emit the keycodes for APL glyphs - not as high a barrier, but still a significant one.

We now know that it didn’t increase the adoption of J, but it was a good hypothesis.

Nowadays, at least on Linux, custom input methods as well as custom keymaps are well supported, so custom glyphs don’t pose the same challenges.

So I do agree to some extent with the OP about it being a tragedy.

                                                            3. 1

                                                              [APL’s] most important use remains to be exploited: as a simple, precise, executable notation for the teaching of a wide range of subjects - Iverson

Not originally a programming language, APL was a more heroic intellectual venture; Iverson and friends published books on accounting and math, onboarded middle schools to use their curricula, etc. They sought to reform mathematics itself! To @gerikson: while Iverson kept education in view, programming overtook the rest of the project with K.

It’s a veritable Grecian tragedy, and a modern rendition is recurring right now with https://kdb.ai/


This is half tongue in cheek. J has cool features. Iverson sketched an APL 2, whose ideas became J and slowly made their way into APL. @hwayne’s &. is , added in the 80s. But it is tragic too, making such a break with the loftier project and splitting the family.

                                                              1. 1

                                                                Interesting perspective: close one eye and see a future where APL is taught from grade school to everyone.

                                                                Close another, and see New Math.

                                                            4. 1

                                                              How did you get to see Shakti’s source, given that you haven’t used K in production (ie, you weren’t a Shakti employee)?

                                                              The closest I’ve got to the Shakti source is https://github.com/kparc/ksimple, which was published by Arthur Whitney last year for educational purposes. I assume the real Shakti K interpreter is written in C in this style, but is significantly larger, since there’s a lot more functionality.

                                                              1. 3

                                                                I meant shakti.com’s source in light of the currently popular “raw text” blog.

                                                              2. 1

                                                                You may appreciate BQN as a noncommercial alternative.

                                                              3. 2

It is surprising to what lengths one can go to avoid dependencies and still have it be worth it. For example, pizauth parses HTTP requests itself.

                                                                https://github.com/ltratt/pizauth/blob/master/src/server/http_server.rs

                                                                1. 8

                                                                  Was it worth it though?

                                                                  It’s written against the obsolete RFC 2616. It allows the line-continuation syntax, which is deprecated and dangerous (useful for request-smuggling attacks).

                                                                  It has no protection against malicious requests. It will keep eating RAM for as long as the sender sends something (it allocates 49 bytes for every space received). It spawns a thread per request without apparent limits, and could be vulnerable to a slowloris attack.

                                                                  It’s not efficient, and even needlessly complicated for what (little) it does, and quite a bit of time has been spent on debugging and rewriting that code.
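
                                                                  For illustration, here’s a minimal sketch (not pizauth’s actual code; the 5-second timeout and 8 KiB cap are invented for the example) of the kind of guard rails a hand-rolled reader could add, bounding both how long a silent peer can hold a thread and how much memory one request can consume:

                                                                      use std::io::Read;
                                                                      use std::net::TcpStream;
                                                                      use std::time::Duration;

                                                                      // Assumed cap; tune to taste. The point is that one exists.
                                                                      const MAX_REQUEST_BYTES: usize = 8 * 1024;

                                                                      fn read_request(stream: &mut TcpStream) -> std::io::Result<Vec<u8>> {
                                                                          // Bound how long a silent peer can hold the thread.
                                                                          stream.set_read_timeout(Some(Duration::from_secs(5)))?;
                                                                          let mut buf = Vec::new();
                                                                          let mut chunk = [0u8; 1024];
                                                                          loop {
                                                                              let n = stream.read(&mut chunk)?;
                                                                              if n == 0 {
                                                                                  break; // peer closed the connection
                                                                              }
                                                                              buf.extend_from_slice(&chunk[..n]);
                                                                              // Bound how much memory one request can consume.
                                                                              if buf.len() > MAX_REQUEST_BYTES {
                                                                                  let err = std::io::Error::new(
                                                                                      std::io::ErrorKind::InvalidData,
                                                                                      "request too large",
                                                                                  );
                                                                                  return Err(err);
                                                                              }
                                                                              // Stop at the end of the headers rather than reading
                                                                              // until the sender decides to stop.
                                                                              if buf.windows(4).any(|w| w == b"\r\n\r\n") {
                                                                                  break;
                                                                              }
                                                                          }
                                                                          Ok(buf)
                                                                      }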

                                                                  1. 5

                                                                    Was it worth it though?

                                                                    The author thinks so, they were using tokio and http for another crate. https://tratt.net/laurie/blog/2024/some_reflections_on_writing_unix_daemons.html

                                                                    It has no protection against malicious requests. It will keep eating RAM for as long as the sender sends something (it allocates 49 bytes for every space received). It spawns a thread per request without apparent limits, and could be vulnerable to a slowloris attack

                                                                    Have you seen what the server is used for, and where? It is meant to run on localhost, to receive the OAuth2 callback for services you’ve configured. It is not exposed to the internet. The concerns you raise would be relevant for a web application.

                                                                    1. 4

                                                                      As @PuercoPop pointed out (https://lobste.rs/s/a5vkze/build_it_yourself#c_shcp2d), pizauth’s HTTP[S] server is localhost only and there’s nothing to smuggle in. It probably would be good to limit the request size just in case something goes wonky, though.

                                                                      It’s probably also worth pointing out (a) that the HTTP server probably represents (in total including debugging etc) about 4 hours work (b) it fixes problems that were caused by the dependencies it replaced, and on which I had spent vastly more than 4 hours investigating, without any idea what the cause was. These sorts of trade-offs are not something one can see from the source code.

                                                                      I also say this as someone that does not consider myself a “you should remove all dependencies” person – there’s a time and place for both approaches IMHO.

                                                                  2. 9

                                                                    Code is not an asset, it’s a liability.

                                                                    I feel like the Ruby line got a lot steeper after the “epoch”. It’s hard to tell because there seems to be a lot of noise in the Ruby line, but it feels a bit disingenuous to just say “look, we wrote less JavaScript than we would’ve” without addressing what you wrote instead and how much of it you wrote.

                                                                    1. 3

                                                                      I feel like the Ruby line got a lot steeper after the “epoch”

                                                                      Yes, the article says it grows superlinearly:

                                                                      Our Ruby code continues to grow superlinearly, which is expected given the addition of engineers, customers, and features.

                                                                      Saying that it is ‘expected’ or good while also calling code a liability is contradictory imho.

                                                                      1. 3

                                                                        I don’t think it’s a contradiction. A business can grow liabilities and, as long as it is growing assets faster, that’s fine. If you take out a loan at 5% interest to be able to grow revenue at 50% annually, you’re growing liabilities but in a way that lets you grow the business. The problem is when your liabilities grow faster than your assets. If you’re adding useful features that users want, you’re growing assets, and the cost of this is that you also grow the amount of code you have (liabilities).

                                                                        1. 4

                                                                          I think the GP refers to the contradiction between the article’s sections, not between terms.

                                                                          In the Javascript/React section code was highlighted as being a liability, and they needed to get rid of it.

                                                                          In the Ruby/Rails section they simply acknowledge superlinear growth (part of which compensates for the decline in clientside code, which they do not address), and simply call this growth “expected”.

                                                                    2. 1

                                                                      The video gives an overview of how the Nova project started and how it has progressed so far. Nova is a GPU driver for Nvidia cards written in Rust.

                                                                      Around the 17:40 mark the speaker gives a positive account of how they collaborated with Greg Kroah-Hartman on the foundational Rust APIs/‘abstractions’.

                                                                      1. 2

                                                                        For these, we must instead decompose into smaller tasks. Providing an intermediate representation that can be stored (again, not locally!) somewhere and loaded, while keeping track of where in the sequence we are to resume processing after cancellation.

                                                                        Shopify has a gem precisely for that: https://github.com/shopify/job-iteration. They even have another gem that gives a UI for the one-off tasks one often needs to run, which takes advantage of the ‘resumability’.
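
                                                                        To make the ‘resumability’ concrete, here is a minimal sketch of the cursor idea (illustrative Rust, not the gem’s actual Ruby API; the CursorStore trait and all names are invented for the example): do the work in small steps, persist the cursor outside the worker, and let a cancelled run resume from the stored cursor.

                                                                            // Hypothetical store; in production this would be the job
                                                                            // backend's metadata table or similar (again, not local!).
                                                                            trait CursorStore {
                                                                                fn load(&self, job_id: &str) -> Option<u64>;
                                                                                fn save(&self, job_id: &str, cursor: u64);
                                                                            }

                                                                            fn run_resumable_job<S: CursorStore>(
                                                                                store: &S,
                                                                                job_id: &str,
                                                                                items: &[u64],
                                                                                cancelled: &dyn Fn() -> bool,
                                                                            ) {
                                                                                // Resume from wherever the last run stopped.
                                                                                let mut cursor = store.load(job_id).unwrap_or(0) as usize;
                                                                                while cursor < items.len() {
                                                                                    if cancelled() {
                                                                                        // Persist progress before yielding the worker;
                                                                                        // the next run picks up exactly here.
                                                                                        store.save(job_id, cursor as u64);
                                                                                        return;
                                                                                    }
                                                                                    process_one(items[cursor]); // the actual unit of work
                                                                                    cursor += 1;
                                                                                    if cursor % 100 == 0 {
                                                                                        store.save(job_id, cursor as u64); // checkpoint
                                                                                    }
                                                                                }
                                                                                store.save(job_id, items.len() as u64); // mark complete
                                                                            }

                                                                            fn process_one(_item: u64) { /* application-specific work */ }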

                                                                          1. 13

                                                                            I have the same issue where I’m very experienced in Django but chose to write a backend in Axum to have some finger practice in Rust.

                                                                            • I would have been 10-100x faster in Django
                                                                            • The Axum backend is very fast but also it hardly does anything
                                                                            • What isn’t fast is compiling the Rust code, Django reloads are much faster and much more productive
                                                                            • Setting up login/sessions took me the better part of the day and lots of boilerplate
                                                                            • You can test ‘something’ with Axum but Django’s testing capabilities are near perfect and low effort
                                                                            • All the ORMs in Rust are kinda shit compared to Django’s gold standard ORM
                                                                            • Having templates be precompiled sounds great but it slows your development speed down massively
                                                                            • The Rust code isn’t very malleable. If I want to move some stuff around, I first have to fix every error before it will run again.
                                                                            • Code organization in Rust requires reading book chapters and blog posts and I still couldn’t explain it to another person because it doesn’t make any sense.
                                                                            • Django has a DEBUG view which is extremely helpful and seems to be fairly unique

                                                                            I think it’s fair to say that the tool is not fit for the job. Fixing most of these issues will take the better part of the next 5 years at the current rate (that is even if any of it gets any priority).

                                                                            1. 19

                                                                              I’ve never written Rust, but I’m very experienced in Django and have started to get frustrated with the “faster to get up and running” focal point of the community. Once I’ve started to maintain more long-lived applications I care much more about having guardrails and being able to easily refactor. The lack of safety around templates really hurts, the magic use of strings and keyword arguments (e.g. pizza__toppings) makes refactoring a nightmare, the hostility towards type checking is rough, etc. I like using Django a lot, but I’m definitely less of a proponent of the emphasis on typing less (like with CBVs) than I used to be.

                                                                              1. 4

                                                                                The lack of safety around templates really hurts

                                                                                I’ve found django-fastdev helps quite a bit with this.

                                                                                the magic use of strings and keyword arguments e.g. pizza__toppings makes refactoring a nightmare

                                                                                This is one of the biggest things that makes me pay my JetBrains bill every year. Their django support knocks it from nightmare to minor hassle most of the time. I still lean on tests more for this than I’d like to, though.

                                                                                but I’m definitely less of a proponent of the emphasis on typing less (like with CBVs) than I used to be

                                                                                I firmly agree with Luke Plant on this front.

                                                                                1. 2

                                                                                  You don’t need to learn any of the CBV APIs - TemplateView, ListView, DetailView, FormView, MultipleObjectMixin etc. etc. and all their inheritance trees or method flowcharts. They will only make your life harder.

                                                                                  Oh yeah. I tried CBVs once in my life and then found them to be a mistake and never touched them again.

                                                                                  1. 2

                                                                                    They are not really difficult if you take your time to learn them. I use CBVs a lot.

                                                                                2. 1

                                                                                  Once I’ve started to maintain more long lived applications

                                                                                  I’d say sure, once you get there you’ll have a different class of problems, but the point is to get there first.

                                                                                3. 7

                                                                                  To be fair, you’re comparing a 20 year old framework (which was already used in production when open sourced) against a 3 year old project.

                                                                                  If the execution speed/memory consumption of your service doesn’t need the tradeoffs that Rust asks of you, you shouldn’t use Rust to begin with. But there are projects out there that do have those needs.

                                                                                  1. 13

                                                                                    If the execution speed/memory consumption of your service doesn’t need the tradeoffs that Rust asks of you

                                                                                    There’s more to Rust than performance. Correctness and tooling ergonomics matter to a lot of people.

                                                                                    1. 5

                                                                                      Exactly. The biggest tension I feel between Django and Rust-based options (aside from how much of a VPS I have to pay for to get them to work) is that Django will get me up and running quickly, but Rust will be easier to maintain going forward as dependencies get bumped.

                                                                                      I’ve been burned by the Python ecosystem on more than one occasion on that front.

                                                                                      It’s why I do my little things that don’t need SQL in Rust, lament the lack of a Django ORM or SQLAlchemy+Alembic equivalent for my medium things, and do my big things in Django.

                                                                                      (Specifically, I insist on an ORM that can auto-generate draft migrations by diffing an authoritative #[derive(...)]-based schema and the current state of the dev database, and which knows how to abstract away the workarounds for the limitations in SQLite’s ALTER TABLE… Getting free support for also running on top of PostgreSQL is nice, but I always optimize for the “non-technician self-hosting with a py2exe-or-equivalent binary” case because I believe the first priority is to push back against SaaS and “the cloud”.)

                                                                                      1. 1

                                                                                        but Rust will be easier to maintain going forward as dependencies get bumped.

                                                                                        My experience after tracking 1-2 years of Axum and related upgrades has been that it’s much more of a pain than Django upgrades ever were (and I’ve been on Django since pre 1.0).

                                                                                        1. 2

                                                                                          That could be an Axum thing (I’ve been using actix-web since before Axum existed) or it could be a matter of perspective.

                                                                                          1. With Python, I’ve experienced far too much “Oops. Your virtualenv pins don’t like your distro bump for some reason. You’re going to get your dependency tree updated right now”.
                                                                                          2. I’m willing to accept more API churn if it comes with a significant enough reduction in the number of unit tests I need to burn myself out writing to feel confident that, when it compiles and passes the tests, the upgrade is done.
                                                                                    2. 4

                                                                                      The Rust code isn’t very malleable. If I want to move some stuff around, I first have to fix every error before it will run again.

                                                                                      I think this is one case of a general problem of enforcing build-time checks (usually type checking) for web apps. Since web apps effectively have multiple entrypoints (versus one, like the C main function), we really only care about one or maybe a small subset of those entrypoints being in working order within dev cycles. I’d like to see a Rust web framework that encouraged development of entrypoints as separate bin crates, with a tool that would transpose those bin sources into a single bin crate for deployed builds.
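
                                                                                      For illustration, the deployed “transposed” build could be as simple as a generated dispatcher; a hypothetical sketch (all names invented, and as far as I know no such tool exists today):

                                                                                          use std::env;

                                                                                          fn main() {
                                                                                              // During development, signup and billing would each be
                                                                                              // their own bin crate; a build step would generate this
                                                                                              // single deployable binary that dispatches between them.
                                                                                              let which = env::args().nth(1).unwrap_or_default();
                                                                                              match which.as_str() {
                                                                                                  "signup" => signup_main(),   // was src/bin/signup.rs
                                                                                                  "billing" => billing_main(), // was src/bin/billing.rs
                                                                                                  other => eprintln!("unknown entrypoint: {other}"),
                                                                                              }
                                                                                          }

                                                                                          fn signup_main() { /* entrypoint-specific wiring */ }
                                                                                          fn billing_main() { /* entrypoint-specific wiring */ }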

                                                                                      1. 1

                                                                                        That’s why I’m saying it’d take 5 years of sustained effort to get it there. I was happy to see at least that for sessions and authentication they had taken inspiration from Django.

                                                                                        I think for company-internal services it’s fine to use such frameworks. For anything that needs to be distributed, it’s probably better to make the effort to pack everything into a single binary.

                                                                                      2. 3

                                                                                        Code organization in Rust requires reading book chapters and blog posts and I still couldn’t explain it to another person because it doesn’t make any sense.

                                                                                        Can you ELI5? What’s the problem?

                                                                                        1. 4

                                                                                          No idea. Something with crates, packages and modules all being orthogonal to each other.

                                                                                          This is the book chapter but I found it entirely not useful: https://doc.rust-lang.org/beta/book/ch07-00-managing-growing-projects-with-packages-crates-and-modules.html

                                                                                          1. 5

                                                                                            The naming does not help. The de facto website distributing packages is named crates.io and those packages contain crates, even though the website calls packages crates. I spent weeks thinking nonsense like a package must be a particular type of crate and crates must be recursive.

                                                                                            1. 7

                                                                                              Some fun history here: it’s called crates.io because cargo.io was already taken by some startup. The startup has since died.

                                                                                              1. 1

                                                                                                Is it that difficult to think about packages containing packages? They just use the word crate instead of package.

                                                                                                1. 8

                                                                                                  Rust has three levels with distinct names and purposes:

                                                                                                  • modules for nested namespaces
                                                                                                  • crates are compilation units that contain modules
                                                                                                  • packages are units of distribution that contain crates
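
                                                                                                  A sketch of how the three levels show up in a single hypothetical package named mypkg (layout shown in comments; all names are invented for the example):

                                                                                                      // mypkg/               <- the package: what `cargo new` creates
                                                                                                      // ├── Cargo.toml          and what crates.io distributes
                                                                                                      // ├── src/lib.rs       <- root of the package's library crate
                                                                                                      // └── src/bin/cli.rs   <- root of a second, binary crate

                                                                                                      // src/lib.rs -- a crate is a tree of modules:
                                                                                                      pub mod server {
                                                                                                          pub fn start() {
                                                                                                              println!("server starting");
                                                                                                          }
                                                                                                      }

                                                                                                      // src/bin/cli.rs -- a separate crate in the same package; it
                                                                                                      // reaches the library crate through the package's name:
                                                                                                      //
                                                                                                      //     fn main() {
                                                                                                      //         mypkg::server::start();
                                                                                                      //     }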
                                                                                                  1. 2

                                                                                                    Hm, I thought that definition of package was what workspaces are. It is confusing that they use the terminology like that.

                                                                                                    1. 1

                                                                                                      Thanks for confirming that it’s not just a me-problem.

                                                                                                2. 3

                                                                                                  I found that chapter unhelpful as well. If you want just a working understanding of how to organize your Rust projects, I found this chapter from the Cargo guide much more useful.

                                                                                                  https://doc.rust-lang.org/cargo/guide/project-layout.html

                                                                                                  There are a couple of ways to organize your project (main.rs, lib.rs). The one that I found most flexible and covers 99% of the use cases is to start with a file src/lib.rs. Any executable goes into src/bin/, e.g. src/bin/foo.rs. If you want to add a module you need to add it to lib.rs so that cargo tries to build it. A module can be a file or a directory with lib.rs. That’s it.

                                                                                                  1. 1

                                                                                                    Thanks! That explanation is clear enough that I may revisit my setup.

                                                                                                    1. 1

                                                                                                      Turns out a module is a directory with mod.rs in it? It wouldn’t take it with lib.rs.

                                                                                                      Also no idea why the original thing I had was working.

                                                                                                      So I had a:

                                                                                                      server/admin.rs
                                                                                                      server/app.rs
                                                                                                      server.rs
                                                                                                      lib.rs
                                                                                                      

                                                                                                      lib.rs contains:

                                                                                                      pub mod entity;
                                                                                                      pub mod server;
                                                                                                      pub mod shared;
                                                                                                      

                                                                                                      server.rs contains:

                                                                                                      pub mod admin;
                                                                                                      pub mod app;
                                                                                                      

                                                                                                      Then I moved server.rs to server/lib.rs which didn’t work and it complained that it should be mod, so I made it server/mod.rs and that does work. The contents of the files stayed the same.

                                                                                                      I still think this is utter gibberish.

                                                                                                      1. 2

                                                                                                        Turns out a module is a directory with mod.rs in it?

                                                                                                        Yeah, I was wrong about that. I was under the impression that mod.rs was a thing of the past. That was not the case. lib.rs is a cargo-specific convention fwiu.

                                                                                                        I still think this is utter gibberish.

                                                                                                        It is, my goal was to provide an easy-to-follow guideline, not to invalidate your experience. Part of the reason for this afaiu is that there are some Rust-level concepts (modules, paths, crates) and some cargo-level concepts (packages, workspaces) that are involved in how your Rust code is compiled. cargo provides some default targets (similar to make’s implicit rules).


                                                                                                        Btw, it is my impression that Rust projects avoid nested modules and instead split things into multiple packages in the same repo. One reason is that each package can be compiled individually, so it reduces the amount you need to re-compile each time.

                                                                                                        1. 1

                                                                                                          Yeah, it’s messy. As well as the three levels I mentioned previously, there’s the complication that even simple projects often have multiple crates sharing a src directory.

                                                                                                          • you can have multiple programs with entry points in src/bin/prog.rs
                                                                                                          • you can have one library with an entry point in src/lib.rs
                                                                                                          • tests are built as separate crates

                                                                                                          This not-the-same-crate weirdness shows up in the different names you must use when programs use modules in the library crate, compared to one module within the library using another.
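
                                                                                                          For example (with a hypothetical package named mypkg; the module names are invented), the same module is named differently depending on which crate you’re standing in:

                                                                                                              // src/lib.rs
                                                                                                              pub mod config {
                                                                                                                  pub struct Settings;
                                                                                                              }

                                                                                                              pub mod server {
                                                                                                                  // Inside the library crate, a sibling module is
                                                                                                                  // reached through `crate::`:
                                                                                                                  use crate::config::Settings;

                                                                                                                  pub fn start(_settings: Settings) {}
                                                                                                              }

                                                                                                              // src/bin/prog.rs is a different crate, so it reaches the
                                                                                                              // same module through the package name instead:
                                                                                                              //
                                                                                                              //     use mypkg::config::Settings;
                                                                                                              //
                                                                                                              //     fn main() {
                                                                                                              //         mypkg::server::start(Settings);
                                                                                                              //     }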

                                                                                                3. 2

                                                                                                  A recent comment about computers being concurrent systems reminded me of this talk.

                                                                                                  The gist of the talk is that a modern computer more closely resembles a distributed system, with multiple processors (beyond the CPU) collaborating: they have different views of the physical address space, have different architectures, are not guaranteed to be cache coherent, etc. So an operating system that limits itself to the CPU is not the operating system but part of one.

                                                                                                  The talk is more gauntlet than shovel, but it is still good food for thought imho.

                                                                                                  1. 12

                                                                                                    you’ve probably heard the advice to never use a nonce twice—in fact, that’s where the word nonce (number used once) comes from

                                                                                                    Actually, no. nonce is a perfectly cromulent English word; the Oxford Dictionary defines it as

                                                                                                    (of a word or expression) coined for or used on one occasion. “a nonce usage”

                                                                                                    “Number used once” smells like folk etymology.

                                                                                                    1. 8

                                                                                                      Merriam-Webster has the following etymology:

                                                                                                      Nonce first appeared in Middle English as a noun spelled “nanes.” The spelling likely came about from a misdivision of the phrase “then anes.” (“Then” was the Middle English equivalent of “the” and anes meant “one purpose.”) The word was especially used in the phrase for the nonce, meaning “for the one purpose,”

                                                                                                      https://www.merriam-webster.com/dictionary/nonce

                                                                                                      That said, it is possible that nonce also has a separate technical definition in an RFC or similar document.

                                                                                                      1. 6

                                                                                                        There’s another folk etymology of this word, which leads Brits like myself to laugh every time we read anything about cryptography… you can look it up in Urban Dictionary…

                                                                                                        1. 4

                                                                                                          It isn’t a folk etymology, it’s a homonym. Wiktionary has its etymology:

                                                                                                          (1975) Unknown, derived from British criminal slang. Several origins have been proposed; possibly derived from dialectal nonce, nonse (“stupid, worthless individual”) (but this cannot be shown to predate nonce “child-molester” and is likely a toned-down usage of the same insult), or Nance, nance (“effeminate man, homosexual”), from nancy or nancyboy. The rhyme with ponce has also been noted.

                                                                                                          However, the slang meaning has a folk etymology, as many slang words do:

                                                                                                          As prison slang also said to be an acronym for “Not On Normal Communal Exercise” (Stevens 2012), but this is likely a backronym.

                                                                                                          1. 1

                                                                                                            Ah interesting, thanks for the clarification! A lot of slang words are like this I suppose, unknown true origins but a few theories.

                                                                                                            1. 2

                                                                                                              Another great place to find out more is Green’s Dictionary of Slang, which (like the OED) is based on exemplar quotations.

                                                                                                        2. 2

                                                                                                          At university, we were also taught that nonce means “number used once”. Wikipedia seems to agree with you though:

                                                                                                          Nonce is a word dating back to Middle English for something only used once or temporarily (often with the construction “for the nonce”). It descends from the construction “then anes” (“the one [purpose]”).[3] A false etymology claiming it to mean “number used once” is incorrect.[4] In Britain the term may be avoided as “nonce” in modern British English means a paedophile.[3][5]

                                                                                                          https://en.wikipedia.org/wiki/Cryptographic_nonce

                                                                                                        3. 2

                                                                                                          I just found this project through https://framapiaf.org/@vindarel/113050855413715910

                                                                                                          It provides a more user-friendly experience for the SBCL REPL out of the box.

                                                                                                          1. 1

                                                                                                            not only the REPL :) It can run scripts (instant startup with batteries included), it can be used in Emacs and Slime (or any other editor), so we have access to all the libraries and utilities it comes with. When you use a core image, everything loads fast and is ready to use. You can also quickload everything as usual.

                                                                                                            thanks for sharing,

                                                                                                            cheers.

                                                                                                          2. 37

                                                                                                            Excellent. Copyleft is the right choice when one cares about building a community

                                                                                                            1. 2

                                                                                                              Why is Copyleft the right choice? There are a lot of communities built around software that is more permissively licensed. How is Copyleft better at building a community?

                                                                                                              1. 3

                                                                                                                Copyleft licensing is more likely to attract people who are ideologically driven and seek community with like-minded people, rather than more corporate types who only want to use the software and don’t care about the community.

                                                                                                                1. 2

                                                                                                                  Copyleft licensing is more likely to attract people who are ideologically driven and seek community with like-minded people

                                                                                                                  Maybe, but even that I feel is suspect.

                                                                                                                  rather than more corporate types who only want to use the software and don’t care about the community

                                                                                                                  This is the part I’m not convinced of. All of this, to me, sounds more like ideological wishful thinking than anything born out of actual data. Again, there are a lot of very active communities out there built around permissively licensed code. How can you say one type of license attracts more than another?

                                                                                                                  1. 2

                                                                                                                    Well, one example is the Linux kernel, which dwarfs all the others in terms of community and global reach. Linux’s surrounding ecosystem has also rallied around copyleft licensing (eg glibc, GNU userland tools) and has much wider adoption than its BSD-land counterparts.

                                                                                                                    1. 3

                                                                                                                      Sure.

                                                                                                                      I’m not suggesting that copyleft is in any way worse than the more permissive licenses. I’m only suggesting there’s no evidence it’s better either - at least in terms of community building. The Linux kernel, regardless of its license, is, in my opinion, an outlier. But that also leads to a sticking point. When you say “community”, am I part of the kernel’s community? I don’t contribute code or even file bug reports, yet I use it every day. I really don’t want to discuss semantics here (and I won’t) but this gets tricky really quickly.

                                                                                                                      EDIT: I want to add that

                                                                                                                      ecosystem has also rallied around copyleft licensing (eg glibc, GNU userland tools)

                                                                                                                      Is not really true IMO. A lot of those projects, started by Stallman, predate the kernel. Torvalds picked what he felt was the best of the, admittedly limited, selection of licenses at that time. If anything I’d argue it’s the other way around. Linux and the community rallied around an existing GPLed toolkit.

                                                                                                                      1. 1

                                                                                                                        Linux and the community rallied around an existing GPLed toolkit.

                                                                                                                        Yes, why did they do that instead of rallying around the at-the-time more mature BSD userland toolkit?

                                                                                                                        1. 7

                                                                                                                          Because the BSDs were mired in legal controversy in the early 90s.

                                                                                                                          1. 1

                                                                                                                            Which was resolved by 1994. In the next 20 years, no one managed to expand its use and community compared to the GNU tools?

                                                                                                                            1. 1

                                                                                                                              Because GNU was designed to replace parts of Unix, it fitted the cobbled-together nature of Linux better than the more tightly integrated BSD userland.

                                                                                                            2. 8

                                                                                                              It’s funny, I’ve recently read something about how the Linux developers are aging and having trouble getting fresh blood. I don’t think this is an issue on how conversations within a project are mediated, I think that’s just a symptom. I think the core of this issue is the entire culture around a project, and the language and tooling it uses. I think as the open source community matures past being a pop-culture movement and begins to actually age into new generations of society, we’re beginning to learn the fallacy of the philosophy of “a project is immortal, just contribute or fork it”.

                                                                                                              When deciding if you will help maintain a project or simply write an entire new one, the primary factor to take into account is how much time and energy each option requires, especially energy. Is it more cost-effective to start from scratch, or to jump into a project? Starting from scratch is straight-forward: you were going to have to learn the ins and outs of a project anyway, so this is actually a very effective way of doing that (for instance, I saw a story on lobsters recently about how you should write your own language as a learning exercise). If you decide you want to contribute to a project, that is far less straight-forward. You need to immerse yourself in its community, learn from the existing developers how the project is structured and what they need and expect from a contributor. And the older that community’s technology stack is, the more alienating it is for new people, because there are fewer people who are familiar with it.

                                                                                                              Thus, it’s not a matter of Postgres using a mailing list, or “survivor bias” as the author endearingly calls it; that’s just a symptom. It’s more about how the entire project is simply gradually disconnecting from the world. Outdated technology, an uninteresting language/stack, antisocial developers. I’m not claiming the developers are hostile; I’m more referring to that “survivor bias”: they’re too attached to their old ways, which are becoming less attractive to new users, but it’s not strictly their fault and I’m not blaming them for that, nor do I ask or expect them to “adapt”.

                                                                                                              When a project is created, it cements itself in a particular culture, and that project’s code is like that culture’s bible. When the culture changes, the project doesn’t, and suddenly it finds itself talking in an alien language that nobody understands anymore, like with mailing lists. For the most extreme example, look how desperate banks, governments and hospitals are for COBOL developers. There aren’t even COBOL cultures anymore, nobody left around to teach it. Yeah, you can read documentation and find one or two oddballs online who know it, but there’s no more culture.

                                                                                                              COBOL is a macro example of this, but as I’ve stated, micro examples of projects like Linux and Postgres are going to become increasingly alienated from a constantly changing world they can’t keep up with, until governments have to go around asking if anyone knows how those projects work. It won’t happen tomorrow, or next decade, but it’ll happen. Another thing to take into account is the acceleration of culture. Postgres is 30 years old, but technology was relatively stagnant in that time. More people are online, more people are building off the shoulders of giants, more people are making technology to make more technology; it gets faster and faster. Postgres has been around for 30 years, but that doesn’t mean it will be around 30 years more.

                                                                                                              Sorry if this sounds pessimistic or dismissive of the Postgres community, I’m not saying Postgres will perish or that it should, I’m just making an observation about how it’s difficult to find new people to contribute to older projects. Maybe I’m wrong and I’m just being a hipster, who knows.

                                                                                                              1. 11

                                                                                                                If you decide you want to contribute to a project, that is far less straight-forward. You need to immerse yourself in it’s community, learn from the existing developers how the project is structured and what they need and expect from a contributor.

                                                                                                                Yes! Some large and famous projects like the Linux kernel, FreeBSD, and Debian have a fairly high barrier to entry and that is quite intentional. You can’t just drive by, throw a patch over the wall, and expect it to ship in the next release. These projects only accept contributions from people that have proven that they are able and willing to effectively communicate with the existing community, follow existing contribution processes and conventions, test their changes, and will stick around to voluntarily troubleshoot their own code if something goes wrong later.

                                                                                                                I spend quite a lot of time browsing small to mid-size projects on GitHub and the norm there is that 4 out of 5 PRs just get thrown over the wall and the submitter never comes back to fix any issues with the PR. These are not just worthless, but have negative value, because every second a maintainer spends trying to work with an uncooperative contributor is time they don’t get to spend making actual improvements to the project.

                                                                                                                So yes, there is a high barrier to entry, where each participant is expected to take ownership and responsibility of their change, feature, module, or whatever. And that is one of the main ways that large projects are even able to function at all.

                                                                                                                And the older that community’s technology stack is, the more alienating it is for new people, because there are less people who are familiar with it.

                                                                                                                Are you referring to mailing lists here? Do “new people” find email to be alienating? Maybe I’m out of touch, but I’m struggling to imagine a person who takes the time to learn about how computers and the Internet work in general, further decides to take up an interest in programming, then further decides to fix a problem in a non-trivial open source application but then hit a wall and give up because they don’t know how to use email.

                                                                                                                I’m not claiming the developers are hostile, I’m more referring to that “survivor bias”, they’re too attached to their old ways which are becoming less attractive to new users, but it’s not strictly their fault and I’m not blaming them for that, nor do I ask or expect them to “adapt”.

                                                                                                                Er, but the underlying premise of the whole rest of your comment is that mature projects need to adapt or fade into irrelevancy. So which is it? :)

                                                                                                                Perhaps the fact that PostgreSQL has been around for 30 years and still continues to innovate in the space of relational database management systems has to mean that whatever they are doing is working absolutely great for them.

                                                                                                                1. 13

                                                                                                                  Do “new people” find email to be alienating? Maybe I’m out of touch, but I’m struggling to imagine a person who takes the time to learn about how computers and the Internet work in general, further decides to take up an interest in programming, then further decides to fix a problem in a non-trivial open source application but then hit a wall and give up because they don’t know how to use email.

                                                                                                                  I know how to use email. I just don’t want to. Keep in mind that most projects that use mailing lists require plain text. So this means I can’t actually use my normal mail tools because they have dubious, if any, plain text support. It also means that I have to work with diffs, which I normally don’t, etc. So it’s not “just use email”, it’s “use a bunch of tools you otherwise don’t use”.

                                                                                                                  1. 3

                                                                                                                    This is a big reason why I’m building https://pr.pico.sh in an effort to bridge the gap between mailing lists and GH PR workflows

                                                                                                                  2. 11

                                                                                                                    Are you referring to mailing lists here? Do “new people” find email to be alienating? Maybe I’m out of touch, but I’m struggling to imagine a person who takes the time to learn about how computers and the Internet work in general, further decides to take up an interest in programming, then further decides to fix a problem in a non-trivial open source application but then hit a wall and give up because they don’t know how to use email.

                                                                                                                    It’s not that people like me “don’t know how to use email”, with a little reading I could learn how to send and receive patches too. It’s alienating (to me at least) because I don’t know the social norms and implicit processes, and mailing lists don’t feel very easy to navigate and pick those things up.

                                                                                                                    You can sort of blunder your way through sending a GitHub pull request because the UI is set up for doing exactly that, but emails are free-form and I’d want to hang around a mailing list for a while to pick up the processes and expectations before I’d be comfortable participating.

                                                                                                                    If I really want to contribute to a project using mailing lists then of course I’ll spend that time learning the culture, but I’d have to really be invested first. Maybe that’s what projects want, idk. In the meantime there are tonnes of projects using processes I’m familiar with that I can work on instead.

                                                                                                                    1. 7

                                                                                                                      Are you referring to mailing lists here? Do “new people” find email to be alienating? Maybe I’m out of touch, but I’m struggling to imagine a person who takes the time to learn about how computers and the Internet work in general, further decides to take up an interest in programming, then further decides to fix a problem in a non-trivial open source application but then hit a wall and give up because they don’t know how to use email.

                                                                                                                      Hi, yes, this kind of person exists, I am among them. I can barely write a regular personal or business email, but once you add in the technical and cultural challenges of mailing lists it becomes too hard. The only times I’ve interacted with mailing lists was to send some bug reports to older C projects, and every time I’ve had an older friend or colleague go over the exact contents (and headers!) of my mail to make sure I didn’t make a complete fool of myself. I then used the web-based mail archive to look at the replies because I couldn’t figure out the “proper” way which I’m sure exists.

                                                                                                                      1. 5

                                                                                                                        but then hit a wall and give up because they don’t know how to use email.

                                                                                                                        It’s not email itself, it’s git send-email specifically. Knowing how to operate a MUA doesn’t help much with git send-email because it bypasses your MUA and sends the email directly, only invoking your $EDITOR, so everything you know about how message threading works goes out the window.

                                                                                                                        It’s easy-ish to use for a single first draft patch once you dial in your SMTP settings, but in my experience, even motivated contributors have only about a 50% success rate at sending a follow-up patch to the right thread instead of starting a new thread.

                                                                                                                        Edit: for context, I started a mailing list for accepting patches for a project I run, and I prefer it to GitHub pull requests, but I don’t think it’s actually very good. A flow which let me use the MUA I am already an expert at would be much better. But even then it probably wouldn’t be as good as a flow which just used patches to communicate.

                                                                                                                        1. 2

                                                                                                                          I agree that setting things up to send plain-text email from a popular provider like Gmail is an unnecessary barrier to entry. I do think a ‘gateway’ where you push a branch and it generates an email against the current head could increase the adoption of sourcehut-like flows. Something like agit, but where the output is an email.

                                                                                                                          https://forgejo.org/docs/latest/user/agit-support/

                                                                                                                        2. 3

                                                                                                                          I should probably clarify some intent in a few of my points, I kind of wrote that in a rush.

1. I didn’t mean to imply cultural change is strictly a good thing, or that it’s strictly a bad thing to take the time to contribute properly to an open source project. I just meant that it’s an inevitability that projects will lose contributors one way or another, and that “adapting” to modern standards can’t fully restore the public interest a project had when it was newer. It is a good thing that a project holds quality and standards controls, but those are also a barrier to entry, just like any company might have a hard time finding qualified employees. Given that energy cost for the choosy young developer, as a project seems less and less attractive to contribute to (which is inevitable over time), fewer people will be interested in contributing. I don’t mean anything in particular with this point beyond “entropy exists”. My primary motivation for making it is in my first paragraph: people assume open source projects are immortal, but they’re really not. Postgres doesn’t have a “mailing list” problem, it has an age problem: a societal age problem, not a technical one. It isn’t something anyone can do anything about, as far as I understand. Like I mentioned in my first paragraph, I feel like we’re just beginning to learn about this issue as a society, because IT is such a comparatively young field that a lot of its founders are still alive! We’re constantly grappling with how it fits into society, and we’re only getting more questions by the day. Henry Ford didn’t invent the seat belt, after all.

2. Not that email/mailing lists are so old that they’re an unknown technology now, but the pool of people who use email for actual communication is shrinking. Most people really just use it to register accounts for services, or as a 2FA method. Besides corporate email, I never see anyone communicate through it anymore; it’s sort of “deprecated” in that sense. It’s not dead, just dying, slowly. Communication protocols seem to be an especially turbulent corner of IT, because of how rapidly people change the way they communicate; it’s almost like every generation has a whole new method. Again, not really anything anyone can do about it. It just is, until we observe and learn more about how our technological society operates.

                                                                                                                          1. 2

                                                                                                                            Yes! Some large and famous projects like the Linux kernel, FreeBSD, and Debian have a fairly high barrier to entry and that is quite intentional.

It doesn’t start out as intentional, though, and it says more about the project and how it is actually developed than about keeping the wrong people out.

It should be easier to do the right thing, so that people who could be participating can participate, rather than get weeded out by a difficult process. The difficulty should be in how patches make it through an automated system, not in how kids need to navigate an insular culture. CaC (Culture as Code).

                                                                                                                            1. 1

                                                                                                                              I have met 20-somethings who take pride in not using email at all.

                                                                                                                            2. 2

                                                                                                                              Your view is based on the premise that new ways are better. But that is the matter in discussion, it’s not a given. Web forums were all the rage 10-15 years ago and they are all dead now. Plenty of tech has come and gone. Plenty of young developers feel like they discovered gunpowder when they have a realization of something that was industry standard 40 years ago.

Don’t reduce a view different from yours to stubbornness. If anything, PostgreSQL is exploding in popularity and technological breakthroughs. I am not saying your ways are worse and theirs good, just pointing out that just because something is not fashionable at the moment doesn’t mean it is dated or obsolete. Sometimes it even comes back stronger than ever. Popularity should be decoupled from engineering; engineers should look primarily to the science, with the social aspects of an engineering process being of lower importance.

                                                                                                                              1. 4

I should state that, both in the post you’re replying to and in my reply to a reply, I explicitly disowned the idea of a project adapting its tooling to more modern infrastructure; I merely said the majority of younger developers will not be interested in learning or adapting to older tooling. I’m sorry, but I can’t really respond to the rest of your post, as it’s essentially an argument against a strawman you unintentionally made up.

That said, I should agree that Postgres is technologically impressive and definitely ahead of the game. I felt that went without saying, which is why I didn’t explicitly state it, but I suppose my tone implied the opposite. My primary point is about existing projects having a difficult time gaining new developers, not about technological regression.

                                                                                                                            3. 2

                                                                                                                              The way I use worktrees currently:

• ~/Code/backend/choco-backend: the main repository checkout
                                                                                                                              • git worktree add ../branch-name
                                                                                                                              • cd ../branch-name

                                                                                                                              We have a big monorepo so every time I create a worktree it takes quite some time. That makes me think this approach of re-using worktrees may have some merit.

                                                                                                                              The main issue I have is that branch worktrees accumulate and it’s not clear which ones may have active work in them or what the original reason was to check them out.

I think it should be possible to add that metadata to the branch using git config branch.my-branch.status "reviewing", and then have a command-line alias that prints out all worktrees, the branches they are currently operating on, and the custom field value.
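A rough sketch of the alias half, untested (the status key is just a convention I’m making up):

    git config branch.my-branch.status "reviewing"
    # list every worktree, its branch, and the status note (breaks on paths with spaces)
    git worktree list --porcelain |
      awk '/^worktree /{wt=$2} /^branch /{print wt, $2}' |
      while read -r wt ref; do
        branch=${ref#refs/heads/}
        printf '%s\t%s\t%s\n' "$wt" "$branch" \
          "$(git config --get "branch.$branch.status" || echo -)"
      done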

                                                                                                                              1. 3

                                                                                                                                I think it should be possible to add that metadata to the branch using: git config branch.my-branch.status “reviewing” and then have a command-line alias that prints out all worktrees, the branches that they are currently operating on and the custom field value.

You can attach descriptions to branches. Maybe you can use that as a self-note instead?

                                                                                                                                https://git-scm.com/docs/git-branch#Documentation/git-branch.txt---edit-description
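For example (branch name made up):

    git branch --edit-description my-branch   # opens $EDITOR for a free-form note
    git config branch.my-branch.description   # prints the note back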

                                                                                                                                1. 1

                                                                                                                                  That’s cool. So it’s either that or I learn jj which may also make this problem go away.

                                                                                                                              2. 3

                                                                                                                                Wow I am a pleb. Whenever I wanted to see something on an old branch I would go to the github web UI like a medieval serf. Not anymore. Thank you OP

                                                                                                                                Reminder: git is not a version control system, git is a toolbox for building a VCS. Do have a low-friction way to add your own scripts for common git operations.

Yeah, my productivity 10xed once I created my own abstraction around git (~200 lines of Lua); it’s crazy I went so long without it.

                                                                                                                                1. 4

                                                                                                                                  (~200 lines of Lua)

Much as I cringe at this ‘solution’ to a sloppy workflow, I’m still curious about what these 200 lines of Lua are doing for you… care to enlighten?

                                                                                                                                  1. 2

                                                                                                                                    Whenever I wanted to see something on an old branch I would go to the github web UI like a medieval serf

                                                                                                                                    🤣 I am the same with git blame! Still waiting for a blog post that explains how to use blame as conveniently locally.

                                                                                                                                    1. 3

                                                                                                                                      Various text editor plugins provide interfaces for blame. My favorite Git blame interface is the one of the JetBrains IDEs such as IntelliJ IDEA. I don’t remember exactly why I concluded it was my favorite – I think it was something about the ease of recursive blaming and revisiting intermediate versions so I could view the blame history for a different line in it.

                                                                                                                                      Some other local git blame wrapper tools I know of, though it’s been a while since I used many of them, so I don’t remember how they stack up against GitHub’s UI:

                                                                                                                                      1. 10

                                                                                                                                        I think I’ve tried most of those tools, and they are just not good. They blame files, so, when you click on “blame before this change”, you get a pseudo file opened at a different revision, divorced from the rest of the project.

                                                                                                                                        That’s in contrast to GitHub UI, where navigating through the blame chain changes the state of the entire repository.

I actually wasn’t able to pinpoint my dissatisfaction with local tools quite like that before, thanks! It seems that what I really want is a nice UI for immediately resetting the repo to the commit being blamed! That’d actually work nicely with my main worktree.

                                                                                                                                        1. 1

                                                                                                                                          Wow, did we come full circle and just need git blame to open a new work tree at the revision selected?

                                                                                                                                          1. 2

                                                                                                                                            I don’t think a new work tree is needed, I’d rather it operate in-place in a specific worktree of mine which is dedicated to code exploration.

                                                                                                                                            Practically, just check that working directory is clean before resetting.

                                                                                                                                            1. 2

                                                                                                                                              Perhaps I’m misunderstanding what you want, but to me this sounds like functionality that’s already available in most IDEs and editors.

                                                                                                                                              In Emacs I’m able to open up any file at any commit using magit-find-file, I’m able to navigate through file history using git-timemachine, I can view and navigate around a specific file or subdirectory’s commit log and view diffs and other metadata, etc. and none of that will affect the current checkout.

                                                                                                                                              1. 2

                                                                                                                                                and none of that will affect the current checkout.

                                                                                                                                                But I absolutely want it to affect the current checkout. While blaming, I want project wide grep, go to definition, and such, to just work.

                                                                                                                                                1. 1

                                                                                                                                                  Yeah, you can have that, too.

                                                                                                                                                  It’s worth it to learn your tools.

                                                                                                                                                  1. 1

Could you show an example? I believe neither magit nor git-timemachine has this GitHub feature where, in the blame view, you can click on a particular line to “blame prior to this change”, which changes not only the current file but the entire state of the repository, such that if you open a different file using the standard open-file shortcut, it is shown at the same revision.

I understand that I can hack that together easily enough by writing some elisp. But my claim is not that it’s impossible or hard to do; it’s that, for some reason, only GitHub does that.

                                                                                                                                                    1. 2

                                                                                                                                                      in the blame view, you can click on a particular line to “blame prior this change”

                                                                                                                                                      Thanks for calling this feature out, I think that finally let me understand what magit-blame-mode’s b does: (emphasis mine)

                                                                                                                                                      If Magit-Blame mode is already turned on in the current buffer then blaming is done recursively, by visiting REVISION:FILE (using magit-find-file), where REVISION is a parent of the revision that added the current line or chunk of lines.

                                                                                                                                                      Ashamed to admit that up until today I spelled that b something like RET C-s <old line> RET SPC 🙈

                                                                                                                                                      the entire state of the repository, such that if you open a different file, using the standard open file shortcut, it is shown at the same revision

                                                                                                                                                      So, OT1H: magit-find-file will use the “blame chunk at point” to determine the default revision, so that, assuming you have that on a convenient binding, “opening a different file at the same revision” is reasonably convenient:

                                                                                                                                                      • <binding for magit-find-file>
                                                                                                                                                      • RET (accept default revision)
                                                                                                                                                      • <completing-read over files at that revision>

                                                                                                                                                      OTOH: this comment thread does make me realize “spawn worktree on revision at point” would be a neat thing to have (“switch current worktree to revision at point” already exists as <binding for magit-dispatch> b b RET). Nevermind git-grep, I want to throw my LSP, my testsuite, and all sorts of VCS-naive tools at that commit Magit is showing me…

                                                                                                                                                      Thanks for the food for thought (and the article btw 🙏 Funnily enough I spawned my first review worktree one week before you published this)

                                                                                                                                                      1. 2

                                                                                                                                                        So, OT1H: magit-find-file

                                                                                                                                                        Yeah, it’s a hard constraint for me that it must be standard find file, because find file is just one example of what I might want to do.

                                                                                                                                                        For example, I might want to grep for some string! I’d love the standard grep shortcut to work. I could use a different shortcut for something like git grep current-revision, but that’s again a needless difference.

Or I might want to “go to definition”. I am 90% sure this doesn’t exist, though you could do it: there just needs to be a special shortcut which is like go to definition, but sends a project-wide diff between the current state and the revision at blame to the language server.

                                                                                                                                                        And you can do the same for every single other project-wide navigation action!

Alternatively, you could just check out the revision, and get all these actions, for free, using standard shortcuts!

                                                                                                                                              2. 1

                                                                                                                                                Yeah I feel your pain here as well, nothing comes close to the github UI for me.

The best I found so far is this Neovim plugin which can show the commit hash wherever you put your cursor. I then copy/paste the commit hash to my shell and use a custom Lua script that basically pipes git show to a temporary file and jumps to the line number where the edits start. But this is the same as “a pseudo file opened at a different revision, divorced from the rest of the project” as you described, which has its limitations obviously… as soon as you try to do it recursively it falls flat on its face.

BUT now that I have this knowledge of worktrees, I think I could write a basic Neovim plugin that takes the blame for the current line, checks out that commit, reopens the file, and ggs to the line (since it probably is a different line number).
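The shell half could be as small as this untested sketch (file and line stand in for whatever is under the cursor):

    file=src/main.c line=42
    rev=$(git blame -L "$line,$line" --porcelain "$file" | head -n1 | cut -d' ' -f1)
    git worktree add --detach "/tmp/blame-$rev" "$rev"
    # the first porcelain line is "<sha> <orig-line> <final-line> <num-lines>",
    # so the original line number for the jump is sitting right there in field 2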

                                                                                                                                                1. 1

I’ve seen people set up a post-commit hook to run long-running tests and notify if they are failing. The post-commit hook creates a worktree in /tmp/ so you don’t need to worry about collisions.

Based on the article it looks like it would be a good fit for what you are trying to do. Or is running the fuzzer too often a potential issue?
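The ones I saw were shaped roughly like this (a sketch from memory; make test and notify-send stand in for the real suite and notifier):

    #!/bin/sh
    # .git/hooks/post-commit
    unset GIT_DIR                  # hooks run with GIT_DIR set, which breaks git after a cd
    repo=$(git rev-parse --show-toplevel)
    rev=$(git rev-parse HEAD)
    wt="/tmp/test-$rev"
    git worktree add --detach "$wt" "$rev"
    (
      cd "$wt" && make test || notify-send "tests failed at $rev"
      cd / && git -C "$repo" worktree remove --force "$wt"
    ) &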

                                                                                                                                      2. 14

I’ve been using jujutsu for the past month for work and I couldn’t be happier. It sits on top of git (well, not completely; it’s its own thing that happens to output git-compatible objects). It takes a lot of the mental load out of using git. I don’t even need magit anymore. It’s super intuitive, it’s impossible to lose work, you have versioning for your commits, no staging area, jj undo, etc. It’s been a game changer for me.

                                                                                                                                        1. 8

Boy, I was SO close to being converted to JJ just a bit ago, but the fact that it doesn’t support pre-commit hooks (and the open issue doesn’t look promising anytime soon) is an absolute deal-killer, as we have some very important hooks at $WORK.

                                                                                                                                          1. 3

                                                                                                                                            I have not had the opportunity to try it yet, but it looks like there is now jj fix (https://martinvonz.github.io/jj/latest/cli-reference/#jj-fix) which could be used to call the precommit hooks (depending on what they actually do).

                                                                                                                                            There’s also a hidden experimental jj run, which I’d play with to call pre-commit, but I wouldn’t necessarily recommend that kind of shenanigans.

                                                                                                                                            (oh, and also the pre-commit issue seems to be receiving work as of last week, so yay)

                                                                                                                                            1. 1

                                                                                                                                              If you look at jj run‘s code, it’s just a stub right now haha. It’s an IDEA they have for doing this kind of stuff.

                                                                                                                                            2. 2

                                                                                                                                              That’s a pity :(

                                                                                                                                              What kind of hooks are you running that are so important? We have some linting hooks and that’s about it

                                                                                                                                              1. 2

We have (Python repo) Ruff format/check --fix, import sorting, and auto-generated OpenAPI specs based on changes to a nested folder structure (using redocly): all things that we absolutely want done on every commit, and that fail CI pipelines for us if not done. I wouldn’t want to have to remember running them all every time…
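Boiled down, the hook is roughly this (a simplified sketch; the redocly invocation and paths are guesses at our setup, not the literal hook):

    #!/bin/sh
    # .git/hooks/pre-commit (simplified)
    set -e
    ruff format .
    ruff check --fix .    # import sorting comes from the I rules
    redocly bundle api/openapi.yaml -o api/openapi.bundled.yaml
    git add -u            # restage whatever the tools rewrote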

                                                                                                                                              2. 2

Why is it an issue? Wouldn’t the same approach as merge conflicts work? That is, commit first, lint and squash later?

Keep in mind that jj snapshots the working copy as a commit every time you run a command, even jj status.

                                                                                                                                                1. 1

                                                                                                                                                  Wait it doesn’t!? This is unironically the single anti-feature that would get me on board. Thank you for this info :)

                                                                                                                                                  1. 2

I mean, if you hate pre-commit that’s totally fair, I can’t judge an opinion. But if you’re at work and your work repo has pre-commit hooks mandated, then good luck remembering all the things to run before every commit with jj.

                                                                                                                                              3. 31

Happy Sourcehut customer here, if folks are looking for alternatives. I’d been planning to eventually migrate to self-hosted Sourcehut, but the hosted experience has been great (DDoS notwithstanding).

                                                                                                                                                1. 15

                                                                                                                                                  I don’t particularly care about GH’s badges, but sourcehut looks and feels like an 80s accounting GUI.

                                                                                                                                                  What github gave us (and cannot be revoked easily nowadays) is the social aspect of OSS, the cross-linking of issues and mentions etc.

                                                                                                                                                  1. 36

                                                                                                                                                    but sourcehut looks and feels like an 80s accounting GUI.

                                                                                                                                                    Isn’t it great? Fast, simple, accessible, works on non-monopoly browsers, …

                                                                                                                                                    1. 30

                                                                                                                                                      I don’t particularly care about GH’s badges, but sourcehut looks and feels like an 80s accounting GUI.

                                                                                                                                                      If there’s any “80s accounting” software with a GUI as consistent, simple, easy to read and clean as sourcehut’s, I’d love to use it.

                                                                                                                                                      1. 1

In my memory, DOS-era software was incredible from a usability perspective (though not a feature-availability one). Keyboard shortcuts were predictable and surfaced in obvious ways, you could memorise them, and every operation you could perform was either instant or very slow, unlike modern systems where most buttons take half a second to respond.

                                                                                                                                                      2. 3

                                                                                                                                                        What github gave us (and cannot be revoked easily nowadays) is the social aspect of OSS, the cross-linking of issues and mentions etc.

                                                                                                                                                        hard to see what you mean. we have had hyperlinks since the 90s and software development was significantly more social before code forges became the norm.

                                                                                                                                                        1. 7

The bidirectional aspect was proposed in a lot of hypertext systems but was never really part of the web (link back thinks excepted). The nice thing in GitHub is that, as a library maintainer, I get notified when someone files an issue on a project that references an issue on one of my projects. This gives me some useful visibility into the impact of bugs and is something I’ve used to decide to push out bug-fix releases, rather than just roll the fixes into the next minor release.

                                                                                                                                                          1. 1

                                                                                                                                                            ah, bidirectional cross-linking. what did you mean by “link back thinks”?

                                                                                                                                                              1. 2

There was a thing on some blogs (trackbacks, I think) that would ping the target platform when you linked to them. Similar functionality was folded into ActivityPub. It might be possible to build a federated hosting thing with this kind of cross-linking, but you’d need to be careful about spam. At the very least, you’d want an admin to approve each project that linked to you, but Mastodon is building out a lot of the required tooling for sharing block lists so maybe it’s easier now.

                                                                                                                                                        2. 8

I’m a happy paying customer of SourceHut as well, and their site design and performance are great. However, I do wish they had a code search feature, especially across repos. I don’t think this is something they are going to tackle any time soon, as they seem to be focused on builds.sr.ht.

                                                                                                                                                        3. 2

                                                                                                                                                          To me a “change” sounds like a lightweight branch, since it has a stable identifier that’s preserved across amending/rebasing, but I imagine that’s a naive first impression! Does every commit in a history have a unique change ID? So if I rebase a chain of commits, each one preserves its change ID? What happens if I squash multiple commits — do they get a new change ID?

                                                                                                                                                          This sounds cool, although I’m skeptical of how much better than Git it can do while still using the same data/file structures. It’s still based on a history of revisions, not a history of patches like, say, Darcs.

                                                                                                                                                          1. 5

                                                                                                                                                            To me a “change” sounds like a lightweight branch, since it has a stable identifier that’s preserved across amending/rebasing, but I imagine that’s a naive first impression!

A change is not like a branch. A branch is a way to name a particular commit in the tree. Informally, it is also a way to refer to the set of commits that are not part of the base of the pull request, so it might refer to multiple commits (nodes in the tree).

A change is always a single node in the tree. jj records its history, so it tracks how the node changed over time as well as how its position in the tree changed.

                                                                                                                                                            Does every commit in a history have a unique change ID?

                                                                                                                                                            Yes

                                                                                                                                                            So if I rebase a chain of commits, each one preserves its change ID?

                                                                                                                                                            Yes

                                                                                                                                                            What happens if I squash multiple commits — do they get a new change ID?

                                                                                                                                                            The answer is ‘intuitive’ depending on what your mental model of squashing is. If one thinks of squashing as taking a group of commits and creating a new, distinct one, then there is no clear answer to what the change ID should be.

If one thinks of squashing as moving the changes from a group of commits into another commit, then the answer is clear: the change ID should be the one of the commit you are squashing into.

The arguments to the jj squash CLI reinforce the second metaphor: you squash --from a group of revisions --into a revision.
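Concretely (revision IDs made up):

    jj squash --from wq --into pz   # move wq's changes into pz; pz keeps its change ID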

                                                                                                                                                            This sounds cool, although I’m skeptical of how much better than Git it can do while still using the same data/file structures. It’s still based on a history of revisions, not a history of patches like, say, Darcs.

                                                                                                                                                            The main selling points are:

• It has an undo command, which lets you go back to the previous state of the repository if you realize you did something you didn’t intend to. Think of it as a reflog without having to care about what the reflog is. It is also more useful because the working copy is saved every time you run a jj command, e.g. jj status (see the sketch after this list).
                                                                                                                                                            • You can commit conflicts (think of mentoring people on how to resolve them)
                                                                                                                                                            • A richer set of commands to modify the relationship of commits.
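A minimal sketch of the undo flow (revision ID made up):

    jj abandon xy   # oops, that was the wrong revision
    jj undo         # the abandon is rolled back wholesale
    jj op log       # the full operation history; any entry can be restored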

Although it is not a history of patches, it does enable you to work that way, though you are unlikely to be able to do so if your contribution flow is based around pull requests (unless you want to stack every PR). The flow in the article is a way to work around those scenarios, e.g. each parent of the merge commit would be an independent PR (or stack of PRs).

Finally, I’d like to point out that the article is not an introduction to jj. The target audience [imho] is people who are already sold on using jj; it shows a way of using jj with GitHub-like forges.

                                                                                                                                                            1. 1

                                                                                                                                                              Does every commit in a history have a unique change ID?

                                                                                                                                                              Yes

                                                                                                                                                              @snej To expand on this, this is true, but only for visible commits (and not if there’s any divergence). Once a change has, err, changed, the previous visible commit is hidden, and only the new commit is visible.

                                                                                                                                                              All the hidden commits will share change IDs with other commits, so they’re not strictly unique.