Threads for composite_higgs

  1. 2

    I use Docker extensively as a research/data scientist in order to have portable development environments. I teach Data Science to graduate students and also require them to use Docker, despite the security concerns. Our usual model of usage has us setting up the development environment (typically RStudio) inside the container but mounting the project directory from the host inside using -v.

    I recently tried to get Podman to work for this workflow and had some major challenges with the permissions on the mounts. With much finagling with user IDs I could get it so that I could read/write the files from inside the container but not outside, OR from outside but not inside. With Docker I can trivially read/write from both inside and outside.

    Podman’s ability to run rootless is very attractive to me, but I can’t resolve this issue. Anyone know how?

    1. 3

      Depends on what the permission issue is, but probably first look at the z/Z volume options in the man page, where it describes the --volume parameter.

      The same option is needed for Docker as well with SELinux enabled, so that may be the actual difference if you tried them on different distributions.
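
      For the RStudio-in-a-container workflow described above, a sketch of the usual rootless fix (flag names from recent Podman releases, and the `rocker/rstudio` image is just an example; check `man podman-run` for your version):

      ```shell
      # Rootless Podman maps your host UID onto a different UID inside the
      # container, which is the usual reason a -v mount is writable on one
      # side only. `--userns=keep-id` keeps your UID/GID identical on both
      # sides, and `:Z` relabels the mount for SELinux (harmless without it).
      podman run --rm -it \
          --userns=keep-id \
          -v "$PWD/project:/home/rstudio/project:Z" \
          rocker/rstudio
      ```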

      1. 2

        What security concerns do you have with Docker?

        1. 8

          A user in the docker group can access the docker socket, meaning they can start privileged containers that run as full root without even running sudo first. It’s sudo NOPASSWD exposed over a UNIX socket for any process to exploit at their leisure.

          This is not automatically bad, I use passwordless sudo on machines on my local network. And cloud instances often give the default user passwordless sudo. But I doubt most people who use docker on their workstations know they have effectively enabled passwordless root.
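
          The above can be made concrete with the classic one-liner (a sketch; requires membership in the `docker` group, and obviously don't run it on a machine you care about):

          ```shell
          # No sudo involved: the root-owned Docker daemon does the work on
          # behalf of any user who can reach its socket.
          docker run --rm -it -v /:/host --privileged alpine chroot /host sh
          # The resulting shell is root on the host filesystem.
          ```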

      1. 6

        I started my scientific computing life on Matlab and I’ve never liked Python. As far as “simple” languages go, Python is actually very complicated (much more so than Matlab, I would say). There is just something about the style that Python encourages that I strongly dislike.

        I recall using Pandas and being shocked to find that the “groupby” method returned a sequence of values with non-uniform, non-hierarchically related types seemingly at random. Presumably these types were all “duck like” enough according to someone’s understanding of how the sequence would be used, but it broke my code. Even with all the syntactic shenanigans, it’s easier to understand what is going on with the Tidyverse. I also dislike that slices in Numpy can side effect the arrays they come from. Copy-on-write/pure behavior in Matlab is much simpler to reason about.

        Python is also really slow for no good reason. It’s all network effects, and I have just accepted that I need to program in Python, but I just don’t like it.
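
        The NumPy complaint above is easy to demonstrate: basic slices are views into the parent array, so writes go through to the original, unlike Matlab’s copy-on-write arrays. A minimal sketch:

        ```python
        import numpy as np

        a = np.arange(6)
        s = a[2:5]         # a view, not a copy
        s[0] = 99          # writes through to `a`
        assert a[2] == 99  # the original array was mutated

        b = np.arange(6)
        c = b[2:5].copy()  # an explicit copy behaves like Matlab
        c[0] = 99
        assert b[2] == 2   # original untouched
        ```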

        1. 33

          More broadly, Rust’s complexity makes Rust harder to learn, which is an unnecessary burden placed on those learning it.

          My experience learning Rust has near-universally been positive in this regard. Most complexity exists for a reason, and in the end I’m happy it’s there because it makes the Rust experience dramatically better than Go. Of course Rust does have some things that are just awkward and obnoxious for little benefit, but many of those have gone away as Rust continues to mature. Picking up Rust 5 years ago was much harder than it is today.

          If in 2050, programmers are sitting around waiting for their Rust code to compile just to be told that we failed to sacrifice enough to appease the borrow-checker gods,

          cargo check?

          1. 19

            Most complexity exists for a reason

            It is extremely comforting to assume this, but I believe most complexity exists because someone couldn’t (for time, brains or money) make it less complex.

            If you have the time, it is usually worth thinking about how to reduce complexity, instead of accepting it and assuming it “exists for a reason”, because if you can find a way to reduce complexity, you will free your brains and money to do work on other things.

            I am aware there are people who just like being busy and like working hard. Those people love complexity, and there’s little you can do about that except not believe them and talk about other things besides programming.

            And listen, I don’t mean that this is necessarily the case with rust[1], only that you (and others) shouldn’t be so accepting of complexity thrust upon you, because there are things you can do about it.

            [1]: …although I do happen to think so, I’m also aware this is just my opinion. If rust makes you happy, be happy!

            1. 21

              only that you (and others) shouldn’t be so accepting of complexity thrust upon you because there are things you can do about it.

              I think it’s pretty clear I was talking about complexity in Rust, and I resent the implication that I blindly accept complexity without any sort of critical thinking.

              Rust the language does have a lot of up-front complexity. But that complexity comes with up-front safety and correctness. It’s compiler errors vs core dumps, it’s borrow checking vs pointer aliasing bugs.

              I once spent 2 weeks chasing a pointer aliasing bug. After poring over thousands of lines of code, customer-provided core dumps, and live debugging sessions once I could reproduce the issue, I finally noticed that two pointers had the same hex value when they shouldn’t have. It never caused a crash, never corrupted data, only reduced throughput in a certain edge case. And it would have been a compiler error in Rust.

              Unsafe languages aren’t actually less complex, they just push the complexity down the road.
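
              A minimal sketch of the trade described above: in Rust, two live mutable references to the same buffer are a compile-time error (E0499) rather than a latent throughput bug, so the aliasing has to be resolved up front:

              ```rust
              fn appended(mut buf: Vec<i32>) -> Vec<i32> {
                  // The aliasing version is rejected at compile time:
                  //
                  //     let a = &mut buf;
                  //     let b = &mut buf; // error[E0499]: cannot borrow `buf`
                  //     a.push(4);        // as mutable more than once at a time
                  //
                  // Scoping the first borrow resolves the conflict:
                  {
                      let a = &mut buf;
                      a.push(4);
                  } // first mutable borrow ends here
                  let b = &mut buf;
                  b.push(5);
                  buf
              }

              fn main() {
                  assert_eq!(appended(vec![1, 2, 3]), vec![1, 2, 3, 4, 5]);
              }
              ```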

              1. 7

                Unsafe languages aren’t actually less complex, they just push the complexity down the road.

                I don’t think I can agree that all things are equally complex. I hope this is not what you mean, but if it is not what you mean, I cannot understand why you would say this.

                We have to choose where we want our complexity, for sure, and like entropy it cannot be avoided completely if we want to get any work done, but some things are more complex than others, for all kinds of reasons.

                I think it’s pretty clear I was talking about complexity in Rust, and I resent the implication that I blindly accept complexity without any sort of critical thinking.

                I am very sorry that I have offended you in any way. However, if you think that you have never accepted complexity in your life without (as you say) “any sort of” critical thinking, then you should probably thank whoever changed your diapers when you were a child. We all do it, and there’s nothing wrong with it sometimes.

                I just think it’s something we programmers should watch out for- like our own invisible biases, or getting value alignment before asking someone to review our work. If we think we don’t do it – or we get offended at the implication that we’ve done it, we can be blind to something that if we had discovered in other circumstances (or as they say, with different priors), that we would find very important.

                It is awful hard for me to convince myself that rust represents the best we can do at anything in particular, only perhaps the best we have done so far at some things, but so what? I feel that way about a lot of languages. If rust makes you happy, I think you should be happy as well!

                1. 8

                  PLT person here. Outside of async (which I haven’t used much but hear is very complex), Rust has very little accidental complexity. If you make a language with borrow checking, ADTs, traits (interfaces), and emphasis on zero-cost abstractions, it will necessarily have most of the complexity of Rust.

                  1. 2

                    Yes and no. There’s a bunch of choices in Rust to make things implicit, which could have been explicit. This somewhat reduces the syntax one needs to learn at the cost of making it much harder to understand the text of any given program.

                    1. 3

                      That’s funny, I think of Rust as being an unusually explicit language. Which things are you thinking of? My list would be:

                      • Automatic dereferencing and automatic ‘ref’ insertion in pattern matching and automatic ‘.drop()’ insertion. Though the language would get very verbose without these.
                      • A couple auto-derived traits. Though again, it might get really old really fast if you had to write ‘#[derive(Sized)]’ for all of your types.
                      • Functions that return a type not dependent on its argument, such as .into() and .parse().
                      • Operator overloading
                      • There are a couple “magic” fat-pointer types like &str

                      Things that are explicit in Rust but often implicit in other languages:

                      • No implicit type coercions like in C or JS
                      • VTables are marked with ‘dyn’
                      • Semantically distinct values have distinct types more often than in most languages. E.g. String vs. OSString vs. PathBuf.
                      • The “default” import style is to name all of your imports instead of using globs.
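
                      The third bullet is perhaps the most surprising to newcomers, so here is a minimal sketch: the same `.parse()` call selects different parsers depending only on the type annotation at the call site:

                      ```rust
                      use std::net::Ipv4Addr;

                      fn main() {
                          // Identical method calls; the left-hand annotations pick the impls:
                          let n: u64 = "42".parse().unwrap();
                          let ip: Ipv4Addr = "127.0.0.1".parse().unwrap();
                          assert_eq!(n, 42);
                          assert!(ip.is_loopback());
                      }
                      ```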
              2. 8

                It is extremely comforting to assume this, but I believe most complexity exists because someone couldn’t (for time, brains or money) make it less complex.

                I think this is often true. But also, a ton of complexity totally has reasons for existing. It’s just that the linear benefit of simplifying one use case is generally outweighed by the exponential downside of combinatorial complexity of the language.

                Go didn’t manage complexity by coming up with a smarter simpler language design - they managed it by giving things up and saying no. The end result is a lot of quality-of-life things are missing in the small, but the standard of living in a large Go code base is, on average, better.

                1. 5

                  couldn’t (for time, brains or money) make it less complex

                  shouldn’t be so accepting of complexity thrust upon you because there are things you can do about it

                  I think we can all hope that future rust or a successor may come up with a language that provides the same flexibility, performance and safety without the possible* hassle you have when learning rust. But currently there is none (without a required GC, with native compilation, with thread safety, with C-interop that doesn’t suck, with [upcoming] embedded support, with async, with …).

                  *Also, I think that many people simply underestimate how much they’re used to the C/C++/Java style of programming, where you either have a GC, have OO, or can simply write the code assuming “the compiler will get it” (speaking of stuff the borrow checker currently can’t prove, and thus won’t accept). Or their code wasn’t actually defined behavior in the first place. That’s something I’ve seen from people trying to port their wacky C code over to Rust, only to find out they had simply been leaking memory. Which was fine for the one-shot CLI, but never actually defined or good behavior. On the other side, I’ve had students that learned programming the Racket way, and then stumbled upon a myriad of complexity when learning Java (this vs this@.., static, field init vs constructor init and the order of them, ArrayList vs array vs List, primitives vs Objects vs autoboxing).

                  I think it’s a little dismissive to say others didn’t make it as good as they could have; it may be true, but it’s harsh to assume they didn’t try, and that people are worshiping the result.

                  1. 3

                    But currently there is none (without a required GC, with native compilation, with thread safety, with C-interop that doesn’t suck, with [upcoming] embedded support, with async, with …).

                    Quite possible indeed. Zig maybe. Zeta-C. D. I think some of those things are pretty subjective, so maybe others? But in any event, not everybody needs all of those things, and even when people do, they don’t necessarily need them all at once. I think it’s important to keep that in mind.

                    I think it’s a little dismissive to say others didn’t make it as good as they could have, it may be true, but it’s harsh to assume they didn’t try and people are worshiping that.

                    I think they did make it as good as they could have (with the time, money and brains that they had), only that some of those constraints aren’t anything “reasoning” can do anything about at all. Where do you think I said the opposite?

                    1. 2

                      they don’t necessarily need them all at once

                      That’s maybe true. But on the other hand, people program everything in C++ (embedded, kernels, web browsers, servers, games), everything in JS (embedded, Electron, WebGL, CLI, …), everything in Java (embedded processors, DVD players, Android and the whole CRUD stack), and everything in C. So if you can get Rust high-level enough for (web) applications, and opt-in low-level enough for embedded (core/alloc) - why not? I don’t think there are actually quirks in Rust that originate from the broad range of supported use cases. Obviously you can always make a better DSL for specific use cases, in the way Ruby on Rails is nothing else than a DSL for that use case. (And then go back to C interfaces when you actually need ML/CompSci/.. )

                      Regarding “I said the opposite”: I think it’s a combination of these lines

                      I believe most complexity exists because someone couldn’t (for time, brains or money) make it less complex

                      you (and others) shouldn’t be so accepting of complexity thrust upon you because there are things you can do about it

                      Though I just wanted to push back on the idea that “I” can just do something about it, or that it’s only because people were lazy, because that’s one way I can read the above sentences.

                      1. 1

                        But on the opposite people program everything in …

                        I can’t really speak for people who think learning a new language for a specific domain is beyond the brains, money and time they have; I mean, I get there’s an appeal of having one language that can do everything, but there’s also an appeal in having many languages which each do certain things best.

                        I don’t really know which is better overall, but (more to the point) I don’t think anyone does, so I urge vigilance.

                        I just wanted to push back on the idea that “I” can just do something about it, or that it’s only because people were lazy, because that’s one way I can read the above sentence.

                        I hope you can read it another way now; Of course there is something you can do about it. You can do whatever you want!

                  2. 3

                    It is extremely comforting to assume this, but I believe most complexity exists because someone couldn’t (for time, brains or money) make it less complex.

                    I think coloured functions are a perfect example of this phenomenon. For the sake of my argument I’ll assume that everyone agrees with the premise of this article.

                    If we compare two approaches to asynchronous code, Rust’s traditional async/await, and Zig’s “colorblind async/await”, I would argue that Andrew Kelley applied the effort to make Zig’s approach less complex, while Rust did not.

                    1. 2

                      Honestly I think zig’s async/await is a bit of a cheat (in a good way) because it’s not actually async but doing a different, low level thing that can map to “some concept of async”. It’s low level, so you can ab-use it for something else (but please don’t).

                      The nice thing about it is that you can really get a solid mental model of what the CPU is doing under the hood.

                      1. 1

                        Yes. And that low-level thing is a theory of our software science, and a compelling one, because it gives you different ideas than the other; our languages “look” different and can produce many of the same results, but making certain kinds of changes will simply be easier in one language than in another.

                        Software can be so beautiful!

                    2. 2

                      This just depends on your priors. Perhaps in general it’s a good assumption that reasonable work will eliminate complexity, but in many situations (for instance, almost any situation where people are routinely paying the cost of extant complexity) it may make sense to adopt a prior which leans more towards “this complexity exists for a reason.”

                      Rust would be a great example of the latter case. It’s a programming language that was intentionally designed to reduce the complexity of programming in domains where C/C++ are used. Since both C and C++ are very complex, it’s reasonable to assume that the design process was complexity-focused. And since Rust was designed in a commercial context, it’s also reasonable to assume that some skin was in the game.

                      These facts suggest that it’s more reasonable to accept that the complexity in Rust is there for a reason.

                      In my career, when I have discovered some apparently complex artifact I’ve almost always found that there were compelling (sometimes non-technical, but nevertheless real) reasons that complexity hadn’t been eliminated.

                      1. 2

                        In my career, when I have discovered some apparently complex artifact I’ve almost always found that there were compelling (sometimes non-technical, but nevertheless real) reasons that complexity hadn’t been eliminated.

                        I think this manages to confuse what I mean by trying to distinguish between “complexity exists for a reason” and “complexity is usually an estimated trade-off between {brains,money,time}” – because for sure the real limitations of our minds and wallets and health are as good a “reason” as any, but I don’t think it’s useful to think about things like this because it suggests thinking of complexity as inevitable.

                        These facts suggest that its more reasonable to accept that the complexity in Rust is there for a reason.

                        They don’t though. These “facts” are really just your opinions, and many of these opinions are opinions about other peoples’ opinions. They’re fine opinions, and I think if rust makes you happy you should be happy.

                        But I for example, don’t think C is very complex.

                        1. 4

                          C is exceptionally complex, in a couple of different ways*. It is one of the most complex programming languages that has ever existed. I am baffled whenever I see someone claim it is not very complex. Rust is massively less complex than C.

                          *: The most relevant one being that in terms of memory management you have actually even more difficulty and complexity in C than the Rust borrow checker, as you have to solve the issues the borrow checker solves but without a borrow checker to do it, and also with much weaker guarantees about the behavior of eg pointers and memory.

                          1. 2

                            That’s fantastic you have a different opinion than me.

                            1. 1

                              you have actually even more difficulty and complexity in C than the Rust borrow checker, as you have to solve the issues the borrow checker solves but without a borrow checker to do it

                              While it is arguably true that you have to solve the same issues with or without a borrow checker (and I could argue with that, but won’t), it’s definitely not true that the borrow-checker is a magical tool that helps you to solve problems without creating any of its own. You had a memory management problem. Now you have both a memory management problem and a proof burden. It may be that the proof framework (burden, tooling, and language for expressing the proof) helps you in solving the memory management problem, but to pretend that there is no tradeoff is unreasonable.
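
                              A minimal sketch of that proof burden: the borrow checker rejects some programs that are in fact fine, and the obligation is discharged through an API that encodes the missing proof (`split_at_mut` here):

                              ```rust
                              fn bump(v: &mut [i32]) {
                                  // Rejected even though the indices are disjoint:
                                  //
                                  //     let a = &mut v[0];
                                  //     let b = &mut v[1]; // error[E0499]
                                  //
                                  // `split_at_mut` encodes the disjointness the checker cannot see:
                                  let (a, b) = v.split_at_mut(1);
                                  a[0] += 10;
                                  b[0] += 20;
                              }

                              fn main() {
                                  let mut v = vec![1, 2, 3];
                                  bump(&mut v);
                                  assert_eq!(v, vec![11, 22, 3]);
                              }
                              ```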

                    1. 1

                      Not sure where I fall on this essay as a whole, but it is not easy to implement a conforming Common Lisp or Scheme. The toy meta-circular evaluator in SICP is just a toy. Both languages are big (CL bigger) and both entail very serious technical challenges.

                      1. 3

                        It’s relatively easy to build a Lisp dialect if you can lean heavily on the host language. I think that’s really the point - most of the early MIT AI lab research, like e.g. PLANNER, CONNIVER and of course Scheme itself, was all done on top of earlier Lisps (mostly Lisp 1.5 or MacLisp, I think). And think what you will of Paul Graham’s Arc, but it was also built on top of another Lisp (Racket, in this case, which, ironically, is now using Chez Scheme as a host language, but didn’t start out that way). You can start small and build it out (Scheme wasn’t as fully featured as it is today, of course). Also, it doesn’t have to be s-expression based; there’s a JavaScript written in Guile and a teaching subset of Java in Racket. Also, Julia is partially written in Scheme. Of course, at least starting out with s-expressions is a lot easier since you don’t need to write a parser.

                        1. 1

                          Common lisp is quite large, and r7rs-large grows by the month; but their core operational models are not huge (this is more true of scheme than cl, given e.g. clos), and an implementation of the interesting bits is not prohibitively difficult.

                          1. 1

                            I’d agree about Scheme except for tail calls and continuations and hygienic macros. From my point of view a Scheme is next to useless without tail calls and syntax-case, the latter of which is definitely not trivial.

                            1. 1

                              So, for context, I knew nothing about hygienic macros before today. I was vaguely aware of their purpose, and of scheme’s syntax for them, but that was it. Here is a quick-‘n’-dirty implementation of syntax-rules for s7 scheme I was able to devise in a little over two hours, based on r7rs and a little googling to explain the behaviour of nested captures (which I think I got right). It doesn’t actually implement hygiene—ironic, but it is fairly trivial—nor a couple of other features, but I do not think any major pieces are missing.

                              I am willing to aver that it is a bit more complicated than I thought, but if someone like me, whose prior exposure to hygienic macros was effectively nil, is able to construct a near-passable implementation in 2 hours, I think my original statement that it is not prohibitively difficult stands.

                        1. 2

                          I’m excited for this. I started looking into pijul recently, to see if I could use it as my main VCS. I think there’s some differences in how the Pijul devs think about version control – at least, different from how I do.

                          They seem to not be too big on branches. I haven’t quite figured this one out yet; it seems pretty widely accepted in the programming world.

                          It seems to be very much written for people who understand the Pijul internals. Doing a pijul diff shows metadata needed if…you are making a commit out of the diff?

                          I would think a “what’s changed in this repository” is a pretty base-level query. They seem to not think it’s especially important; the suggested replacement of pijul diff --short works but is not documented for this. For example, it shows information that is not in pijul diff – namely, commits not added to the repository yet.

                          I also want to see if I can replicate git’s staging area, or have a similarly safe, friendly workflow for interactive committing. It seems like most VCSs other than git don’t understand the use cases for the staging area.

                          1. 3

                            They seem to not be too big on branches. I haven’t quite figured this one out yet; it seems pretty widely accepted in the programming world.

                            Curious about where you got that from, I even wrote the most painful thing ever, called Sanakirja, just so we could fork databases and have branches in Pijul.

                            Now, branches in Git are the only way to work somewhat asynchronously. Branches have multiple uses, but one of them is to keep your work separate and delay your merges. Pijul has a different mechanism for that, called patches. It is much simpler and more powerful, since you can cherry-pick and rebase patches even if you didn’t fork in the first place. In other words, you can “branch after the fact”, to speak in Git terms.

                            I would think a “what’s changed in this repository” is a pretty base-level query

                            So do the authors, they just think slightly differently from Git’s authors. pijul diff shows a draft of the patch you would get if you recorded. There is no real equivalent of that in Git, because a draft of a commit doesn’t make sense.

                            I also want to see if I can replicate git’s staging area

                            One thing you can do (which I find easier than the index) is record and edit your records in the text editor before saving.

                            1. 7

                              (Thanks pmeunier for the interesting work!)

                              I found the discussion of branches in your post rather confusing. (I use git daily, and I used darcs heavily years ago and forgot large parts of it.) And in fact I’m also confused by the “About channels” mention in the README, and the Channels documentation in the manual. I’m trying to explain this here in case precise feedback can be useful to improve the documentation.

                              Your explanation, here and in the manual, focuses on differences in use-cases between Git branches and channels. This is confusing because (1) the question is rather “how can we do branches in Pijul?”, not “what are fine-grained differences between what you do and git branches?”, and because (2) the answer goes into technical subtleties or advanced ideas rather quickly. At the end I’m not sure I have understood the answer (I guess I would if I was very familiar with Pijul already), and it’s not an answer to the question I had.

                              My main use of branches in git is to give names to separate repository states that correspond to separate development activities that should occur independently of each other. In one branch I’m trying to fix bug X, in another branch I’m working on implementing feature Y. Most branches end up with commits/changes that are badly written / buggy / etc., that I’m refining over time, and I don’t want to have them in the index when working on something else.

                              So this is my question: “how do you work on separate stuff in Pijul?”. I think this should be the main focus of your documentation.

                              There are other use-cases for branches in git. Typically “I’m about to start a difficult rebase/merge/whatever, let me create a new branch foo-old to have a name for what I had before in case something blows up”, and sometimes “I want to hand-pick only commits X, Y and Z of my current work, and be able to show them separately easily”. I agree that most of those uses are not necessary in patch-based systems, but I think you shouldn’t spend too much answer surface to point that out. (And I mostly forget about those uses of branches, because they are ugly, so I don’t generally think about them. So having them vaguely mentioned in the documentation was more distracting than helpful.)

                              To summarize:

                              • There is a “good” use-case for branches, namely keeping track of separate development activities on the same repository that should remain independent, and some “bad” use-cases, namely all the rest.
                              • I think that when people ask “how do we do branches?”, they have the good use-case in mind, so please start by answering about this clearly
                              • It’s okay to mention that the bad use-cases are mostly not needed in Pijul anymore, but I think to most people they are an afterthought so I wouldn’t focus on that.

                              The Pijul documentation writes: “However, channels are different from Git branches, and do not serve the same purpose.”. I think that if channels are useful for the “good use case” given above, then we should instead consider that they basically serve the same purpose as branches.

                              Note: the darcs documentation has a better explanation of “The darcs way of (non-)branching”, showing in an example-based way a situation where talking about patches is enough. I think it’s close to what you describe in your documentation, but it is much clearer because it is example-based. I still think that they spend too much focus on this less-common aspect of branches.

                              Finally a question: with darcs, the obvious answer to “how to do branches?” is to simply use several clones of the same repository in different directories of my system, and push/pull between them. I assume that the same approach would work fine with pijul. What are the benefits of introducing channels as an extra concept? (I guess the data representation is more compact, the dcvs state is not duplicated in each directory?) It would be nice if the documentation of channels would answer this question.

                              1. 2

                                So this is my question: “how do you work on separate stuff in Pijul?”

                                This all depends on what you want to do. The reason for your confusion could be that Pijul doesn’t enforce a strict workflow; you can do whatever you want.

                                If you want to fork, then so be it! If you’re like me and don’t want to worry about channels/branches, you can as well: I do all my reviewing work on main, and often write drafts of patches together in the same channel, even on independent features. Then, I can still push and pull whatever I want, without having to push the drafts.

                                However, if you prefer a more “traditional” Git-like way of working, you can do that too. The differences between these two ways aren’t as huge as a Git user would think.

                                Edit: I do use channels sometimes, for example when I want to expose two different versions of the same project, say if that project depends on a fast-moving library and I want to have a version compatible with each of the different versions of that library.

                                1. 2

                                  But if you work on different drafts of patches in the same channel, do they apply simultaneously in your working copy? I want to work on patches, but then leave them on the side and not have them in the working copy.

                                  Re. channels: why not just copy the repository to different directories?

                                  1. 1

                                    They do apply to the same working copy, and you may need multiple channels if you don’t want to do that.

                                    Re. channels: why not just copy the repository to different directories?

                                    Channel fork copies exactly zero bytes; copying a repository might copy gigabytes.

                              2. 1

                                I use git and don’t typically branch that much. All a branch is is a sequence of patches, and since git lets me chop and slice patches in whatever way I want to, it usually seems like overkill to create branches for things. Just make your changes and build the patch chains you want, when you want to, how you want to.

                                1. 1

                                  Then you might feel at home with Pijul. Pijul will give you the additional ability to push your patches independently from each other, potentially to different remote channels. Conversely, you’ll be able to cherry-pick for free (we simply call that “pulling” in Pijul).

                              3. 1

                                They seem to not think it’s especially important; the suggested replacement, pijul diff --short, works but is not documented for this.

                                A bit lower in the conversation the author agrees that a git status command would be useful but they don’t have the time to work on it at the time of writing. My guess is that it is coming and the focus is on a working back-end at the moment.

                              1. 2

                                Gosh darn I’d love to have a reason to write something in Factor. It’s an amazing language.

                                1. 3

                                  I wrote a 4-function calculator in it once https://junglecoder.com/blog/factorlang-review

                                  It’s quite interesting as a programming language, and an environment, but it also expects quite a lot of the developer, IMO. Which isn’t good or bad, but something to be aware of.

                                1. 5

                                  This suggests the schemas are working exactly as intended. Given a set of rules a qualified entry will follow, they make sure innocent mistakes that break the norm don’t happen.

                                  That’d be why we use schemas. They are working exactly as designed.

                                  Of course the corollary to this reality is stated in the post. If you don’t care, don’t care. Don’t put schemas around things where you actually don’t care.

                                  1. 11

                                    Yes. The schema is forcing the data to be in a certain format; it’s forcing people to provide a first name, middle name and last name (or first name and last name). The very problem is that such a schema doesn’t conform to reality. Your schema exists to model reality. The problem being complained about in this article and many like it is that the schema poorly models reality, because a whole lot of people don’t have names which fit the schema.
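The mismatch is easy to make concrete. A hypothetical validator that encodes the “first name + last name” assumption (the regex below is purely illustrative, not from any real system) rejects perfectly real names:

```javascript
// Hypothetical schema encoding the "First Last" assumption
const nameSchema = /^[A-Z][a-z]+ [A-Z][a-z]+$/;

console.log(nameSchema.test("Ada Lovelace"));     // true: fits the model
console.log(nameSchema.test("Björk"));            // false: mononym, non-ASCII letter
console.log(nameSchema.test("Guido van Rossum")); // false: three parts, lowercase particle
```

The schema is internally consistent; it just models a narrower world than the one its users live in.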

                                    I don’t understand what you’re trying to say.

                                    1. 6

                                      See my comment above about “Seeing Like A State”. Schemas don’t exist just to model reality, they exist to create and enforce a particular kind of reality. Companies and states don’t want you to have total freedom to identify yourself because such a freedom imposes extra costs on their endeavors to enforce laws (very arguably good) or exploit you (bad).

                                      1. 1

                                        Reality includes bad actors. Schemas don’t exist to enforce reality; they exist to preclude the easiest sort of bad actions (willfully disregarding reality).

                                        When reality and good actors conflict with schemas, there are tickets to open, sure. There can be no innate expectation that all of the acceptable inputs are permitted by the schema, only that the easily bad ones are blocked.

                                    1. 7

                                      Xe doesn’t mention this but is surely aware of this being an aspect of power. It isn’t just that people make cultural assumptions about names or are simply lazy. A name is one of the primary ways a state or similar system rationalizes you into a resource. If everyone wanted to change their name or the nature of their name every year, the job of tracking ownership, collecting taxes, enforcing contracts, collecting and selling your personal data, would all become much more expensive.

                                      https://en.wikipedia.org/wiki/Seeing_Like_a_State

                                      In the end it’s about achieving economies of scale.

                                      This is unhuman at its best and inhumane at its worst, but it’s also the ocean in which we all swim in terms of our political and economic circumstances. It will take tremendous effort and thought to solve this kind of problem.

                                      1. 3

                                        It’s a little disingenuous to say that this behavior indicates that Python has “pointers”, since most of the other accoutrements of pointers (pointer arithmetic, for instance) are missing. Better to say Python has “references” and that sometimes they can trip you up.

                                        1. 2

                                          They also sometimes enable substantial memory savings, as long as you understand that you’re dealing with a reference or view onto underlying shared, mutable buffers. Not going to argue that it isn’t more often a footgun though.

                                        1. 1

                                          This would drive me crazy. Reading anything written by an AI grates profoundly on my nerves.

                                          1. 3

                                            This is so cool. Although not having a GC is kind of cheating!

                                            As a LISP dummy, I didn’t understand this bit:

                                            Here it becomes clear that, in its most bare essential form, beneath the macros and abstractions, LISP actually has an unpleasant nature where name bindings (or assignments) look like a venus fly trap.

                                            My guess is it means that once a symbol is bound it can’t be unbound (or re-bound?)

                                            This is probably the best evidence that LISP is a natural discovery rather than something someone designed, since no one would choose to design something unpleasant.

                                            This may be sarcasm? I’ll have to try it as an excuse when something I design has problems — “I didn’t design it so much as discover it, so it’s not my fault.”

                                            1. 10

                                              My guess is it means that once a symbol is bound it can’t be unbound (or re-bound?)

                                              Names are still scoped and they go out of scope, since each recursion of the evaluator has its own pointer to a. The commentary on bindings is purely a question of notation aesthetics. Not having GC only means that physical memory isn’t reclaimed, so the interpreter’s lifespan can be limited based on whatever arbitrary memory limit the system may have, sort of like how a Turing machine is universal if you imagine it as having unlimited tape.

                                              I’ll have to try it as an excuse something I design has problems — “I didn’t design it so much as discover it, so it’s not my fault.”

                                              Natural discovery is a huge compliment, intended to put JMC on a pedestal with guys like Newton who also discovered profound concepts. I thought it was tragic, reading the history of LISP, how much guilt JMC had about the struggle and political pressure to dress it up like FORTRAN. Everyone understands and accepts that there are certain unpleasant things about nature. It’s why, for example, we live in homes rather than sleeping in the woods without shelter. The thing we’ve always prided ourselves on the most, as humans, which is also the thing that makes us human, is our ability to understand nature, thereby giving us the means to rise above it. Just like how every language that evolved out of LISP has defined its own macros and abstractions to make it better.

                                              1. 2

                                                “The thing we’ve always prided ourselves on the most, as humans, which is also the thing that makes us human, is our ability to understand nature, thereby giving us the means to rise above it.”

                                                This is really an absurdly modern and culture-bound idea. Even the ideas it depends on aren’t more than 500 years old.

                                                1. 2

                                                  Veering way off topic, but while I agree this POV is not universal, I’d say it goes back at least 2500 years or so, viz the Genesis account of God giving humans dominion over nature.

                                                  And the human behavior of figuring out and altering nature goes back much further, to epic hacks like stone tools, fire and agriculture.

                                                2. 1

                                                  it was tragic reading the history of LISP how much guilt JMC had

                                                  Can you share pointers to the history you read? I’d like to add that to my pile of (too-)many computer history books.

                                                3. 5

                                                  Here it becomes clear that, in its most bare essential form, beneath the macros and abstractions, LISP actually has an unpleasant nature where name bindings (or assignments) look like a venus fly trap.

                                                  I think this has to do specifically with the method of using associative lists as the environment, (where definitions are stored) the best explanation of which I’ve come across being found in the book The Little Schemer.

                                                  This is in some sense “essential” in that it’s the most primitive way to do it, and is in line with how it has been done historically. But it is not at all “essential” in the sense that it has anything to do with “the essence of lisp”, or that lisps which choose to represent the environment in a more performant way are “not a real lisp”. Honestly, I don’t think “essential” is a good word to describe it; “fundamental” or “primitive” would be much clearer.
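For readers who haven’t seen it, the association-list environment being described can be sketched in a few lines (JavaScript here rather than Scheme, purely for illustration): the environment is a linked list of bindings, extending it prepends a frame, and lookup walks the list, so inner bindings shadow outer ones.

```javascript
// An environment is a linked list of frames: { name, value, rest }.
// Extending it prepends a frame; the old environment is shared, not copied.
const extend = (env, name, value) => ({ name, value, rest: env });

// Lookup walks the chain front to back, so the most recent binding wins.
function lookup(env, name) {
  for (let e = env; e !== null; e = e.rest) {
    if (e.name === name) return e.value;
  }
  throw new Error("unbound variable: " + name);
}

const outer = extend(null, "a", 1);
const inner = extend(outer, "a", 2); // shadows the outer binding of "a"

console.log(lookup(outer, "a")); // 1
console.log(lookup(inner, "a")); // 2
```

This is O(n) per lookup, which is exactly why it reads as “primitive”: real implementations replace it with hash tables or compiled lexical addressing without changing the semantics.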

                                                  1. 4

                                                    Author here. I’ve reworded things slightly and I think it comes across much better. Please take a look?

                                                    1. 2

                                                      I reloaded the page but this section still reads the same, with the same reference to Venus’ Fly-Trap. Could you explain that metaphor? It’s a plant that snaps shut on insects … how does that relate to name binding or associative lists?

                                                      1. 1

                                                        I’m sorry you didn’t like the article! I hope you found my response to your earlier feedback helpful.

                                                        1. 2

                                                          I liked the article quite a bit! Even forwarded it to my kid. Just wasn’t clear on that one simile :)

                                                1. 3

                                                  Call me when it runs iMessage.

                                                  1. 1

                                                    It won’t ever. iMessage is bound to iOS hardware identifiers for authentication. It’s not the porting / RE that’s hard in that case, it’s that you can’t use it without real Apple hardware.

                                                    1. 2

                                                      It’s definitely possible to run iMessage on a Hackintosh, where you just need a valid combination of device and logic board serial numbers and a few other magic boot loader values—there are tools to automate generating these. On the other hand, getting Messages.app plus all of the frameworks it requires to run on Linux wouldn’t be easy (remember that iMessage makes heavy use of the system keychain, APNs, and other fancy stuff that you can’t easily reimplement without pulling in half of macOS).

                                                      1. 1

                                                        OK, I phrased that wrong - you need to get an iPhone, but you can copy the identifiers out, which decreases the number of people interested in that solution since… they already have an iDevice.

                                                        I haven’t heard of anyone generating new valid numbers though. Have you got a link?

                                                        1. 1

                                                          iMessage itself (the blue bubbles) still works just fine without an iPhone, you just don’t have an associated phone number to receive messages at without using your email address.

                                                          The serial numbers and SMBIOS stuff are for emulating a real Mac; they don’t come from an iPhone. The process is a bit more of a pain if you’ve never associated your Apple ID with a real Apple device or spent real money on the App Store (you usually have to make a purchase or call support to get your account permitted to use iMessage so as to cut down on spam), but it’s certainly possible.

                                                          Here’s a more detailed link on that topic: https://dortania.github.io/OpenCore-Post-Install/universal/iservices.html

                                                  1. 3

                                                    I ported a relatively large game from JS to Gambit-C. I could post about it if anyone cares.

                                                    1. 1

                                                      I’d be interested.

                                                    1. 1

                                                      I’ve been using Emacs for about 18 years. When I started I devoted considerable time to my emacs configuration. Oddly, now I almost always cruise with as close to the bare bones configuration as possible, although I do install packages from melpa for whatever tasks I am up to. Other than that, though, I barely touch my init.el.

                                                      1. 18

                                                        All I want in my technical life at this point is to use less google software.

                                                        1. 3

                                                          I understand staying away from Chrome, Gmail, and Android due to the surveillance, but what’s inherently wrong with a new OS kernel, even if it is Google that’s developing it?

                                                        1. 23

                                                          I enjoyed this quite a bit when I first saw it, and I still do kinda enjoy the original - it was never meant to be taken too seriously of course and it succeeds as a joke - but since then, the word “wat” has become a trigger of rage for me.

                                                          These things tend to have a reasonable explanation if you take the time to understand why it does what it does. They tend to be predictable consequences of actually generally useful rules, just used in a different way than originally intended and coming out a bit silly. You might laugh but you can also be educated.

                                                          But you know how often I see people taking the time to try to understand the actual why? Not nearly as often as I see people saying “WAT” and just dismissing things and calling the designers all kinds of unkind things. And that attitude is both useless and annoying.

                                                          1. 38

                                                            These things tend to have a reasonable explanation if you take the time to understand why it does what it does. They tend to be predictable consequences of actually generally useful rules, just used in a different way than originally intended and coming out a bit silly.

                                                            For me the most important takeaway is that rules might make sense by themselves, but you have to consider them in the bigger picture, as part of a whole. When you design something, you must keep this in mind to avoid bringing about a huge mess in the completed system.

                                                            1. 4

                                                              Exactly. It is generally underappreciated how incredibly hard language design is. The cases Bernhardt points out are genuine design mistakes, not just the unfortunate side effects of otherwise reasonable decisions.

                                                              That’s why there are very few languages that don’t suffer from ugly corner cases which don’t fit into the whole or turn out to have absurd oddities. Programming languages are different, contrary to the “well, does it really matter?” mindset.

                                                              1. 3

                                                                I don’t know. What I always think about JS is that R4RS Scheme existed at the time JS was created and the world would be tremendously better off if they had just used that as the scripting system. Scheme isn’t perfect but it is much more regular and comprehensible than JS.

                                                                1. 3

                                                                  I think one has to remember the context in which JavaScript was created; I’m guessing the main use case was to show some funny “alert” pop-ups here and there.

                                                                  In that context a lot of the design decisions start to make sense; avoid crashes whenever possible and have a “do what I mean” approach to type coercion.

                                                                  But yeah, I agree; we would’ve all been better off with a Scheme as the substrate for maybe 80% of today’s end-user applications. OTOH, if someone had told Mozilla how successful JS would become, we could very well have ended up with some bloated, Java-like design-by-committee monstrosity instead.

                                                                2. 2

                                                                  I don’t think I know a single nontrivial programming language with no “unexpected behaviour” (brainfuck or assembly might be the trivial exceptions).

                                                                  But some just have more than others. Haskell, for example, has a lot of these unexpected behaviours, but you tend not to stumble into those corner cases by mistake, while in JavaScript and Perl it is more common to see such “surprise behaviour” in the wild.

                                                                  Another lesson I gather from this talk is that you should stick as much as possible to well-known territory if you want to predict the behaviour of your program. In particular, try not to play too much with “auto coercion of types”. If a function expects a string, I tend not to give it a random object, even if, when I tried it, it performed string coercion that was most of the time what I would expect.

                                                                  1. 1

                                                                    Well, there are several non-trivial languages that try hard not to surprise you. One should also distinguish between “unexpected behaviour” and convenience features that turn out to be counterproductive by producing edge-cases. This is a general problem with many dynamically typed languages, especially recent inventions: auto-coercion will remove opportunities for error checking (and run-time checks are what make dynamically typed languages type-safe). By automatic conversion of value types and also by using catch-all values like the pervasive use of maps in (say) Clojure, you effectively end up with untyped data. If a function expects a string, give it a string. The coercion might save some typing in the REPL, but hides bugs in production code.

                                                                3. 3

                                                                  In JavaScript, I would call the overloading of the + operator and the optional-semicolon rules unforced errors in the language, and those propagate through to a few other places. Visual Basic used & for concatenation, and it was very much a contemporary of JS when it was new; JS surely just copied Java’s design (which I still think is a mistake, but less so given Java’s type system).

                                                                  Anyway, the rest of the things shown in the talk I actually think are pretty useful and not much of a problem when combined. The NaNNaN Batman one is just directly useful - it converts a thing that is not a number to a numeric type, so NaN is a reasonable return, then it converts to string to join them, which is again reasonable.
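The chain is reproducible in any JS console (the classic line from the talk):

```javascript
// Subtraction forces numeric conversion; "wat" isn't a number, so we get NaN,
// which is itself a value of type "number".
console.log(typeof ("wat" - 1)); // "number"

// Array(4) has three gaps to fill, and join() coerces its separator to a string.
console.log(Array(4).join("wat" - 1) + " Batman!"); // "NaNNaNNaN Batman!"
```

Each step follows an ordinary rule; only the composition looks absurd.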

                                                                  People like to hate on == vs === but…. == is just more useful. In a dynamic, weakly typed language, things get mixed. You prompt for a number from the user and technically it is a string, but you want to compare it with numbers. So that’s pretty useful. Then if you don’t want that, you could coerce or be more specific and they made === as a shortcut for that. This is pretty reasonable. And the [object Object] thing comes from these generally useful conversions.
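The prompt-for-a-number case above looks like this in practice (a minimal sketch; prompt() is replaced with a string literal since it is browser-only):

```javascript
// prompt() always returns a string; pretend the user typed 42.
const input = "42";

console.log(input == 42);  // true: == coerces the string to a number first
console.log(input === 42); // false: === checks the type, string !== number

// The same coercion rules are where the surprises live:
console.log("" == 0);   // true: "" coerces to 0
console.log("" == "0"); // false: two strings compare as strings
```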

                                                                  1. 3

                                                                    == vs ===

                                                                    It definitely makes sense to have multiple comparison operators. Lisp has = (numeric equality), eq (object identity), eql (union of the previous two), equal (structural equality).

                                                                    The problem is that js comes from a context (c) in which == is the ‘default’ comparison operator. And since === is just ==, but more, it is difficult to be intentional about which comparison you choose to make.

                                                                    1. 1

                                                                      Well, a lot of these things boil down to implicit type coercion and strange results due to mismatched intuitive expectations. It’s also been shown time and again (especially in PHP) that implicit type coercions are lurking security problems, mostly because intuition does not match reality (especially regarding == and the bizarre coercion rules). So perhaps the underlying issue of most of the WATs in here simply is that implicit type coercion should be avoided as much as possible in languages because it results in difficult to predict behaviour in code.

                                                                      1. 1

                                                                        Yeah, I prefer a stronger, static type system, and that’s my first choice in languages. But if it is dynamically typed… I prefer it weaker, with these implicit coercions. It is absurd to me to get a runtime error when you do something like var a = prompt("number"); a - whatever; A compile-time error, sure. But a runtime one? What a pain, just make it work.

                                                                        1. 3

                                                                          Lots of dynamic languages do this (e.g. Python, Ruby, all Lisp dialects that spring to mind), and IME it’s actually helpful in catching bugs early. And like I said, it prevents security issues due to type confusions.

                                                                  2. 10

                                                                    Yeah, I think that this talk was well-intentioned enough, but I definitely think that programmers suffer from too much “noping” and too little appreciation for the complexity that goes into real-world designs, and that this talk was a contributor… or maybe just a leading indicator.

                                                                    1. 6

                                                                      There was a good talk along these lines a couple of years ago, explaining why javascript behaves the way it does in those scenarios, and then presenting similar ‘WAT’s from other languages and explaining their origins. Taking the attitude of ‘ok this seems bizarre and funny, but let’s not just point and laugh, let’s also figure out why it’s actually fairly sensible in context’.

                                                                      Sadly I can’t find it now, though I do remember the person who delivered it was associated with ruby (maybe this rings a bell for somebody else).

                                                                      1. 1

                                                                        Isn’t the linked talk exactly the talk you’re thinking about? Gary is ‘associated with’ Ruby and does give examples from other languages as well.

                                                                        1. 2

                                                                          No. I was thinking of this, linked else-thread.

                                                                      2. 3

                                                                        While things might have an explanation, I do strongly prefer systems and languages that stick to the principle of least surprise: if your standard library has a function called ‘max’ that returns the maximum value in an array and a function called ‘min’ that returns the position of the minimum element instead, you are making your language less discoverable and putting a lot of unnecessary cognitive load on the user.

                                                                        As someone who has been programming for over 20 years and is now a CTO of a small company that uses your average stack of like 5 programming languages on a regular basis I don’t want to learn why anymore, I just want to use the functionality and be productive. My mind is cluttered with useless trivia about inconsistent APIs I learned 20, 15, 10 years ago, the last thing I need is learning more of that.

                                                                      1. 19

                                                                        You’ve rediscovered Petri nets (WP, nLab).

                                                                        It’s a cycle, not a straight line. … In fact, the more we looked at it, the more we realized that there never was really any such thing as cause and effect, it was simply a useful fiction. … We are forced to use cause and effect because we can’t process things otherwise.

                                                                        Cause and effect come from the causal structure of spacetime. It is true that our personal perceptions of significant causes are fictional, but the physical transfer of information between regions of spacetime does induce a causet, a special case of DAGs.

                                                                        This is crucial for demystifying the topic. Actions in space can form feedback loops, but actions in time are acyclic. There is a deep connection arising from the idea that each feedback loop can be decomposed into discrete actions; it does not mean that the concept is bullshit, but that a feedback loop requires both space and time in order to function. For a vivid graphical example, look at the animations for circular polarization; we see moving waves in time and static circles in space.

                                                                        As far as we know, every atom in the universe is gravitationally affected by every other atom.

                                                                        Once we consider that time and space behave differently, then the possibility of gravitational waves arises, and we have experimentally determined the speed of gravity. It is not enough to notice that everything is connected; everything is moving in time, creating waves and signals.

                                                                        1. 2

                                                                          No, I didn’t want to include them, but you are correct that we’re talking about Petri nets, or rather a form of them.

                                                                          Great dive-down on the science side of things. I may steal this and expound on it. I could take physical systems to this lower level and write the same essay. Not sure if this would make it easier or more difficult to consume.

                                                                          I am interested in the way you misunderstood me. Thank you.

                                                                          1. 11

                                                                            I read through the essay and I don’t think I understand what you’re trying to say either. Is your argument that mathematical models are always simplifications? Or that programming languages and flow charts are a poor way of describing the behavior of system dynamics?

                                                                            1. 1

                                                                              Formal, consistent, computable, complete symbolic systems (we’ll just call that “math”) are by nature context-free. In other words, 1+1=2 is a useful statement in many cases no matter what the actual numbers and symbols represent. We are quite lucky in that vast swaths of mathematics map directly over to observed reality. However, it’s humans and our language ability that does that mapping, not the math. The math stays the same. We’re the one attaching labels and doing the mapping. That turns out to be much more important than many people realize.

                                                                              Everything we’ve learned so far says that life is multivariate and complex, working through probability webs. We’re stuck, however, with describing things in simple ways: language, text, pictures, graphs, etc. Such is the nature of human communications. [yes, we are able to string together long chains of this type of communication into more complex things. The purpose here is to discuss the process of commonly reasoning about and discussing complex things, not whether they are ultimately knowable or not] This means that what we consistently do, as in the SAFe example, is put together various words we have into some sort of pseudo-mathematical diagram. Then we treat such sets of symbols as being as reliable as the 1+1=2 statement we mentioned earlier.

                                                                              This leads us to all sorts of problems. I can look at a flowchart, for instance, and reason about cause and effect. Looking at abstractions and guessing the details, after all, is one of the things that makes us intelligent. But the exact thing I map the flowchart to determines whether or not it maps well to reality. One drop of water plus another drop of water equals just a bigger drop of water. My first statement wasn’t wrong. It depends completely on how people map the words to the situation. We’re off-the-rails already with simple math. It gets worse from there.

My argument is neither. Mathematical models are not always simplifications; they exist outside of our personal conception of reality. I don’t think the labels “true”, “false”, and so on have any application here. Yes, programming languages and flow charts are a poor way to describe system dynamics, but math [as defined previously] is all we have and all we’re ever going to get.

                                                                              This is not a nihilistic position. It’s just a restatement of the problem so that I can continue forward with a solution. If folks don’t agree on the problem, they’ll never agree on various proposed solutions. Math is very useful to explore reality. Yay science. Now I need to explain how that’s done.

                                                                              1. 3

                                                                                Everything we’ve learned so far says that life is multivariate and complex, working through probability webs.

                                                                                Probability webs are just another mathematical formulation of the system. If you really believe that formal mathematical methods are inadequate to describe the world then there isn’t any particularly good reason to think that describing things as “probability webs” is any better than any other descriptions.

                                                                                Perhaps rather than deliver your critique in such a way as to suggest you think there is a problem with mathematization or modeling, you could just say that you think DAGs or other structures are inadequate to most common engineering tasks. That indeed seems to be what you are getting at later in the essay, but the introductory philosophical material really confuses and obscures the point.

                                                                                1. 2

                                                                                  I don’t think he’s critiquing mathematical structures so much as highlighting the fact that any structure that truly fully models a problem will be too large for us to comprehend.

                                                                                  1. 2

                                                                                    Yeah, but that isn’t really true except in a trivial kind of way. There are many, many, many problems so amenable to mathematical and/or computational modeling that humans will almost certainly never run out of them.

                                                                                    1. 2

You are correctly identifying that for most cases we can definitely model a “good enough” system to make progress. But it’s also certainly true that you are ignoring, to good effect, a very large part of the system when you do that. It’s just that understanding those parts usually isn’t necessary to make progress on the problem.

                                                                                      1. 1

                                                                                        You may be interested in this interview with a cognitive neurologist: https://www.youtube.com/embed/4HFFr0-ybg0

                                                                                        Isomorphism and correlation work great for all kinds of things. We should never get them confused with what’s actually happening.

                                                                                  2. 3

                                                                                    In my book ‘Into the Sciences’ (https://madhadron.com/into_the_sciences.html) I called out the idea of primitive notions, which are the elements of a theory we use mathematics or other formal language to relate, and that we use trials to measure values of. We have some notion of an idealized trial that links the primitive notions to the world, and reasonable practice to do our best to approximate idealized trials.

                                                                                    In terms of communicating this stuff, the key notion I have found is from Wittgenstein’s ‘Philosophical Investigations’: our communication is made up of language games and we have a capacity for rule following to try to cooperate in playing those games. Communicating an understanding of systems is about leading someone reproducibly into the same language games that you are using. ‘Philosophical Investigations’ is one of the best books on software design I’ve ever read, honestly.

                                                                                    1. 1

                                                                                      Most def Wittgenstein is the go-to guy here. Having said that, I don’t think his work stands alone without a similar look at C.S. Peirce. Peirce’s tripartite semiotics really closes the loop between interior and exterior qualia. It is our desire and need to assign symbols that both allows mathematics and limits our ability to use it as well as we might. It is the language games we play that make it all hang together.

                                                                                      Thanks for the book link.

                                                                              2. 2

                                                                                … but actions in time are acyclic

                                                                                Until you introduce economics / strategy / betting.

                                                                                1. 2

                                                                                  Is that the case? When I make a prediction, I’m doing so with only past knowledge. When I think many moves ahead, that’s still an event that happens at the present time.

                                                                                  1. 1

You speak in the first person with a concept of “present time”, but time is relative with many actors. It can be cyclic when you have an economy, because you are “pricing in” all available information and equilibria are formed by routine-following zealots on each side. Really you could say the theory of yin and yang is a “cyclic time” concept… but I like things like “the one-electron theory”, so I guess it depends how you want to model things.

                                                                                2. 1

                                                                                  Thanks for this. I’ve been using DAGs as a model for causal relationships for a while, but I was perturbed that cycles often appeared when I’d try to use them to communicate about process. Your comment helped me separate the idea of a process plan from how its instances play out over time.

                                                                                  1. 2

                                                                                    Yeah. Sometimes this is called interactivity. We could imagine two processes evolving in spacetime such that they meet for a while, then move apart in space for a while, and then meet again. Each time they meet, they have a window of spacetime where they can interact with each other, exchanging information and quanta. The cycles that you saw were topological indicators of where the interactive portions must occur.

                                                                                1. 7

                                                                                  This is a fascinating read! I had no idea this was possible.

                                                                                  However, I would caution against generalizing here to lisps more broadly; the ability to embed a function value directly in a macroexpansion seems to be a quirk of CL and Janet as far as I can tell; even other lisps sharing close ancestry with CL like Emacs Lisp don’t support it.

                                                                                  1. 6

                                                                                    Turns out I had made a typo and it does in fact work in Clojure.

However, the rationale for doing it does not really apply in Clojure, since the macro system integrates smoothly with the namespace system and backquote fully qualifies all symbols by default with the namespace in which the intended function is found. So while it’s possible to use this technique, it’s a solution for a problem that doesn’t exist: introducing shadowed names in the context of the macro caller cannot cause the macroexpansion to resolve to the wrong function.

                                                                                    1. 3

                                                                                      Or is the implicit namespace-qualification a solution to a problem that doesn’t exist? :)

                                                                                      Common Lisp does the same thing, actually — maybe Clojure copied this from Common Lisp (?). It is a totally valid solution, but (at least in Common Lisp; not sure if Clojure does something more clever) you can still run into issues if your macros are defined in your own package, reference functions in that same package, and are also expanded in that same package — everything is in the same namespace. Which like… yeah then you oughtta know what your macros look like, I guess. But “lexically scoped macros” or whatever work regardless of the namespace structure.

                                                                                      (Also, strong caveat: I have no idea what I’m actually talking about and am basing those statements on what I read in On Lisp and have never written production lisp in my life.)

                                                                                      1. 5

                                                                                        It is a totally valid solution, but (at least in Common Lisp; not sure if Clojure does something more clever) you can still run into issues if your macros are defined in your own package, reference functions in that same package, and are also expanded in that same package — everything is in the same namespace.

Yeah, this doesn’t happen at all in Clojure. Even if you’re referencing something from the current namespace it gets fully expanded into an unambiguous reference in the quoted form. It’s basically impossible to write an unhygienic macro in Clojure unintentionally.

                                                                                        1. 6

                                                                                          It has its weird issues, though. You can unintentionally write a macro that doesn’t want to expand due to hygiene errors:

                                                                                          (ns foobar)
                                                                                          
                                                                                          (def x 10)
                                                                                          
                                                                                          ;; ...Perhaps a lot of code...
                                                                                          
                                                                                          (defmacro foo [arg]
                                                                                            `(let [x 1]
                                                                                               (+ x 1)))
                                                                                          

                                                                                          If you try to use foo, it will complain that the x in the let bindings is not a “simple symbol” (because it gets expanded to (let [foobar/x 1] (+ foobar/x 1)) which is thankfully not valid). And fair enough, you will hit this issue as soon as you try to use the macro, so it should be relatively easy to debug.

                                                                                          Also, the system breaks down when you’re trying to write macro-writing macros. Something like this simply fails with the same error, that foo is not a “simple symbol”:

                                                                                          (defmacro make-foo []
                                                                                            `(defmacro foo [arg]
                                                                                               `(let [y 1]
                                                                                                  (+ y 1))))
                                                                                          

                                                                                          The same happens if you change make-foo to accept the name of the macro but still use quasiquotation (not exactly sure why that is, though). The only thing that seems to work is if you convert the let to a manual list building exercise:

(defmacro make-foo [name]
  (let [y-name 'y]
    (list 'defmacro name ['arg]
          (list 'let [y-name 1]
                (list '+ y-name 'arg)))))
                                                                                          
                                                                                          (make-foo bar)
                                                                                          (bar 2) => 3
                                                                                          

                                                                                          But this breaks down as soon as you try to pass in identifiers as arguments:

                                                                                          (let [x 1] (bar x)) ;; Error: class clojure.lang.Symbol cannot be cast to class java.lang.Number
                                                                                          
                                                                                          1. 2

                                                                                            You can unintentionally write a macro that doesn’t want to expand due to hygiene errors:

                                                                                            That’s kind of the whole point; you made an error (bound a symbol without gensym) and the compiler flagged it as such. Much better than an accidental symbol capture.

                                                                                            Something like this simply fails with the same error, that foo is not a “simple symbol”

                                                                                            Yeah, because it’s anaphoric. The entire system is designed around getting you to avoid this. (Though you can fight it if you are very persistent.) The correct way to write that kind of macro is to accept the name as an argument (as you did in the second version) but your second version is much uglier than it needs to be because you dropped quasiquote unnecessarily:

                                                                                            (defmacro make-foo [name]
                                                                                              `(defmacro ~name []
                                                                                                 `(let [y# 1]
                                                                                                    (+ y# 1))))
                                                                                            
                                                                                            1. 3

                                                                                              Thanks for explaining how to make this work, I stand corrected!

                                                                                          2. 3

That’s an elegant solution to hygiene. I might have to give this Clojure language a try; it sounds pretty great!

                                                                                            Are there other Lisps that work this way, or is Clojure unique in this regard?

                                                                                            1. 4

Both Clojure’s and Common Lisp’s macro systems seem like a huge kludge after learning syntax-case.

                                                                                              1. 2

                                                                                                Fennel works similarly in that it prevents you from using quoted symbols as identifiers without gensym/auto-gensym. However, it does not tie directly into the namespace system (because Fennel is designed to avoid globals and its modules are very different from Clojure namespaces anyway) but works entirely lexically instead, so if you want a value from a module, your macroexpansion has to locally require the module.

                                                                                                https://fennel-lang.org/macros

                                                                                            2. 1

                                                                                              What happens in Janet if you rebind the injected variable to a different value? It seems to me that this shouldn’t work in the general case. Also, I don’t see how this could work if you inject a variable which is declared later in the file.

                                                                                              1. 1

Janet inlines values: you can’t redefine something that isn’t specifically a var; if it is a var, it is accessed via indirection.

                                                                                        1. 14

                                                                                          John Earnest says on reddit (I agree):

                                                                                          Alternatively, look to APL (k, j, q), Smalltalk, Forth (factor, postscript), and Lisp: all of these languages offer their own take on uniform operator precedence. It’s always obvious to a reader, without any need to memorize. It’s also simpler for the machine to parse. Solve the problem by removing it.

Despite having written a C parser, I frequently get its precedence confused.
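
To make the quoted idea concrete, here’s a minimal sketch using Python function calls as a stand-in for Lisp-style prefix notation (the names come from the standard operator module):

```python
from operator import add, mul

# Infix "2 + 3 * 4" requires remembering that * binds tighter than +.
# Prefix notation makes the nesting structural; there is nothing to memorize.
result = add(2, mul(3, 4))  # the Lisp equivalent of (+ 2 (* 3 4))
print(result)  # → 14
```

The order of application is visible in the nesting itself, which is exactly what uniform precedence buys you.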

                                                                                          1. 9

I’ve hit bugs caused by people getting it wrong and (much more often) I’ve had to consult a table to find out which way it works. It’s pretty easy to remember that multiply and divide are higher precedence than add and subtract, since we learned that at school and were tested on it over many years. It’s much harder to remember the relative precedence of pre-increment and post-increment, or of bitwise AND and left shift. I consider a language trying to make me remember these things to be a bit of a usability mistake, and I generally just put brackets everywhere.
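
As a concrete sketch of the shift case (Python happens to share C’s relative precedence of shift and addition, so the same surprise is reproducible there):

```python
# Addition binds tighter than left shift, which regularly surprises people.
surprising = 1 << 2 + 3    # parsed as 1 << (2 + 3)
explicit = (1 << 2) + 3    # what a casual reader might expect
print(surprising, explicit)  # → 32 7
```

The explicit brackets cost nothing and remove the need to remember the table at all.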

                                                                                            In Verona, we’re experimenting with no operator precedence. a op1 b op2 c is a parse error. You can chain sequences of the same operator but any pair of different operators needs brackets. You can write (a op1 b op1 c) op2 d or a op1 b op1 (c op2 d) and it’s obvious to the reader what the order of application is. This is a bit more important for us because we allow function names to include symbol characters and allow any function to be used infix, so statically defining rules for application order of a append b £$%£ c would lead to confusion.

                                                                                            1. 5

                                                                                              just put brackets everywhere.

                                                                                              This is my go-to approach as well not just because I can’t remember some of the less-frequently-used precedence rules, but also because I assume the person maintaining the code after me will be similarly fuzzy on those details and making it explicit will make my code easier to understand. For that reason, I often add parentheses even in cases where I am quite certain of the precedence rules.

                                                                                              1. 2

                                                                                                Not all binary operators are associative, though.

                                                                                                1. 2

That’s true. For a single operator, the evaluation order is exactly the same as for other expressions (left to right), so the rule should be easy to remember, even if not all evaluation orders would give the same result.
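
A small illustration of that left-to-right rule with a non-associative operator (shown in Python, but the same holds in most infix languages):

```python
# Subtraction is left-associative: a - b - c means (a - b) - c.
chained = 10 - 3 - 2      # (10 - 3) - 2
regrouped = 10 - (3 - 2)  # grouping the other way changes the result
print(chained, regrouped)  # → 5 9
```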

                                                                                            1. 6

                                                                                              The best operator precedence is no operator precedence. The fact that people get hung up on this is nuts to me, although I can obviously understand it.