Threads for varjag

  1. 3

    the Memory Protection Unit. This piece of hardware in many ARM and RISC-V chips allows the kernel to lock down whether various segments of memory are readable, writable, and executable.

    To clarify, is this the bona fide MMU, with S-mode/EL1 fine-grained page tables/trees? Or is this the PMP (not sure of the ARM term) with M-mode/EL3 coarse(r)-grained memory ranges?

    And see also Tock, also written in Rust and using PMP-style memory protection.

    1. 9
      1. 2

        OK, so the second style.

    1. 3

      On Thursday am off to Norway’s national championship in 10m air pistol. Bit of a dread as it’s my first time at a high level competition.

      At work up to then it’s filing a patent and setting up a network for a customer-accessible simulator system.

      1. 9

        This kind of code makes me very nervous. Reasoning about the flow control is a lot of effort. If you want to write code that’s maintainable and auditable, then you should use option types for anything that can store a valid value or a not-present placeholder. This is easy in C for pointers because all pointers are option types (just badly designed ones that explode if you forget the not-present check). You can do it for a lot of integer types if you’re willing to define something like 0 or INT_MAX as the not-present marker. For other structs then you can often find a bit or even a whole char spare somewhere to indicate the not-present value.

        If you do that, then you have a single cleanup function that unconditionally tries to destroy every field. The cleanup functions for the type used in every field do the check for the not-present value. Your code ends up looking something like this:

        struct driver_data
        {
        	struct A *a;
        	struct B b;
        	opaque_t c;
        };
        
        // Each of these either initialises the pointed-to value and returns true,
        // or sets rc_out to the return code and returns false.
        _Bool acquire_a(struct A **a, int *rc_out);
        _Bool register_b(struct B *b, int *rc_out);
        _Bool make_c(opaque_t *c, int *rc_out);
        
        // Each of these destroys the pointed-to object and sets the value
        // to the not-valid state.
        void destroy_c(opaque_t *c);
        void unregister_b(struct B *b);
        void release_a(struct A **a);
        
        // Forward declaration: driver_init uses this on its error path.
        void driver_cleanup(struct parent_dev *parent);
        
        int driver_init(struct parent_dev *parent)
        {
        	parent->private_data = kzalloc(sizeof(struct driver_data),
        	                               GFP_KERNEL);
        	if (!parent->private_data)
        	{
        		return -ENOMEM;
        	}
        	int rc = 0;
        	if (acquire_a(&parent->private_data->a, &rc) &&
        	    register_b(&parent->private_data->b, &rc) &&
        	    make_c(&parent->private_data->c, &rc))
        	{
        		return 0;
        	}
        
        	driver_cleanup(parent);
        
        	return rc;
        }
        
        void driver_cleanup(struct parent_dev *parent)
        {
        	if (parent->private_data)
        	{
        		// Safe to call unconditionally: each destroy function
        		// is a no-op on the not-present value.
        		destroy_c(&parent->private_data->c);
        		unregister_b(&parent->private_data->b);
        		release_a(&parent->private_data->a);
        	}
        	kfree(parent->private_data);
        	parent->private_data = NULL;
        }
        

        Even this is a bit messy and is one of the reasons that I much prefer C++11 or newer to C for low-level development. The C++ version of this provides a constructor and a destructor for each of these and looks like this:

        struct driver_data : public generic_driver_data
        {
        	std::unique_ptr<A> a = { acquire_a() };
        	std::optional<struct B> b = { register_b() };
        	std::optional<opaque_t> c = { make_c() };
        	explicit operator bool() const
        	{
        		return a && b && c;
        	}
        };
        
        int driver_init(parent_dev *parent)
        {
        	std::unique_ptr<driver_data> data{new (GFP_KERNEL) driver_data()};
        	if (!data)
        	{
        		return -ENOMEM;
        	}
        	if (!(*data))
        	{
        		return -ENOSPC;
        	}
        	parent->private_data = std::move(data);
        	return 0;
        }
        

        This is much less code and is far harder to get wrong. This assumes the existence of an operator new that takes the flags that kzalloc takes as extra arguments beyond the required size_t and is marked noexcept. Such an implementation is allowed to fail and return nullptr. If it does, then the constructor is not called. If this succeeds, the constructor is called and is a default implementation, which will construct each field, in turn, by calling the functions from the original example.
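
        As a sketch, that assumed allocation operator might look like this (kzalloc and gfp_t are the kernel’s; the operator itself is the assumption):

        // Placement allocation function: forwards the kzalloc flags and is
        // noexcept, so a nullptr return means the constructor is never run.
        void *operator new(size_t size, gfp_t flags) noexcept
        {
        	return kzalloc(size, flags);
        }
        
        // Matching placement delete: only reachable if a constructor could
        // throw, but harmless to provide when building without exceptions.
        void operator delete(void *ptr, gfp_t flags) noexcept
        {
        	kfree(ptr);
        }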

        The user-defined conversion operator on driver_data determines if the struct is fully initialised by checking that each field is initialised. The default destructor will destroy the fields in the reverse order to their creation. Because each of these is some form of option type, the real cleanup code will be called only if the field has been initialised.

        Note the complete lack of any cleanup code in the C++ version. This is all generated automatically from the RAII pattern. If data goes out of scope without the pointee being moved to the parent device, then the pointee is destroyed (if it is not null). When the pointee is destroyed, each field is destroyed. No resource leaks and no complex flow control to follow in the source. There may be some space overhead for std::optional with some of the fields, but you can create a custom version with your own not-present placeholder for common types if you need to avoid this.
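
        Such a custom option type might look something like this minimal sketch (the name and sentinel choice are illustrative, not from any standard library):

        #include <climits>
        
        // Stores the sentinel in place of a separate "engaged" flag, so it is
        // the same size as a bare T, unlike std::optional<T>.
        template <typename T, T NotPresent>
        class sentinel_optional
        {
        	T value = NotPresent;
        public:
        	sentinel_optional() = default;
        	explicit sentinel_optional(T v) : value(v) {}
        	explicit operator bool() const { return value != NotPresent; }
        	T &operator*() { return value; }
        };
        
        // Matches the C convention of using INT_MAX as the not-present marker.
        using maybe_int = sentinel_optional<int, INT_MAX>;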

        The C++ example also assumes that parent_dev has a field that looks like this:

        	std::unique_ptr<generic_driver_data> private_data;
        

        This depends on the following definition:

        struct generic_driver_data
        {
        	virtual ~generic_driver_data() = 0;
        };
        
        // A pure virtual destructor still needs a definition, because every
        // derived destructor calls it:
        generic_driver_data::~generic_driver_data() = default;
        

        Now there is no need for the explicit cleanup at all because the destructor on driver_data will be called when the parent_dev is destroyed (or when it explicitly resets its private_data field). If you want to avoid a vtable on driver_data then you can hoist this one level up by making parent_dev a templated class that you specialise on the device type (you need this vtable somewhere. In Linux each device has a pointer to one already as an explicit field).
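
        That hoisting might look something like the following sketch (mine, using the names from the example above):

        #include <memory>
        
        // The driver-data type is a template parameter, so unique_ptr can call
        // the right destructor statically and driver_data needs no vtable.
        template <typename DriverData>
        struct parent_dev
        {
        	std::unique_ptr<DriverData> private_data;
        };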

        The C++ version is less code, is trivial to review, and has far fewer things that can go wrong. It will generate almost identical code to a correct implementation of the C version (if compiled without exceptions or RTTI, neither of which it needs). There’s really no good reason for choosing to write systems code in C these days.

        1. 2

          The article alludes to an idea similar to the one you present with the acquire/register/make functions.

          I think having some kind of validity flag is a nice property and can really help when you are debugging. When you are in control of the code base in question, you can also design the data structures to support this kind of usage. In my case, the types are given by whatever Linux kernel my driver has to interface with. There are usually about zero guarantees that a data structure won’t change, even between patch releases. Because of this I did not and do not feel comfortable picking a combination of fields in any given struct as a “non-valid marker”.

          I think the C code you give will only work if the zero-initialized value is also the one that gets recognized as invalid. The reason is that when e.g. acquire_a fails, make_c will not run. Therefore, c is still left in the zero-initialized state by kzalloc. In order for destroy_c to be a no-op, it needs to recognize the zero-initialized state as invalid. Is this what you intended, or am I missing something? This would also imply constraints on data structures I do not control.
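
          To illustrate the constraint, destroy_c would have to look something like this (the handle field and cleanup call are hypothetical):

          typedef struct { void *handle; } opaque_t;  /* hypothetical layout */
          
          void destroy_c(opaque_t *c)
          {
          	/* Only correct because the kzalloc'd all-zero state doubles
          	   as the not-present marker. */
          	if (!c->handle)
          		return;
          	real_destroy(c->handle);  /* hypothetical cleanup call */
          	c->handle = NULL;         /* back to the not-valid state */
          }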

          I consider the given alternative a workable workalike in the case where you can control the data structures. I would feel inclined to use it if the acquire/release, register/unregister, make/destroy function pairs already existed with the somewhat funky proposed signature.

          I would also expect the code generated by a C++ compiler for the same thing done with RAII to be pretty much the same as that generated for my C code. I agree that with C++ it’s easy to get the desired behavior without having to work on the control flow myself.

          I came up with the described structure in a context, i.e. writing Linux drivers in C, where I don’t control all the code, and I think that for this context the described approach holds utility.

          1. 2

            Yup, if you’re writing code for the Linux kernel then you’re basically stuck writing code that’s verbose and difficult to maintain. With FreeBSD, the kernel headers generally work in C++ code, so you can use C++ within your own module. Linux, on the other hand, uses a lot of things throughout its headers that are not in the common subset of C and C++, so it’s basically impossible to use any language other than C.

          2. 1

            The C version above will get flagged in code review by maintainers. And the C++ part is of course wholly irrelevant to Linux kernel development; it could as well be in Haskell.

          1. 17

            Perhaps I shouldn’t have been, but I was surprised to see that this was patentable:

            There’s another algorithm that doesn’t depend on knowing which value is larger, the U.S. patent for which expired in 2016:

            unsigned average(unsigned a, unsigned b)
            {
                /* Halving each operand first cannot overflow the way (a + b) / 2
                   can; (a & b & 1) adds back the 1 lost when both are odd. */
                return (a / 2) + (b / 2) + (a & b & 1);
            }
            
            1. 9

              I’m not sure what the policy is now, but in the ‘90s there was the pretty awful situation where the US patent office had a policy that if a software patent doesn’t have prior art then it should be granted and the courts can figure out if it’s valid (in part because the patent office didn’t have the expertise to judge whether something was non-obvious and in part because patent filing is expensive and the US government is broadly in favour of receiving money). Simultaneously, the courts had a policy that a granted patent was probably valid and the burden of proof was for the people challenging it to do so.

              The cost of either defending or challenging a patent was around $1m, so everyone filed a load of patents on anything that they could, no matter how silly (Microsoft had one on the is-not-identical-to operator in Visual Basic, for example), so that they could guarantee that all of their competitors were violating a load of patents and it was mutually assured destruction for anyone to actually try to enforce a patent.

              The MAD strategy started to fail with the dot-com crash. A load of companies that had filed patents during this era went bankrupt and their assets were sold for next to nothing to patent troll companies (non-practicing entities). These companies weren’t worried about MAD because they were not violating any patents because they didn’t produce anything.

              I think the situation has improved a bit since then.

              1. 1

                From the patent title it sounds like it was about a hardware implementation, which is slightly more plausible, but yeah.

                Pretty clever, though!

                1. 2

                  That could also be because “software” was not patentable (it gets copyright), but a machine was. So you got an absurd number of patents describing computers that did like a single thing, which is daft whether you think software patents are BS or not.

                2. 1

                  Software patents were really out of hand in 1990s America. I hear the situation has somewhat improved now.

                1. 46

                  It is remarkable how a 2GB RAM, 1.8GHz system is considered barebones for desktop Linux now.

                  1. 18

                    I agree, but then one looks at the “modern web” and it starts to make sense.

                    1. 8

                      My first “desktop” (i.e. with a GUI) Linux machine was a 486 with 12MB of RAM. You can hardly fit the kernel in that space now.

                      1. 8

                        There’s a continuum of these things but I think we hit diminishing returns some time ago.

                        My first system with a GUI was an 8086 with 640KiB of RAM. It ran Windows 3.0 or GEM, on top of DOS. There was no memory protection and no pre-emptive multitasking so a single buggy app could kill everything. Nothing in the GUI had a spellcheck. I had WordStar for DOS that did support spell checking, so I’d write things in that and then use Write to add fonts and print. Doing anything graphical could easily exhaust memory: a large bitmap wouldn’t fit.

                        My next machine was a 386 with 5 MiB of RAM (1 MiB soldered to the board, 4 matched 1 MiB SIMMs). It ran DOS and Windows 3.11 and either Word 2.0 or ClarisWorks 1.0. Word had a few more features but ClarisWorks was more modular and so the vector drawing application worked well as a DTP app because every text field embedded all of the functionality of the word processor component. Editing large images caused a lot of swapping. Spell checking happened offline so that the dictionary didn’t need to be memory resident when not in use. Still no memory protection or preemptive multitasking. Minix and early Linux could just about run on this machine but I never tried and so have no idea how well they worked.

                        My next machine was a 133MHz Pentium clone with 32 MiB of RAM. This ran NT 4 and later dual booted Linux (Red Hat Linux 5) and Office 95. Both operating systems had memory protection and preemptive multitasking, so no buggy app could bring down the whole system (without also triggering an OS bug). This was also my first machine with a sound card and a 3D accelerator. The 3D card wasn’t used for anything except games back then. It had enough RAM to handle large documents, online spell checking, editing large images, and so on. It could play back tiny postage-stamp videos and video editing was so painful that I gave up almost immediately. This machine became a lot less responsive after I installed Internet Explorer 4. In hindsight, this was the harbinger of Electron: a load of things that were previously very fast and low-memory bits of the UI were replaced by slow HTML components.

                        This machine got a load of incremental upgrades. The upgrade that replaced most of the remaining bits (including the case) ended up with a 550 MHz Pentium III with 512 MiB of RAM running Windows 2000 (and, I think, Office 97 / StarOffice) and dual booting FreeBSD 4.x. I had an ATi All-in-Wonder 128, which had a TV tuner and some video encode / decode hardware but video editing was still too slow on this to be useful.

                        After a couple of upgrades, my next new machine was a PowerBook G4 (1.25GHz I think - the logic board was replaced a few times over its lifetime and so I don’t remember what it was originally) with 1 GiB of RAM. This could do non-destructive video editing (though the render times were quite long) and basically everything that I do today. Video editing and compiling large programs were the only things that were slow. 3D games would probably have been slow, but this was an early 2000s Mac, so there weren’t any.

                        Every machine before the first Mac (the fact that it’s a Mac was largely coincidental) noticeably increased the set of things that I could do comfortably. The later upgrades were all incremental speed increases. My mobile phone is significantly faster than that Mac and has more RAM. My current laptop is several times faster, uses a lot more CPU cycles and RAM to do the same things and the only things that I do that are slow are… video editing, games, and compiling large programs. I can’t think of anything I do today that I wasn’t doing in a fairly similar way with Mac OS X 10.6 with a much smaller hardware budget. The only difference is that web apps are now doing some things that native apps used to do and are, in spite of massive improvements in JIT compilers, much more RAM and CPU-hungry.

                        1. 4

                          For me, there have been significant improvements more recently in terms of battery life and portability. Yes, they’re not really “things I can do”, but they’re still meaningful. For example, the battery in my M1 MBA lasts, effectively, all day.

                          1. 7

                            To give Apple credit, some of that is part of the software stack. They’ve done quite a bit of work at the lowest level on things like timer coalescing, where you can tell kqueue how much slack a timer notification will tolerate and it will then try to deliver a lot of callbacks while in a high power state and then put the CPU to sleep, rather than delivering them all over a longer window. They’ve done quite a bit of work at the middle layers to do things like schedule bookkeeping tasks while you’re on mains power and avoid doing them on battery. They’ve also done some UI things to name and shame apps that keep the CPU in high-power states so that people complain about apps that are burning more power than they should. They’ve also done a lot with the ‘power nap’ functionality that allows apps to schedule some background processing (e.g. periodically polling for email) to run on a low-power core that’s woken up occasionally while in deep sleep states.
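
                            If I remember the Darwin interface right (treat the specifics as an assumption rather than a reference), the slack is passed to EVFILT_TIMER via the NOTE_LEEWAY flag, roughly:

                            #include <sys/event.h>
                            
                            int main(void)
                            {
                            	int kq = kqueue();
                            	struct kevent64_s ev;
                            	/* Fire in ~100ms, but let the kernel coalesce the
                            	   wakeup within ~50ms of slack (leeway in ext[1]). */
                            	EV_SET64(&ev, 1, EVFILT_TIMER, EV_ADD | EV_ONESHOT,
                            	         NOTE_NSECONDS | NOTE_LEEWAY,
                            	         100000000, 0, 0, 50000000);
                            	return kevent64(kq, &ev, 1, 0, 0, 0, 0);
                            }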

                            That said, a lot more of it comes from the hardware. LPDDR4 is a big reduction in power consumption and the M1 has a lot of optimisation here.

                        2. 3

                          I think I paid as much in real terms for the RPi4 w/ 4GB and case as it cost me to buy the 2MB RAM I needed to install Debian on my 386… (it came with 2MB which was just about sufficient to run Win 3.1)

                        3. 5

                          I remember muLinux - a distro that fit an X desktop on two superformatted floppy disks.

                          1. 3

                            Yes, but muLinux when booting from floppy still required 16 MiB of RAM. I started out with linux on a 386DX with 3 MiB of RAM… that was workable but ‘interesting’ in that SLS’ installer (as well as Slackware’s, a bit later) assumed 4 MiB. Well, it did make for a strong motivation to learn the internals to get the stuff installed in the first place. ;)

                          2. 1

                            And then there’s collapseOS

                          1. 11

                            If I’m reading this article correctly, glibc is compliant by default when compiling for 64-bit architectures; it is only when building for 32-bit platforms that it does not use the new flag. The article makes it sound like most GNU/Linux systems are going to explode, but given that many distros (including my stomping grounds of Arch) aren’t even building for 32-bit platforms any more, this might not turn out to be that big a deal.

                            1. 16

                              A better title would’ve been “glibc in 32 bit user space is still not Y2038 compliant by default” as suggested by someone on HN, which would be less clickbait.

                              1. 4

                                Based on previous posts submitted here, the author of the blog seems mainly interested in making posts that highlight specific deficiencies in glibc vs musl and using that to imply that you should never use glibc for any reason ever.

                                1. 2

                                  I have made the same observation. With Alpine the obviously superior distribution, why aren’t all people using Alpine?

                                2. 4

                                  A more accurate headline would be ‘being Y2038 compliant on 32-bit platforms is an ABI break and glibc requires you to opt into ABI-breaking changes’. Any distro that wants to do an ABI break is free to toggle the default (though doing so within a release series is probably a bad idea). Given that none of the LTS Linux distros has a support window that will last until 2038, it isn’t yet urgent for anyone.
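
                                  For reference, the opt-in is a pair of feature-test macros (glibc 2.34 and later); a quick check of what a given build gets:

                                  /* Build with: cc -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 ...
                                     (_TIME_BITS=64 requires _FILE_OFFSET_BITS=64.) On 32-bit
                                     glibc this prints 4 without the flags and 8 with them. */
                                  #include <stdio.h>
                                  #include <time.h>
                                  
                                  int main(void)
                                  {
                                  	printf("sizeof(time_t) = %zu\n", sizeof(time_t));
                                  	return 0;
                                  }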

                                  1. 3

                                    “glibc in embedded/I(di)oT will bite you in 2038”

                                  2. 7

                                    Some embedded boards on x86 / 32bit ARM deployed right now will still be working into 2038. This is about them.

                                    1. 12

                                      Sure. But that brings up two things the article should have addressed:

                                      1. State the scope of the problem rather than generalizing to all GNU/Linux except Alpine.

                                      2. Note that fixing the compiler may or may not fix boards that are already deployed anyway; the stuff that is likely to still be running in 17 years is also the stuff that never ever gets firmware updates. The real issue here is “what has been deployed before compilers started fixing this” and/or “what is currently being deployed to a never-updated long term deployment without proper compiler options turned on”.

                                      1. 3

                                        Fortunately the share of embedded Linux systems still using glibc is tiny.

                                        1. 4

                                          Quite so! And of those that do, the number of them that are 32-bit and don’t ever get upgraded is also tiny. Maybe a single system in a non-critical role, like a greenhouse watering system watchdog, actually meets these criteria.

                                        2. 2

                                          Fortunately there’s more awareness about things like that in embedded development. It’s not perfect of course, but you tend to deal with custom glibc and similar issues more often, so a lot of people deploying things that depend on real dates today will know to compile the right way.

                                      1. 1

                                        Configuring a virtual bridge for hundreds of netns-separated veth interface pairs for our project simulator system.

                                        Have a stupefying case where, when they all send UDP broadcasts (approximately simultaneously), the packets on the bridge get the source address of the first veth to come up, but the correct payload. This is why am not a sysadmin!

                                        1. 7

                                          I wish there was a programming language that had a stable version that only changed for security reasons.

                                          1. 6

                                            If you avoid undocumented APIs, Java code has amazing longevity. It’s not quite what you’re asking for (the language and the runtime are both evolving) and sometimes “avoid undocumented APIs” is harder than it should be because innocent-looking dependencies might be doing funny business under the covers. But vanilla Java code compiled 20 years ago still runs perfectly well today.

                                            Libraries are where it starts to get tricky, though. “Only changes for security reasons” can look pretty similar to “Only works on obsolete OS versions and hardware that’s no longer being manufactured” for some kinds of libraries.

                                            1. 6

                                              Common Lisp. Largely because the standards committee came up with a quite decent base specification, then packed up and turned the lights off.

                                              1. 2

                                                I like programming in common lisp because I know that if I find a snippet of code from 20 years ago, there’s very little chance it won’t work today. And more importantly, that the code I wrote 10 years ago will still work in 10 years (unless it interacts with something outside (like a C library) that’s now out of date).

                                                1. 3

                                                  Better yet, the code I write today is guaranteed to work if I can time travel back 30 years!

                                              2. 4

                                                C and FORTRAN still have good support for ancient codebases.

                                                1. 3

                                                  So maybe an LTS? Nobody is going to support a project indefinitely without financial support, so the best you can get is extension of the lifecycle.

                                                  1. 3

                                                    And companies like Red Hat will happily sell you that and use your money to pay developers to backport and test fixes. The system works!

                                                  2. 1

                                                    JavaScript does this. Node.js doesn’t, and browsers don’t (though they’re reasonably close), and the ecosystem definitely doesn’t. But the core language from TC39 does

                                                  1. 3

                                                    It looked interesting enough: the design appears to be basically Rust-assisted key tenets of MISRA plus isolation of tasks. So I cloned and built it. The binary and an STM32 board are now waiting for, uh, me to find that USB-C to USB-A dongle.

                                                    1. 9

                                                      Two remarks:

                                                      • As the blog post points out, there are large companies with massive codebases in scripting-league languages (Python, PHP, Ruby, etc.; Javascript!) out there. But a surprising number of these companies are investing millions in trying to (1) implement some static typing on top of the language for better maintainability (performance is usually not cited as the concern), or (2) build faster implementations than the mainstream one. (Companies seem to have more success with (1) than with (2), because making a dynamic language go faster is surprisingly difficult.) This could be taken positively, as in the blog post: “there are tools to make your Python codebase more maintainable / faster anyway”. But it can also be taken negatively: “companies following this advice are now stuck in a very expensive hole they dug for themselves”.

                                                      • “Computation is cheap, developers are expensive” is an argument, but I get uneasy thinking about the environmental costs of order-of-magnitude-slower-than-necessary programs running in datacenters right now. (I’m not suggesting that they are all Python programs; I’m sure there are tons of slow-as-hell Java or C++ or whatever programs running in the cloud.) I wish that we would collectively settle on tools that are productive and energy-efficient, and try to think a bit more about the environment than “not at all” as in the linked post.

                                                      1. 5

                                                        We once did an estimate that if one of our embedded products consumed 3W more per unit, we’d have burned nearly 500MWh of extra energy over the deployed units’ then-lifetime.
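
                                                        For scale, one hypothetical fleet size and lifetime that lands in that range (the actual figures aren’t stated here):

                                                        3 W × 10,000 units × 2 years (≈ 17,500 h) ≈ 525 MWh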

                                                        Inefficient code in prod is irresponsible, and unlike crypto mining it is not shamed enough. You might think your slow script that just scratches your itch is no big deal, but before you know it, it’s cloned and running on ten thousand instances…

                                                        1. 5

                                                          “Computation is cheap, developers are expensive” is an argument, but I get uneasy thinking about the environmental costs of order-of-magnitude-slower-than-necessary programs running in datacenters right now.

                                                          Not to mention the poor users who have to wait ten times longer for a command to finish. Their time is also expensive, and they usually outnumber the developers.

                                                          1. 2

                                                            “companies following this advice are now stuck in a very expensive hole they dug for themselves”.

                                                            One could argue that this is “a good problem to have.” I mean, it didn’t become a problem for Facebook or Dropbox or Stripe or Shopify until they were already wildly successful, right?

                                                            1. 4

                                                              There is a strong selection bias here as we hear much less about which technical issues plagued less-successful organisations. It would be dangerous to consider any choice of a large company to be a good choice based on similar reasoning.

                                                          1. 2

                                                            Pushing out my first ever watchOS app.

                                                            1. 2

                                                              Cracked open Xcode to write a small app for isometric hold training. Useful in precision sports like target pistol/rifle and possibly archery. It’s quite possible something like that already exists but it is almost easier writing your own than sifting thru the app store!

                                                              Swift has changed quite a bit since the last time I touched it: a quality I find unsettling in a programming language. Nonetheless the iOS version is almost finished now, and a watchOS one is perhaps possible if I convince my wife to let me requisition her device.

                                                              1. 13

                                                                My daughter was born last Friday. I was fortunate enough to arrange three weeks parental leave where I work, and that’s naturally what am at this weekend too.

                                                                Interestingly enough I found that dabbling in my pet projects in between the chores and the little one’s feedings is quite doable. It feels like permanent tiredness and slight sleep deprivation allow for easier focusing, somehow? Or perhaps it’s just less coffee throughout the day.

                                                                1. 5

                                                                  Congrats

                                                                  1. 2

                                                                    Thanks!

                                                                  2. 4

                                                                    Congratulations :)

                                                                    1. 2

                                                                      Thank you!

                                                                    2. 2

                                                                      Congratulations! Glad you’re able to find time for tech as well.

                                                                      1. 2

                                                                    Thanks! Was a bit afraid that the rest of my personal life would be completely derailed, but it’s going well.

                                                                      2. 2

                                                                        Congrats and best of luck. 🎉 Mine are both teenagers but I still remember those early days.

                                                                        1. 1

                                                                    Thanks! Our older one is 17 now, so our recollections are quite dim :)

                                                                      1. 5

                                                                        It’s a good list! One thing I would add: speed vs size. Speed matters a lot less in an SoC because adding an overpowered core consumes less real estate than adding more memory. You have to unlearn things like inlining code (consuming more flash) or optimizing algorithms (trading memory for speed).
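
                                                                For instance, one might deliberately keep helpers compact and out of line (a hypothetical checksum; GCC-style attribute):

                                                                #include <stddef.h>
                                                                #include <stdint.h>
                                                                
                                                                /* One shared, compact copy beats an inlined or unrolled
                                                                   variant when flash is scarcer than cycles. */
                                                                __attribute__((noinline))
                                                                uint8_t checksum(const uint8_t *buf, size_t len)
                                                                {
                                                                	uint8_t sum = 0;
                                                                	while (len--)
                                                                		sum += *buf++;
                                                                	return sum;
                                                                }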

                                                                        As someone who moved from distributed systems into embedded firmware, I actually found the change refreshing and easier. Kinda like painting figurines or maintaining a garden, there’s joy in the act of writing and debugging the code, which I had lost in the world of ruby/java. (It probably helps that my earliest coding was on an Apple II, which is way more constrained than any modern embedded system.)

                                                                        1. 3

                                                                  I would disagree that inlining or unrolling would make a difference on any system where you have the luxury of multiple cores. The memory is in the multiple megabytes, and what’s eating it up is generally not executable code. On something like AVR though? Perhaps.

                                                                          As someone who moved from distributed systems into embedded firmware, I actually found the change refreshing and easier.

                                                                          Get into distributed embedded systems for the best of both worlds! :)

                                                                          1. 1

                                                                            Yeah, on most of these SoCs, we’re not even talking about one megabyte of RAM, and you’re lucky to get that in flash (which must be able to hold at least 2 copies of the app) either. It really makes your priorities shift! :)

                                                                            1. 1

                                                                      Oh, you’re talking about SiPs. Alright then. On a usual SoC + DRAM spin it simply makes no sense cost-wise to go multicore and memory-starved.

                                                                        1. 6

                                                                If you enjoy this kind of interview, I would wholeheartedly recommend Siebel’s Coders at Work. Some really in-depth conversations.

                                                                          1. 2

                                                                            “Here, Margaret is shown standing beside listings of the software developed by her and the team she was in charge of, the LM [lunar module] and CM [command module] on-board flight software team.”

                                                                            1. 2

                                                                Looking at the “Motivating Example” section and its conclusion, it’s pleasing to see that there has indeed been some progress since 1979.

                                                                              1. 1

                                                                                Shooting a magnum pistol match tomorrow.

                                                                                Also going to have a check on my Frankenspectrum that hasn’t been powered on for over 20 years.

                                                                                1. 2

                                                                                  We’re hit again by supply chain issues, so banging my head on the desk I suppose.

                                                                                  1. 4

                                                                                    There’s so much wrong with this article I don’t know where to start.

                                                                  “lisp-1 vs lisp-2”? One of the things that lispers will forever make ado about.

                                                                  I guess this depends on who you talk to–on the whole for lispers the only people who don’t consider lisp-2 to be a mistake are the hardcore CL fans. Emacs Lisp is the only other lisp-2 with a large userbase, and if you talk to elisp users, most of them are annoyed or embarrassed about elisp being a lisp-2. If you look at new lisps that have been created this century, the only lisp-2 you’ll find is LFE.

                                                                                    Not a Important Language Issue […] For another example, consider today’s PHP language. Linguistically, it is one of the most badly designed language, with many inconsistencies, WITH NO NAMESPACE MECHANISM, yet, it is so widely used that it is in fact one of the top 5 most used languages.

                                                                                    You can use this same argument to justify classifying literally any language issue as unimportant. This argument is so full of holes I’m honestly kind of annoyed at myself at wasting time refuting it.

                                                                                    Now, as i mentioned before, this (single/multi)-value-space issue, with respect to human animal’s computing activities, or with respect to the set of computer design decisions, is one of the trivial, having almost no practical impact.

                                                                                    Anyone who has tried to use higher-order functions in emacs lisp will tell you this is nonsense. Having one namespace for “real data” and another namespace for “functions” means that any time you try to use a function as data you’re forced to deal with this mismatch that has no reason to exist.

                                                                                    I could go on but I won’t because if I were to find all the mistakes in this article I’d be here all day.

                                                                                    1. 9

                                                                                      I guess this depends on who you talk to–on the whole for lispers the only people who don’t consider lisp-2 to be a mistake are the hardcore CL fans.

                                                                                      This only is doing a lot of work here, given that CL is where the majority of practice happens in the (admittedly tiny) Lisp world.

                                                                                      1. 4

                                                                                        I know anecdote is not data but I know far more people who work at Clojure shops than I do Common Lisp shops. How would we quantify “majority of practice?”

                                                                                        1. 2

                                                                        It’s more to do with whether you qualify Clojure as a dialect of Java or a dialect of Lisp.

                                                                                          Clojure proclaims itself a dialect of Lisp while maintaining largely Java semantics.

                                                                                          1. 2

                                                                                            CL programmers are so predictable with their tedious purity tests. I wish they’d move on past their grudges.

                                                                                            1. 3

                                                                                              Dude you literally wrote a purity rant upthread.

                                                                                              1. 2

                                                                                                Arguing about technical merits is different from regurgitating the same tired old textbook No True Scotsman.

                                                                                                1. 3

                                                                                Look, (like everyone else) I wrote a couple of Scheme interpreters. I worked on porting a JVM when Sun was still around. I did a JVM-targeting “Lisp-like” language compiler and was even paid for doing it. I look at Clojure and immediately see all the same warts and know precisely why they are unavoidable. I realize some people look at these things and see Lisp lineage, but I can’t help seeing some sort of Kotlin with parens showing through.

                                                                                And it’s not just me really: half of the people who sat on RxRS were also on X3J13, and apparently no one had a split personality. So no need to be hostile about the technical preferences of others. When you talk to your peers it helps to build a more complicated theory of mind than “they are with me or they are wrong/malicious”.

                                                                                                  1. 2

                                                                                  Sure, you can have whatever preferences you want. But if you go around unilaterally redefining terms like “lisp” and expecting everyone to be OK with it, well, that’s not going to work out so well.

                                                                                                    1. 2

                                                                                    If you hang around long enough you hear people describing just about anything as “Lisp-like”: Forth, Python, Javascript, Smalltalk, you name it. Clojure is a rather major departure from lisps in both syntax and semantics, so this is not a super unusual point.

                                                                                      2. 6

                                                                                        on the whole for lispers the only people who don’t consider lisp-2 to be a mistake are the hardcore CL fans.

                                                                      That folks who use a Lisp-1 prefer a Lisp-1 (to the extent that non-Common Lisp, non-Emacs Lisp Lisp-like languages such as Scheme or Clojure can fairly be termed ‘Lisps’ in the first place) is hardly news, though, is it? ‘On the whole, for pet owners the only people who don’t consider leashes to be a mistake are the hardcore dog owners.’

                                                                                        Emacs Lisp is the only other lisp-2 with a large userbase, and if you talk to elisp users, most of them are annoyed or embarrased about elisp being a lisp-2.

                                                                                        Is that actually true? If so, what skill level are these users?

                                                                                        For my own part, my biggest problem with Emacs is that it was not written in Common Lisp. And I think that Lisp-N (because Common Lisp has more than just two namespaces, and users can easily add more) is, indeed, preferable to Lisp-1.

                                                                                        1. 4

                                                                                          Is that actually true? If so, what skill level are these users?

                                                                                          This is based on my experience of participating in the #emacs channel since 2005 or so. The only exceptions have been people coming to elisp from CL. This has held true across all skill levels I’ve seen including maintainers of popular, widely-used packages.

                                                                                        2. 4

                                                                                          I dunno. I think the article is a bit awkward but I think the author is absolutely right: in practice, to the language user, it doesn’t really make a difference.

                                                                        I am a full-time user of a lisp-1. When I use it, I appreciate the lack of sharps and the like when it’s time to use higher-order functions or call variables as functions. The same language has non-hygienic macros, which Dick Gabriel rather famously claimed more or less require a separate function namespace, and I have almost never found my macro usage to be hampered.

                                                                        At the same time, I was for three years a professional user of Elixir, a language with both syntactic macros and separate namespaces. I found it mildly convenient that I could declare a variable without worrying about shadowing a function, and never found the syntax for function references or for invoking variables as funs to be particularly burdensome at all.

                                                                                          To the user, it really doesn’t have to matter one way or the other.