Threads for yorickpeterse

  1. 4

    Is it comfortable to use the thumb to move all the time? I ask because I have some pain in my thumbs after texting too much on my phone…

    I personally use a vertical mouse, and it changed my life. Used to have chronic wrist inflammations, they’re gone now.

    1. 6

      I use a kensington expert trackball for that reason. It was very alien at first, but now I love it.

      1. 4

        Same here, I am addicted to using the ring to scroll. I find it much easier on my wrist, but to be honest I have both a mouse and this guy, which I’ll alternate between during the day.

        1. 3

          Ya same setup here, I use a regular mouse for gaming since I just can’t get used to using a trackball for that… but use the trackball for everything else. The kensington’s ring scroll is the bomb!

          1. 1

            I’m looking for a trackball to buy but I heard bad things about the kensington’s scroll ring. Can any of you confirm if it’s easy to scroll accidentally or not, or if it has any other flaws?

              1. 1

                I don’t think I’ve ever accidentally scrolled the ring.. Maybe with bad posture it’s easier to? But after looking at mine and just now trying to get it to scroll accidentally… I just don’t see an obvious way to do that with how I place my hand on it when in use. 🤷‍♂️

        2. 4

          I got thumb tendinitis from using one. I use a vertical mouse now, super happy.

          1. 1

            Vertical mice make my shoulder seize up something fierce, but I’m really happy with an old CST L-Trac finger trackball. It’s funny how wildly people’s ergonomic needs can vary.

            1. 1

              CST L-Trac here too! I bought one based only on the internets and I wish it was a bit smaller. Definitely something to try out if you can, especially if your hands ain’t super big. Bought another for symmetry so I don’t end up in a rat race finding something as good but just a bit more fitting.

              And then there was the accessories aspect!

              CST’s business is now owned by someone else, who I don’t think has the back/forward-button accessory. I kinda regret not having got those. ISTR checking out what they had and it was lame.

              What I’d really like to see are some specs and community creations for those ports, like horizontal scroll wheels, but I think Linux doesn’t really support that anyway.

          2. 4

            Having used an extensive range of input devices (regular mice, vertical mice, thumb trackballs, finger trackballs, touchpads, drawing tablets, and mouse keys), my thoughts on this are as follows:

            Regular mice are the worst for your health. Vertical mice are a bit better, but not that much. Thumb balls are a nice entry into trackballs, but you’ll develop thumb fatigue and it will suck (thumb fatigue can make you want to rip your thumb off). Finger balls don’t suffer from these issues, but often come in weird shapes and sizes that completely nullify their benefits. The build quality is usually also a mess. Gameball is a good finger trackball (probably the best out there), and even that one has issues. I also had a Ploopy and while OK, mine made a lot of noise and I eventually sold it.

            Touchpads are nice on paper, but in practice I find they have similar problems to regular mice, due to the need for moving your arm around. Drawing tablets in theory could be interesting as you can just tap a corner and the cursor jumps over there. Unfortunately you still need to move your arms/wrist around, and they take up a ton of space.

            Mouse keys are my current approach to the above problems, coupled with trying to rely on pointing devices as little as possible. It’s a bit clunky and takes some getting used to, but so far I hate it the least compared to the alternatives.

            QMK supposedly supports digitizer functionality (= you can have the cursor jump around, instead of having to essentially move it pixel by pixel), but I haven’t gotten it to work reliably thus far. There are also some issues with GNOME sadly.

            Assuming these issues are resolved, and you have a QMK capable keyboard, I think this could be very interesting. In particular you could use a set of hotkeys to move the cursor to a fixed place (e.g. you divide your screen in four areas, and use hotkeys to jump to the center of these areas), then use regular movement from there. Maybe one day this will actually work :)
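
            To make that concrete, here’s a rough user-space sketch of the quadrant idea in Python (using pyautogui; the function name and quadrant numbering are made up for illustration, and this is not the QMK digitizer route described above):

            ```python
            # Divide the screen into four quadrants and jump the cursor to the centre of one,
            # then do fine-grained movement from there.
            import pyautogui  # assumed available; any cursor-moving API would do

            def jump_to_quadrant(which: int) -> None:
                """Move the cursor to the centre of quadrant 0-3 (left-to-right, top-to-bottom)."""
                width, height = pyautogui.size()
                col, row = which % 2, which // 2
                pyautogui.moveTo(width * (2 * col + 1) // 4, height * (2 * row + 1) // 4)

            # Bind calls like this to global hotkeys (e.g. via the desktop environment or sxhkd):
            jump_to_quadrant(0)  # top-left
            ```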

            1. 1

              you could use a set of hotkeys to move the cursor to a fixed place (e.g. you divide your screen in four areas, and use hotkeys to jump to the center of these areas),

              Isn’t that what keynav does? I never succeeded in getting used to it though, and couldn’t abandon my mouse.

            2. 2

              I use an Elecom Deft Pro, where the ball is in the middle of the mouse. I generally use my index & middle finger to move the ball. For me, I find it more comfortable than a normal mouse or one with the ball on the side (thumb operated).

              1. 1

                Everyone is probably different, but I have a standard trackball mouse (Logitech, probably an older version of the one in this post) and it’s very comfortable. The main thing is to up the sensitivity a lot. Your thumb is precise, so little movement is needed!

                No good for games, perfect for almost everything else.

                (I have used fancy trackballs that a coworker has. It’s terrible for me, I do not get it at all even when trying for hours on end)

                1. 1

                  Anything you overdo is bad for you.

                  I swap between a trackpad, a mouse and an M570 every few days.

                1. 5

                  This discussion was inspired in part by this discussion on lobsters. Thanks to @arp242 and @spc476 in that discussion who helped clarify some of my views.

                  Code width standards are part of a class of questions about coding idioms and shared practices that I find particularly interesting but also surprisingly divisive.

                  1. 12

                    It’s an interesting history. The anecdote early on about a vim user needing to scroll was surprising to me because vim has supported wrapping long lines for ages. This is more of a problem for GUI IDEs that are built on top of a scrolling text field to start with.

                    Line limit arguments generally come from one of two places:

                    • Technical limits
                    • Perceptual limits

                    If you have an 80-column TTY and no ability to wrap lines in your editor, lines longer than 80 columns are painful. Even if your editor wraps, it typically doesn’t wrap with the current indentation, and so now you’re in the territory of perceptual limits: you’ve just broken the ability for readers of the code to understand the structure by looking at indentation.
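
                    For instance (a quick Python textwrap illustration, not something any editor does for you), here is the difference between naive wrapping and wrapping that preserves the indentation:

                    ```python
                    import textwrap

                    line = "        result = some_function(argument_one, argument_two, argument_three, argument_four)"

                    # Naive soft wrap: continuation lines lose the indentation, obscuring the structure.
                    print(textwrap.fill(line, width=40))

                    # Indentation-aware wrap: continuation lines keep (and extend) the original indent.
                    indent = " " * (len(line) - len(line.lstrip()))
                    print(textwrap.fill(line, width=40, subsequent_indent=indent + "    "))
                    ```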

                    The bigger perceptual limit is that readability drops for very long lines. The rule of thumb that TeX uses is 66 non-whitespace characters per line, which is based on some data and is well within the range that all of the studies I’ve read agree is optimal for readability. With Etoile I tried having a style guide that said no more than 66 non-whitespace characters per line. This composes very well with a rule that says tabs for indentation, spaces for alignment, because it means that your maximum line width isn’t a function of your favourite indent width. Unfortunately, no tooling supported it and so it was very hard for people to follow (including me).

                    Scrolling isn’t a problem on a modern large monitor, but losing your place reading a very long line is.

                    1. 9

                      My only problem with <= 80 wide (truthfully anything under ~99 or so) is that people name things worse, à la “employee_number” vs “emp_num”. I know from “Fixing Faults in C and Java Source Code: Abbreviated vs. Full-Word Identifier Names” and What We Know We Don’t Know: Introduction to Empirical Software Engineering that the science suggests this doesn’t matter, but it does matter to me. Maybe it’s just what I’m used to, but 80-column codebases are so damn dense, and generally not enjoyable for me to work in.

                      This kind of goes back to the clang-format discussion you and I had over the weekend over here.

                      1. 11

                        While Vim can definitely do soft wrapping, the resulting code is often a pain to read. This is because the line is just wrapped across a word boundary, but for code that’s not often what you want. Basically you’d want to soft-wrap it according to the language’s formatting guidelines, but I don’t think there’s any editor out there that supports this.

                        What I always liked about 80 characters per line is that it scales up and down display sizes really well: on a 15” laptop screen with a font size of 8 pt, it lets you have two vertical splits next to each other, while still being able to read all the code (horizontally at least). On a 27” display you can have four vertical splits next to each other, and still have enough space for about 70-ish characters per line. Of course you can also have fewer splits, or put Vim next to another program. Ultimately, the point is that it’s a surprisingly effective line limit.

                        The usual argument “bUt EvErYbOdY sHoUld uSe A 4k MoNiToR” is exhausting, as (at least to me) it shows a complete lack of understanding why these limits exist, and a lack of understanding that it’s not just about display sizes but also about how you use them, what devices are used to view the code (good luck viewing your 200 characters per line code on a phone while taking a poop), etc.

                        1. 17

                          The usual argument “bUt EvErYbOdY sHoUld uSe A 4k MoNiToR” is exhausting, as (at least to me) it shows a complete lack of understanding why these limits exist, and a lack of understanding that it’s not just about display sizes but also about how you use them

                          It’s a very exhausting argument because IMHO it also betrays a general lack of understanding about the act of writing software. You write software for the computers your users have, not for the ones you wish they had, or the computers you think they should have. In some cases, if you’re really operating at the edge of current technology, you may be writing software for the computers they’ll have by the time you’re done, but even that’s a preeeetty risky move, which ought to be done based on as little wishful thinking and as much hard data as possible.

                          This unwillingness to deal with user devices is why so many software interfaces today have awful visual accessibility. Poor contrast (“but everybody should be using a monitor with a good contrast ratio”, “you should be working in a well-lit room anyway”), poor space efficiency (“maybe not now, in 2014, but everyone will be using a touch display by 2019”, “but everybody should use a 4k monitor”), illegible text (“but everybody should be using a high DPI monitor”) are brought about not just by poor design in general, but by an unwillingness to either consider anything other than what looks best in presentations or to let people customize interfaces to fit the limits of their devices.

                          The bit about 4K monitors is particularly annoying since it’s not a generic argument, but a very specific one, about a very specific product line. 4K monitors aren’t particularly cheap. Whenever someone says “you should use a 4K monitor if you want to do any serious programming work”, I imagine them wearing a “stop being poor lol” shirt.

                          1. 5

                            The 4k argument is just laughable anyway, most software works terribly with them. I’ve had to set up 2x scaling on mine (Windows and Linux) because just about every major application is terrible in 4k: at best some of the UI elements scale with system DPI, leading to weird graphics misalignment. This includes popular software like Steam and Firefox.

                            1. 2

                              Not all the time, but I sometimes write software that’s only used by a few people, and I’ve had developers come on and do a bunch of customization to match monitors when it is literally cheaper to send them new laptops. I don’t know how common this is, but when you’re talking about fairly large budgets, shipping out and supporting laptops for 2-3 users is actually not that hard, and that option sometimes gets lost. I learned this trick trying to support a CEO’s laptop and was finally like: uh, can we just get you a new one?

                            2. 8

                              On phones, worse than code is writing hard-wrapped at 80 characters with the assumption that everyone has at least 80-character displays.

                              1. 6

                                And even worse are people who say you should hard-wrap text because it’s more accessible.

                              2. 2

                                Basically you’d want to soft-wrap it according to the language’s formatting guidelines, but I don’t think there’s any editor out there that supports this.

                                Only because people don’t invest the effort to implement it, because they’re attached to the idea of hard-wrapped 80-character lines. The existence of myriad formatters and linters and associated editor/IDE integration mean that all of the hard parts (parsing, analysis, style application) already exist, and the only thing lacking is integration.

                                I don’t think that one can make a rational argument that hard-wrapped 80-character lines are better than a syntactically aware soft-wrapping algorithm, in which case the only thing left to do is take the existing pieces and implement them. Arguments for 80-character hard-wrap feel somewhat akin to arguments for untyped programming languages - the only benefit is the saved one-time effort necessary to implement the upgrade (to syntactical softwrap/a typed language), and then everyone benefits from that point onward.

                              3. 4

                                Thanks for reading!

                                I’m not a vim user, so I’m not totally sure if the problem was horizontal scrolling, soft wrapping being hard to follow, or what. But definitely, we had some lines that were too long, and having a more heterogeneous set of text editors used on the team highlighted that.

                                Personally, I like the rustfmt way of wrapping at 99. But any enforced standard makes an improvement in a team setting.

                              4. 2

                                The “I am an 80 column purist” thread had a lot of good discussion too.

                              1. 4

                                  “As a user, you can force allow zooming”

                                Isn’t this problem solved, then?

                                1. 21

                                  No. Just because there’s an option to enable it, that doesn’t mean disabling it should be encouraged. Not everyone knows about the option, for one thing.

                                  1. 10

                                    You’ve identified a web browser UI design problem, which can be solved by the probably-less-than-double-digits number of teams developing popular web browsers, rather than by asking tens of millions of web content creators to change their behavior.

                                    1. 5

                                      Perhaps browser makers can treat it like a potentially undesirable thing. Similar to “(site) wants to know your location. Allow/Block” or “(site) tried to open a pop up. [Open it]”

                                      So: “(site) is trying to disable zooming. [Agree to Disable] [Never agree]” or similar.

                                    2. 8

                                      I think the better question is why can you disable this in the first place. It shouldn’t be possible to disable accessibility features, as website authors have time and time again proven to make the wrong decisions when given such capabilities.

                                      1. 3

                                        I mean, what’s an accessibility feature? Everything, roughly, is an accessibility feature for someone. CSS lets you set a font for your document. People with dyslexia may prefer to use a system font that is set as Dyslexie. Should it not be ok to provide a stylesheet that will override system preferences (unless the proper settings are chosen on the client)?

                                        1. 3

                                          Slippery slope fallacies aren’t really productive. There’s a pretty clear definition of the usual accessibility features, such as being able to zoom in or meta data to aid screen readers. Developers should only be able to aid such features, not outright disable them.

                                          1. 6

                                            I think this is a misunderstanding of what “accessibility” means. It’s not about making things usable for a specific set of abilities and disabilities. It’s about making things usable for ALL users. Color, font, size, audio or visual modality, language, whatever. It’s all accessibility.

                                          2. 1

                                            https://xkcd.com/1172/

                                            (That said, I don’t understand why browsers let sites disable zoom at all.)

                                        2. 6

                                          Hi. Partially blind user here - I, for one, can’t figure out how to do this in Safari on IOS.

                                          1. 3

                                            “Based on some quick tests by me and friendly people on Twitter, Safari seems to ignore maximum-scale=1 and user-scalable=no, which is great”

                                            I think what the author is asking for is already accomplished on Safari. If it isn’t, then the author has not made a clear ask to the millions of people they are speaking to.

                                            1. 4

                                                I am a web dev dilettante / newbie, so I will take your word for it. I just know that more and more web pages are becoming nearly impossible to view on mobile with my crazy-pants busted eyes, or wildly difficult enough to be equivalent to impossible in any case :)

                                              1. 4

                                                And that is a huge accessibility problem. This zoom setting is a huge accessibility problem.

                                                My point is that the solution to this accessibility problem (and almost all accessibility problems) is to make the browser ignore this setting, not to ask tens of millions of fallible humans to update literally trillions of web pages.

                                                1. 4

                                                  As another partially blind person, I fully agree with you. Expecting millions of developers and designers to be fully responsible for accessibility is just unrealistic; the platforms and development tools should be doing more to automatically take care of this. Maybe if the web wasn’t such a “wild west” environment where lots of developers roll their own implementations of things that should be standardized, then this wouldn’t be such a problem.

                                                  1. 2

                                                    Agreed. Front end development is only 50% coding. The rest is design, encompassing UX, encompassing human factors, encompassing accessibility. You can’t apply an “I’m just a programmer” or “works on my machine” mindset when your code is running on someone else’s computer.

                                                    1. 2

                                                      Developers and designers do have to be responsible for accessibility. I’m not suggesting that we aren’t.

                                                      But very often, the accessibility ask is either “Hey, Millions of people, don’t do this” or “Hey, three people, let me ignore it when millions of people do this”. And you’re much better off lobbying the three people that control the web browsers to either always, or via setting, ignore the problem.

                                          1. 2

                                              Dumb question, but why can’t we do something about the GIL if it hurts parallelism? Maybe an option to remove/disable it? I think it must’ve been done somewhere.

                                            1. 14

                                                One reason it is technologically hard is that, at the moment, any operation that involves only a single Python bytecode op, OR any call into a C extension which doesn’t release the GIL or re-enter the Python interpreter, is atomic. (Re-entering the Python interpreter may release the GIL.)

                                              This means all kinds of things are atomic operations in Python. Like dict reads/writes and list.append(), either of which may call malloc or realloc in the middle.

                                              You can write many data race-y programs in Python that have well-defined (messy, but still well defined) semantics. I think nobody in the world has an idea of how much code there might be in the wild that (possibly accidentally) abuses this. So making data races be undefined behaviour would be quite a large backwards compatibility break, in my opinion.
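
                                                As a small demonstration of the kind of program this allows (assuming CPython; the thread and iteration counts are arbitrary):

                                                ```python
                                                import threading

                                                items = []

                                                def worker():
                                                    for _ in range(100_000):
                                                        # list.append runs entirely inside the interpreter without releasing
                                                        # the GIL, so it is effectively atomic: no explicit lock is needed.
                                                        items.append(1)

                                                threads = [threading.Thread(target=worker) for _ in range(8)]
                                                for t in threads:
                                                    t.start()
                                                for t in threads:
                                                    t.join()

                                                print(len(items))  # always 800000 under CPython, despite the lack of locking
                                                ```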

                                                You don’t want to “just” slap a mutex on every object because then the lock/unlock calls would kill performance.

                                              I believe the PyPy developers are/were looking at shipping an STM implementation and the GILectomy fork involves a lot of cleverness of which I can remember no details.

                                              1. 6

                                                There have been (more than) a few experiments to remove the GIL in the past 20 years. To my knowledge they end up performing worse or being less safe.

                                                There’s a new PEP to get a more granular GIL.

                                                1. 11

                                                  There is an exciting new approach by Sam Gross (https://github.com/colesbury) who has made an extremely good NOGIL version of Python 3.9 (https://github.com/colesbury/nogil) It performs almost without any overhead on my 24 core MacPro test machine.

                                                    It is a sensational piece of work, especially as you mention there have been so many other experiments. I know Sam has been approached by the PSF. I am crossing my fingers and hope they will merge his code.

                                                  1. 9

                                                    I’ve been struggling with a Python performance issue today that I suspected might relate to the GIL.

                                                    Your comment here inspired me to try running my code against that nogil fork… and it worked! It fixed my problem! I’m stunned at how far along it is.

                                                    Details here: https://simonwillison.net/2022/Apr/29/nogil/

                                                  2. 6

                                                    They tend to perform worse on single threaded workloads. Probably not all, but I’m quite sure that several attempts, even rather naive ones, produced multi-threaded speed ups, but at the cost of being slower when running on a single thread.

                                                      Even ideas that succeeded in improving multi-threaded performance got shot down because the core team believes this (slower single-core for faster multi-core) is not an acceptable trade-off.

                                                    1. 4

                                                        IIRC the position was taken fairly early on by Guido that proposals to remove the GIL would not be accepted if they imposed slowdowns on single-threaded Python on the order of… I think a cutoff of about 5% or 10% might have been suggested?

                                                      1. 1

                                                        That’s kind of what I remember too.

                                                  3. 4

                                                    There are experiments underway, e.g. https://lukasz.langa.pl/5d044f91-49c1-4170-aed1-62b6763e6ad0/, and there have been previous attempts that failed.

                                                    1. 3

                                                      Because, allegedly, the gain in safety is greater than the gain in concurrency efficiency.

                                                      It is a reliable, albeit heavy handed, way of ensuring simple threaded code generally works without headaches. But yes, it does so by eroding the gains of multithreading to the point of questioning if it should exist at all. Arguably.

                                                      Some async libraries mimic the threading API while resorting to lower-level async primitives. Eventlet and gevent come to mind.

                                                      1. 2

                                                        No, it’s about performance and a little bit about compatibility.

                                                        Most Python programs are single-threaded, and removing the GIL would not cause most of those to want to become multi-threaded, since the average Python program’s workload is not something that benefits from being multi-threaded. And basically every GIL removal attempt has caused performance regression for single-threaded Python programs. This has been declared unacceptable.

                                                        Secondarily, there would be a compatibility issue for things which relied on the GIL and can’t handle having the acquire/release turned into no-ops, but the performance issue is the big one.

                                                        1. 2

                                                          And basically every GIL removal attempt has caused performance regression for single-threaded Python programs. This has been declared unacceptable.

                                                          Why does this happen?

                                                          1. 5

                                                            Most of the time when a GIL removal slows down single-threaded code, it’s because of the GC. Right now Python has a reference-counting GC that relies on the GIL to make incref/decref effectively atomic. Without a GIL they would have to be replaced by more cumbersome actually-atomic operations, and those operations would have to be used all the time, even in single-threaded programs.
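
                                                            A tiny illustration of how pervasive those refcount updates are (sys.getrefcount is standard; the exact numbers may vary):

                                                            ```python
                                                            import sys

                                                            x = []
                                                            print(sys.getrefcount(x))  # typically 2: the name "x" plus the call's temporary reference

                                                            y = x                      # a plain assignment increfs the list...
                                                            print(sys.getrefcount(x))  # ...so the count goes up

                                                            del y                      # ...and the matching decref happens when the reference goes away
                                                            print(sys.getrefcount(x))
                                                            ```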

                                                            Swapping for another form of GC is also difficult because of the amount of existing extension code in C that already is built for the current reference-counting Python GC.

                                                      2. 2

                                                        Because significant portions of the Python ecosystem are built with a GIL in mind, and would probably break the moment that GIL is removed. You’d essentially end up with another case of Python 2 vs Python 3, except now it’s a lot more difficult to change/debug everything.

                                                        1. 2

                                                          A heavy-handed approach is to use multiprocessing instead of multithreading. Then each subprocess gets its own independent GIL, although that creates a new problem of communicating across process boundaries.
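
                                                          A minimal sketch of that route (the function and the numbers are only illustrative):

                                                          ```python
                                                          from multiprocessing import Pool

                                                          def cpu_bound(n):
                                                              # Pure-Python work like this is serialized by the GIL when using threads.
                                                              return sum(i * i for i in range(n))

                                                          if __name__ == "__main__":
                                                              # Each worker is a separate process with its own interpreter and its own GIL;
                                                              # arguments and results are pickled across the process boundary.
                                                              with Pool(processes=4) as pool:
                                                                  print(pool.map(cpu_bound, [1_000_000] * 8))
                                                          ```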

                                                        1. 1

                                                          I thought they had threads before? What is the change here?

                                                          1. 2

                                                            While you could reply to messages, they weren’t threads. Instead, they were more like regular messages that quoted whatever you replied to. The new threads feature is essentially that plus the ability to group all the replies together.

                                                          1. 3

                                                            I wrote a small break reminder program in Fish, mostly because I could. Scripting in Fish isn’t too bad, though I generally prefer not to use any kind of shell language for that.

                                                            1. 13

                                                                I have never understood why KDE isn’t the default desktop environment for any serious Linux distribution. It feels so much more professional than anything else.

                                                              Every time I see it, it makes me want to run Linux on the desktop again.

                                                              1. 11

                                                                I suspect because:

                                                                1. IIRC Gnome has a lot more funding/momentum
                                                                2. Plasma suffers from a lot of papercuts

                                                                Regarding the second reason: Plasma overall looks pretty nice, at least at first glance. Once you start using it, you’ll notice a lot of UI inconsistencies (misaligned UI elements, having to go through 15 layers of settings, unclear icons, applications using radically different styles, etc) and rather lackluster KDE first-party applications. Gnome takes a radically different approach, and having used both (and using Gnome currently), I prefer Gnome precisely because of its consistency.

                                                                1. 14

                                                                    There’s also a lot of politics involved. Most of the Linux desktop ecosystem is still driven by Red Hat and they employ a lot of FSF evangelists. GNOME had GNU in its name and was originally created because of the FSF’s objections to Qt (prior to its license change) and that led to Red Hat preferring it.

                                                                  1. 6

                                                                    Plus GNOME and all its core components are truly community FLOSS projects, whereas Qt is a corporate, for-profit project which the Qt company happens to also provide as open source (but where you’re seriously railroaded into buying their ridiculously expensive licenses if you try to do anything serious with it or need stable releases).

                                                                    1. 7

                                                                        No one ever talks about Cinnamon on Mint but I really like it. It looks exactly like all the screenshots in the article. Some of the customisation is maybe a little less convenient, but I have always managed to get things looking exactly how I want them to, and I am hardly a Linux power user (recent Windows refugee). Given that it seems the majority of arguments for Plasma are that it is more user friendly and easier to customise, I would be interested to hear people’s opinions on Cinnamon vs Plasma. I had Plasma Mobile on my PinePhone for a day or two but it was too glitchy and I ended up switching to Mobian. This is not a criticism of Plasma, rather an admission that I have not really used it and have no first hand knowledge.

                                                                      1. 7

                                                                        I have not used either in anger but there’s also a C/C++ split with GTK vs Qt-based things. C is a truly horrible language for application development. Modern C++ is a mediocre language for application development. Both have some support for higher-level languages (GTK is used by Mono, for example, and GNOME also has Vala) but both are losing out to things like Electron that give you JavaScript / TypeScript environments and neither has anything like the developer base of iOS (Objective-C/Swift) or Android (Java/Kotlin).

                                                                        1. 4

                                                                          As an unrelated sidenote, C is also a decent binding language, which matters when you are trying to use one of those frameworks from a language that is not C/C++. I wish Qt had a well-maintained C interface.

                                                                          1. 8

                                                                            I don’t really agree there. C is an adequate binding language if you are writing something like an image decoder, where your interface is expressed as functions that take buffers. It’s pretty terrible for something with a rich interface that needs to pass complex types across the boundary, which is the case for GUI toolkits.

                                                                            For example, consider something like ICU’s UText interface, for exposing character storage representations for things like regex matching. It is a C interface that defines a structure that you must create with a bunch of callback functions defined as function pointers. One of the functions is required to set up a pointer in the struct to contain the next set of characters, either by copying from your internal representation into a static buffer in the structure or providing a pointer and setting the length to allow direct access to a contiguous run of characters in your internal representation. Automatically bridging this from a higher-level language is incredibly hard.

                                                                            Or consider any of the delegate interfaces in OpenStep, which in C would be a void* and a struct containing a load of function pointers. Bridging this with a type-safe language is probably possible to do automatically but it loses type safety at the interfaces.

                                                                            C interfaces don’t contain anything at the source level to describe memory ownership. If a function takes a char*, is that a pointer to a C string, or a pointer to a buffer whose length is specified elsewhere? Is the callee responsible for freeing it or the caller? With C++, smart pointers can convey this information and so binding generators can use it. Something like SWIG or Sol3 can get the ownership semantics right with no additional information.

                                                                            Objective-C is a much better language for transparent bridging. Python, Ruby, and even Rust can transparently consume Objective-C APIs because it provides a single memory ownership model (everything is reference counted) and rich introspection functionality.

                                                                            1. 2

                                                                              Fair enough. I haven’t really been looking at Objective-C headers as a binding source. I agree that C’s interface is anemic. I was thinking more from an ABI perspective, ie. C++ interfaces tend to be more reliant on inlining, or have weird things like exceptions, as well as being totally compiler dependent. Note how for instance SWIG still generates a C interface with autogenerated glue. Also the full abi is defined in like 15 pages. So while it’s hard to make a high-level to high-level interface in C, you can manually compensate from the target language; with C++ you need a large amount of compiler support to even get started. Maybe Obj-C strikes a balance there, I haven’t really looked into it much. Can you call Obj-C from C? If not, it’s gonna be a hard sell to a project as a “secondary api” like llvm-c, because you don’t even get the larger group of C users.

                                                                              1. 6

                                                                                  Also the full abi is defined in like 15 pages

                                                                                That’s a blessing and a curse. It’s also an exaggeration, the SysV x86-64 psABI is 68 pages. On x86-32 there are subtle differences in calling convention between Linux, FreeBSD, and macOS, for example, and Windows is completely different. Bitfields are implementation dependent and so you need to either avoid them or understand what the target compiler does. All of this adds up to embedding a lot of a C compiler in your other language, or just generating C and delegating to the C compiler.

                                                                                Even ignoring all of that, the fact that the ABI is so small is a problem because it means that the ABI doesn’t fully specify everything. Yes, I can look at a C function definition and know from reading a 68-page doc how to lower the arguments for x86-64 but I don’t know anything about who owns the pointers. Subtyping relationships are not exposed.

                                                                                  To give a trivial example from POSIX, the connect function takes three arguments: an int, a const struct sockaddr *, and a socklen_t. Nothing in this tells me:

                                                                                • That the second argument is never actually a pointer to a sockaddr structure, it is a pointer to some other structure that starts with the same fields as the sockaddr.
                                                                                • That the third argument must be the size of the real structure that I point to with the second argument.
                                                                                • That the second parameter is not captured and I remain responsible for freeing it (you could assume this from const and you’d be right most of the time).
                                                                                • That the first parameter is not an arbitrary integer, it must be a file descriptor (and for it to actually work, that file descriptor must be a socket).

                                                                                I need to know all of these things to be able to bridge from another language. The C header tells me none of these.

                                                                                Apple worked around a lot of these problems with CoreFoundation by adding annotations that basically expose the Objective-C object and ownership model into C. Both Microsoft and Apple worked around it for their core libraries by providing IDL files (in completely different formats) that describe their interfaces.

                                                                                So while it’s hard to make a high-level to high-level interface in C, you can manually compensate from the target language; with C++ you need a large amount of compiler support to even get started

                                                                                You do for C as well. Parsing C header files and extracting enough information to be able to reliably expose everything with anything less than a full C compiler is not going to work and every tool that I’ve seen that tries fails in exciting ways. But that isn’t enough.

                                                                                In contrast, embedding something like clang’s libraries is sufficient for bridging a modern C++ or Objective-C codebase because all of the information that you need is present in the header files.

                                                                                Can you call Obj-C from C?

                                                                                Yes. Objective-C methods are invoked by calling objc_msgSend with the receiver as the first parameter and the selector as the second. The Objective-C runtime provides an API for looking up selectors from their name. Many years ago, I wrote a trivial libClang tool that took an Objective-C header and emitted a C header that exposed all of the methods as static inline functions. I can’t remember what I did with it but it was on the order of 100 lines of code, so rewriting it would be pretty trivial.

                                                                                If not, it’s gonna be a hard sell to a project as a “secondary api” like llvm-c, because you don’t even get the larger group of C users.

                                                                                There are fewer C programmers than C++ programmers these days. This is one of the problems that projects like Linux and FreeBSD have attracting new talent: the intersection between good programmers and people who choose C over C++ is rapidly shrinking and includes very few people under the age of 35.

                                                                                LLVM has llvm-c for two reasons. The most important one is that it’s a stable ABI. LLVM does not have a policy of providing a stable ABI for any of the C++ classes. This is a design decision that is completely orthogonal to the language. There’s been discussion about making llvm-c a thin (machine-generated) wrapper around a stable C++ interface to core LLVM functionality. That’s probably the direction that the project will go eventually, once someone bothers to do the work.

                                                                                1. 1

                                                                                  I’ve been discounting memory management because it can be foisted off onto the user. On the other hand something like register or memory passing or how x86-64 uses SSE regs for doubles cannot be done by the user unless you want to manually generate calling code in memory.

                                                                                  You do for C as well. Parsing C header files and extracting enough information to be able to reliably expose everything with anything less than a full C compiler is not going to work and every tool that I’ve seen that tries fails in exciting ways. But that isn’t enough.

                                                                                  Sure but there again you can foist things off onto the user. For instance, D only recently gained a proper C header frontend; until now it got along fine enough by just manually declaring extern(C) functions. I believe JNI and CFFI do the same. It’s annoying but it’s possible, which is more than can be said for many C++ bindings.
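
                                                                                    For example, “manually declaring the C function from the target language” looks roughly like this (Python’s ctypes rather than D, and strlen is just a stand-in); note that the ownership and length conventions have to be supplied by the person writing the declaration:

                                                                                    ```python
                                                                                    import ctypes
                                                                                    import ctypes.util

                                                                                    libc = ctypes.CDLL(ctypes.util.find_library("c"))  # assumes a Unix-ish libc can be found

                                                                                    # size_t strlen(const char *s); nothing in the C signature says the pointer must be
                                                                                    # NUL-terminated or who owns the memory; the binding author just has to know that.
                                                                                    libc.strlen.argtypes = [ctypes.c_char_p]
                                                                                    libc.strlen.restype = ctypes.c_size_t

                                                                                    print(libc.strlen(b"hello"))  # 5
                                                                                    ```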

                                                                                  There are fewer C programmers than C++ programmers these days.

                                                                                  I meant C as a secondary API, ie. C++ as primary then C as auxiliary, as opposed to Objective-C as auxiliary.

                                                                                  Yes. Objective-C methods are invoked by calling objc_msgSend with the receiver as the first parameter and the selector as the second. The Objective-C runtime provides an API for looking up selectors from their name.

                                                                                  I don’t know the requirements for deploying with the ObjC runtime. Still, nice!

                                                                                  1. 2

                                                                                    I’ve been discounting memory management because it can be foisted off onto the user.

                                                                                    That’s true only if you’re bridging two languages with manual memory management, which is not the common case for interop. If you are exposing a library to a language with a GC, automatic reference counting, or ownership-based memory management then you need to handle this. Or you end up with an interop layer that everyone hates (e.g JNI).

                                                                                    Sure but there again you can foist things off onto the user. For instance, D only recently gained a proper C header frontend; until now it got along fine enough by just manually declaring extern(C) functions. I believe JNI and CFFI do the same. It’s annoying but it’s possible, which is more than can be said for many C++ bindings.

                                                                                      Which works for simple cases. For some counterexamples, C has _Complex types, which typically follow different rules for argument passing and returning to structures of the same layout (though they sometimes don’t, depending on the ABI). Most languages don’t adopt this stupidity and so you need to make sure that your custom C parser can express some C complex type. The same applies if you want to define bitfields in C structures in another language, or if the C structure that you’re exposing uses packed pragmas or attributes, uses _Alignas, and so on. There’s a phenomenal amount of complexity that you can punt on if you want to handle only trivial cases, but then you’re using a very restricted subset of C.

                                                                                      JNI doesn’t allow calling arbitrary C functions, it requires that you write C functions that implement native methods on a Java object. This scopes the problem such that the JVM needs to be able to handle calling only C functions that use Java types (8 to 64-bit signed integers or pointers) as arguments and return values. These can then call back into the JVM to access fields, call methods, allocate objects, and so on. If you want to return a C structure into Java then you must create a buffer to store it and an object that owns the buffer and exposes native methods for accessing the fields. It’s pretty easy to use JNI to expose Java classes into other languages that don’t run in the JVM, it’s much harder to use it to expose C libraries into Java (and that’s why everyone who uses it hates it).

                                                                                    I meant C as a secondary API, ie. C++ as primary then C as auxiliary, as opposed to Objective-C as auxiliary.

                                                                                    If you have a stable C++ API, then bridging C++ provides you more semantic information for your compat layer than a C wrapper around the stable C++ API would. Take a look at Sol3 for an example: it can expose C++ objects directly into Lua, with correct memory management, without any C wrappers. C++ libraries often conflate a C API with an ABI-stable API but this is not necessary.

                                                                                    I don’t know the requirements for deploying with the ObjC runtime. Still, nice!

                                                                                    The requirements for the runtime are pretty small but for it to be useful you want a decent implementation of at least the Foundation framework, which provides types like arrays, dictionaries, and strings. That’s a bit harder.

                                                                                    1. 2

                                                                                      I don’t know. I feel like you massively overvalue the importance of memory management and undervalue the importance of binding generation and calling convention compatibility. For instance, as far as I can tell sol3 requires manual binding of function pointers to create method calls that can be called from Lua. From where I’m standing, I don’t actually save anything effort-wise over a C binding here!

                                                                                      Fair enough, I didn’t know that about JNI. But that’s actually a good example of the notion that a binding language needs to have a good semantic match with its target. C has an adequate to poor semantic match on memory management and any sort of higher-kinded functions, but it’s decent on data structure expressiveness and very terse, and it’s very easy to get basic support working quick. C++ has mangling, a not just platform-dependent but compiler-dependent ABI with lots of details, headers that often use advanced C++ features (I’ve literally never seen a C API that uses _Complex - or bitfields) and still probably requires memory management glue.

                                                                                      Remember that the context here was Qt vs GTK! Getting GTK bound to any vaguely C-like language (let’s say any language with a libc binding) to the point where you can make calls is very easy - no matter what your memory management is. At most it makes it a bit awkward. Getting Qt bound is an epic odyssey.

                                                                                      1. 4

                                                                                        I feel like you massively overvalue the importance of memory management and undervalue the importance of binding generation and calling convention compatibility

                                                                                        I’m coming from the perspective of having written interop layers for a few languages at this point. Calling conventions are by far the easiest thing to do. In increasing levels of difficulty, the problems are:

                                                                                        • Exposing functions.
                                                                                        • Exposing plain data types.
                                                                                        • Bridging string and array / dictionary types.
                                                                                        • Correctly managing memory between two languages.
                                                                                        • Exposing general-purpose rich types (things with methods that you can call).
                                                                                        • Exposing rich types in both directions.

                                                                                        C only seems easy because C<->C interop requires a huge amount of boilerplate and so C programmers have a very low bar for what ‘transparent interoperability’ means.

                                                                                        For instance, as far as I can tell sol3 requires manual binding of function pointers to create method calls that can be called from Lua. From where I’m standing, I don’t actually save anything effort-wise over a C binding here!

                                                                                        It does, because it’s an EDSL in C++, but that code could be mechanically generated (and if reflection makes it into C++23 then it can be generated from within C++). If you pass a C++ shared_ptr<T> to Sol3, then it will correctly deallocate the underlying object once neither Lua nor C++ reference it any longer. This is incredibly important for any non-trivial binding.

                                                                                        Remember that the context here was Qt vs GTK! Getting GTK bound to any vaguely C-like language (let’s say any language with a libc binding) to the point where you can make calls is very easy - no matter what your memory management is.

                                                                                        Most languages are not ‘vaguely C-like’. If you want to use GTK from Python, or C#, how do you manage memory? Someone has had to write bindings that do the right thing for you. From my vague memory, it uses GObject, which uses C macros to define objects and to manage reference counts. This means that whoever manages the binding layer has had to interop with C macros (which are far harder to get to work than C++ templates - we have templates working for the Verona C++ interop layer but we’re punting on C macros for now and will support a limited subset of them later). This typically requires hand writing code at the boundary, which is something that you really want to avoid.

                                                                                        Last time I looked at Qt, they were in the process of moving from their own smart pointer types to C++11 ones but in both cases as long as your binding layers knows how to handle smart pointers (which really just means knowing how to instantiate C++ templates and call methods on them) then it’s trivial. If you’re a tool like SWIG, then you just spit out C++ code and make the C++ compiler handle all of this for you. If you’re something more like the Verona interop layer then you embed a C++ parser / AST generator / codegen path and make it do it for you.

                                                                                        1. 1

                                                                                          I’m coming from the perspective of having written interop layers for a few languages at this point.

                                                                                          Yeah … same? I think it’s just that I tend to be obsessed with variations on C-like languages, which colors my perception. You sound like you’re a lot more broad in your interests.

                                                                                          C only seems easy because C<->C interop requires a huge amount of boilerplate and so C programmers have a very low bar for what ‘transparent interoperability’ means.

                                                                                          I don’t agree. Memory management is annoying, sure, and having to look up string ownership for every call gets old quick, but for a stateful UI like GTK you can usually even just let it leak. I mean, how many widgets does a typical app need? Grab heaptrack, identify a few sites of concern and jam frees in there, and move on with your life. It’s possible to do it shittily easily, and I value that a lot.

                                                                                          If you’re a tool like SWIG, then you just spit out C++ code and make the C++ compiler handle all of this for you.

                                                                                          Hey, no shade on SWIG. SWIG is great, I love it.

                                                                                          From my vague memory, it uses GObject, which uses C macros to define objects and to manage reference counts. This means that whoever manages the binding layer has had to interop with C macros

                                                                                          Nah, it’s really only a few macros, and they do fairly straightforward things. Last time I did GTK, I just wrote those by hand. I tend to make binders that do 90% of the work - the easy parts - and not worry about the rest, because that conserves total effort. With C that works out because functions usually take structs by pointer, so if there’s a weird struct that doesn’t generate I can just define a close-enough facsimile and cast it, and if there’s a weird function I define it. With C++ everything is much more interdependent - if you have a bug in the vtable layout, there’s nothing you can do except fix it.

                                                                                          When I’ll eventually want Qt in my current language, I’ll probably turn to SWIG. It’s what I used in Jerboa. But it’s an extra step to kludge in, that I don’t particularly look forward to. If I just want a quick UI with minimal effort, GTK is the only game in town.

edit: For instance, I just kludged this together in half an hour: https://gist.github.com/FeepingCreature/6fa2d3b47c6eb30a55846e18f7e0e84c This is the first time I’ve tried touching the GTK headers on this language. It’s exposed issues in the compiler, it’s full of hacks, and until the last second I didn’t really expect it to work. But stupid as it is, it does work. I’m not gonna do Qt for comparison, because I want to go to bed soon, but I feel it’s not gonna be half an hour. Now to be fair, I already had a C header importer around, and that’s a lot of time already sunk in that C++ doesn’t get the benefit of. But also, I would not have attempted to write even a kludgy C++ header parser, because I know that I would have given up halfway through. And most importantly - that kludgy C header importer was already practically useful after a good day of work.

edit: If there’s a spectrum from “if it’s worth doing, it’s worth doing properly” to “minimal distance to the cool thing”, I’m heavily on the right side. I think that might be the personality difference at play here? For me, a binding generator is purely a tool to get at a juicy library that I want to use. There’s no love of the craft lost there.

                                                                          2. 1

So does Plasma support Electron/Swift/Java/Kotlin? I know Electron applications run on my desktop, so I assume you mean directly as part of the desktop. If so, that is pretty cool. Please forgive my ignorance, desktop UI frameworks are way outside my usual area of expertise.

                                                                          3. 2

I only minimally use KDE on the computers at my university’s CS department, but I’ve been using cinnamon for almost four years now. I think that Plasma wins on customizability. There are just so many things that can be adjusted.

                                                                            Cinnamon on the other hand feels far more polished, with fewer options for customization. I personally use cinnamon with Arch, but when I occasionally use Mint, the full desktop with all of mint’s applications is very cohesive and well thought out, though not without flaws.

I sometimes think that cinnamon isn’t evangelized as frequently because it’s well enough designed that it sort of fades into the background while you’re using it.

                                                                      2. 3

                                                                        I’ve used Cinnamon for years, but it inevitably breaks (or I break it). I recently looked into the alternatives again, and settled on KDE because it looked nice, it and Gnome are the two major players so things are more likely to Just Work, and it even had some functionality I wanted that Gnome didn’t. I hopped back to Cinnamon within the week, because yeah, the papercuts. Plasma looks beautiful in screenshots, and has a lot of nice-sounding features, but the moment you actually use it, you bang your face into something that shouldn’t be there. It reminded me of first trying KDE in the mid-2000s, and it was rather disappointing to feel they’ve been spinning in circles in a lot of ways. I guess that isn’t exactly uncommon for the Linux desktop though…

                                                                        1. 3

I agree with your assessment of Plasma and GNOME (Shell). Plasma mostly looks fine, but every single time I use it–without fail–I find some buggy behavior almost immediately, and it’s always worse than just having misaligned labels on some UI elements, too. It’s more like I’ll check a setting checkbox and then go back and it’s unchecked, or I’ll try to put a panel on one or another edge of the screen and it’ll cause the main menu to open on the opposite edge like it looped around, or any other number of things that just don’t actually work right. Even after they caved on allowing a single-key desktop shortcut (i.e., using the Super key to open the main menu), it didn’t work right when I would plug/unplug my laptop from my desk monitors because of some weirdness around the lifecycle of the panels and the main menu button; the Super key shortcut would only work either while docked or while undocked, but never both. That one was a little while ago, so maybe it’s better now.

                                                                          Ironically, Plasma seems to be all about “configuration” and having 10,000 knobs to tweak, but the only way it actually works reasonably well for me is if you don’t touch anything and use it exactly how the devs are dog-fooding it.

                                                                          The GNOME guys had the right idea when it came to stripping options, IMO. It’s an unpopular opinion in some corners, but I think it’s just smart to admit when you don’t have the resources to maintain a high bar of quality AND configurability. You have to pick one, and I think GNOME picked the right one.

                                                                        2. 5

I have never understood why KDE isn’t the default DE for any serious Linux distribution.

                                                                          Me neither, but I’m glad to hear it is the default desktop experience on the recently released Steam Deck.

                                                                          1. 3

                                                                            Do SUSE/OpenSUSE not count as serious Linux distributions anymore?

                                                                            It’s also the default for Manjaro as shipped by Pine64. (I think Manjaro overall has several variants… the one Pine64 ships is KDE-based.)

                                                                            Garuda is also a serious Linux distribution, and KDE is their flagship.

                                                                            1. 1

I tried to use Plasma multiple times on Arch Linux but every time I tried it turned out to be too unstable. The most annoying bug I remember was that KRunner often crashed after entering some letters, taking down the whole desktop session with it. In the end I stuck with Gnome because it was stable and looked consistent. I do like the concept of Plasma but I will avoid it on any machine I do serious work with.

                                                                            1. 10

I’m baffled why people prefer going through the pains of setting up Prometheus + Grafana over adding a few lines of code in their app to send metrics to InfluxDB 2.0 and have all the charting and alerting capabilities in a smaller footprint and simpler services topology. Additionally, the push model is safer under heavy load as it’s usually a better choice to drop metrics on the floor rather than choke the app by means of repeated requests from Prometheus (the “push gateway” isn’t recommended as the primary reporting method as per the docs). InfluxDB has a snappy UI, a pretty clean query language, retention policies with downsampling, HTTP API, dashboards with configuration storable in version control systems. It doesn’t have counters/gauges/histograms — there are just measurements. The only thing I miss is metrics reporting over UDP.
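For context, the “few lines of code” really are few. A minimal sketch of a push from Go, assuming the official influxdb-client-go v2 client (the URL, token, org, bucket and measurement names are all placeholders):

    package main

    import (
    	"context"
    	"time"

    	influxdb2 "github.com/influxdata/influxdb-client-go/v2"
    )

    func main() {
    	// Placeholder connection details; adjust for your setup.
    	client := influxdb2.NewClient("http://localhost:8086", "my-token")
    	defer client.Close()

    	write := client.WriteAPIBlocking("my-org", "my-bucket")

    	// One measurement with a tag and a field. There are no
    	// counter/gauge/histogram types, just measurements.
    	point := influxdb2.NewPoint(
    		"http_requests",
    		map[string]string{"host": "web-1"},
    		map[string]interface{}{"duration_ms": 42.0},
    		time.Now(),
    	)
    	_ = write.WritePoint(context.Background(), point)
    }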

                                                                              1. 17

I’m baffled why people prefer going through the pains of setting up Prometheus + Grafana over adding a few lines of code in their app to send metrics to InfluxDB 2.0 […]

The community around Prometheus and Grafana is huge. The kube-prometheus project alone can take a kubernetes cluster’s observability from zero to hero quickly. I admit that there is a learning curve there, but the value proposition is amazing. The community is amazing. There are many online collections of ready-made rules teams can use.

Even famous SaaS monitoring solutions offer poorer defaults than kube-prometheus’s collection of 80+ k8s alerts. Prometheus is cheap: you just throw RAM and GBs of storage at it and it does all the heavy lifting. Good luck enabling an OpenMetrics integration to drive billions of metrics to SaaS monitoring systems.

Assuming I install InfluxDB, how do I drive metrics from cert-manager to InfluxDB? Cert-manager is an operator that mostly just works but needs monitoring in case SSL cert issuance fails for whatever reason. For most solutions the infra team would have to build that monitoring themselves. But cert-manager (like many others) exposes Prometheus metrics (OpenMetrics compatible), and as a bonus there’s a good community Grafana dashboard ready to be used.

                                                                                Additionally, the push model is safer under heavy load as it’s usually a better choice to drop metrics on the floor rather than choke the app by means of repeated requests from Prometheus […]

In my experience it’s the other way around: if the exporter is built as it should be, non-blocking, then it’s as light as serving a static page with a few KB of text. I’ve seen applications slowed down by third-party push libraries or by the agent misbehaving (e.g. not being able to push metrics), leading to small but visible application performance degradation. Again, one could argue about implementation, but the footprint is visibly bigger in all respects.
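To make the “static page” point concrete, here’s roughly what a bare-bones non-blocking exporter looks like (a minimal sketch using client_golang; the metric name and port are made up):

    package main

    import (
    	"net/http"

    	"github.com/prometheus/client_golang/prometheus"
    	"github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // Made-up metric name for illustration.
    var requestsTotal = prometheus.NewCounter(prometheus.CounterOpts{
    	Name: "myapp_requests_total",
    	Help: "Total number of handled requests.",
    })

    func main() {
    	prometheus.MustRegister(requestsTotal)

    	// Hot path: bump an in-process counter, no I/O. Prometheus scrapes
    	// the current values from /metrics on its own schedule.
    	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    		requestsTotal.Inc()
    		w.Write([]byte("ok"))
    	})
    	http.Handle("/metrics", promhttp.Handler())
    	http.ListenAndServe(":8080", nil)
    }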

A pull-based metrics model comes into play when you have hundreds if not thousands of services running and you don’t need detailed metrics (e.g. I don’t wanna get metrics every second). The default scrape interval is 30s. You can use buckets to collect detailed metrics falling into predefined percentiles, but sub-30s spikes cannot be debugged. Prom works like this for a reason. Modern monitoring systems are metric data lakes, and Prometheus strikes a perfect balance between cost (in terms of storage, processing, etc.) and visibility.

                                                                                InfluxDB has a snappy UI, a pretty clean query language, retention policies with downsampling, HTTP API, dashboards with configuration storable in version control systems.

There are ready-made Prometheus rules and Grafana dashboards for about anything. Granted, many are poorly constructed or report the wrong thing (I’ve had poor experiences with several community dashboards), but they usually work out of the box for most people.

                                                                                1. 2

                                                                                  In my experience, the times when a system is lagging or unresponsive are exactly the times when I want to capture as many metrics as possible. In a push model I will still get metrics leading up to the malfunction; in a pull model I may miss my window.

                                                                                  As for push slowing down an application I agree that can happen, but it can also happen with pull (consider a naive application that serves metrics over http and does blocking writes to the socket). We have chosen to push metrics using shared memory so the cost of writing is at most a few hundred nanoseconds. A separate process can then transfer the metrics out of the server via push or pull, whichever is appropriate.

                                                                                  1. 2

                                                                                    In a push model I will still get metrics leading up to the malfunction;

In modern observability infrastructures, the idea is to combine tracing, metrics and logs. What you’re describing is done a lot better by an APM/tracer than by metrics.

I’ve seen metrics being used to measure time spent in functions, but that’s abusing the pattern I think. Of course if there’s no tracer/APM then it’s fine.

                                                                                    Pushing metrics for every call leading up to the malfunction is usually dismissed because it’s a high cost, low value proposition.

                                                                                    1. 1

I didn’t say anything about metrics for every call; as you point out, that would be done better with tracing or logging. We do that too, but it serves a different purpose. A single process might handle hundreds of thousands of messages in a single second, and that granularity is too fine for humans to handle. Aggregating data is crucial, either into uniform time series (e.g. 30-second buckets so all the timestamps line up) or non-uniform time series (e.g. emitting a metric when a threshold is crossed). We keep counters in shared memory and scrape them with a separate process, resulting in a uniform time series. This is no different than what you would get scraping /proc on Linux and sending it to a tsdb, but it is all done in userspace.
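A rough in-process sketch of the idea (not our actual shared-memory implementation; assumes Go 1.19+ for atomic.Uint64): the hot path only bumps a counter, and a sampler turns it into a uniform time series:

    package main

    import (
    	"fmt"
    	"sync/atomic"
    	"time"
    )

    var messagesHandled atomic.Uint64

    func handleMessage() {
    	// Hot path: a single atomic increment, no I/O.
    	messagesHandled.Add(1)
    }

    func main() {
    	// Sampler: one value per fixed bucket, so all timestamps line up,
    	// similar to scraping counters from /proc and shipping them to a tsdb.
    	go func() {
    		var prev uint64
    		for now := range time.Tick(30 * time.Second) {
    			cur := messagesHandled.Load()
    			fmt.Printf("%s messages=%d\n", now.Format(time.RFC3339), cur-prev)
    			prev = cur
    		}
    	}()

    	for {
    		handleMessage()
    		time.Sleep(time.Millisecond) // stand-in for real work
    	}
    }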

                                                                                      As for push vs. pull, consider this case: a machine becomes unresponsive because it is swapping to disk. In a push model with a short interval you might see metrics showing which process was allocating memory before metrics are simply cut off. In a pull model the machine might become unresponsive before the metrics are collected, and at that point it is too late.

                                                                                      If you have a lot of machines to monitor but limited resources for collecting metrics, a pull model makes sense. In our case we have relatively few machines so the push model is a better tradeoff to avoid losing metrics.

In my ideal world metrics would be pushed on a coarse-grained timer, or earlier when a threshold is reached. I think Brendan Gregg has written about doing something like this, though I do not have the link handy.

                                                                                2. 14

Years ago at GitLab we started with InfluxDB. It was nothing short of a nightmare, and we eventually migrated to Prometheus. I distinctly remember two issues:

One: each row (I forgot the exact terminology used by Influx) is essentially uniquely identified by the pair (tags + values, timestamp). If the tags and their values are the same, and so is the timestamp, InfluxDB just straight up (and silently) overwrites the data. This resulted in us having far less data than we’d expect, until we: 1) recorded timestamps in microseconds/nanoseconds (don’t remember which) 2) added a slight random value to it. Even then it was still guesswork as to how much data would be overwritten. You can find some old discussions on this here: https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/86
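To illustrate with made-up data (InfluxDB line protocol; not the actual series we had): writing

    cpu,host=web-1 value=0.30 1609459200000000000
    cpu,host=web-1 value=0.95 1609459200000000000

leaves a single row containing 0.95, because the series key (measurement + tags) and the timestamp are identical, so the second write silently replaces the first. Hence the finer-grained timestamps plus random jitter workaround.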

                                                                                  Two: IIRC at some point they removed clustering support from the FOSS offering and made it enterprise only. This left a bad taste in our mouth, and IIRC a lot of people were unhappy with it at the time.

                                                                                  Setting that aside, we had a lot of trouble scaling InfluxDB. It seemed that at the time it simply couldn’t handle the workload we were throwing at it. I’m not sure if it has improved since, but frankly I would stay away from push based models for large environments.

                                                                                  1. 7

                                                                                    I honestly don’t know whether push behaves better than pull during incidents; I prefer to drop application traffic before dropping metrics. But either way, I think a better argument for pull-based metrics is that they are more composable. With Prometheus as a concrete example, it’s possible to forward metrics from one service to another, from one scraper to another, and from one database to another, by only changing one scraper and not the target service/scraper/database.

                                                                                    1. 4

                                                                                      I’m using Influx myself, but the advantage of Prometheus is that you don’t have to write your own dashboards for common services. The Prometheus metric naming scheme allows folks to write and share dashboards that are actually reusable. I haven’t had that experience with InfluxDB (or Graphite before it).

                                                                                      (Plus, you know, licensing.)

                                                                                      1. 2

                                                                                        One reason to use grafana is if you have data that you do not want to bring into influxdb. Both influx and prometheus have their own graphing capabilities and can be used on their own or with grafana. We use influxdb with grafana as a frontend so we visualize both the metrics stored in influx and other sources (such as kudu) on the same dashboard.

                                                                                        1. 2

                                                                                          I tend to agree but after using influxDB for quite some time I find its semantics to be rather awful and the company to be extremely difficult to deal with.

I’m currently sizing up a migration to Prometheus, despite strongly disliking the polling semantics and the fact that it intentionally doesn’t scale, which leaves you with many sources of truth for data.

                                                                                          Oh well.

                                                                                          1. 2

I’ve had circular arguments about handling some monitoring while still complying with the strongly discouraged push gateway method. I think pull makes sense once you start using Alertmanager and alerting when a metric fails. I like the whole setup except Grafana’s log solution: Loki and fluent have been a long-running source of frustration, and there seem to be very limited resources on the internet about them. It sounds good, but it is extremely hard to tame compared to Prometheus. The real win over previous solutions with Prometheus and Grafana was the alerting rules. I find it easier to express complex metrics for monitoring quickly.

                                                                                          1. 1

I tried on my system (which isn’t THAT old) and it took 48s to compile Go :(. It’s mostly a single-core build, so having more cores doesn’t really make a difference, and the latest Intel cores aren’t that much faster. I would guess that the WD Black SSD is most of the difference – I have a random off-brand SSD.

                                                                                            1. 5

                                                                                              40s on an M1 Pro Mac.

                                                                                              1. 2

                                                                                                ./make.bash 278.53s user 44.97s system 374% cpu 1:26.27 total

45s on a battery-powered and fanless MacBook Air. The M1 ARM processors are the most notable performance boost since SSDs made their way into personal computers.

                                                                                                1. 3

                                                                                                  If that is time output, then it took 1 minute and 26 seconds in total. 45s is the time spent in the kernel for the process (e.g. for handling system calls).
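To put numbers on it: 278.53 s user + 44.97 s sys is roughly 323.5 s of CPU time spread over 86.27 s of wall clock, which is where the ~374% cpu figure comes from. The 1:26 wall-clock total is the number to compare against the other timings in this thread.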

                                                                                                  1. 3

                                                                                                    🙈 You’re absolutely correct, I was confused by the time output. On my Linux machine time has a different output in bash:

                                                                                                    real  0m52.402s
                                                                                                    user  4m8.435s
                                                                                                    sys  0m18.317s
                                                                                                    

                                                                                                    which I prefer. Anyways, the M1 machine is plenty fast for my use case.

                                                                                                    1. 2

                                                                                                      I was confused by the time output.

Happens :), especially with the different field orders.

                                                                                                      Anyways, the M1 machine is plenty fast for my use case.

I also had a MacBook Air M1 prior to the MacBook Pro 14”. The M1 Air is an awesome machine for development. And I’d say for most people who don’t need >16 GB RAM, lots of cores, or many displays, the MacBook Air M1 is the better machine.

                                                                                                  2. 1

                                                                                                    I’m not sure if the ARM and x86 compilers are really doing the same work unfortunately.

                                                                                                    1. 1

If people are just compiling for the native target, it will be a bit different at the end, yeah. But typically little time is spent in the backend, so it doesn’t matter that much. And this is Go, which has much less going on in the frontend/middle-end than something LLVM-based.

                                                                                                  3. 2

                                                                                                    I also get 40s on a M1 Pro and 35s on a Ryzen 5900X.

                                                                                                    I am still amazed that my laptop that I can put in my backpack, with fans that barely spin up at all, is close to a 105W TDP Ryzen 5900X. Also not even that far away from the Intel Core i9-12900K from the article that has base power use of 125W and maximum turbo power use of 241W.

                                                                                                    My default is now to do all development on the M1 Pro, including training basic machine learning models (the AMX matrix multiplication co-processors make training small networks pretty fast). I only use a headless GPU machine for training/finetuning large models.

                                                                                                    1. 1

                                                                                                      I get similar timings on my Ryzen 5600X: 39-40 seconds. My setup is optimized for noise and size though (it’s a SFFPC), and the CPU is deliberately set to limit the temperature to 75C. This way I can keep the fans super quiet, with their speed only increasing when there is prolonged load. I think I also slightly undervolted the CPU, but I’m not entirely sure.

                                                                                                      I did experiment with faster fan speeds, but found it didn’t matter much unless I really crank them up. Even then we’re talking about a difference of maybe a few degrees Celsius. IIRC the Ryzen CPUs simply run a little on the hotter end when under load.

                                                                                                      The M1 is definitely an impressive CPU, and I really hope we start to see some more diversity in the CPU landscape over time.

                                                                                                  4. 2

                                                                                                    49s on a Ryzen 9 3900X and 3x Samsung 860 EVO (built in 2020). But honestly no complaints, that’s not bad for a whole damn compiler and language runtime.

                                                                                                    1. 1

                                                                                                      It should go faster. Was that with PBO, manual OC or stock? What RAM settings?

                                                                                                      1. 1

                                                                                                        RAM is 3200MHz CL16 with XMP settings, don’t remember how I’m fixed for OC but I didn’t push it, these days I’d rather have it work 100% of the time than have it go 5% faster but crash once a week :)

                                                                                                      2. 1

                                                                                                        I had a 3700X before. If your mainboard supports it, it’s worth considering upgrading to a 5900X some time. E.g. it builds Go in 35s, so it’s a nice speedup.

                                                                                                        1. 1

                                                                                                          I’m wondering if it’s time to replace my TR2950x

                                                                                                          ./make.bash 381.03s user 41.99s system 632% cpu 1:06.84 total

                                                                                                          1. 1

                                                                                                            I guess my machine was not fully optimized.

                                                                                                            ./make.bash 299.47s user 28.67s system 646% cpu 50.781 total

                                                                                                        2. 1

                                                                                                          It’s not really single-core and storage doesn’t have that much of an impact.

                                                                                                          NFS share: ./make.bash 193.40s user 36.32s system 690% cpu 33.273 total

                                                                                                          in-memory tmpfs: ./make.bash 190.17s user 35.55s system 708% cpu 31.843 total

(this is not exactly apples-to-apples: I’m building 1.17.6 with 1.17.4; this is on a 5950X with PBO on and RAM slightly above XMP, and an OS that doesn’t support CPPC; oh and the caches are probably pretty warm for NFS since I’ve extracted the archive on the same client machine)

                                                                                                          1. 1

                                                                                                            It’s mostly a single-core build, so having more cores doesn’t really make a difference, and the latest Intel cores aren’t that much faster.

                                                                                                            Here is a CPU profile for 30s of the ./make.bash build:

                                                                                                            https://www.dropbox.com/s/jzxklglwze7125i/go-build-cpu-counters.png?dl=0

There are a lot of concurrent regions during the build, so I definitely wouldn’t say it’s mostly single-core.

                                                                                                          1. 2

                                                                                                            Company: GaggleAMP

                                                                                                            Company site: https://www.gaggleamp.com/

                                                                                                            Position(s): Ruby on Rails Developer

                                                                                                            Location: REMOTE, EST/EDT time zone

                                                                                                            Description: GaggleAMP was the first company in the “Employee Advocacy” space and has led the field in terms of innovation and customer satisfaction. We are a fully remote development team looking to add a talented Ruby on Rails developer working EST/EDT time zone. In this role you will work on our existing codebases to improve outcomes for customers, with whom you would interface directly. Additionally, you will be part of the DevOps team ensuring our systems’ uptime. This position is perfect for developers who like to manage multiple projects at a time, enjoy direct customer interactions, and gain a sense of accomplishment from driving value through the product.

                                                                                                            Tech stack: Ruby on Rails and React.js

                                                                                                            Compensation: $60,000 - $90,000; Medical, dental, and vision insurance; Matching 401K

                                                                                                            Contact: Applications go through the job application page.

                                                                                                            1. 2

                                                                                                              GaggleAMP

                                                                                                              Uhm, is that lawyer proof?

                                                                                                              1. 1

Sorry if I don’t get it, what do you mean by “lawyer proof”?

                                                                                                                1. 3

                                                                                                                  They are probably referring to Google’s AMP service.

                                                                                                                  1. 1

                                                                                                                    Oh! No, not related at all to Google or Google’s AMP technology. Gaggle defined as flock and “AMP” as the beginning of the word “amplify” :D

                                                                                                                  2. 2

It sounds very similar to Google AMP.

                                                                                                              1. 8

                                                                                                                This looks really well done! But I’m always compelled, in response to tutorials like these, to advocate for using parser/lexer generator tools instead of hand-writing them. In my experience writing your own parser is sort of a boiling-the-frog experience, where it seems pretty simple at first and then gets steadily hairier as you add more features and deal with bugs.

                                                                                                                Of course if the goal is to learn how parsers work, it’s great to write one from scratch. I wrote a nontrivial one in Pascal back in the day (and that’s part of why I don’t want to do it again!)

                                                                                                                Of the available types of parser generators, I find PEG ones the nicest to use. They tend to unify lexing and parsing, and the grammars are cleaner than the old yacc-type LALR grammars.

                                                                                                                1. 27

                                                                                                                  This looks really well done! But I’m always compelled, in response to tutorials like these, to advocate for using parser/lexer generator tools instead of hand-writing them. In my experience writing your own parser is sort of a boiling-the-frog experience, where it seems pretty simple at first and then gets steadily hairier as you add more features and deal with bugs.

That’s the opposite of my experience. Writing a parser with a parser generator is fine for prototyping, or when you don’t care about particularly good error reporting and don’t need something you can reuse for things like LSP support, but beyond that it will hurt you. In contrast, a hand-written recursive descent parser is more effort to write at the start but is then easy to maintain and extend. I don’t think any of the production compilers that I’ve worked on has used a parser generator.

                                                                                                                  1. 8

                                                                                                                    Also once you know the “trick” to recursive descent (that you are using the function call stack and regular control flow statements to model the grammar) it is pretty straightforward to write a parser in that style, and it’s all code you control and understand, versus another tool to learn.
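For a concrete (if toy) illustration, here’s the shape in Go (a hedged sketch, not taken from any real compiler): each rule becomes an ordinary function, the call stack handles nesting, and repetition is just a loop:

    package main

    import "fmt"

    // Grammar: expr := term ('+' term)* ; term := num ('*' num)*
    type parser struct {
    	toks []string
    	pos  int
    }

    func (p *parser) peek() string {
    	if p.pos < len(p.toks) {
    		return p.toks[p.pos]
    	}
    	return ""
    }

    func (p *parser) eat(t string) bool {
    	if p.peek() == t {
    		p.pos++
    		return true
    	}
    	return false
    }

    func (p *parser) expr() int {
    	v := p.term()
    	for p.eat("+") { // repetition is a plain loop
    		v += p.term()
    	}
    	return v
    }

    func (p *parser) term() int {
    	v := p.num()
    	for p.eat("*") {
    		v *= p.num()
    	}
    	return v
    }

    func (p *parser) num() int {
    	var n int
    	fmt.Sscanf(p.peek(), "%d", &n)
    	p.pos++
    	return n
    }

    func main() {
    	p := &parser{toks: []string{"2", "+", "3", "*", "4"}}
    	fmt.Println(p.expr()) // prints 14
    }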

                                                                                                                    1. 3

                                                                                                                      What’s the trick for left recursive grammars, like expressions with infix operators?

                                                                                                                      1. 4

                                                                                                                        Shunting yard?

                                                                                                                        1. 3

                                                                                                                          The trick is the while loop. If you have something like A = A '.' ident | ident you code this as

                                                                                                                          loop {
                                                                                                                            ident()
                                                                                                                            if !eat('.') { break }
                                                                                                                          }
                                                                                                                          
                                                                                                                          1. 3

You can’t just blindly do left-recursion for obvious reasons, but infix is pretty easy to handle with Pratt parsing (shunting yard, precedence climbing - all the same).

                                                                                                                        2. 1

                                                                                                                          Having used and written both parser generators and hand-written parsers, I agree: parser generators are nice for prototypes of very simple formats, but end up becoming a pain for larger formats.

                                                                                                                        3. 7

                                                                                                                          Thanks! My thought process is normally: handwritten is simple enough to write, easy to explain, and common in real world software, so why learn a new tool when the fun part is what’s after the parser? I just try to focus on the rest.

                                                                                                                          1. 4

                                                                                                                            Lua’s grammar is also pretty carefully designed to not put you into weird ambiguous situations, with only one exception I can think of. (The ambiguity of a colon-based method call in certain contexts.)

                                                                                                                          2. 6

                                                                                                                            In general that is true, but with Lua there are compelling reasons (which I won’t get into here) to handwrite a single step parser for real implementations.

                                                                                                                            That’s what the reference implementation does, despite its author being well-known for his research in PEGs (and authoring LPEG).

                                                                                                                            1. 2

Do you have suggestions for PEG parser generators that can target JavaScript as well as Java/Kotlin? I was searching for something that I can use on a frontend webapp as well as on a backend (which is in Java).

                                                                                                                              1. 2

                                                                                                                                No, sorry; the one I’ve used only generates C.

                                                                                                                            1. 3

                                                                                                                              After six years of working for GitLab, this week is my last week. Starting 2022, I’ll be working on Inko full-time (some extra info). What I’ll be doing this week is mostly offboarding, and slowly ramping up work on Inko.

                                                                                                                              1. 36

                                                                                                                                I used Arch for years, even before they switched to systemd. I find it a little too hand-holdy now. I use NixOS, by the way.

                                                                                                                                1. 47

                                                                                                                                  I think it says a lot about both Arch and Nix that I legitimately have no idea whatsoever if you’re being serious or trolling.

                                                                                                                                  1. 31

                                                                                                                                    NixOS users are the vegans of the family

                                                                                                                                    1. 16

                                                                                                                                      As a vegan, I resent this. I never talk about being vegan unless someone asks. Or to take a shot at NixOS users apparently.

                                                                                                                                      1. 3

                                                                                                                                        If food comes up, I tell people. I don’t try to preach to people, but I’ll engage (to some extent) when asked. I do not engage when people ask questions such as “what do you even eat?” or when people are just being dicks about it—not usually worth it.

                                                                                                                                      2. 16

                                                                                                                                        Are you saying NixOS is trendy and less cruel, and better for the environment?

                                                                                                                                        1. 13

They are probably saying there’s a stereotype of many vegans being absolutely insufferable to be around, and vilifying anybody who isn’t a vegan. At least in my experience that stereotype (unfortunately) has some truth to it.

                                                                                                                                          1. 3

                                                                                                                                            It’s not vegans in general, it’s PETA. PETA knows that they can’t convert anyone who isn’t 95% of the way already, so their strategy is to remain in the public eye through shock marketing, on the “all publicity is good publicity” theory.

                                                                                                                                            I know lots of vegans who never say anything about other people’s food choices, and only mention their own choice when it comes up.

                                                                                                                                            1. 2

                                                                                                                                              How do you tell if someone’s vegan?

                                                                                                                                              Say “Look, I’m happy to make vegan entrees, sides, and desserts for the party, but you have to tell me now, instead of three hours in when I see you aren’t eating anything and ask if you’re okay.”

                                                                                                                                              1. 1

                                                                                                                                                I was a vegetarian for 20 years, including when I met my wife. My wife was never vegetarian but she doesn’t eat cheese.

                                                                                                                                                We visited a friend and found that they had made a three part pizza for the evening: one part with pepperoni and cheese, one part with cheese but no pepperoni, and one part with pepperoni but no cheese.

                                                                                                                                                We were very grateful but a bit embarrassed by the effort on our behalf.

                                                                                                                                                1. 3

If it helps, they probably saw it as a chance to challenge themselves. Or a chance to flaunt their cooking skills. I def feel both when cooking for restricted diets.

                                                                                                                                            2. 2

                                                                                                                                              There’s not the same moral dimension to Nix, compared to veganism; people who don’t use Nix are not somehow morally deficient or inappropriate. The main reason that we share Nix is to ease the suffering of users, not the suffering of computers or packages.

                                                                                                                                              1. 2

                                                                                                                                                Yeah. I get it. This is me not being confused, and deflecting this stupid comparison.

                                                                                                                                              2. 8

                                                                                                                                                Given your homepage declares your veganism in its second sentence, I don’t think you can feign too much indignation about the grandparent jibe.

                                                                                                                                                1. 1

                                                                                                                                                  Huh? I live a vegan lifestyle, yes. Do you have a problem with that?

                                                                                                                                                  Jesus. Mechanical engineers are insufferable.

                                                                                                                                                  1. 11

                                                                                                                                                    You announce you’re a vegan. That’s literally the reason for the “btw, I’m vegan/arch user” joke. There’s nothing wrong with that, but as you can see the stereotype matches reality.

                                                                                                                                                    1. 2

                                                                                                                                                      They announce it on their webpage… that you chose to go and read…

                                                                                                                                                      1. 1

                                                                                                                                                        I am fully aware.

                                                                                                                                                  2. 2

If you’ve used NixOS, you’ll know “less cruel” doesn’t apply.

                                                                                                                                                    1. 2

                                                                                                                                                      It’s been a long time, but I gave up pretty quickly. 😀

                                                                                                                                              1. 2

What do people prefer between KeePassXC and passwordstore.org? Personally I use the latter, but mostly because I found it first and have invested effort into setting it up. But I was thinking of switching: since keepass(xc) stores passwords in a single file, it seems easier to manage across devices. (As opposed to pass, where files for each website are generally separate.)

                                                                                                                                                1. 12

I don’t care for passwordstore.org because, as you mentioned, it leaks the accounts you have to the filesystem. If your threat model includes a multi-user system or cloud storage, then this might be a problem. With KeePassXC, this threat is mitigated as every entry is stored in a single encrypted database.

                                                                                                                                                  EDIT: typo

                                                                                                                                                  1. 8

But I was thinking of switching: since keepass(xc) stores passwords in a single file, it seems easier to manage across devices

                                                                                                                                                    It’s multiple files with pass but it can be a single git repo, which I’ve found is a lot more useful since it can detect conflicts and stuff. Running pass git pull --rebase otherlaptop fits a lot better with my mental model and existing tooling than “just put it in and the program performs some unspecified merge algorithm somehow”.

                                                                                                                                                    1. 5

I’m using Strongbox on iOS these days. When I started using it, I was hesitant to pay for the Pro Lifetime version ($60), wanting to first see how well it would work for at least a year. I’m happy to say that it’s been exceeding my expectations for well over two years now, and I did end up paying for the lifetime version.

                                                                                                                                                      1. 4

                                                                                                                                                        I used pass for a few years, but recently switched to Bitwarden. I did try KeePassXC, but didn’t like it because:

                                                                                                                                                        • For some reason it was using 200+ MB of memory. I think that’s a bit much for something that has to run in the background.
                                                                                                                                                        • Syncing would be a bit clunky. Technically you can stuff the DB in Git, but it’s not great.
                                                                                                                                                        • Qt applications under GNOME/Gtk WMs always look/feel a bit clunky

                                                                                                                                                        My main issues with pass were the usual ones:

• Its free-form nature makes it a bit difficult to keep password files consistent
                                                                                                                                                        • You leak file names. This isn’t the biggest deal for me, but I’d prefer to avoid it if possible
                                                                                                                                                        • Not necessarily a flaw of pass but more of my setup: I had pass auto-unlock upon logging in. This is great for me, but also means any application can just run pass ... and read passwords.

                                                                                                                                                        Bitwarden is OK, though I really hate their CLI. There’s an unofficial one (https://github.com/doy/rbw) that’s nicer to use, but it doesn’t support YubiKey logins (https://github.com/doy/rbw/issues/7), so I can’t use it.

                                                                                                                                                        1. 1

                                                                                                                                                          Syncing would be a bit clunky. Technically you can stuff the DB in Git, but it’s not great.

I do both, and have issues with neither method. My only problem with having the full history available is that rekeying the database doesn’t help on its own; you have to change every password for it to make sense. Or maybe it’s only making me aware of the actual implications of leaking the DB.

                                                                                                                                                          Qt applications under GNOME/Gtk WMs always look/feel a bit clunky

                                                                                                                                                          Working in a terminal 99% of the time, I have no issue with this. In my barebones i3 setup every GUI is ugly anyway. That irked me at first, but I learned not to care a long time ago.

                                                                                                                                                          1. 1

                                                                                                                                                            It depends on your threat model but I sync my KeePass file using cloud sync (Dropbox, Jottacloud, Syncthing).

                                                                                                                                                            Been doing this for several years and no issues.

                                                                                                                                                            What I like about KeePass is that it is available on so many platforms. So even using OpenBSD and SailfishOS, I had no issue finding clients.

                                                                                                                                                          2. 2

                                                                                                                                                            I’ve found passwordstore to be a great “clearing house” for importing from elsewhere even if it isn’t my final destination. I used it to export from 1Password and the Keepass family (which I tried but didn’t really like). I’m currently polishing off a script to import my password store to Bitwarden.

                                                                                                                                                          1. 19

                                                                                                                                                            +1

Another useful feature is abbreviations: https://fishshell.com/docs/current/cmds/abbr.html. These are like aliases, except that they are expanded in place. That is, you type sw but what you get is git switch. That’s pretty useful for keeping the cognitive load low, as what you read are full commands. It’s also helpful when you want to edit the command slightly.

                                                                                                                                                            1. 3

                                                                                                                                                              This looks nice as a replacement for some of my Fish functions. Thanks for sharing!

                                                                                                                                                              1. 1

I never knew about such a thing. I might move some aliases to abbreviations, to keep the cognitive load low. Thanks.

                                                                                                                                                              1. 15

                                                                                                                                                                I’ve been using Fish since 2015, and it’s been great. Fish not being POSIX compatible hasn’t been an issue in practice, though I don’t do a lot of shell scripting to begin with. If somebody is curious about my Fish configuration, you can find it here.

                                                                                                                                                                1. 4

For me the lack of sh-compatible syntax has been a real problem, to the point where I switched to bash at work. Fish does have the best user experience I’ve seen, but the need for Fish-specific scripts is a problem, in particular with Nix or any tool that you need to load from your profile. There are wrappers like bass, but they don’t always work and have overhead.

                                                                                                                                                                  1. 30

Just because fish is your interactive shell doesn’t mean that you need to start shell scripts with #!/usr/bin/env fish.

                                                                                                                                                                    1. 9

I never understood what people are doing all day in their prompt that needs POSIX compatibility. The syntax for calling commands is the same.

I think it is mostly a meme, or a simple matter of running copy-pasted scripts from the web and not understanding how interpreter resolution works or that you can manually define it.

                                                                                                                                                                      1. 1

                                                                                                                                                                        Not necessarily whole scripts. Sometimes you want to paste just a couple commands into the interactive prompt.

                                                                                                                                                                      2. 3

                                                                                                                                                                        But for stuff like Nix, don’t you have to run the setup scripts in your interactive shell with source or equivalent, so they can set environment variables and such?

                                                                                                                                                                        1. 3

                                                                                                                                                                          In Unix, all child processes of a process will inherit the parent’s environment. You should be able to write all your scripts as POSIX compliant (or bash compliant) and run them from inside fish without an issue, as long as you specify the interpreter like so: bash myscript.sh

                                                                                                                                                                          1. 8

                                                                                                                                                                            The problem, if I understood it right (I’ve never used things like Nix) is that these are not things you’re supposed to run, but things you’re supposed to source. I.e. you source whatever.sh so that you get the right environment to do everything else. Sort of like the decade-old env.sh hack for closed-source toolchains and the like, which you source so that you can get all the PATH and LD_LIBRARY_PATH hacks only for this interactive session instead of polluting your .profile with them.

                                                                                                                                                                            1. 1

I see, that makes sense. I guess I didn’t consider that; I wonder how the activate script generated by a Python virtual environment would work with Fish. Even a relatively fancy .profile file might be incompatible with Fish.

                                                                                                                                                                      3. 8

I usually just switch to bash when I need to run a command this way. And honestly, I’m more annoyed at commands like these that modify your shell environment, and thus force you to use a POSIX-compatible shell, than I am at fish for deliberately trying something different that isn’t POSIX.

                                                                                                                                                                        1. 1

                                                                                                                                                                          Fortunately some commands are designed to output a shell snippet to be used with eval

                                                                                                                                                                          eval $(foo) # foo prints 'export VAR=bar'
                                                                                                                                                                          

                                                                                                                                                                          In that case you can pipe output of foo to Fish’s source

                                                                                                                                                                          foo | source
                                                                                                                                                                          
                                                                                                                                                                          1. 2

                                                                                                                                                                            No, that’s exactly what you can’t do, the code won’t be valid for source-ing (unless those commands specifically output csh-style rather than posix-style script)

                                                                                                                                                                            Though apparently these days fish does support the most common POSIX-isms

                                                                                                                                                                            1. 1

                                                                                                                                                                              I mean only the case where you set env variables (like luarocks path)

                                                                                                                                                                        2. 4

                                                                                                                                                                          I also had problems with bass. It was too slow to run in my config.fish. However, I switched to https://github.com/oh-my-fish/plugin-foreign-env and it’s worked perfectly for me. And you don’t need oh-my-fish to use it — I installed it with the plugin manager https://github.com/jorgebucaran/fisher.

                                                                                                                                                                          1. 2

Ah, I hadn’t seen this one; if it succeeds in setting up Nix, then it’s party time!

                                                                                                                                                                            1. 3

                                                                                                                                                                              Not a fish user, but since you’re a Nix user I would also recommend checking out direnv which has fish support. For nix in direnv specifically I would also recommend something like nix-direnv (and several others) which caches and also gcroots environments so overheads are next to negligible when things remain unchanged.

                                                                                                                                                                            2. 2

                                                                                                                                                                              That looks good enough to make me want to try fish again. I had not seen it last time I tried fish. Thanks for pointing it out.

                                                                                                                                                                        1. 19

                                                                                                                                                                          Looks like he hit 56 GB/s when it was run on the AMD 5950x. The author of the code said he spent months working on this. Incredible dedication.

                                                                                                                                                                          1. 5

                                                                                                                                                                            Yeah but can they invert a binary tree on a whiteboard? If not this person doesn’t stand a chance of getting hired anywhere /s

                                                                                                                                                                            1. 1

It’s a shame, as the inverted-binary-tree presentation task is well known and less-skilled devs can just practice it ahead of time. I can’t imagine being asked to write FizzBuzz while avoiding the heap, with the freedom to use Linux API calls that would crash the app if its output is not piped into another application. The author of the answer potentially found a kernel bug while working on this. Even after seeing this a few days ago, I’m still seriously impressed with this person’s work.

                                                                                                                                                                          1. 18

                                                                                                                                                                            Pattern matching has been available in functional programming languages for decades now, it was introduced in the 70s. (Logic programming languages expose even more expressive forms, at higher runtime cost.) It obviously improves readability of code manipulating symbolic expressions/trees, and there is a lot of code like this. I find it surprising that in the 2020s there are still people wondering whether “the feature provides enough value to justify its complexity”.

                                                                                                                                                                            (The fact that Python did without for so long was rather a sign of closed-mindedness of its designer subgroup. The same applies, in my opinion, to languages (including Python, Go, etc.) that still don’t have proper support for disjoint union types / variants / sums / sealed case classes.)

                                                                                                                                                                            1. 45

                                                                                                                                                                              Pretty much every feature that has ever been added to every language ever is useful in some way. You can leave a comment like this on almost any feature that a language may not want to implement for one reason or the other.

                                                                                                                                                                              1. 14

                                                                                                                                                                                I think it makes more sense in statically typed languages, especially functional ones. That said, languages make different choices. For me, Python has always been about simplicity and readability, and as I’ve tried to show in the article, at least in Python, structural pattern matching is only useful in a relatively few cases. But it’s also a question of taste: I really value the simplicity of the Go language (and C before it), and don’t mind a little bit of verbosity if it makes things clearer and simpler. I did some Scala for a while, and I can see how people like the “power” of it, but the learning curve of its type system was very steep, and there were so many different ways to do things (not to mention the compiler was very slow, partly because of the very complex type system).

                                                                                                                                                                                1. 22

                                                                                                                                                                                  For the record, pattern-matching was developed mostly in dynamically-typed languages before being adopted in statically-typed languages, and it works just as well in a dynamically-typed world. (In the ML-family world, sum types and pattern-matching were introduced by Hope, an experimental dynamically-typed language; in the logic world, they are basic constructs of Prolog, which is also dynamically-typed – although some more-typed dialects exist.)

                                                                                                                                                                                  as I’ve tried to show in the article, at least in Python, structural pattern matching is only useful in a relatively few cases

                                                                                                                                                                                  Out of the 4 cases you describe in the tutorial, I believe your description of two of them is overly advantageous to if..elif:

• In the match event.get() case, the example you show is a variation of the original example (the longer of the three such examples in the tutorial), and the change you made makes it easier to write an equivalent if..elif version, because you integrated a case (from another version) that ignores all other Click() events. Without this case (as in the original tutorial example), rewriting with if..elif is harder: you need to duplicate the failure case.
                                                                                                                                                                                  • In the eval_expr example, you consider the two versions as readable, but the pattern-version is much easier to maintain. Consider, for example, supporting operations with 4 or 5 parameters, or adding an extra parameter to an existing operator; it’s an easy change with the pattern-matching version, and requires boilerplate-y, non-local transformations with if..elif. These may be uncommon needs for standard mathematical operations, but they are very common when working with other domain-specific languages.
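To make the maintenance argument concrete, here is a rough sketch of the pattern-matching style (with hypothetical UnaryOp/BinaryOp dataclasses, not the article’s exact code); adding an operator, or giving an existing one an extra field, only touches the cases that mention it:

from dataclasses import dataclass

@dataclass
class UnaryOp:
    op: str
    arg: object

@dataclass
class BinaryOp:
    op: str
    left: object
    right: object

def eval_expr(expr):
    match expr:
        case int() | float():
            return expr
        case UnaryOp('-', arg):
            return -eval_expr(arg)
        case BinaryOp('+', left, right):
            return eval_expr(left) + eval_expr(right)
        case BinaryOp('*', left, right):
            return eval_expr(left) * eval_expr(right)
        case _:
            raise ValueError(f"unknown expression: {expr!r}")

print(eval_expr(BinaryOp('+', 1, UnaryOp('-', BinaryOp('*', 2, 3)))))  # -5

The if..elif equivalent needs an isinstance check plus attribute unpacking for every arity, which is where the boilerplate-y, non-local edits come from.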
                                                                                                                                                                                  1. 1

                                                                                                                                                                                    the change you made makes it easier to write an equivalent if..elif version

                                                                                                                                                                                    Sorry if it appeared that way – that was certainly not my intention. I’m not quite sure what you mean, though. The first/original event example in the tutorial handles all click events with no filtering using the same code path, so it’s even simpler to convert. I added the Button.LEFT filtering from a subsequent example to give it a bit more interest so it wasn’t quite so simple. I might be missing something, though.

                                                                                                                                                                                    In the eval_expr example, you consider the two versions as readable, but the pattern-version is much easier to maintain. Consider, for example, supporting operations with 4 or 5 parameters, or adding an extra parameter to an existing operator;

                                                                                                                                                                                    I think those examples are very hypothetical – as you indicate, binary and unary operators aren’t suddenly going to support 4 or 5 parameters. A new operation might, but that’s okay. The only line that’s slightly repetitive is the “attribute unpacking”: w, x, y, z = expr.w, expr.x, expr.y, expr.z.

                                                                                                                                                                                    These may be uncommon needs for standard mathematical operations, but they are very common when working with other domain-specific languages.

                                                                                                                                                                                    You’re right, and that’s part of my point. Python isn’t used for implementing compilers or interpreters all that often. That’s where I’m coming from when I ask, “does the feature provide enough value to justify the complexity?” If 90% of Python developers will only rarely use this complex feature, does it make sense to add it to the language?

                                                                                                                                                                                    1. 3

                                                                                                                                                                                      that was certainly not my intention.

                                                                                                                                                                                      To be clear, I’m not suggesting that the change was intentional or sneaky, I’m just pointing out that the translation would be more subtle.

                                                                                                                                                                                      The first/original event example does not ignore “all other Click events” (there is no Click() case), and therefore an accurate if..elif translation would have to do things differently if there is no position field or if it’s not a pair, namely it would have to fall back to the ValueError case.

                                                                                                                                                                                      You’re right, and that’s part of my point. Python isn’t used for implementing compilers or interpreters all that often.

You don’t need to implement a compiler for C or Java, or anything people recognize as a programming language (or HTML or CSS, etc.), to be dealing with a domain-specific language. Many problem domains contain pieces of data that are effectively expressions in some DSL, and recognizing this can be very helpful for writing programs in those domains – if the language supports the right features to make this convenient. For example:

                                                                                                                                                                                      • to start with the obvious, many programs start by interpreting some configuration file to influence their behavior; many programs have simple needs well-served by linear formats, but many programs (eg. cron jobs, etc.) require more elaborate configurations that are DSL-like. Even if the configuration is written in some standard format (INI, Yaml, etc.) – so parsing can be delegated to a library – the programmer will still write code to interpret or analyze the configuration data.
• more generally, “structured data formats” are often DSL-shaped; ingesting structured data is something we do super-often in programs
                                                                                                                                                                                      • programs that offer a “query” capability typically provide a small language to express those queries
                                                                                                                                                                                      • events in an event loop typically form a small language
                                                                                                                                                                                  2. 14

                                                                                                                                                                                    I think it makes more sense in statically typed languages, especially functional ones.

In addition to the earlier ones gasche mentioned (it’s important to remember this history), it’s used pervasively in Erlang, and later Elixir. Clojure has core.match, Racket has match, as does Guile. It’s now in Ruby as well!

                                                                                                                                                                                    1. 3

                                                                                                                                                                                      Thanks! I didn’t know that. I have used pattern matching in statically typed language (mostly Scala), and had seen it in the likes of Haskell and OCaml, so I’d incorrectly assumed it was mainly a statically-typed language thing.

                                                                                                                                                                                      1. 1

                                                                                                                                                                                        It is an important feature of OCaml.

                                                                                                                                                                                        1. 3

                                                                                                                                                                                          I am aware - was focusing on dynamically typed languages.

                                                                                                                                                                                      2. 7

                                                                                                                                                                                        For me, it is the combination of algebraic data types + pattern matching + compile time exhaustiveness checking that is the real game changer. With just 1 out of 3, pattern matching in Python is much less compelling.
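A small sketch of what I mean (hypothetical Circle/Square classes): plain CPython gives no compile-time complaint when a case is missing; the match simply falls through at runtime.

from dataclasses import dataclass

@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

Shape = Circle | Square  # a "sum type" by convention only

def area(shape):
    match shape:
        case Circle(radius=r):
            return 3.14159 * r * r
        # oops: no case for Square, and nothing flags it ahead of time

print(area(Circle(1.0)))  # 3.14159
print(area(Square(2.0)))  # None -- the missing case fell through silently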

                                                                                                                                                                                        1. 1

I agree. I wonder if they plan to add exhaustiveness checking to mypy. The way the PEP is so no-holds-barred makes it seem like the goal was featurefulness and not an attempt to support exhaustiveness checking.

                                                                                                                                                                                          1. 2

                                                                                                                                                                                            I wonder if they plan to add exhaustiveness checking to mypy.

                                                                                                                                                                                            I don’t think that’s possible in the general case. If I understand the PEP correctly, __match_args__ may be a @property getter method, which could read the contents of a file, or perform a network request, etc.
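For context, __match_args__ is what positional class patterns consult, and it is resolved through ordinary attribute lookup when the match statement runs; here is a minimal illustration of the usual static-tuple case. If that lookup can produce something computed dynamically, a checker cannot know the attribute set ahead of time.

class Point:
    # consulted by class patterns: maps positional sub-patterns to attributes
    __match_args__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

def describe(p):
    match p:
        case Point(0, 0):
            return "origin"
        case Point(x, 0):
            return f"on the x axis at {x}"
        case Point(_, _):
            return "somewhere else"

print(describe(Point(3, 0)))  # on the x axis at 3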

                                                                                                                                                                                      3. 11

                                                                                                                                                                                        I find it surprising that in the 2020s there are still people wondering whether “the feature provides enough value to justify its complexity”.

                                                                                                                                                                                        I find it surprising that people find this surprising.

                                                                                                                                                                                        Adding features like pattern matching isn’t trivial, and adding it too hastily can backfire in the long term; especially for an established language like Python. As such I would prefer a language take their time, rather than slapping things on because somebody on the internet said it was a good idea.

                                                                                                                                                                                        1. 3

                                                                                                                                                                                          That’s always been the Scheme philosophy:

                                                                                                                                                                                          Programming languages should be designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary.

                                                                                                                                                                                          And indeed, this pays off: in the Scheme world, there’s been a match package floating around for a long time, implemented simply as a macro. No changes to the core language needed.

                                                                                                                                                                                          1. 4

                                                                                                                                                                                            No changes to the core language needed.

I’m sure you recognize that this situation does not translate to other languages, in this case Python: implementing it as a macro is just not feasible. And even in Scheme the usage of match macros is rather low. This may be because it is not that useful, but it might also be because the hurdle of adding dependencies is not worth the payoff. Once a feature is integrated into a language, its usage “costs” nothing, so the value proposition when writing code can be quite different.

                                                                                                                                                                                            1. 7

This is rather unrelated to the overall discussion, but as a user of the match macros in Scheme, I must say that I find the lack of integration into the base forms slightly annoying. You cannot pattern-match on a let or lambda; you have to use match-let and match-lambda, define/match (the latter only in Racket I think), etc. This makes reaching for pattern-matching feel heavier, and it may be a partial cause of their comparatively lower usage. ML-family languages generalize all binding positions to accept patterns, which is very nice for decomposing records, for example (or other single-case data structures). I wish Scheme dialects would embrace this generalization, but they haven’t for now – at least not Racket or Clojure.

                                                                                                                                                                                              1. 2

                                                                                                                                                                                                In the case of Clojure while it doesn’t have pattern matching built-in, it does have quite comprehensive destructuring forms (like nested matching in maps, with rather elaborate mechanisms) that works in all binding positions.

                                                                                                                                                                                                1. 2

                                                                                                                                                                                                  Nice! I suppose (from your post above) that pattern-matching is somehow “integrated” in the Clojure implementation, rather than just being part of the base macro layer that all users see.

                                                                                                                                                                                                  1. 2

                                                                                                                                                                                                    I think the case is that Clojure core special forms support it (I suppose the implementation itself is here and called “binding-forms”, which is then used by let, fn and loop which user defined macros often end up expanding to). Thus it is somewhat under the base layer that people use.

                                                                                                                                                                                                    But bear in mind this is destructuring, in a more general manner than what Python 2.x already supported, not pattern matching. It also tends to get messy with deep destructuring, but the same can be said of deep pattern matches through multiple layers of constructors.

                                                                                                                                                                                        2. 8

                                                                                                                                                                                          I agree about pattern matching and Python in general. It’s depressing how many features have died in python-ideas because it takes more than a few seconds for an established programmer to grok them. Function composition comes to mind.

                                                                                                                                                                                          But I think Python might be too complicated for pattern matching. The mechanism they’ve settled on is pretty gnarly. I wrote a thing for pattern matching regexps to see how it’d turn out (admittedly against an early version of the PEP; I haven’t checked it against the current state) and I think the results speak for themselves.

                                                                                                                                                                                          1. 6

                                                                                                                                                                                            But I think Python might be too complicated for pattern matching. The mechanism they’ve settled on is pretty gnarly.

                                                                                                                                                                                            I mostly agree. I generally like pattern matching and have been excited about this feature, but am still feeling out exactly when I’ll use it and how it lines up with my intuition.

                                                                                                                                                                                            The part that does feel very Pythonic is that destructuring/unpacking is already pretty pervasive in Python. Not only for basic assignments, but also integrated into control flow constructs. For example, it’s idiomatic to do something like:

                                                                                                                                                                                            for key, val in some_dictionary.items():
                                                                                                                                                                                                # ...
                                                                                                                                                                                            

                                                                                                                                                                                            Rather than:

                                                                                                                                                                                            for item in some_dictionary.items():
                                                                                                                                                                                                key, val = item
                                                                                                                                                                                                # ...
                                                                                                                                                                                            

                                                                                                                                                                                            Or something even worse, like explicit item[0] and item[1]. So the lack of a conditional-with-destructuring, the way we already have foreach-with-destructuring, did seem like a real gap to me, making you have to write the moral equivalent of code that looks more like the 2nd case than the 1st. That hole is now filled by pattern matching. But I agree there are pitfalls around how all these features interact.
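For instance, a small sketch of the conditional-with-destructuring I mean: the destructure itself becomes the test, instead of a bare assignment that blows up on the wrong shape.

def describe(point):
    match point:
        case (0, 0):
            return "origin"
        case (x, y):
            return f"2D point at {x}, {y}"
        case (x, y, z):
            return f"3D point at {x}, {y}, {z}"
        case _:
            return "not a point"

print(describe((1, 2)))     # 2D point at 1, 2
print(describe((1, 2, 3)))  # 3D point at 1, 2, 3
print(describe("nope"))     # not a point (str deliberately doesn't match sequence patterns)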

                                                                                                                                                                                          2. 2
                                                                                                                                                                                            for i, (k, v) in enumerate(d.items(), 1): pass
                                                                                                                                                                                            

                                                                                                                                                                                            looks like pattern matching to me

                                                                                                                                                                                            1. 2

Go aims for simplicity of maintenance and deployment. It isn’t that Go “still doesn’t have those features”; the Go authors avoided them on purpose. If you want endless abstractions in Go, embedding Lisp is a possibility: https://github.com/glycerine/zygomys

                                                                                                                                                                                              1. 5

Disjoint sums are a basic programming feature (they model data whose shape is “either this or that or that other thing”, which is ubiquitous in the wild, just like pairs/records/structs). They are not an “endless abstraction”, and they are perfectly compatible with maintenance and deployment. Go is a nice language in some respects: the runtime is excellent, the tooling is impressive, etc. But this is no rational excuse for the lack of some basic language features.

                                                                                                                                                                                                We are in the 2020s, there is no excuse for lacking support for sum types and/or pattern matching. Those features have been available for 30 years, their implementation is well-understood, they require no specific runtime support, and they are useful in basically all problem domains.

I’m not trying to bash a language and attract defensive reactions, but rather to discuss (with concrete examples) the fact that language designers’ mindsets can be influenced by some design cultures more than others, and as a result the design is sometimes held back by a lack of interest in things they are unfamiliar with. Not everyone is fortunate enough to be working with a deeply knowledgeable and curious language designer, such as Graydon Hoare; we need more such people on our language design teams. The default is for people to keep working on what they know; this sort of closed-ecosystem evolution can lead to beautiful ideas (some bits of Perl 6, for example, are very nice!), but it can also hold a language back.

                                                                                                                                                                                                1. 3

                                                                                                                                                                                                  But this is no rational excuse for the lack of some basic language features.

Yes there is. Everyone has a favorite feature, and if all of those were implemented, there would easily be feature bloat, long build times, and projects with too many dependencies that depend on too many dependencies, like in C++.

                                                                                                                                                                                                  In my opinion, the question is not if a language lacks a feature that someone wants or not, but if it’s usable for goals that people wish to achieve, and Go is clearly suitable for many goals.

                                                                                                                                                                                              2. 3

                                                                                                                                                                                                Ah yes, Python is famously closed-minded and hateful toward useful features. For example, they’d never adopt something like, say, list comprehensions. The language’s leaders are far too closed-minded, and dogmatically unwilling to ever consider superior ideas, to pick up something like that. Same for any sort of ability to work with lazy iterables, or do useful combinatoric work with them. That’s something that definitely will never be adopted into Python due to the closed-mindedness of its leaders. And don’t get me started on basic FP building blocks like map and folds. It’s well known that Guido hates them so much that they’re permanently forbidden from ever being in the language!

                                                                                                                                                                                                (the fact that Python is not Lisp was always unforgivable to many people; the fact that it is not Haskell has now apparently overtaken that on the list of irredeemable sins; yet somehow we Python programmers continue to get useful work done and shrug off the sneers and insults of our self-proclaimed betters much as we always have)

                                                                                                                                                                                                1. 25

                                                                                                                                                                                                  It is well-documented that Guido Van Rossum planned to remove lambda from Python 3. (For the record, I agree that map and filter on lists are much less useful in presence of list comprehensions.) It is also well-documented that recursion is severely limited in Python, making many elegant definitions impractical.

                                                                                                                                                                                                  Sure, Python adopted (in 2000 I believe?) list comprehensions from ABC (due to Guido working with the language in the 1980s), and a couple of library-definable iterators. I don’t think this contradicts my claim. New ideas came to the language since (generators, decorators), but it remains notable that the language seems to have resisted incorporating strong ideas from other languages. (More so than, say, Ruby, C#, Kotlin, etc.)

Meta: One aspect of your post that I find unpleasant is the tone. You speak of “sneers and insults”, but it is your post that is highly sarcastic and full of stray exaggerations aimed at this or that language community. I’m not interested in escalating in this direction.

                                                                                                                                                                                                  1. 7

                                                                                                                                                                                                    less useful in presence of list comprehension

I’m certainly biased, but I find Python’s list comprehensions an abomination for readability in comparison to higher-order pipelines or recursion. I haven’t personally coded Python in 8-9 years, but when I see examples, I feel like I need to put my head on upside down to understand them.

                                                                                                                                                                                                    1. 6

                                                                                                                                                                                                      It is also well-documented that recursion is severely limited in Python, making many elegant definitions impractical.

For a subjective definition of “elegant”. But this is basically just “Python is not Lisp” (or more specifically, “Python is not Scheme”). And that’s OK. Not every language has to have Scheme’s approach to programming, and Scheme’s history has shown that maybe it’s a good thing for other languages not to be Scheme, since Scheme has been badly held back by its community’s insistence that tail-recursive implementations of algorithms should be the only implementations of those algorithms.

You speak of “sneers and insults”, but it is your post that is highly sarcastic and full of stray exaggerations aimed at this or that language community.

                                                                                                                                                                                                      Your original comment started from a place of assuming – and there really is no other way to read it! – that the programming patterns you care about are objectively superior to other patterns, that languages which do not adopt those patterns are inherently inferior, and that the only reason why a language would not adopt them is due to “closed-mindedness”. Nowhere in your comment is there room for the (ironically) open-minded possibility that someone else might look at patterns you personally subjectively love, evaluate them rationally, and come to a different conclusion than you did – rather, you assume that people who disagree with your stance must be doing so because of personal faults on their part.

                                                                                                                                                                                                      And, well, like I said we’ve got decades of experience of people looking down their noses at Python and/or its core team + community for not becoming a copy of their preferred languages. Your comment really is just another instance of that.

                                                                                                                                                                                                      1. 8

I’m not specifically pointing out the lack of tail-call optimization (TCO) in Python (which I think is unfortunate indeed; the main argument against it is that stack traces matter, but it’s technically entirely possible to preserve call stacks on the side in a TC-optimizing implementation). Ignoring TCO for a minute, the main problem is that the CPython interpreter severely limits recursion depth (IIRC it’s 1,000 calls by default; compare that to the 8 MB stack default on most Unix systems), making recursion mostly unusable in practice, except for logarithmic-depth algorithms (balanced trees, etc.).
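To make the limit concrete, here is a small sketch of what hitting CPython’s default limit looks like (exact numbers vary by version and platform):

```python
import sys

print(sys.getrecursionlimit())  # 1000 by default in CPython

# A simple linear recursion blows up long before the OS stack would.
def count_down(n):
    return 0 if n == 0 else 1 + count_down(n - 1)

try:
    count_down(10_000)
except RecursionError as exc:
    print(exc)  # "maximum recursion depth exceeded"

# You can raise the limit, but then you risk overflowing the real C stack instead.
sys.setrecursionlimit(100_000)
```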

                                                                                                                                                                                                        Scheme has been badly held back by its community’s insistence that tail-recursive implementations of algorithms should be the only implementations of those algorithms.

                                                                                                                                                                                                        I’m not sure what you mean – that does not make any sense to me.

                                                                                                                                                                                                        [you assume] that the programming patterns you care about are objectively superior to other patterns [..]

                                                                                                                                                                                                        Well, I claimed

                                                                                                                                                                                                        [pattern matching] obviously improves readability of code manipulating symbolic expressions/trees

                                                                                                                                                                                                        and I stand by this rather modest claim, which I believe is an objective statement. In fact it is supported quite well by the blog post that this comment thread is about. (Pattern-matching combines very well with static typing, and it will be interesting to see what Python typers make of it; but its benefits are already evident in a dynamically-typed context.)
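For readers who haven’t seen it, this is roughly what that looks like on a toy expression tree with the new match statement (Python 3.10+; the Num/Add/Mul classes are made up for the example):

```python
from dataclasses import dataclass

@dataclass
class Num:
    value: float

@dataclass
class Add:
    left: object
    right: object

@dataclass
class Mul:
    left: object
    right: object

def evaluate(expr):
    # Structural pattern matching over a small symbolic-expression tree.
    match expr:
        case Num(value=v):
            return v
        case Add(left=l, right=r):
            return evaluate(l) + evaluate(r)
        case Mul(left=l, right=r):
            return evaluate(l) * evaluate(r)
        case _:
            raise TypeError(f"unknown expression: {expr!r}")

print(evaluate(Add(Num(1), Mul(Num(2), Num(3)))))  # 7
```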

                                                                                                                                                                                                        1. 4

                                                                                                                                                                                                          and I stand by this rather modest claim, which I believe is an objective statement.

                                                                                                                                                                                                          Nit: I don’t think you can have an objective statement of value.

                                                                                                                                                                                                          1. 4

                                                                                                                                                                                                            Again: your original comment admits of no other interpretation than that you do not believe anyone could rationally look at the feature you like and come to a different conclusion about it. Thus you had to resort to trying to find personal fault in anyone who did.

                                                                                                                                                                                                            This does not indicate “closed-mindedness” on the part of others. They may prioritize things differently than you do. They may take different views of complexity and tradeoffs (which are the core of any new language-feature proposal) than you do. Or perhaps they simply do not like the feature as much as you do. But you were unwilling to allow for this — if someone didn’t agree with your stance it must be due to personal fault. You allowed for no other explanation.

                                                                                                                                                                                                            That is a problem. And from someone who’s used to seeing that sort of attitude it will get you a dismissive “here we go again”. Which is exactly what you got.

                                                                                                                                                                                                        2. 4

This is perhaps more of a feeling, but saying that Python isn’t adopting features as quickly as Ruby seems a bit off. Static type adoption in the Python community has been quicker. async/await has been painful, but is being attempted. Stuff like generalized unpacking (and this!) is also shipping.

Maybe it could be faster, but honestly Python probably has one of the lowest funding-to-impact ratios of the modern languages, which means the project just can’t get things done as quickly, IMO.

Python is truly in a funny place, where many people loudly complain about it not adopting enough features, and many others loudly complain about it adopting too many! It’s of course a case of “different people have different opinions”, but it’s still funny to see both on the same page.

                                                                                                                                                                                                          1. 3

                                                                                                                                                                                                            It is well-documented that Guido Van Rossum planned to remove lambda from Python 3

Thank you for sharing that document. I think Guido was right: it’s not Pythonic to use map, nor to use lambdas in most cases.

Every feature is useful, but some ecosystems work better without certain features. I’m not sure where Go’s generics fall on this spectrum, but I’m sure most features proposed for Python would move it away from its core competency rather than augmenting a strong core.
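Concretely, the usual argument is that for simple cases the comprehension spelling reads better than the map/filter/lambda spelling; a small illustrative comparison:

```python
prices = [3.50, 12.00, 7.25, 40.00]

# map/filter with lambdas
discounted = list(map(lambda p: p * 0.9, filter(lambda p: p > 5, prices)))

# the comprehension most Python style guides would prefer
discounted = [p * 0.9 for p in prices if p > 5]
```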

                                                                                                                                                                                                            1. 1

                                                                                                                                                                                                              We have previously discussed their tone problem. It comes from their political position within the Python ecosystem and they’re relatively blind to it. Just try to stay cool, I suppose?

                                                                                                                                                                                                              1. 6

                                                                                                                                                                                                                I really do recommend clicking through to that link, and seeing just what an unbelievably awful thing I said that the user above called out as “emblematic” of the “contempt” I display to Python users. Or the horrific ulterior motive I was found to have further down.

                                                                                                                                                                                                                Please, though, before clicking through, shield the eyes of children and anyone else who might be affected by seeing such content.

                                                                                                                                                                                                            2. 5

To pick one of my favorite examples, I talked to the author of PEP 498 after a presentation that they gave on f-strings, and asked why they did not add destructuring for f-strings, as well as whether they knew about customizable template literals in ECMAScript, which trace their lineage through quasiliterals in E all the way back to quasiquotation in formal logic. The author knew of all of this history too, but told me that they were unable to convince CPython’s core developers to adopt any of the more advanced language features because they were not seen as useful.

                                                                                                                                                                                                              I think that this perspective is the one which might help you understand. Where you see one new feature in PEP 498, I see three missing subfeatures. Where you see itertools as a successful borrowing of many different ideas from many different languages, I see a failure to embrace the arrays and tacit programming of APL and K, and a lack of pattern-matching and custom operators compared to Haskell and SML.
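For context, PEP 498 only gives you one-way interpolation; the “destructuring” direction has to be hand-rolled today. A rough sketch of the asymmetry (the regex-based inverse is just an illustration, not anything standard):

```python
import re

name, score = "ada", 42

# What PEP 498 shipped: one-way interpolation.
line = f"{name}: {score}"  # "ada: 42"

# What "destructuring" would mean: going back the other way.
# Today you do it by hand, e.g. with a regex.
m = re.fullmatch(r"(?P<name>\w+): (?P<score>\d+)", line)
if m:
    parsed = {"name": m["name"], "score": int(m["score"])}
```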

                                                                                                                                                                                                            3. 1

                                                                                                                                                                                                              I think the issue is more about pattern matching being a late addition to Python, which means there will be lots of code floating around that isn’t using match expressions. Since it’s not realistic to expect this code to be ported, the old style if … elif will continue to live on. All of this adds up to a larger language surface area, which makes tool support, learning and consistency more difficult.

I’m not really a big fan of this “pile of features” style of language design: if you add something, I’d prefer that something also be taken away. Otherwise you’ll end up with something like Perl 5.
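To make the surface-area point concrete, here is the kind of duplication that will keep coexisting in real codebases (a minimal sketch):

```python
def describe_old(point):
    # pre-3.10 style that existing code will keep using
    if point == (0, 0):
        return "origin"
    elif point[1] == 0:
        return "on the x axis"
    else:
        return "somewhere else"

def describe_new(point):
    # 3.10+ match statement doing the same dispatch
    match point:
        case (0, 0):
            return "origin"
        case (_, 0):
            return "on the x axis"
        case _:
            return "somewhere else"
```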

                                                                                                                                                                                                            1. 7

A GUI toolkit that is as easy to use as HTML but as memory-efficient as a native one.

                                                                                                                                                                                                              1. 5

                                                                                                                                                                                                                For some reason I’m reminded of XUL. And I think I just heard a Firefox developer cry out in terror.

                                                                                                                                                                                                                1. 2

The big question there would be: what features do you consider essential from HTML?

                                                                                                                                                                                                                  1. 2

This is an idea I’ve been toying with for some time. Basically an HTML rendering engine meant for GUIs, like Sciter, but instead of JavaScript you’d control everything using the host language (probably Rust). If you wanted JS, you’d have to somehow bind it to Rust.

I think this could really work out. However, I’ve dealt with XML and HTML parsers for years in the past, and I’m not sure I’m ready to dive into the mess that is HTML again.

                                                                                                                                                                                                                  1. 6

This sounds similar to how Pony does things.

                                                                                                                                                                                                                    1. 3

                                                                                                                                                                                                                      Yup, though Pony takes it a bit further by introducing many different reference types/capabilities.