Threads for kaveman

  1. 9

    Heh, the only thing I think I need is some variant of ML with rust-level quality of implementation. Specifically, I think I need:

    • Functional programming language (as in “programming with values”, not as in “programming with category theory”)
    • Aimed at high-level programs (think Java/Go/Swift/Python/Ruby/C#)
    • With ruthless focus on human side in the human-computer continuum (so, all ints are big ints, no thinking about stack vs heap, no async in surface-syntax)
    • And a reasonable concurrency story (either Erlang-style separate heaps, or, if someone invents a Send/Sync + borrow checker which is not hard, that)
    • With a low-overhead implementation (seamlessly works on all platforms, fast to compile, static binaries)
    • And a minimal type-system which strongly encourages low-abstraction code, but still allows implementing JSON serialization in a non-horrible way.

    Basically, today I use Rust for everything because it “just works” more than anything else I’ve tried (didn’t try Go, though), but I really don’t need blazing fastness for, e.g., my shell scripts. OCaml is in theory what I need, but I find its quality of implementation lacking.

    1. 4

      Reading the list, I was definitely thinking that OCaml almost fits the bill. Its compiler is actually very good (when it comes to emitting correct code, being fast, and having few compiler bugs).

      What is lacking is indeed a few “modern” things:

      • no standard deriving/serde (where rust clearly shines)
      • bigints are not the default (zarith is neat but it’s not as seamless as int)
      • the portability/static binary story is lacking.

      It’s really sad, because OCaml with some of Go’s tooling qualities would be incredible. Maybe someone will step up and develop the next ML :-).

      1. 6

        Yeah, OCaml is frustratingly close, but I also have a few gripes with its inner workings:

        • objects are redundant
        • polymorphic equality/hash/comparison is awkward, which is a sign of a deeper ad-hoc polymorphism issue
        • compilation model with a shared global namespace and without explicit DAG of dependencies is pretty horrible (ocamldep is one of the grossest hacks in my book)
        • no mature multicore support
        1. 2

          ocaml objects aren’t any more redundant than any feature in any language that has more than one way to do things. they may not be much used due to cultural/ecosystem choices, but every now and then they are the most ergonomic solution to what i’m trying to do.

          the namespace issues are indeed a pain though :( after getting a day job working in python i have been dissatisfied with namespacing in both ruby and ocaml, which are otherwise two of my favourite languages.

      2. 1

        Seems like any of Haskell/OCaml/Rust/Swift fits this bill depending on your other tastes.

        1. 7

          Not at all:

          • Rust prioritizes machine friendliness over human friendliness; the entire language is basically built around the stack/heap distinction
          • OCaml suffers from QoI issues (elaborated in a different comment)
          • Haskell suffers from different QoI issues (fragmentation due to language pragmas, opaque behavior due to laziness, forces the programmer on the path of programming with categories rather than programming with values, build/packaging issues)

          Swift at least is kinda in the right ballpark, I think, but there’s a big issue of platform compatibility: if I understand correctly, a lot of the weight is pulled by the macOS-specific Foundation library. And its type system is also quite a bit more complex than I think is optimal.

          Kotlin is kinda like Swift: great direction, but it suffers from the JVM platform and from compatibility with Java.

          1. 2

            forces the programmer on the path of programming with categories rather than programming with values

            Would you mind elaborating on this distinction? It’s the first time I’ve heard it made.

            1. 4

              When people talk about functional programming, two different pictures come to mind.

              One is that of an advanced type-system, with roots in category theory. The keyword here is “monad”. This style of programming focuses on producing highly abstract, higher-order code, where you don’t so much do things directly as provide an abstract description of the computation in a DSL. Prime examples of this aspect of FP are Haskell and certain corners of the Scala ecosystem.

              The other picture is that of non-imperative programming. Basically, everything is an immutable data structure, so every bit of program logic is just a function which takes some value as an input and produces a different value as an output. The keyword here would be “Algebraic Data Type”. This style of programming is direct: the code describes a sequence of transformation steps, and it is mostly first-order. Prime examples of this aspect of FP are Clojure and Erlang, to a lesser extent OCaml, and, among more obscure languages, Elm.

              In my personal style of programming, I’ve found that I really enjoy directness and simplicity, and that I fight overuse of abstraction more often than the lack of it. I am basically a Go programmer in my heart. So, I run towards FP as programming with values, and look skeptically at FP as programming with categories.

        2. 1

          Doesn’t Erlang/Elixir almost fill the bill? The only place where it lacks a little is the 5th point, but:

          • With releases where you do not use NIFs, you can just compile once and run everywhere (as long as you have the runtime)
          • You can build cross-releases with ERTS included in the release, which makes the deployment almost self-contained (and with a statically linked ERTS it is self-contained)
          • There are projects like Burrito that make all of the above much simpler.
          1. 2

            Yeah, I think Erlang’s runtime is quite close to what I want. I do need mandatory static types though, as they unlock a lot of automated tooling with strong guarantees of correctness.

            1. 7

              There is Gleam by @lpil that may fit your needs.

          2. 1

            D is kind of that, but somewhat more heavily weighted towards allowing performance optimization. So generally you don’t need to think about stack/heap etc, but you can if you want to. And you need to put in a bit of effort to use it in a pure functional style, but it’s very possible.

            1. 1

              “only thing”? you seem to want everything :)

              OCaml comes close… so does F# except for the static binaries. Scala native could be considered too… the runtime seems light but the concurrency story is missing. SML+pony runtime tacked on would be lovely.

              1. 1

                Yes, I do want quite a bunch of things, but I want only the boring things: I don’t need dependent types, give me rather a build system which doesn’t make me cry :)

                SML+pony runtime tacked on would be lovely.

                Yeah, someone here said once “Go should have been SML with channels”, and yes, that’s a good terse approximation to what I want.

                1. 1

                  Yeah, someone here said once “Go should have been SML with channels”, and yes, that’s a good terse approximation to what I want.

                  “SML with channels” sounds like Erlang to me….

            1. 7

              This is honestly the only thing that’s been holding me back from making anything in Rust. Now that it’s going into GCC there’s probably going to be a spec and hopefully slower and more stable development. I don’t know what’s going to come after Rust but I can’t find much of a reason to not jump ship from C++ anymore.

              1. 33

                I doubt a new GCC frontend will be the reason a spec emerges. I would expect a spec to result from the needs of the safety and certification industry (and there already are efforts in that direction: https://ferrous-systems.com/blog/ferrocene-language-specification/ ) instead.

                1. 15

                  Thanks for highlighting that. We’re well on track to hit the committed release date (we’re in final polish, mainly making sure that the writing can be contributed to).

                2. 6

                  hopefully slower and more stable development

                  As per usual, slower and more stable development can be experienced by using the version of Rust in your OS instead of whatever bleeding edge version upstream is shipping…

                  1. 1

                    Unless one of your dependencies starts using new features as soon as possible.

                    1. 4

                      Which is the exact same problem even when using GCC Rust, so it’s not really a relevant argument.

                      1. 4

                        Stick with old version of dependency?

                        1. 21

                          Let’s be honest: Rust uses an evergreen policy, the ecosystem and tooling follow it, and fighting it is needless pain.

                          I still recommend updating the compiler regularly. HOWEVER, you don’t have to read the release notes. Just ignore whatever they say, and continue writing the code the way you used to. Rust keeps backwards compatibility.

                          Also, I’d like to highlight that release cadence has very little to do with speed of language evolution or its stability. Rust features still take years to develop, and they’re just released on the next occasion. This says nothing about the number and scale of changes being developed.

                          It’s like complaining that a pizza cut into 16 slices has too many calories, and that you’d prefer it cut into 4 slices instead.

                          1. 2

                            The time it takes to stabilize a feature doesn’t really matter though if there are many many features in the pipeline at all times.

                            1. 10

                              Yup, that’s what I’m saying. Number of features in the pipeline is unrelated to release frequency. Rust could have a new stable release every day, and it wouldn’t give it more or less features.

                          2. 3

                            Do that, and now you’re responsible for doing security back-ports of every dependency. That’s potentially a lot more expensive than tracking newer releases.

                            1. 13

                              So then don’t do that and track the newer releases. Life is a series of tradeoffs, pick some.

                              It just seems like a weird sense of entitlement at work here: “I don’t want to use the latest version of the compiler, and I don’t want to use older versions of dependencies because I don’t want to do any work to keep those dependencies secure. Instead I want the entire world to adopt my pace, regardless of what they’d prefer.”

                              1. 1

                                The problem with that view is that it devalues the whole ecosystem. You have two choices:

                                • Pay a cost to keep updating your code because it breaks with newer compilers.
                                • Pay a cost to back-port security fixes because the new versions of your dependencies have moved to an incompatible version of the language.

                                If these are the only choices then you have to pick one, but there’s always an implicit third choice:

                                • Pick an ecosystem that values long-term stability.

                                To give a couple of examples from projects that I’ve worked on:

                                FreeBSD maintains very strong binary compatibility guarantees for C code. Kernel modules are expected to work with newer kernels within the same major revision, and folks have to add padding to structures if they’re going to want to add fields later on. Userspace libraries in the base system all use symbol versioning, so functions can be deprecated, replaced with compat versions, and then hidden for linking by new programs.

                                The C and C++ standards have both put a lot of effort into backwards compatibility. C++11 did have some syntactic breaks, but they were fairly easy to fix mechanically (the main one was the introduction of user-defined string literals, which meant that you needed to insert spaces between string literals and macros in old code). Generally I can compile 10-20-year-old code with the latest libraries and expect it to work. I can still compile C89 code with a C11 compiler. C23 will break C89 code that relies on some K&R features that were deprecated in 1989.
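
                                To make the user-defined-literal break concrete, the mechanical fix looked roughly like this (macro name made up for illustration):

                                    #define CONFIG_DIR "/etc/myapp"              // illustrative macro

                                    // Valid C++03, broken in C++11: the literal followed by an identifier
                                    // now lexes as a user-defined string literal with suffix CONFIG_DIR,
                                    // which doesn't exist, so the macro is never expanded.
                                    // const char *old_style = "prefix"CONFIG_DIR;

                                    // The fix: insert a space, and ordinary macro expansion plus
                                    // string-literal concatenation apply as before.
                                    const char *new_style = "prefix" CONFIG_DIR;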

                                Moving away from systems code and towards applications, GNUstep uses Objective-C, which uses late binding by default and (for the last 15 years or so) even extends this to instance variables (fields) in objects, so you don’t even have an ABI break if a library adds a field to a class that you subclass. Apple has been a bit more aggressive about deprecating things in their OpenStep implementation (Cocoa), but there are quite a few projects still around that started in 1988 as NeXTSTEP apps and have gradually evolved to be modern macOS / iOS apps, with a multi-year window to fix the use of features that were removed or redesigned in newer versions of Cocoa. You can still compile a program with XCode today that will run linked against a version of the Cocoa frameworks in an OS release several years old.

                                The entitlement that you mention cuts both ways. If an ecosystem is saying ‘whatever you do, it’s going to be expensive, please come and contribute to the value of this ecosystem by releasing software in it!’ then my reaction will be ‘no thanks, I’ll keep contributing to places that value long-term stability because I want to spend my time adding new features, not playing catch up’.

                                LLVM has the same rapid-code-churn view of the world as Rust and it costs the ecosystem a lot. There are a huge number of interesting features that were implemented on forks and weren’t able to be upstreamed because the codebase has churned so much underneath it that updating was too much work for the authors.

                      2. 3

                        Corroding codebases! This was my reason too for not switching from C++. Only last week I was thinking of dlang’s -betterC for my little “system programming” projects. It is now hard to keep ignoring Rust. Perhaps after one last attempt at learning ATS.

                      1. 1

                        I see bellcore… did it use MGR? That was a neat little windowing system with a bunch of interesting ideas.

                        1. 1

                          There was definitely an X port. I can’t remember if MGR was available or not though I wouldn’t be surprised.

                          1. 2

                            Coherent got X11 before it got TCP/IP. The X11 programs were compiled to use a library that provided IP over named pipes rather than using the network. I don’t remember there ever being a commercial networking stack for the product from Mark Williams. Someone did successfully add TCP/IP after Mark Williams went out of business. I dimly remember an AT&T MGR port but I could be wrong.

                            I wish that the effort that went into making X11 work had been used for TCP/IP networking instead. I believe that X11 would have been easier to port if TCP/IP was available and the delay would have also meant working with a later version of XFree86. XFree86 was improving rapidly at that time so the delay would have meant better quality code all around. It didn’t work out that way for a lot of reasons. The X11 vs networking issue seems like a big blunder but looking back with a realistic eye toward the fog of war reveals that the decision was not controversial at all. It should have been clear that TCP/IP would be a big player in networking between 1993 and 1997 but earlier on it wasn’t obvious that TCP/IP would subsume and replace all other networks.

                            The mid ‘90s were a weird time in Unix. There were a handful of sanctioned AT&T Unix OS’s for the i386. I remember SCO, Interactive Unix, and Xenix, to mention 3. Bill Jolitz and his wife had just released 386BSD. There was Coherent, which ran on both the 286 and the 386 and wasn’t based on AT&T code. Coherent had been reviewed and vetted as such by Dennis Ritchie. And obviously, Linux had just appeared.

                            1. 2

                              Calling out all greybeards, can we get a MGR port to NetBSD, pretty please!?

                          2. 1

                            MGR

                            AFAIK, no.

                            There was a 3rd party X server, for X11R5: https://www.autometer.de/unix4fun/coherent/third_party.html

                            A relatively recent look at Coherent: https://virtuallyfun.com/wordpress/2017/12/19/coherent-3-0/

                            Later the company did its own version – here’s the manual: https://www.nesssoftware.com/home/mwc/doc/coherent/x/pdf/X11.pdf

                          1. 11

                            Generating C++ is an underrated strategy! C++ has great debugging and performance tools like uftrace, perf, CLion, etc. And you can link with anything.

                            It can be annoying to write, e.g. writing all the redundant headers, but generating code from a different language solves that.

                            Oil is doing something similar: Brief Descriptions of a Python to C++ Translator

                            It gives us much better performance, as well as memory safety. There were several reactions like “Why C++? It’s not memory safe”. But the idea is to generate code with that property, as SerenityOS is doing. If you never write free() or delete in your source language, then there are no double-free / double-delete bugs.

                            (The boundary between manually written C++ and generated C++ is still an issue .. I’ll be interested to see their API. But it should be nicer and more direct than the interface between two totally separate languages.)


                            There are also some interesting ways to generate fast code with C++ templates: https://souffle-lang.github.io/translate

                            i.e. you get a fairly nice language for writing a templatized “runtime”, and then you generate very high level typed code that targets it. Then the C++ compiler mushes it all together into fast code.
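
                            As a rough sketch of that pattern (loosely in the spirit of the Souffle docs, not their actual runtime) — a hand-written templatized “runtime” plus the kind of fully typed generated code that targets it:

                                #include <set>
                                #include <tuple>

                                // Hand-written, templatized "runtime" piece.
                                template <typename Tuple>
                                struct Relation {
                                  std::set<Tuple> rows;
                                  bool insert(const Tuple& t) { return rows.insert(t).second; }
                                };

                                // Generated, fully typed code that targets it; the C++ compiler then
                                // monomorphizes and inlines the whole thing into fast code.
                                using Edge = std::tuple<int, int>;

                                int count_self_loops(const Relation<Edge>& edge) {
                                  int n = 0;
                                  for (const auto& [a, b] : edge.rows) {
                                    if (a == b) ++n;
                                  }
                                  return n;
                                }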

                            Oil’s scheme is simpler, but it has this flavor. The garbage collector and templated types interact in an interesting way, e.g. consider tracing Dict<int, int> vs. Dict<int, Str*>. (I’m working on a blog post about this.)
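
                            Roughly what I mean by that interaction, as a sketch rather than Oil’s actual collector (the Tracer / VisitPointer names are made up):

                                #include <type_traits>

                                template <typename K, typename V>
                                struct Dict {
                                  K* keys;
                                  V* values;
                                  int len;

                                  // For Dict<int, int> both branches compile away and tracing is a
                                  // no-op; for Dict<int, Str*> the collector must visit every value slot.
                                  template <typename Tracer>
                                  void Trace(Tracer& t) {
                                    if constexpr (std::is_pointer_v<K>) {
                                      for (int i = 0; i < len; ++i) t.VisitPointer(keys[i]);
                                    }
                                    if constexpr (std::is_pointer_v<V>) {
                                      for (int i = 0; i < len; ++i) t.VisitPointer(values[i]);
                                    }
                                  }
                                };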

                            1. 5

                              There were several reactions like “Why C++? It’s not memory safe”.

                              As if generating assembly directly were any safer.

                              1. 3

                                D with the -betterC mode could also be a good target for transpilers. It’s got most of the good stuff from C++ and the imperative/inspection-based meta programming model could be easier to target, I think. Also has 3 compilers and all of them compile faster than C++ (no preprocessor etc). The only problem in Serenity’s case would be binding to existing C++ code seamlessly.

                                1. 3

                                  Yeah, D is nice because it’s garbage collected already! It is a shame that people seem to shy away from it for certain things because it is garbage collected, but I view that as a big benefit.

                                  I’m guessing SerenityOS has a good reason for ARC though … So yeah, that is a problem – GCs are hard to tune and aren’t universal across applications. There was a thread here recently with Chris Lattner on GC vs ARC.

                                  I think Nim lets you choose the allocator, but to me a big downside is that the generated code isn’t readable. It is a huge benefit to be able to read it, step through it in the debugger, pretty print data structures, etc.

                                  1. 7

                                    The -betterC subset is not GCed. It doesn’t have a runtime. However, the good stuff (RAII, meta-programming) is all there… it can be tedious to use by hand but as a transpiler target I think it is a good choice.

                                2. 1

                                  I generate C++ from Go in https://github.com/nikki93/gx, and when I do multiple TUs I’m planning to just generate decls for all referenced symbols at the top, rather than generate #includes. This way each TU is much smaller.
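
                                  To illustrate the idea (made-up names, not gx’s actual output), a generated TU then starts with only the declarations it references rather than whole headers:

                                      // Declarations for the symbols this TU actually uses, emitted by
                                      // the generator instead of #include lines.
                                      struct Vec2;
                                      float length(const Vec2& v);   // defined in another generated TU

                                      float doubled_length(const Vec2& v) {
                                        return 2.0f * length(v);
                                      }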

                                  1. 1

                                    Generating C++ is an underrated strategy

                                    I have a suspicion that the reason it’s underrated is that common old wisdom says “since I’m going to all the effort to generate code at all, maybe I generate C instead to get faster compile times.”

                                    I wonder to what extent that is no longer relevant? My belief is that there’s not an interesting difference now because most time goes into optimisers rather than parsing.

                                    C++ totally makes sense for this context of course because they already have a C++ codebase which they want to link with. :)

                                    1. 3

                                      Well I definitely ran into some template bloat problems, so it’s a double-edged sword. I think it basically comes down to whether you want to use templates, classes, and exceptions, etc. My post mentions that:

                                      http://www.oilshell.org/blog/2022/05/mycpp.html#why-generate-c-and-not-c

                                      Templates increase compile times but they can generate faster code, as the Souffle Datalog example shows. I would think of it as a very expressive and high level language for monomorphization.


                                      Also, when you generate code you have more control over the translation units, so the nonlinear explosion in compile times can be mitigated. Right now Oil has 24 translation units, and I see the redundancy in build times. I even preprocessed each one and counted the 10x blowup in line count:

                                      https://oilshell.zulipchat.com/#narrow/stream/121539-oil-dev/topic/Preprocessed.20Output.20.28ninja.20is.20fun.29 (requires login)

                                      So less than 100K lines of code blows up to about 1M once all the template headers are included. But I could probably cut it down to 6 or 8 translation units, and compile a linear amount of code. I think it would end up at 200K or less.

                                      But yeah the point is that once you generate code you can do stuff like that, whereas manually writing C++ you often get “stuck”. Most big C++ projects have bad build systems and bad code structure …

                                      1. 2

                                        Thanks for weighing in. <3

                                        Templates increase compile times but

                                        I was kinda thinking you wouldn’t use them in a code generator, since you can generate exactly the needed monomorphic versions yourself? but that might not take advantage of the C++ compiler’s ability to coalesce identical implementations sometimes.

                                        Also, when you generate code you have more control over the translation units…

                                        I was thinking that perhaps it might be possible to generate code with absolutely minimal headers / forward declarations too, which could help some more?

                                        1. 2

                                          Yeah I think it’s mainly about the implementation effort … Trying to recreate what C++ is doing could be the difference between a 1 year project and a 5 year one. I should know :)

                                          e.g. I liken it to one of the “wrong turns” I mentioned on the blog – to “just” write a Python bytecode interpreter. Well it turns out that is a huge project, and there’s no way to make it fast with a reasonable amount of effort.

                                          You can get something slow working, but getting good performance is a whole different story!

                                  1. 1

                                    Can Standard ML be made so it can write device drivers?

                                    Yes, this is quite like what the Fox Project in the 90s did.

                                    1. 2

                                      Did they produce device drivers? FoxNet was a network stack in SML, and that too in userspace. Not exactly device drivers. With the exception of MLton, SML runtimes (and OCaml too) use a uniform value representation and therefore use a few bits of the pointer for the type tag. The GC also typically takes away a bit, which means full-width machine words need to be boxed, and that can be inefficient.

                                      ATS, though, is the perfect combination of ML and C, barring the syntax. The metasepi project is using ATS for kernel components[1].

                                      [1] https://metasepi.org/en/tags/ats.html

                                    1. 1

                                      The book “Database Internals” is a fantastic resource for getting started. Each chapter has a ton of references for further reading material. One small gripe was that it didn’t come with code, but that can be supplemented easily using code from the various high-quality open-source databases.

                                      1. 1

                                        That’s the second link in the list!

                                      1. 16

                                        If you master some really niche tech where companies find they absolutely need an expert in it every now and then, the companies come to find you.

                                        Via a failed startup many years ago, I ended up learning macOS device driver/kernel programming. My activity on GitHub and Stack Overflow related to this subsequently led to one contracting/consulting gig after the other. These days most of the work centres around macOS’s transition to DriverKit, which is so complex and poorly documented, there is a fairly high bar to getting started with it without the context of the tech it replaces. So most of the time, internal developers try it, run into problem after problem, most of which end up leading them to a Stack Overflow question I’ve answered. Eventually many conclude it’ll be easier if they just get me do the project altogether.

                                        1. 12

                                          These days most of the work centres around macOS’s transition to DriverKit

                                          This comment confused me a lot but it turns out that Apple is recycling names again. I thought you were referring to DriverKit, the Objective-C device driver framework that shipped with NeXTSTEP, which Apple abandoned with Mac OS X 10.0 and replaced with IOKit, a C++ kernel driver framework. In fact you were referring to DriverKit, the userspace C++ driver framework that Apple introduced in macOS 10.15.

                                          This is almost as confusing as when they decided that the version of Objective-C that came after Objective-C 4 was Objective-C 2.0.

                                          1. 1

                                            I have not written anything with DriverKit or IOKit, but from reading the docs, the old DriverKit feels so much nicer than IOKit. Wish the BSDs had something like that instead of having to simulate Objective-C in C. For a while, I hoped you (@david_chisnall) would put Pragmatic Smalltalk into the FreeBSD kernel. What a wonderful programming environment that would have been! NetBSD does have a Lua binding but Smalltalk reads so much nicer. Pragmatic Smalltalk is/was almost there… using the objc runtime means we get to write bindings in Objective-C – something that the language was originally designed for.

                                            1. 2

                                              For a while, I hoped you (@david_chisnall) would put Pragmatic Smalltalk into the FreeBSD kernel.

                                              I actually had an intern do exactly that. He also worked on a modified runtime designed for very small numbers of methods per class. It worked but I didn’t see a path to community adoption. In particular, Objective-C has become increasingly dependent on exceptions, which we definitely don’t want in the kernel and I had implemented Smalltalk non-local returns using exceptions except in cases where the block could be inlined (basically, conditionals and loops on things that the Objective-C type info tells you evaluates to a BOOL).

                                              In the kernel, I’m not sure I want duck typing, I want as much to be statically checked as possible. I have ported libc++ to build in the FreeBSD kernel and C++17 is quite a nice programming environment for kernel use (writing kernel modules in C++ is noticeably less verbose than in C) but the kernel’s loader needs some work for it to be useful (I wanted to be able to have a libc++ module and have other modules depend on it, but there’s no handling of COMDATs in the kernel loader). The lack of exceptions is also a problem for C++, since a lot of the standard library classes handle failures by calling abort (panic, in the kernel) on errors if compiled without exceptions. Maybe it’s worth trying to port SerenityOS’s libraries instead.

                                              1. 1

                                                I actually had an intern do exactly that. He also worked on a modified runtime designed for very small numbers of methods per class.

                                                Interesting. Any chance it is open/available? Would be interested in trying it out.

                                                I didn’t see a path to community adoption.

                                                NetBSD perhaps? much better base for experiments and I for one would like to try and work on this on NetBSD.

                                                In particular, Objective-C has become increasingly dependent on exceptions, which we definitely don’t want in the kernel and I had implemented Smalltalk non-local returns using exceptions except in cases where the block could be inlined (basically, conditionals and loops on things that the Objective-C type info tells you evaluates to a BOOL).

                                                couldn’t exceptions be dropped? since the runtime environment is severely restricted anyway, a subset that is suitable for it could be defined and used no?

                                                In the kernel, I’m not sure I want duck typing, I want as much to be statically checked as possible.

                                                agreed, but the nice thing with objc is that the types are not lost at runtime, which can be quite useful. A Smalltalk-like scripting interface, say through a workspace/transcript device nodes, would be a nice way to inspect and maybe even make small modifications to the running system.

                                                I have ported libc++ to build in the FreeBSD kernel and C++17 is quite a nice programming environment for kernel use (writing kernel modules in C++ is noticeably less verbose than in C) but the kernel’s loader needs some work for it to be useful (I wanted to be able to have a libc++ module and have other modules depend on it, but there’s no handling of COMDATs in the kernel loader).

                                                Bareflank[1] seems to use a custom loader for their C++ runtime, also based on libc++.

                                                [1] https://github.com/Bareflank/hypervisor

                                                1. 1

                                                  I actually had an intern do exactly that. He also worked on a modified runtime designed for very small numbers of methods per class.

                                                  Interesting. Any chance it is open/available? Would be interested in trying it out.

                                                  I don’t think so. It was 10 years ago and, even if the code were around, it would be quite bit rotted by this point.

                                                  I didn’t see a path to community adoption.

                                                  NetBSD perhaps? much better base for experiments and I for one would like to try and work on this on NetBSD.

                                                  Possibly. NetBSD put Lua in the kernel before everyone else but I’m not aware of anything significant using it.

                                                  In particular, Objective-C has become increasingly dependent on exceptions, which we definitely don’t want in the kernel and I had implemented Smalltalk non-local returns using exceptions except in cases where the block could be inlined (basically, conditionals and loops on things that the Objective-C type info tells you evaluates to a BOOL).

                                                  couldn’t exceptions be dropped? since the runtime environment is severely restricted anyway, a subset that is suitable for it could be defined and used no?

                                                  Yes, you can build without exceptions, but then you’d need some other error-handling mechanism. In more modern languages, this is typically some form of option type.

                                                  In the kernel, I’m not sure I want duck typing, I want as much to be statically checked as possible.

                                                  agreed, but the nice thing with objc is that the types are not lost at runtime, which can be quite useful. A Smalltalk-like scripting interface, say through a workspace/transcript device nodes, would be a nice way to inspect and maybe even make small modifications to the running system.

                                                  True, although I’ve recently been using Sol3 with C++ and, although it needs a little bit more glue, it’s fairly small.

                                                  I have ported libc++ to build in the FreeBSD kernel and C++17 is quite a nice programming environment for kernel use (writing kernel modules in C++ is noticeably less verbose than in C) but the kernel’s loader needs some work for it to be useful (I wanted to be able to have a libc++ module and have other modules depend on it, but there’s no handling of COMDATs in the kernel loader).

                                                  Bareflank[1] seems to use a custom loader for their C++ runtime, also based on libc++.

                                                  Do they support kernel modules? There weren’t any problems compiling C++ into the kernel, or even within a single kernel module, the problem was using C++ types across a module boundary.

                                                  1. 1

                                                    Do they support kernel modules?

                                                    I posted the wrong link. [0] seems to support kernel modules, haven’t checked it out yet though, only watched a couple of talks [1, 2] :)

                                                    There weren’t any problems compiling C++ into the kernel, or even within a single kernel module, the problem was using C++ types across a module boundary.

                                                    Missed the “types across module boundary” part. I was thinking only about a single kernel module and pointers to working examples of C++ kernel modules appreciated.

                                                    [0] https://github.com/Bareflank/standalone_cxx

                                                    [1] https://www.youtube.com/watch?v=uQSQy-7lveQ

                                                    [2] https://www.youtube.com/watch?v=bKPN-CGhEC0

                                                    1. 1

                                                      Missed the “types across module boundary” part. I was thinking only about a single kernel module and pointers to working examples of C++ kernel modules appreciated.

                                                      I didn’t dig deeply into the problem, but there are some subtle problems. FreeBSD kernel modules are really .o files, not .so files, and the module loader does the equivalent of static linking. This makes sense because a kernel module is loaded zero or one times in a given system and so you don’t need to pay any of the costs of a shared-library code model (PLT indirections and so on); you can apply relocations into code when the module is loaded and before you mark the pages as executable in the kernel’s address space.

                                                      Handling things like C++ inline declarations (and template instantiations) is tricky here. Normally, the compiler emits these in every compilation unit that references them but then the static linker discards the unused ones. I think the right solution to this is to have a ‘link’ phase that examines the modules that you depend on and discards any COMDATs that are present in those but even that can lead to problems when you have modules A and B that both depend on C and include a template instantiation that C doesn’t, because now you have two definitions of the same symbol in the kernel. Solving that is nontrivial.

                                                      It looks as if standalone_cxx doesn’t have a solution to this, they assume a full link, so don’t handle any form of dynamic loading.

                                              2. 1

                                                I sort of wish for an Objective-Zig or something – the same marriage of Smalltalk with a (better) version of C for the fiddly memory bits. I miss writing Objective-C, partly because of the discoverability of the Objective part.

                                          1. 7

                                            I think there’s definitely a bunch of differences in various areas of the world here. The one time I did an interview for a London-based Clojure role it involved a terrible tech test with me writing Clojure by hand on paper.

                                            1. 6

                                              writing Clojure by hand on paper

                                              I’m trying to imagine how this would work and it’s just not coming together in my head. Do you use like … different colored magic markers for syntax highlighting? Asking people to write code without paredit feels almost inhumane, much less not having access to a repl.

                                              1. 19

                                                I’m picturing a red button on the desk, and every time you write an unbalanced “)” the interviewer hits it and there’s a BZZZZZ

                                                1. 6

                                                  It was just me and a biro. It was excruciating and I kept asking the interviewer if I could take my laptop out of my bag and do it there, and he kept refusing. These days I’d just walk if they suggested I do such a thing.

                                                  1. 3

                                                    These days I’d just walk if they suggested I do such a thing.

                                                    Bingo.

                                                    There’s a point at which even if you pass the interview, if the process was so broken, you know that if you take the job you’ll be stuck only working with co-workers who also passed the broken process.

                                                    1. 1

                                                      Jeez. I’d have just asked the interviewer how often they wrote their clojure code on paper.

                                                    2. 4

                                                      you could simply drop the parens altogether. You can get away with saying something like “you are not supposed to see/count parens anyway!”. /me ducks.

                                                  1. 5

                                                    Figured out something in the terminal?

                                                    Paste it into a bash script verbatim

                                                    Obligatory reference to Ctrl+X+E, which I learned from The Shell Hater’s Handbook presentation.

                                                    1. 2

                                                      and I learned it from you. thanks!

                                                      1. 1

                                                        Thanks for sharing! I love how neatly it ties into the design philosophy: write small programs that do one thing well. I’ve been studying quite a bit about various IPC mechanisms lately, and ultimately I discovered that the kernel gives you everything you need for performant and robust IPC. It’s not a perfect fit for every use case, but I think it is vastly overlooked.

                                                        1. 1

                                                          Or, in set -o vi or bash-rsi mode, esc-v.

                                                        1. 3

                                                          I’ve been using that JSON library for a few months. It’s good, but the developer seems to be ignoring std::optional in general. My biggest gripe is that you have to first use ‘contains’ to check for a property before you ‘get’ it, if you don’t want a potential exception. Why isn’t there a ‘get_if’ method that returns an optional?
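
                                                          Something like this is what I end up writing by hand (a sketch, not part of the library; it still throws on a type mismatch):

                                                              #include <optional>
                                                              #include <string>
                                                              #include <nlohmann/json.hpp>

                                                              // Combine the contains/get dance into one lookup that reports
                                                              // absence via std::optional instead of throwing.
                                                              template <typename T>
                                                              std::optional<T> get_if_present(const nlohmann::json& j, const std::string& key) {
                                                                if (auto it = j.find(key); it != j.end()) {
                                                                  return it->get<T>();
                                                                }
                                                                return std::nullopt;
                                                              }

                                                              // Usage: auto port = get_if_present<int>(config, "port").value_or(8080);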

                                                          1. 1

                                                            my guess is that library has been around for much longer than std::optional.

                                                          1. 16

                                                            The reason userspace threads are interesting, performance-wise, is not memory usage, but context switching. I can switch userspace threads in 10s of clock cycles; kernel threads have no chance of competing. I therefore find this benchmark fairly uninteresting.

                                                            1. 3

                                                              I would disagree here, sort of. I am pretty sure that the main reason people reach for green threads is that you can just spawn 10 million of those, while for OS threads, in practice, you need to do a bunch of sysadmin work to get even to 100k. Not sure if that’s “performance” or “robustness”, but to me it seems like the biggest reason. Why that is the case I still don’t know, besides the negative fact that it’s not memory.

                                                              1. 2

                                                                Is there really no way the kernel could provide a thread-style abstraction that was just as good, though?

                                                                1. 6

                                                                  Depends on what exactly a ‘kernel’ is. It is possible you could come up with a novel hardware design which made context switches cheap (e.g. I think Itanium could do it in 27 clocks or so?). And if you do memory protection in software rather than hardware, then context switches can be done entirely in software, and the performance impact will be similar to that of userspace threads.

                                                                  Operating under the same constraints as contemporary kernels, though, there is no way.

                                                                  1. 1

                                                                    I’m not sure why you are saying software outperforms hardware? Could you elaborate please.

                                                                    1. 6

                                                                      Modern operating systems implement a capability-based security policy enforced using:

                                                                      1. A set of trusted interfaces (‘kernel’, ‘syscalls’)

                                                                      2. Hardware memory mappings, to prevent privileged operations from being performed except through the trusted interfaces

                                                                      The former mechanism (and the policy it implements) are the interesting part of this picture; the latter mechanism is purely instrumental. Implementation of the latter mechanism involves creation of a unique hardware security context for every logical security domain (process). And switching between hardware security contexts is quite slow. This means that switching between logical security domains will also be slow (and in this case, the implementation of the trusted interface is its own security domain, so any time you have to go though it, you also have to switch hardware security contexts).

                                                                      The reason that it’s necessary to create hardware security contexts is that userspace programs are written in machine code, and the implementation strategy for that machine code is to feed it directly to the cpu. But machine code is able to forge pointers and has no native notion of capability safety. It does, however, have a very basic notion of memory safety: access to unmapped memory causes an exception to be thrown; this coarse ability can be used to implement the desired security policy.

                                                                      But now suppose our userspace programs are written in a language which is unable to forge pointers (where by ‘forge’ I mean ‘acquire, by other means than through the trusted interface or through existing unforged pointers’). Say, java. Then it would not be necessary to create distinct hardware security contexts for each logical security domain, since all pointer accesses occurring within a given security domain would, of necessity, be well-formed by construction wrt the security policy.

                                                                  2. 6

                                                                    There are two ways in which cooperative threading outperforms preemptive threading:

                                                                    The amount of state that needs saving is much smaller. A preemptive thread switch needs to save the full register contents. On a modern system with a load of vector registers, that’s KiBs of data. In contrast, a cooperative thread needs to save only callee-save registers. That’s typically a much smaller amount of state. The caller is responsible for saving everything else and, often, doesn’t need to because it’s not in use.

                                                                    Cooperative threading can yield directly to another thread, without invoking the scheduler (which, in turn, acquires locks and does a bunch of work that is considerably more expensive than a function call).
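
                                                                    Here is a runnable illustration of that “no scheduler involved” control flow using POSIX ucontext (real green-thread runtimes hand-roll a much leaner switch that saves only the callee-save registers; ucontext also saves the signal mask, which costs a syscall, but the shape is the same):

                                                                        #include <ucontext.h>
                                                                        #include <cstdio>

                                                                        static ucontext_t main_ctx, worker_ctx;

                                                                        static void worker() {
                                                                          std::puts("in worker");
                                                                          swapcontext(&worker_ctx, &main_ctx);   // yield straight back to main
                                                                          std::puts("worker resumed");
                                                                        }

                                                                        int main() {
                                                                          static char stack[64 * 1024];
                                                                          getcontext(&worker_ctx);
                                                                          worker_ctx.uc_stack.ss_sp = stack;
                                                                          worker_ctx.uc_stack.ss_size = sizeof(stack);
                                                                          worker_ctx.uc_link = &main_ctx;        // where to go when worker returns
                                                                          makecontext(&worker_ctx, worker, 0);

                                                                          swapcontext(&main_ctx, &worker_ctx);   // switch to worker; no kernel scheduler
                                                                          std::puts("back in main");
                                                                          swapcontext(&main_ctx, &worker_ctx);   // resume worker after its yield
                                                                          std::puts("done");
                                                                        }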

                                                                    The down side, of course, is that a cooperative-threading model is vulnerable to starvation if a thread doesn’t yield sufficiently often. A lot of modern systems aim towards a hybrid: cooperative scheduling for the common case, preemptive scheduling as a fallback after detecting starvation.

                                                                    1. 2

                                                                      Yeah, cooperative scheduling of concurrent threads is not robust. Even with safepoints, I would be somewhat wary. However, I think it’s important to distinguish userspace threading as an implementation strategy from coroutines as a programming model. See python’s generators as a prominent, representative example of the latter.

                                                                    2. 5

                                                                      Google did something sort of in this vein, using kernel threads but exposing an extra API so that the running thread could request a specific thread run next, both avoiding the work the kernel scheduler does and potentially giving userspace a chance to do smart things (e.g. ‘message’ a thread and immediately activate it). If you’re doing a pure context switch benchmark it can’t beat jmp of course, but they apparently found it useful.

                                                                      A talk is here and some code they published is here.

                                                                      1. 3

                                                                        Try looking at it this way:

                                                                        “The kernel” on Linux, Windows, (and so on) isn’t special[1]: It’s just another thread.

                                                                        Can we transition from thread A to kernel-thread K to thread B as fast as we can transition from thread A directly to thread B? The answer is no, because A+B < A+B+K

                                                                        [1]: It’s not special, but it is generic, so it has to handle A→B as easily as B→A and any other transition. Doing that is expensive too, so it’s sometimes useful to consider the fact that you’re doing this A→K→A transition every time you do a system call. If you’re curious what kind of kernel changes are needed to avoid this kind of thing, just consider this very simple A→K→A transition and then go look at the io_uring documentation.

                                                                        1. 4

                                                                          That’s true, but misses the point (‘not even wrong’).

                                                                          If I do two userspace thread switches, that’s still going to be way cheaper than a single kernel thread switch. (In your nomenclature, A+B+C < A+B+K. A+B+C+D+E+F+G+H < A+B+K, even.) The issue is that switching hardware security contexts is very expensive, and such context switches are not necessary when doing userspace threading.

                                                                          1. 2

                                                                            If I do two userspace thread switches, that’s still going to be way cheaper than a single kernel thread switch. (In your nomenclature, A+B+C < A+B+K. A+B+C+D+E+F+G+H < A+B+K, even.) The issue is that switching hardware security contexts is very expensive, and such context switches are not necessary when doing userspace threading.

                                                                            While there is overhead here, I remember seeing some work from Google which measured the cost of entering the kernel, and finding it negligible compared to the cost of selecting the next thread to run (~100ns out of a 3 usec switch). Lots of that involves sending interrupts to threads running on other CPUs.

                                                                            1. 1

                                                                              But of course, if you have two kernel-threads, they can complete faster than one kernel-thread with two userspace tasks because we have lots of cores. So even though I agree with you that kernel threads can’t replace user threads, I can’t agree (if you are suggesting) that user threads can replace kernel threads always either; esp. when sum[max x+k]<sum raze x which it usually is in compute-heavy operations.

                                                                              1. 1

                                                                                I completely agree. I also, as mentioned else-thread, find hardware preemption interesting for robustness reasons, preferring it to safepoints (completely ignoring explicit yields).

                                                                                (That said, I also find the hardware memory protection mechanisms associated with these ‘kernel threads’ overly heavy-handed; a sentiment which, given your experience with kos, I expect you might be inclined to agree with.)

                                                                          2. 2

                                                                            Co-operative context switching is easier if you take advantage of the system calling convention. On 32-bit Linux, all you need to save is four registers, in a few instructions, while on 64-bit Linux it’s six registers and two instructions. This is way less state than pre-emptive context switching requires. It also avoids a trip through the kernel, saving even more time.
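
                                                                            On x86-64 SysV the callee-saved set is rbx, rbp and r12–r15, so a hand-rolled switch comes down to something like the sketch below (spelled out with explicit pushes and pops rather than the minimal form above; hypothetical signature void ctx_switch(void** save_rsp, void* load_rsp); mxcsr/x87 control state ignored):

                                                                                // Assumes the target stack was parked here by an earlier ctx_switch
                                                                                // (or initialized to look that way), so popping and ret resume it.
                                                                                asm(R"(
                                                                                .globl ctx_switch
                                                                                ctx_switch:
                                                                                    push %rbp
                                                                                    push %rbx
                                                                                    push %r12
                                                                                    push %r13
                                                                                    push %r14
                                                                                    push %r15
                                                                                    mov  %rsp, (%rdi)    # *save_rsp = current stack pointer
                                                                                    mov  %rsi, %rsp      # switch to the target stack
                                                                                    pop  %r15
                                                                                    pop  %r14
                                                                                    pop  %r13
                                                                                    pop  %r12
                                                                                    pop  %rbx
                                                                                    pop  %rbp
                                                                                    ret
                                                                                )");

                                                                                extern "C" void ctx_switch(void** save_rsp, void* load_rsp);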

                                                                          3. 1

                                                                            yes but doesn’t it also fight with the scheduler which doesn’t know of the userspace scheduling that goes on with goroutines and such?

                                                                            goroutines and other light-weight coroutines are great for implementing concurrency models, but for performance I think it is better to use thread-per-core with very little shared data, exchanging messages when needed. As they say, code runs on the CPU and the kernel keeps interrupting it. So much so that the number of steps one typically has to take for low-latency systems is mind-boggling [0]. Having another layer is only going to make it worse, no?

                                                                            [0] https://rigtorp.se/low-latency-guide/

                                                                            1. 3

                                                                              fight with the scheduler

                                                                              That’s a more subtle question than I can do justice to, but for user-mode scheduling, the pattern of system calls the kernel sees is mostly the same as you would see for, say, a callback-driven event-loop runtime.

                                                                              I think it is better to use thread-per-core with very little shared data

                                                                              That can be good, too. That said, the choice of whether to use user-mode cooperative threads (eg: instead of callbacks / coroutines) can be orthogonal to the scheduling model. In theory you could have a thread per core, but guarantee that user-mode threads are (by default) scheduled on the local kernel-thread. (As opposed to multiplexing M user-mode threads on top of N kernel-threads).

                                                                              But to be fair, I’ve not heard of anyone doing that.

                                                                              Having another layer is only going to make it worse

                                                                              Depends on the trade-offs you want to make. As someone else said, user-mode context switches can be very cheap, and maybe within the same ballpark as making an indirect function call.

                                                                          1. 4

                                                                            These are not languages which I can step away from and come back to 10 years apart without having to remember things or reading updates.

                                                                            I don’t know a single language that this wouldn’t apply to.

                                                                            1. 4

                                                                              C. Though, not functional.

                                                                              1. 3

                                                                                common lisp and sml are very much applicable. frozen standard, multiple conforming implementations, changes mostly in libraries. even smalltalk qualifies that way. it is not that people program pharo very differently from how they did squeak or older implementations.

                                                                              1. 22

                                                                                Perhaps SML (Standard ML)? Unusually, it has a formal specification of its typing rules and operational semantics, as well as a standard library, all of which are somewhat set in stone. By that I mean that apparently, those specifications haven’t changed in 25 years, even though it is still widely used in certain areas (such as proof assistants). That said, there’s been efforts to improve/extend it, under the umbrella “Successor ML”.

                                                                                1. 10

                                                                                  Yeah, for the particular needs that OP is describing, SML is ideal. You can take any SML program written in the last 25 years and compile with any SML compiler (modulo implementation-specific extensions), and that will remain the case for the next 100 years, because of the specification.

                                                                                  Now, whether that is a good thing for the language ecosystem and adoption is another question entirely.

                                                                                  1. 7

                                                                                    I agree. SML is a dead language, which is what the OP is asking for, yet it is functional and reasonably capable. It does not have a very active community, and indeed OCaml, F# or GHC Haskell are probably more pleasant to use in practice, but they are not stable as languages.

                                                                                    1. 11

                                                                                      Precisely: “dead” is a feature here, if we mean dead as in “no longer being extended”.

                                                                                      After all things considered and looked at, SML is 100% what I’m going to be looking at. It looks great!

                                                                                      EDIT: I think everyone should take a minute to read https://learnxinyminutes.com/docs/standard-ml/ … It is beautiful! FFI looks very easy too. Next step is to implement an RSS feed manager with SML + SQLite.

                                                                                      1. 4

                                                                                        It’s also not wasted time. SML is used as the foundation for a lot of PLT research because it is so formally defined and provides a nice basis for further work.

                                                                                        1. 3

                                                                                          To be fair, Haskell98 is also quite old and dead. It’s hard to get a recent compiler into pure H98 mode (especially wrt stdlib) but old ones still exist and don’t need to change. Haskell2010 is the same just less old. Only “GHC Haskell” is a moving target, but it’s also the most popular.

                                                                                          1. 2

                                                                                            I would care more if there was a Haskell98 which people were actively improving performance wise.

                                                                                            1. 3

                                                                                              Why? Do you find GHC’s H98 under-performing?

                                                                                    2. 7

                                                                                      OCaml and Haskell are well ahead in terms of features as well as adoption. F# and Scala are not going anywhere. That said, I really like SML and am glad that MLton, MLKit, Poly/ML, SML/NJ and SML# are all alive. If only someone (don’t think I have the chops for it) could resuscitate SML.NET

                                                                                      1. 4

                                                                                        OCaml and Haskell are well ahead in terms of features

                                                                                        I think OP wants stability over features. Some people consider new features “changes” because what is idiomatic may change

                                                                                      2. 2

                                                                                        I think this is the right answer. I have had similar impulses to the OP and always came back to Standard ML as the thing that comes closest to fulfilling this purpose. I just wish it were less moribund and a little closer to Haskell in terms of aesthetics.

                                                                                      1. 4

                                                                                        they’ve come for the terminals now? sigh. why should a terminal emulator call home?

                                                                                        1. 3

                                                                                          I was curious how many apps “call home” on my Android phone, so I installed a small app that creates a virtual VPN (for lack of a better description) and logs which apps are making network calls. I was surprised to see that even my custom keyboard was calling home… It’s everything nowadays.

                                                                                        1. 1

                                                                                          ACME really shines on its native OSes that are designed for it, such as Plan 9 and Inferno.

                                                                                          There used to be an Inferno instance with ACME called ACME_SAC that could be run on Windows, Linux and OSX and could access the host operating system paths. I really hope it gets reborn one day.

                                                                                          1. 1

                                                                                            Sadly, Vita Nuova stopped working on it. But the code is all there. Eventually, I believe almost all of these tools will be ported to Go and “modernized”. There is already a version of Sam in progress.

                                                                                          1. 1

                                                                                            I’ve only skimmed this at the moment, but compiling highlights and philosophy from Plan 9 (and adjacent) mailing lists is something I’ve wanted to do for a while, so I’m really happy someone else has had the idea.

                                                                                            1. 2

                                                                                              Indeed, although the current post only scratches the surface. I regularly mine 9fans for tips and tricks from seasoned practitioners. I use (deadpixi) sam especially when working on remote machines. It is a refreshing change from the frantic typing required by emacs and vi(m), and from having to carry your config across machines. Terminals still need a lot of typing, but using @LeahNeukirchen’s tt, which is a Ruby/Tk implementation of the ideas from 9term, has made it tolerable again. Perhaps I should switch to acme fully…

                                                                                              1. 1

                                                                                                Amazing this still works!

                                                                                                1. 1

                                                                                                  Ruby/Tk was a bit of a bother to build (needed some symlinks to find the libs). Once built, tt is easily one of the most useful v0.1 programs on my computers. Thanks!

                                                                                            1. 3

                                                                                                My favorite example of large files is C#’s garbage collector, which is implemented in a single 37k+ line file.

                                                                                              1. 1

                                                                                                isn’t this the one that was (originally?) auto-generated from common lisp?

                                                                                              1. 1

                                                                                                Need Linux and Windows for paid work so I have a Dell Precision 5540 (i7 9850H) and an Asus Zenbook 14 (Ryzen 5700U) running Ubuntu and Windows 11 respectively. Both machines are plenty fast and the Zenbook has a much nicer keyboard and lasts much longer on a battery charge (U vs H series so not a fair comparison). I’d definitely pick the Zenbook again but probably run Linux on it instead. Eyeing an M1 Air but I have little use for macOS to justify the buy. Had an MBA (2017 model) but the keyboard was horrible and couldn’t get xhyve working reliably. For my NetBSD experiments I use a refurbished Thinkpad T470 (in addition to a bunch of low-power desktops) which works out fine for my needs.

                                                                                                1. 18

                                                                                                    I find this article & much of the FP community somehow completely miss the point of OO.

                                                                                                    I’m not enough of an OO expert to make strong claims, but I can say that working in Ruby made me see it really is a beautiful way to handle state in certain cases & has value.

                                                                                                  How can you completely write-off an entire paradigm/philosophy?

                                                                                                    In general there’s a lack of respect for many languages that don’t fit the cool/obscure criteria. Most languages were designed over quite a long period of time; if nothing else, give some respect to all the time spent & try to understand the design choices that were made. People don’t add or omit features purely out of ignorance of FP or types.

                                                                                                  Not every language needs to protect people from mistakes. Not every language needs to have a sound type system or one at all. Not every language even has to have a novel feature.

                                                                                                  But every language will have a vision, which I think more often than not is not purely semantics/feature based. Ecosystems are part of the language. The philosophies of the creator is part of the language. Etc.

                                                                                                  It is extremely rare that any feature is strictly better than any other feature.

                                                                                                  1. 8

                                                                                                    I once wrote that there are three kinds of programmers: problem solvers, puzzle solvers, and artists. A lot of times language criticism comes from someone who is in one camp against the other. “Why would I care if a Monad is a monoid in the category of endofunctors when I can make a website in PHP in a weekend of Red Bull?” “Why would I use PHP when the code will be an unmaintainable mess in three years?” “Why would I use a type system that can’t encode all my code’s invariant conditions?” Etc.

                                                                                                    1. 3

                                                                                                        There are similar variations on the “three kinds” argument. On the one hand, it’s important to recognize that different languages have different design goals and different audiences. On the other, the OP summarizes common critiques of classical OO, aspects that, perhaps due to a failure of my imagination, I don’t see solving problems for any audience, for example:

                                                                                                      • the brittleness of inheritance and artificial taxonomies in general
                                                                                                      • how prominent shared mutable state is in OO language design
                                                                                                      • how encapsulation does not really solve problems with shared mutable state
                                                                                                      • the awkwardness of “records with overinflated egos” and the “kingdom of nouns” vs. using verbs (functions) to operate on nouns (anything)

                                                                                                      It could be that because OO never clicked for me that I haven’t been motivated to look for useful patterns that are unique to classical OO outside what I encounter in my day job. In any case, my hope is that, as so many languages evolve into a multi-paradigmatic soup, the OO inertia will dissolve and the choice will be amongst better or worse application design patterns rather than programming language tribes.

                                                                                                    2. 2

                                                                                                      Strong static typing+ADTs+pattern matching makes for a pleasant programming experience for most problems. But I don’t get OO bashing at all. It’s just another approach to data abstraction[0, 1]. Doesn’t fit the problem, don’t use it. OO did turn into a cult and made a lot of noise during the 90s and early ’00s and that did not help. In the meanwhile a ton of research happened on FP and when hardware got faster and cheaper most of these ideas got implemented and took over modern general-purpose programming. OO still has uses: UI stuff maps well, even OS kernels are inherently OO (devices, VFS etc) although they all use horrible kludges to implement the model. I was secretly hoping that @david_chisnall would somehow inject a Smalltalk/Objective-C runtime into the FreeBSD kernel to replace the KOBJ stuff but it looks like he has moved on (and that is a good indicator btw!).

                                                                                                      [0] https://web.cecs.pdx.edu/~black/OOP/slides/Cook%20on%20DataAbstraction.pdf

                                                                                                      [1] https://www.cs.utexas.edu/~wcook/Drafts/2009/essay.pdf
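
                                                                                                        For anyone who hasn’t read those, a tiny Rust sketch of the distinction they draw (the Shape example is made up): the ADT keeps the set of variants closed and the set of operations open, while the object/interface style keeps the operations closed and the set of implementations open.

                                                                                                          // Abstract data type: one closed set of variants, any number of functions over it.
                                                                                                          enum Shape {
                                                                                                              Circle { r: f64 },
                                                                                                              Rect { w: f64, h: f64 },
                                                                                                          }

                                                                                                          fn area(s: &Shape) -> f64 {
                                                                                                              match s {
                                                                                                                  Shape::Circle { r } => std::f64::consts::PI * r * r,
                                                                                                                  Shape::Rect { w, h } => w * h,
                                                                                                              }
                                                                                                          }

                                                                                                          // Object/interface style: one closed set of operations, any number of implementations.
                                                                                                          trait Area {
                                                                                                              fn area(&self) -> f64;
                                                                                                          }

                                                                                                          struct Circle { r: f64 }
                                                                                                          impl Area for Circle {
                                                                                                              fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
                                                                                                          }

                                                                                                          struct Rect { w: f64, h: f64 }
                                                                                                          impl Area for Rect {
                                                                                                              fn area(&self) -> f64 { self.w * self.h }
                                                                                                          }

                                                                                                          fn main() {
                                                                                                              let value = Shape::Rect { w: 2.0, h: 3.0 };
                                                                                                              let object: Box<dyn Area> = Box::new(Circle { r: 1.0 });
                                                                                                              println!("{} {}", area(&value), object.area());
                                                                                                          }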

                                                                                                      1. 2

                                                                                                        OO did turn into a cult and made a lot of noise during the 90s and early ’00s and that did not help.

                                                                                                          I think OO became popular because it was a nice way of handling GUIs, and it made genuine attempts at solving a bunch of business-logic problems.

                                                                                                        In the meanwhile a ton of research happened on FP and when hardware got faster and cheaper most of these ideas got implemented and took over modern general-purpose programming.

                                                                                                          I’m aware that some FP concepts have been implemented in Java and .NET, but from what I can see the dominant model is still some form of OO. Java, .NET, Python and Ruby are OO. Is JS more FP than OO? Maybe; it depends on how you squint.

                                                                                                        I still see FP as rather niche, especially as the proponents of FP push languages like Haskell and Ocaml that have so far failed to take the world by storm.

                                                                                                        1. 2

                                                                                                            Pure FP may be niche, but value-oriented programming (ADTs + pattern matching + functions/closures), as opposed to a class hierarchy, is not IMO. The JVM has Scala and Clojure, .NET has F# with C# picking up features from it regularly, and JS has TypeScript. Haskell and OCaml have eaten into dynamic-language territory, if perhaps not in the enterprise. Even C++ is slowly moving away from (a bad) OO towards (a bad) ML. Swift and Rust, I’m sure, have enough features to enable programming with values. It is just a matter of time before “programming with values” becomes the standard approach.

                                                                                                          1. 1

                                                                                                            Well, that’s interesting, but it just tells me that the most successful languages are amalgamations of the concepts from more “pure” languages that actually have real-world value.

                                                                                                            After all, purists will say that Java isn’t “real” OO because it’s not Smalltalk. But Java is a massive success, and Smalltalk is a footnote in history.

                                                                                                            In 20 years greybeards will complain that JavaScriptXS isn’t “real” FP because it’s not Haskell.

                                                                                                    1. 30

                                                                                                      Unpopular opinion: I use bash all the way up til I need more than associative arrays, then I use Rust. It works surprisingly well for scripting tasks.

                                                                                                      1. 15

                                                                                                        Other than the rust part, I suggest this is the popular opinion ;-)

                                                                                                        1. 7

                                                                                                          Julialang.org is especially good. But I need to google how to use it all the time.

                                                                                                              And JS, in contrast, is my “native” language. I remember it even at 3 a.m. So scripting in JS is, let’s say, much easier for me than in bash.

                                                                                                              But some things are just easier in bash, like cat | wc -l. Zx helps me combine the power of the two.

                                                                                                          I also believe it’s true for lots of people. This is why zx saw such an increase in popularity.

                                                                                                          1. 4

                                                                                                            You’re absolutely right. That’s why it’s an unpopular opinion. 😁

                                                                                                            I’m fully aware that I’m far more willing to write far more bash (and zsh) than most people.

                                                                                                            1. 1

                                                                                                                  Julia is tough for me as well; so much feels very magical and I don’t really know how to operate it… but when it works it’s very cool.

                                                                                                            2. 3

                                                                                                              Ditto, but I reach for Ruby because other folks in my team/servers have the runtime installed.

                                                                                                              1. 3

                                                                                                                For scripting inside a project I tend to include an executable task file with this Python starter: https://gist.github.com/sirikon/d4327b6cc3de5cc244dbe5529d8f53ae

                                                                                                                1. 1

                                                                                                                      What is the advantage of this, as opposed to just writing whatever code you need directly and running the script?

                                                                                                                  1. 1

                                                                                                                    With the Python starter I get the basic, repeating stuff done:

                                                                                                                        • A simple help command is generated (triggered by running just ./task); after a week or so without touching a project I forget the commands and want quick help.
                                                                                                                    • Working directory switched to the task file’s directory, handy when running the script from another place like a subdirectory.
                                                                                                                    • A cmd function as a pass-thru to subprocess.run, but forcing check=True.

                                                                                                                    Here’s the task file from one of my projects, as an example: https://github.com/sirikon/bilbostack-app/blob/master/task

                                                                                                                2. 2

                                                                                                                  /bin/sh to perl for me. I’d rather try (un)icon and scsh than javascript.

                                                                                                                  1. 2

                                                                                                                    I usually use Python since it’s available, though I entirely understand your sentiment due to the built-in std::process and clap crate.
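
                                                                                                                        As a rough illustration of that style (the git command and the counting are made up; plain std, no clap):

                                                                                                                          use std::process::Command;

                                                                                                                          // A bash one-liner like git ls-files | wc -l, done with std::process instead.
                                                                                                                          fn main() {
                                                                                                                              let out = Command::new("git")
                                                                                                                                  .arg("ls-files")
                                                                                                                                  .output()
                                                                                                                                  .expect("failed to run git");
                                                                                                                              if !out.status.success() {
                                                                                                                                  eprintln!("git exited with {}", out.status);
                                                                                                                                  std::process::exit(1);
                                                                                                                              }
                                                                                                                              let count = String::from_utf8_lossy(&out.stdout).lines().count();
                                                                                                                              println!("{count} tracked files");
                                                                                                                          }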

                                                                                                                    1. 5

                                                                                                                        the moment I need more than bash I just look at which language is available and has the right libraries; can be Rust, can be Python…

                                                                                                                      1. 1

                                                                                                                        Have you tried xonsh?