1. 2

    I always wanted to write something like this, but I never managed/bothered to assemble suitable documentation and/or existing implementations to test against.

    Well done!

    1. 2

      Thanks!

      For both documentation and implementation testing, I used plan9port. It has a one-shot client for simple reads, writes, and stats, as well as a 9p FUSE adapter. Alternatively, there’s 9p client support built into the Linux kernel.

      I might spin up a 9front VM to do some even more rigorous testing, though.

    1. 5

      I use bash on Debian, with whatever defaults it has.

      A crazy idea: when you get down to it, completion is really about defining a grammar for command-line options - “a command can be the token ls, followed by zero or more of -a, -r, …” where some of the grammar productions are defined dynamically (“filename” being the most obvious example). I’d love a completion system where I can dump grammar files in a standard format (PEG, yacc, EBNF, whatever) into a directory, and executables to produce dynamic productions into another directory; I feel like it would be a lot easier to write completion grammars in a declarative grammar syntax than the imperative-grammar-manipulation system that bash and zsh seem to use.

      1. 2

        It looks like this tool is grammar-based, or at least it’s a DSL and not imperative: https://github.com/mbrubeck/compleat

I definitely think the imperative model is verbose. But I think you have the classic DSL problem in this domain too: you need to be able to “escape” out of the DSL for special cases. @cmhamill, who mentioned compleat in this thread, said that it’s not entirely “flexible”, and I presume that’s what he means.

        1. 1

          That’s a pretty cool tool, thanks for pointing it out!

          It looks like the implemented grammar is a lot simpler than, say, yacc or OMeta, though. While DSLs do often need escape hatches, I’m not sure that the limits of this DSL imply that all command-line parsing DSLs are too limited.

      1. 1

        use emacs keybindings everywhere, in the shell, browser, you name it. On OS X, Karabiner mapped those bindings for me and now on linux laptop with GNOME it is a top-level feature.

        Does anyone know what GNOME feature the author is referring to here?

        1. 9

          Along with UI themes, icon themes and cursor themes, GNOME supports “key themes” which determine the keybindings used in text-entry fields.

          To see the current value, from the command-line:

          dconf read /org/gnome/desktop/interface/gtk-key-theme
          

          To set the theme to “Emacs”:

          dconf write /org/gnome/desktop/interface/gtk-key-theme "'Emacs'"
          

          (the double-quoting means it will be set as a string value)

          To reset to defaults:

          dconf reset /org/gnome/desktop/interface/gtk-key-theme
          

          There’s also a UI for this option, in the gnome-tweaks tool.

          1. 1

Ahh right. Thanks for the explanation. I see there is an “Emacs Input” option in Tweak Tool now.

            1. 1

              Brilliant, I must try this, thanks!

          1. 2

            As long as the fundamental authoritative protocol remains the same, I don’t really see the problem with connecting to your resolver over HTTPS. Is Cloudflare baked in at the protocol level or just the only current large company with lots of servers willing to back this?

            1. 7

              DNS over HTTPS is an IETF draft and Cloudflare is, of course, not baked in.

              1. 3

                It’s not even the only company with a public DoH service; Google has one too.

              1. 2

                I would have expected the installer for Microsoft’s flagship Windows development system to be a standard MSI package, or maybe a .NET application, or maybe some kind of COM component with an HTML UI based on the Internet Explorer engine… I did not expect node.js, or “…many of the Installer files appear to test for Windows and/or *nix, despite VS being Windows-only.”

                1. 3

                  Welcome to the cultural decline of Microsoft - the next generation of devs that work for them were raised on GNU and the web, not on Visual Studio.

                  1. 3

                    Despite being a satisfied user of “the *nix way” of doing things, I am a bit saddened by the outlook of a *nix monoculture.

                    1. 2

Yeah, so many things have been eroded by Unix and the web - VMS (everything is a POSIX app there), Mac OS (Unix hijacked it, and now the web and mobile ports will slowly eat at its native ecosystem), AS/400 (everything new runs in the (slow) AIX compatibility layer due to IBM cost-cutting), and now Windows.

                1. 4

                  I’d like to know why DosBox-based games are on this list, since DosBox on Linux doesn’t need things like Wine and would function the same on Linux as on Windows.

Separately, there are tons of games in my Steam library that have Linux ports, but the Linux port isn’t on Steam natively (e.g. Quake, Unreal, etc.). I’ve never understood why this is. I end up using a custom-compiled ioquake engine with assets from Steam, which works great.

                  1. 4

                    With older games that were ported to Linux, the distribution rights are often (but not always) with a different publisher.

Since Steam requires a game to have ports for different platforms of the same title under the same product ID (and thus publisher), there is no way to set up a proper revenue sharing system for the owners of the Linux ports (or Mac ports; old Mac games are in the same boat).

Steam initially required a single title to be a single product ID because they didn’t want publishers to make people re-buy old titles that were newly ported, and in order to boost SteamOS adoption - this way, many players would have a half-decent Steam library from the get-go on the new platform.

                    Many of the old porting shops for Linux and Mac have gone under, or the ports haven’t been maintained since before Linux 2.6 or even 2.4, meaning that many of the ports can no longer be trivially made to work on modern day distributions. Many games from before say 2003 used SVGA lib to render directly to the framebuffer, for example, without going through X11.

                    So, sadly, many of these ports are lost to the sands of time and the murky status of IP limbo.

This does not explain why DosBox titles are run through Wine, but I guess that’s just a matter of the publisher not being interested in making and testing a Linux build, given the limited revenue that comes from the platform. These re-releases are probably a very low-budget and low-income affair, more for the sake of IP owners being able to point to them and say “see, we still provide these products! Preservationists who are distributing our old games are plain pirates; they are not serving a higher purpose!”. But maybe that’s just me being cynical.

                    1. 1

                      I’m sure in a lot of cases you’re exactly right. I’m just frustrated because the games I’m referring to are largely exceptions. Take Quake 3 - the engine is released under the GPL, has a community maintained fork, targets OpenGL rather than Svgalib and Valve have the same distribution rights to it as to Wine or Dosbox. It’s possible this is still publisher related, for example if Valve are expecting the publisher to compile/support it and the publisher doesn’t do so. In the end it seems like a lack of economic incentive to package and distribute a thing that already exists.

Most id engine games are in this situation, and a couple of those were included in the current beta. They really do use Win32 DosBox on Wine to run a DOS game (so a 500MB download for a 10MB game). 430MB of that is a Wine/Proton tarball which is then extracted (but left on disk), so Proton on disk is 1.6GB to run a 10MB game.

PS. I had great fun with SvgaLib on Linux for games before Steam came along. At one point I was using an a.out version of Doom on a much newer system, and it worked great because a.out had a parallel set of usermode libraries, so everything was period-correct except for the kernel, which was the only thing that needed to be compatible.

                    2. 2

This has annoyed me before. I got a DOS game from GOG a while ago and thought it would be trivial to run on Linux, but it turns out GOG bundles the game and DosBox together in a way you can’t split apart. I tried to get the DosBox version to run in Wine, but it wasn’t working, so I had to find a torrent of the original DOS copy.

                      1. 6

                        GOG gives you a single installer that includes DosBox and the game itself; once you’ve gotten the files out you can ignore the DosBox-related ones in favour of running the original binary in your own copy of DosBox or open-source re-implementation or whatever.

                        To get the files out, you can run the installer in Wine, or use a tool like innoextract.

                        1. 2

The worst ones are when GOG.com is delivering a butchered Win32 game and you can’t get the original copy out of it.

                    1. 4

                      As someone who never used Rust I want to ask: does the section about crates imply that all third-party libraries are recompiled every time you rebuild the project?

                      1. 6

                        Good question! They are not; dependencies are only built on the first compilation, and they are cached in subsequent compilations unless you explicitly clean the cache.

                        1. 2

                          I would assume dependencies are still parsed and type checked though? Or is anything cached there in a similar way to precompiled headers in C++?

                          1. 10

                            A Rust library includes the actual compiled functions like you’d expect, but it also contains a serialized copy of the compiler’s metadata about that library, giving function prototypes and data structure layouts and generics and so forth. That way, Rust can provide all the benefits of precompiled headers without the hassle of having to write things twice.

                            Of course, the downside is that Rust’s ABI effectively depends on accidental details of the compiler’s internal data structures and serialization system, which is why Rust is not getting a stable ABI any time soon.

                            1. 4

                              Rust has a proper module system, so as far as I know it doesn’t need hacks like that. The price for this awesomeness is that the module system is a bit awkward/different when you’re starting out.

                            2. 1

                              Ok, then I can’t see why the article needs to mention it. Perhaps I should try it myself rather than just read about its type system.

                              It made me think it suffers from the same problem as MLton.

                              1. 4

                                I should’ve been more clear. Rust will not recompile third-party crates most of the time. It will if you run cargo clean, if you change compile options (e.g., activate or deactivate LTO), or if you upgrade the compiler, but during regular development, it won’t happen too much. However, there is a build for cargo check, and a build for cargo test, and yet another build for cargo build, so you might end up still compiling your project three times.

                                I mentioned keeping crates under control, because it takes our C.I. system at work ~20 minutes to build one of my projects. About 5 minutes is spent building the project a first time to run the unit tests, then another 10 minutes to compile the release build; the other 5 minutes is spent fetching, building, and uploading a Docker image for the application. The C.I. always starts from a clean slate, so I always pay the compilation price, and it slows me down if I test a container in a staging environment, realize there’s a bug, fix the bug, and repeat.

One way to make sure that your build doesn’t take longer than needed is to be selective in your choice of third-party crates (I have found that the quality of crates varies a lot) and to make sure that a crate pays for itself. serde and rayon are two great libraries that I’m happy to include in my project; on the other hand, env_logger brings in a few transitive libraries for coloring the log it generates. However, neither journalctl nor docker container logs show colors, so I am paying a cost without getting any benefit.

                                1. 2

Compiling all of the code, including dependencies, can make some types of optimizations and inlining possible, though.

                                  1. 4

Definitely, this is why MLton does it: it’s a whole-program optimizing compiler. The compilation-speed tradeoff is so severe that its users usually resort to using another SML implementation for actual development and debugging and only use MLton for release builds. If we can figure out how to make whole-program optimization detect which already-compiled bits can be reused between builds, that may make the idea more viable.

                                    1. 2

In the last discussion, I argued for a multi-stage process that improves developer productivity, especially keeping the mind flowing. The final result is as optimized as possible. No wait times, though: you always have something to use.

                                      1. 1

                                        Exactly. I think developing with something like smlnj, then compiling the final result with mlton is a relatively good workflow. Testing individual functions is faster with Common Lisp and SLIME, and testing entire programs is faster with Go, though.

                                        1. 2

                                          Interesting you mentioned that; Chris Cannam has a build setup for this workflow: https://bitbucket.org/cannam/sml-buildscripts/

                              1. 2

The first thing I checked was whether there is anything in between U+2336 and U+237A. Nope.

                                1. 3

                                  (APL symbols, for folks following along at home)

                                  1. 2

I don’t think there should be. This is for icon fonts, where they put a bunch of useful “icons” in the private use area of the Unicode code space, and then you can use those codepoints wherever you use text to get useful little icons.

                                    The project linked here patches other fonts (including, presumably, those that have APL characters) to include these icons.

                                    (Note that I don’t really like the use of icon fonts, but that’s neither here nor there…)

                                  1. 7

                                    Another weird Game Boy peripheral: fish-finder sonar.

                                    1. 8

                                      A bunch of Debian’s officially supported architectures appear on Rust’s supported platform list under the “Tier 2” heading, i.e. the upstream project doesn’t run tests on these platforms so it doesn’t make sense for Debian to run those tests either.

                                      It would be nice if Debian ran tests for platforms in the intersection of their “officially supported” sets (basically x86 and x86_64 Linux), but I don’t know how flexible Debian packages can be in that regard.

                                      1. 10

Debian still runs all the tests, they just don’t fail the build if tests fail any more. Another change is that Debian used to diligently report all test failures upstream, but they no longer do, because upstream doesn’t care. They now report failures only if they have fixes as well.

                                      1. 7

As usual with decentralized systems, the main problem I had was discovering good feeds. One could find stuff, if one knew what one was looking for, but most of the time these feeds only contain the first few lines of an article. And then again, there are other feeds that just post too much, making it impossible to keep up. Not everyone is coming to RSS/Atom with a precomposed list of pages and websites they read.

These are the “social standards”, which I believe are just as important as the technical standards, and which ought to be clarified in a post like this one.

                                        1. 6

I agree. Finding good feeds is difficult indeed, but I believe that good content does spread by word of mouth at some point (it may even be word of mouth on social media, actually). Feeds that post too much are definitely a problem. Following RSS/Atom feeds of newspapers specifically defeats the purpose. Nobody can manage this hilarious amount of posts, often barely categorised. I don’t have a solution for these at hand; this article suggests that the standard should be improved on this. It might be a good idea to do so.

                                          Excerpt feeds I don’t like, because they are hard to search using the feed reader’s search facilities. I however still prefer an excerpt feed over no feed at all, which is why the article mentions this as a possible compromise. The main reason for excerpt feeds appears to be to draw people into the site owner’s Google Analytics.

                                          1. 3

As far as unmanageably large and diverse sites go, I seem to recall that at The Register you can/could run a search and then get an RSS feed for current and future results of that query. Combined with ways to filter on author etc., that worked a treat.

                                          2. 2

                                            the main problem I had was discovering good feeds

                                            This is why my killer feature (originally of GOOG Reader and now of NewsBlur) is a friends/sharing system. The value of shared content is deeply rooted in discovery of new feeds.

                                            feeds only contain the first few lines of a article

                                            Modern feed readers generally support making it easy to get full articles / stories without a context switch.

                                            feeds that just post too much, making it impossible to keep up

Powerful filtering is another place where modern readers have innovated. I’d definitely check them out, because these are solved problems.

                                            1. 2

                                              Can you recommend any specific readers?

                                              1. 1

                                                NewsBlur is pretty great. It’s a hosted service, rather than a local application, but that’s kind of necessary for the whole “sharing” thing.

                                                1. 1

                                                  If you’re an emacs user: elfeed. It has pretty good filtering and each website can be tagged.

                                                  1. 1

                                                    I tried that for a while, but eventually I just couldn’t keep up. I never really have the time to read an article when I’m in Emacs, since usually I’m working on something.

                                                  2. 1

                                                    I have been quite pleased with NewsBlur. It has the added benefit of being open source, so if it were to disappear (cough, cough, GOOG Reader), it could easily be resurrected.

For the social aspect, of course, you might want to poll friends first to see what they are on.

                                              1. 6

                                                Clearly a lot of work went into this post, but I found it less of a “state of type hints” and more of a “what and how”, in that I was expecting an overview of adoption, or progress in the type system. Having said that, it’s probably an excellent post for someone wondering “what are type hints in Python and how can I use them?”

                                                However, I’m pretty sure the final paragraph is incorrect?

                                                Remember that, similar to unit tests, while it does makes your code base contain an extra number of lines, at the end of the day all the code you add is code that is automatically checked and enforced to be correct. It acts as a safety net to ensure that when you change things around later on things keep working

                                                In a statically-typed language this would be true, but type hints in Python are just hints. There’s no guarantee from Python that either the hints or what’s passed is actually correct.

                                                1. 7

                                                  While it’s true that Python’s type-hints aren’t automatically enforced, the same is true of tests: they don’t help either unless you run the test suite. At least for Python, comparing types and tests in this way seems reasonable.

                                                  1. 3

                                                    In a statically-typed language this would be true, but type hints in Python are just hints.

                                                    I write a lot of Python and thus far I’ve eschewed type hints, for the simple reason that if I’m gonna go to all that effort I might as well just write Go.

                                                    (I’m being slightly sarcastic, but there’s a kernel of truth in there.)

                                                    1. 6

                                                      I also write a lot of Python, and I type-hint 100% of it. The type hints typically take very little effort to write, and the benefits (IDE help + fewer bugs + easier refactoring) save me a lot of work, so on net it’s almost certainly effort-saving.

                                                      (A bonus that’s possibly orthogonal to effort is that having to think about and write out the types I use encourages me to design cleaner APIs and makes a lot of bad code smells much more obvious.)

                                                      1. 2

                                                        Sure, and Python will still support that. Now imagine you’re an organization like Google and you have 100 million lines of Python in your company’s repo. It’s not going to be economically viable to rewrite it but you’d still like to incrementally add type checking to try to lower the rate bugs are found in Python code. That’s where Python’s type checking makes sense at the moment. I’ve still yet to see much adoption in the scientific Python space, for example, although I bet that’s mostly because NumPy doesn’t have type stubs yet.

                                                        1. 3

If wishes were fishes, one of the fish in my river would be to have something like Elm’s record types, but for specifying Pandas data frame column names instead of record fields. Elm’s record types do not require you to specify all of the input’s fields; instead, you specify the ones the function expects, and the fields that are guaranteed to be in the output. The typechecker keeps track of which fields a record has as it passes through the functions.

                                                          It would be perfect for the domain-specific cleaning and wrangling functions one writes on top of pandas. I realise this is not a trivial thing to wish for, though.

                                                    1. 7

                                                      I’ve been guilty of trash-talking other projects myself in the past

                                                      Well, the blog is titled “Software is Crap” :)

                                                      [Rust’s] designers made the unfortunate choice of having memory allocation failure cause termination – which is perhaps ok for some applications, but not in general for system programs, and certainly not for init

                                                      Rust can help with not allocating at all (e.g. heapless), and try_reserve is in nightly already.

                                                      Zig though is a language oriented exactly at this: it forces you to manually pick an allocator and handle allocation failure. But it is much younger than Rust, so if you’re worried about Rust “mutating” (FYI, Rust 1.x is stable, as in backwards compatible), it’s way too early to consider Zig (0.x).

                                                      non-Linux OSes are always going to be Rust’s second-class citizens

                                                      Yeah, related to that: Rust’s designers made the unfortunate assumption that OSes don’t break userspace from one release to another, just like Linux. The target extension RFC would solve this.

                                                      Other than that… while the core team is indeed focused on the “big three” (Linux/Windows/Mac), Rust does support many “unusual” targets, including Haiku, Fuchsia, Redox, CloudABI.

                                                      Back to inits and service managers/supervisors:

                                                      There are so many of them, many of them are interesting (I’ve been looking at immortal recently), but they all have one big problem: existing service files/scripts on your system are not written for them. So I usually end up just using FreeBSD’s rc for basic pre-packaged daemons + runit for my own custom stuff.

                                                      The Ideal Service Manager™ should:

                                                      • read existing service definitions from system packages (rc scripts, OpenRC scripts, systemd units, daemontools/runit/s6 style bare shell scripts)
                                                      • prevent the services from daemonizing, somehow (injecting -f into $SERVICE_flags? horrible and evil hacks like LD_PRELOADing a library that overrides libc’s daemon() with a no-op? lol)
                                                      • force the services to log to syslog, somehow (redirect stdout/stderr, but what about daemons that open a custom logfile by default? maybe just let them do that)
                                                      • supervise them like runit does

                                                      I guess instead of preventing forking it can support tracking forking services with cgroups on Linux, and… with path=/ ip4=inherit ip6=inherit sysvmsg=inherit ... jails on FreeBSD? I wish there was a 100% reliable way to make sure any service runs in the foreground.

                                                      1. 5

                                                        Well, the blog is titled “Software is Crap” :)

                                                        Yeah, there is that. I had originally wanted to emulate a humorous style I’d seen elsewhere (the long defunct “bileblog”) which badmouthed things in such an over-the-top fashion that you knew it was humorous; I could never quite get that right and it always seemed like I was just being nasty. Now I just try to provide objective criticism; it’s probably not as entertaining to read, but it’s also less likely to upset people. And of course, I also write about Dinit and occasionally write (hopefully) helpful articles on other topics.

                                                        Rust can help with not allocating at all (e.g. heapless), and try_reserve is in nightly already. Zig though is a language oriented exactly at this:

                                                        heapless probably wouldn’t serve my needs, but things like try_reserve are what are sorely needed for Rust to be a serious systems language, so I’m glad that’s happening. There are other reasons (perhaps more subjective) that I don’t like Rust - particular aspects of its syntax and semantics bother me - but in general I think the concept of ownership and lifetime as part of type are worthwhile. I have no doubt that good things will come from Rust.

                                                        As for Zig, I need to look at it again. It certainly also has promise; but you’re right that I’d be worried about its stability and future.

                                                        I guess instead of preventing forking it can support tracking forking services with cgroups on Linux, and… with path=/ ip4=inherit ip6=inherit sysvmsg=inherit … jails on FreeBSD? I wish there was a 100% reliable way to make sure any service runs in the foreground.

                                                        Yeah, that’s a fundamental problem. Linux and DragonFlyBSD both have a simple means to prevent re-parenting past a particular process, which is one potential way to solve it (if you are ok with inserting an intermediate process, and really I don’t think that’s a big deal); cgroups/jails as you mention are another; any other option starts to feel pretty hacky (upstart apparently used ptrace to track forks, but that really feels like abuse of the mechanism to me).

                                                        Thanks for your comments.

                                                        1. 5

                                                          Yeah, there is that. I had originally wanted to emulate a humorous style I’d seen elsewhere (the long defunct “bileblog”) which badmouthed things in such an over-the-top fashion that you knew it was humorous; I could never quite get that right and it always seemed like I was just being nasty. Now I just try to provide objective criticism; it’s probably not as entertaining to read, but it’s also less likely to upset people. And of course, I also write about Dinit and occasionally write (hopefully) helpful articles on other topics.

The problem with it is: that style of humor is so common in the programming world that even a good instance of it is not at all novel. Also, as you say, it’s very hard to get right, even for seasoned comedians, which - no offense - most programmers aren’t.

                                                          heapless probably wouldn’t serve my needs, but things like try_reserve are what are sorely needed for Rust to be a serious systems language, so I’m glad that’s happening.

Everyone attaches their own meaning to “systems language”, and adding “serious” feels a bit like moving goalposts. “Ah, yeah, you got the systems part down, but how about serious”. It might not be convenient in all places and I agree that some things are undone, but we’re up against literally decades-old languages. We’re definitely serious about getting that issue solved in a foreseeable timeframe.

                                                          Heapless helps in the sense that you can provide your own stuff on top. Even the basic Box type in Rust is not part of libcore, but libstd.

                                                          Servo takes a middle ground of extending Vec with fallible push. (https://github.com/servo/servo/blob/master/components/fallible/lib.rs)

The thing here is mostly that the stdlib’s collections consider allocation failure an unrecoverable error. For ergonomic reasons, that’s a good pick for a standard library.

So, it’s perfectly feasible to write your own collection library (or, for example, an extension) even now.
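
As a rough illustration (this is not Servo’s actual API; the trait and method names here are made up), an extension trait with a fallible push might look something like this, built on the try_reserve API mentioned above:

// Hypothetical extension trait: hand the value back instead of aborting on OOM.
trait TryPush<T> {
    fn try_push(&mut self, value: T) -> Result<(), T>;
}

impl<T> TryPush<T> for Vec<T> {
    fn try_push(&mut self, value: T) -> Result<(), T> {
        // try_reserve reports allocation failure as an Err instead of aborting,
        // and the subsequent push cannot reallocate because space is reserved.
        match self.try_reserve(1) {
            Ok(()) => {
                self.push(value);
                Ok(())
            }
            Err(_) => Err(value),
        }
    }
}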

                                                          Also, here’s a list of notes about what’s needed to make fallible stuff in the language proper cool. I can assure you after attending the All Hands that this is definitely a hot topic, but also a hard one.

                                                          This just as a little bit of context, I’m not trying to convince you.

                                                          I’d be very interested in what your semantic issues with Rust are.

                                                          To add to that, I’m happy that you took a look at the language, even if you came away wanting.

As for Zig, I need to look at it again. It certainly also has promise; but you’re right that I’d be worried about its stability and future.

                                                          I’m definitely hoping for more “new generation” systems programming languages. I think there is quite some space around and I hope that some of these make it.

                                                          1. 4

                                                            I’d be very interested in what your semantic issues with Rust are.

                                                            A proper answer to that would need me to sit down for an hour (or more) and go through again the material on Rust to remember the issues I had. Some of them aren’t very significant, some of them are definitely subjective. I should qualify: I’ve barely actually used Rust, just looked at it a number of times and had second-hand exposure via friends who’ve been using it extensively. The main thing I can remember off the top of my head that I didn’t like is that you get move semantics by default when passing objects to functions, except when the type implements the Copyable trait (in which case you get a copy), so the presence or absence of a trait changes the semantics of an operation. This is subtle and, potentially, confusing (though the error message is pretty direct). I’d rather have a syntactic distinction in the function call syntax to specify “I want this parameter moved” vs copied.

                                                            Other things that bother me are lack of exceptions (I realise this was most likely a design decision, just not one that I agree with) and limited metaprogramming (the “hygienic macro” facility, when I looked at it, appeared a bit half-baked; but then, I’m comparing to C++ which has very extensive metaprogramming facilities, even if they have awful syntax).

                                                            I can assure you after attending the All Hands that this is definitely a hot topic, but also a hard one.

                                                            Yep, understood.

                                                            I’m happy that you took a look at the language, even if you came away wanting.

                                                            I’ll be continuing to watch closely. I’m very interested in Rust. I honestly think that some of the ideas it’s brought to the table will change the way future languages are designed.

                                                            1. 3

                                                              …you get move semantics by default when passing objects to functions, except when the type implements the Copyable trait (in which case you get a copy), so the presence or absence of a trait changes the semantics of an operation.

I can definitely understand how that would feel worrying, but in practice it’s not so bad: Rust doesn’t have copy constructors, so the Copy trait means “this type can be safely memcpy()d”. For types that can be cheaply and infinitely duplicated without (heap) allocation, like u32, copy vs. move isn’t that much of a semantic difference.

                                                              The closest thing to C++’s copy constructor is the Clone trait, whose .clone() method will make a separately-allocated copy of the thing. Clone is never automatically invoked by the compiler, so the difference between moving a String versus copying a String is somefunc(my_string) versus somefunc(my_string.clone()).
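
A small sketch of the difference (somefunc and my_string are just placeholder names):

fn somefunc(s: String) {
    println!("{}", s);
}

fn main() {
    let my_string = String::from("hello");
    somefunc(my_string.clone()); // explicit copy via Clone; my_string is still usable
    somefunc(my_string);         // move; my_string cannot be used after this line

    let n: u32 = 42;
    let m = n;                   // u32 is Copy, so this is a plain bitwise copy
    println!("{} {}", n, m);     // both remain usable
}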

                                                              lack of exceptions

As a Python programmer, I’m pretty happy with Rust’s error-handling, especially post-1.0 when the ? early-return operator was added. I feel it’s a very nice balance between C and Go-style error handling, which is explicit to the point of yelling, and Java and Python-style error handling, which is minimal to the point where it’s hard to say what errors might occur where.
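
For illustration, a small sketch of that middle ground (the file name is made up):

use std::fs::File;
use std::io::{self, Read};

// Each fallible call is visibly marked with ?, but the error is returned to the
// caller early rather than handled with an explicit check at every step.
fn read_config() -> io::Result<String> {
    let mut contents = String::new();
    File::open("config.toml")?.read_to_string(&mut contents)?;
    Ok(contents)
}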

                                                              limited metaprogramming

It depends how much you care about getting your hands dirty. Rust doesn’t have full-scale template metaprogramming like C++, but the hygienic macro system (while limited) is a good start. If you want to go further, Rust’s build system includes a standard and cross-compilation-friendly system for running tasks before your code is compiled, so you can run your code through cpp or xsltproc or m4 or a custom Python script or whatever before the Rust compiler sees it. Lastly, “nightly” builds of the compiler will load arbitrary plugins (“procedural macros”) which will let you do all the crazy metaprogramming you like. Since this involves tight integration with the compiler’s internals, this is not a stable, supported feature, but nevertheless some high-profile Rust libraries like the Rocket web framework are built on it.
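
The pre-compile hook mentioned here is Cargo’s build script. A minimal sketch of a build.rs that generates code before compilation (the generated constant is just an example) might look like:

// build.rs - Cargo runs this before compiling the crate itself.
use std::{env, fs, path::Path};

fn main() {
    // OUT_DIR is set by Cargo; generated Rust code written there can be pulled
    // into the crate with include!(concat!(env!("OUT_DIR"), "/generated.rs")).
    let out_dir = env::var("OUT_DIR").expect("Cargo sets OUT_DIR");
    let dest = Path::new(&out_dir).join("generated.rs");
    fs::write(&dest, "pub const GENERATED: &str = \"made at build time\";\n")
        .expect("could not write generated code");
    // Only re-run this script when it changes.
    println!("cargo:rerun-if-changed=build.rs");
}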

                                                          2. 2

                                                            Linux and DragonFlyBSD both have a simple means to prevent re-parenting past a particular process

                                                            Hmm?? This sounds very interesting! Please tell me more about it.

                                                            upstart apparently used ptrace to track forks

                                                            Oh, this made me realize that I can actually use DTrace to track forks!

                                                            1. 4

                                                              Hmm?? This sounds very interesting! Please tell me more about it.

In Linux: prctl(PR_SET_CHILD_SUBREAPER, 1);

In DragonFlyBSD (and apparently FreeBSD too, I see): procctl(P_PID, getpid(), PROC_REAP_ACQUIRE, NULL);

                                                              In both cases this marks the current process as a “reaper” - any child/grandchild process which double-forks or otherwise becomes orphaned will be reparented to this process rather than to init. Dinit uses this already to be able to supervise forking processes, but it still needs to be able to determine the pid (by reading it from a pid file). There’s the possibility though of inserting a per-service supervisor process which can then be used to keep track of all the processes that a particular service generates - although it still doesn’t provide a clean way to terminate them; I think you really do need cgroups or jails for that.
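
For the Linux case, a minimal sketch of what that looks like (written here in Rust via the libc crate, since this thread has been discussing Rust; not Dinit’s actual code, and the FreeBSD/DragonFly procctl call is analogous):

// Mark the current process as a "child subreaper" on Linux, so orphaned
// descendants are reparented to it instead of to init (PID 1).
fn become_subreaper() -> std::io::Result<()> {
    let rc = unsafe { libc::prctl(libc::PR_SET_CHILD_SUBREAPER, 1 as libc::c_ulong) };
    if rc != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}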

                                                          3. 2

                                                            [Rust’s] designers made the unfortunate choice of having memory allocation failure cause termination – which is perhaps ok for some applications, but not in general for system programs, and certainly not for init

Or just run under Linux and have random processes killed by the OOM killer at random times, because that’s so much better than letting a program know that the allocation didn’t really succeed twenty minutes ago, when it could have done something about it.

                                                            1. 2

                                                              Agreed, the OOM killer is totally bonkers, but its existence doesn’t justify stopping a program due to a failed allocation.

                                                              1. 3

                                                                its existence doesn’t justify stopping a program due to a failed allocation.

                                                                Yes, especially since overcommit can be turned off, which should largely (if not always - I’m not sure) prevent the OOM killer from acting.

                                                                1. 1

                                                                  IIRC overcommit is even off by default in Debian.

                                                                2. 1

                                                                  Right. I was saying just let malloc return NULL and let the program deal with it instead of basically lying about whether the allocation succeeded or not. I disable memory overcommit on most of my systems.

                                                                  1. 1

                                                                    For C I totally agree.

                                                                    The Rust equivalent would be:

                                                                    let b = Box::new(...);
                                                                    

                                                                    But Box::new doesn’t return a Result. If allocation fails, the program is terminated.

                                                                    And so far we have only really talked about the heap. As far as I can tell you never know if stack allocation succeeded until you get a crash! Even in C. But I suppose once the stack is hosed, so is your program, which may not be true for the heap.
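
For contrast, a hedged sketch of heap allocation that reports failure, using the nightly try_reserve mentioned elsewhere in the thread (the function name is made up):

// Return None instead of terminating the process when the allocation fails.
fn make_buffer(len: usize) -> Option<Vec<u8>> {
    let mut buf = Vec::new();
    buf.try_reserve(len).ok()?;   // allocation failure becomes None, not an abort
    buf.resize(len, 0);           // cannot fail to allocate: capacity is reserved
    Some(buf)
}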

                                                            1. 11

                                                              os

                                                              Upvote and/or comment here if you prefer “os” to be the label of the new taxon.

                                                              1. 5

                                                                I see a lot of Plan 9 links in the list, and I don’t think anybody running Plan 9 these days is doing operating systems research or writing papers about it, so “osres” or “os-research” doesn’t seem to fit. Meanwhile, “osdev” makes me expect articles on things like “how to map the VGA framebuffer” and “how to get from 8086 real mode to x86_64 long mode”.

                                                                “os” is perhaps a little generic, but I guess the alternative would be to tag such posts as “software”, which is even more so.

                                                                1. 5

                                                                  I do operating system research based on Plan 9.
And I think what 9front developers do counts as research just as much.
Also, people usually write “papers” when the research produces something interesting. It might take a while. :-)

Note that “os” is fine with me, I just think it will require more moderation to keep it focused on lesser-known operating systems.

                                                              1. 2

                                                                I’ve definitely worried about “will this test pass for a trivial reason instead of the real reason” before, and often I try and resolve it by (for example) examining the description associated with the exception I caught, a notoriously brittle approach that adds test maintenance burden.

                                                                This seems a much more sensible and robust solution, and I’m itching to find out whether it’s as practical and useful in large code-bases as it sounds.

                                                                1. 22

                                                                  So (1) nobody cared to improve the old tools, writing new ones is more fun, and (2) updating the old tools to reflect current reality would break old scripts, so the rational choice is to both let the old tool rot (thus quite possibly breaking anything that relies on it) as well as introduce a new tool that definitely isn’t compatible with the old scripts. Why do I feel like this line of arguing has a problem or two?

Pray tell, what happens when the interface provided by the iproute2 utilities no longer reflects reality? Let them rot and write yet another set of new tools? Break them? Introduce subtle lies?

Oh by the way, if you’re configuring IPv6 on Linux, don’t use the old tools. They’re subtly broken and can waste a lot of your time. I’ve been there. Don’t mention it.

                                                                  Meanwhile, I’m glad that OpenBSD’s ifconfig can show more than one IP per interface. And I can use the same tool to set up my wifi too. It’s a tool meant to work.

                                                                  1. 12

The BSDs are maintaining the kernel and the base system in lockstep. This is not the case for Linux distributions. Over the years, Linux developers started to do the same. That’s why we now have iproute2, ethtool, iw and perf, which are userland tools evolving at the same speed as the kernel (and sharing the version number).

                                                                    1. 8

                                                                      nobody cared to improve the old tools

                                                                      The people who want to use the old tools want them to keep working the same way they always have. They already work that way, so the people who want to use the old tools have no motivation to make changes.

                                                                      updating the old tools to reflect current reality would break old scripts

                                                                      It would also piss off the people who want to keep using the old tools, since by definition they would no longer keep working the same way.

                                                                      The names ifconfig and netstat are now claimed and cannot be re-used for a different purpose, in much the same way that filenames like COM1 and AUX are claimed on Windows and cannot be re-used.

                                                                      Meanwhile, I’m glad that OpenBSD’s ifconfig can show more than one IP per interface.

                                                                      My understanding is that OpenBSD reserves the right to change anything at any time, from user-interface details down to the C ABI. “The people who want to use the old tools” are discouraged from using OpenBSD to begin with, so it’s not surprising that OpenBSD doesn’t have to wrestle with these kinds of problems.

                                                                      1. 4

                                                                        the right to change anything at any time

While this is true, I think you are taking it a little too literally. You won’t, for example, upgrade to the latest snapshot and find that ls has been replaced by some new tool with a completely different name, or that CVS has been replaced by Git. And while POSIX doesn’t require (best I can tell) a tool named ifconfig, it’s very unlikely you would find it replaced by something else.

                                                                        1. 5

Right. And by following the discussions on tech@, I’ve gotten the impression that Theo (as well as many other developers) deeply cares about avoiding unnecessary change to the user-facing parts as tools get replaced or extended. Case in point: daemon configuration. The system got extended with the rcctl tool, but the old way of configuring things via rc.local and rc.conf.local still works as it always did. Nothing like the init system swaps on Linux. Still, extending or changing the behavior of a tool even at the risk of breaking some old script seems to be preferred to making new tools that require everyone to adapt.

After a decade of using Linux as well as OpenBSD, I’d say that OpenBSD is way more committed to keeping the user-facing parts finger-compatible while breaking ABI more freely (“we live in a source code world”). In the Linux world I’ve come to expect ABI compatibility, but user compatibility gets rekt all the time.

                                                                    1. 6

                                                                      focuses to solve a single (possibly complex) problem (mosh)

                                                                      But mosh solves two generally unrelated problems: “input prediction” and “automatic reconnection”. It just so happens that a lot of people are using high-latency, low-reliability connections (3G) and find both features useful at the same time, but there definitely exist high-latency, high-reliability connections (like connecting to a data-centre on another continent) and low-latency, low-reliability connections (like connecting from my laptop to my desktop). Sometimes I really wish somebody would split mosh apart so I can use the pieces individually.

                                                                      1. 9

                                                                        I’m not qualified to make any judgment on the technical merits of several dependency management solutions, but as someone working primarily in Go, the churn of solutions is starting to have a real cognitive cost.

                                                                        1. 6

Some of the solutions suggested by a couple of the Go devs in that “thread” sound… almost surreal to me.

                                                                          My favorite one so far:

                                                                          We’ve been discussing some sort of go release command that both makes releases/tagging easy, but also checks API compatibility (like the Go-internal go tool api checker I wrote for Go releases). It might also be able to query godoc.org and find callers of your package and run their tests against your new version too at pre-release time, before any tag is pushed. etc.
                                                                          https://github.com/golang/go/issues/24301#issuecomment-390788506

                                                                          With all the cloud providers starting to offer pay-by-the-second containers-as-a-service, I see no reason we couldn’t provide this as an open source tool that anybody can run and pay the $0.57 or $1.34 they need to to run a bazillion tests over a bunch of hosts for a few minutes. There’s not much Google secret sauce when it comes to running tests.
                                                                          https://github.com/golang/go/issues/24301#issuecomment-390790036

                                                                          That sounds… kind of crazy for anyone that isn’t Google scale or doesn’t have Google money.
                                                                          Are the Go devs just /that/ divorced from the (non Google) reality that the rest of us live in?

                                                                          1. 10

                                                                            Kind of crazy, but not super crazy. As another example, consider Rust’s crater tool. When the Rust team are trying to evaluate the impact of fixing a syntax quirk or behavioural bug, they make a version of the Rust compiler with the change and a version without, and boot up crater to test every publicly available Rust package with both compiler versions to see if anything breaks that wasn’t already broken.

                                                                            crater runs on Mozilla’s batch-job infrastructure (TaskCluster), and Mozilla is much, much smaller than Google scale. On the other hand, they’re still bigger than a lot of organisations, and I believe a crater run can take a few days to complete, so it’s going to be a lot more than “$1.34 … for a few minutes” on public cloud infrastructure.

                                                                            1. 1

I get the spirit of those responses; we’re getting to the point with cloud services where that kind of integration test suite could happen cheaply.

                                                                              But it is not the answer to the problems that prompted those responses.

Dependency management is hard, and there isn’t a perfect solution. MVS is a cool approach and I’m curious how it shakes out in practice, but to OP’s point, I’m not sure I can do another switch like the ones we’ve done up to now:

• Manual vendoring ($GOPATH munging)
• govendor
• dep
• vgo
• whatever fixes the problems with vgo

                                                                              1. 3

                                                                                Agreed. I have a couple of projects that I have switched solutions at least 4 or 5 times already (manual GOPATH munging, godep, gpm, gb, dep), because each time it was either a somewhat commonly accepted solution, or seemed the least worst alternative (before there was any kind of community consensus).

                                                                            2. 3

                                                                              I have yet to migrate a project between dependency managers.

                                                                              The old ones work exactly as well as they always have.

                                                                              1. 2

                                                                                I’ve reverted to using govendor for all new projects. I might be able to skip dep if vgo proves to be a good solution.

                                                                                1. 1

                                                                                     Similar story for us; govendor works better with private repos.

                                                                            1. 3

                                                                              At work we use YYYY.MM.NN for internal software (NN being a 0-indexed release number for that month).

                                                                              I like this for knowing when something was last updated, but it’s not helpful for identifying major changes vs. bugfixes. Perhaps that’s not such a big deal for software that’s on a rapid release cycle.
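
                                                                               For what it’s worth, here’s a rough sketch of how a version in that YYYY.MM.NN scheme could be generated; the tag-listing function is a stand-in for something like “git tag --list”, not a description of anyone’s actual release tooling.

                                                                               package main

                                                                               import (
                                                                                   "fmt"
                                                                                   "strings"
                                                                                   "time"
                                                                               )

                                                                               // existingTags stands in for listing prior release tags
                                                                               // (e.g. via "git tag --list").
                                                                               func existingTags() []string {
                                                                                   return []string{"2018.04.0", "2018.05.0", "2018.05.1"}
                                                                               }

                                                                               func main() {
                                                                                   now := time.Now()
                                                                                   prefix := fmt.Sprintf("%04d.%02d.", now.Year(), int(now.Month()))

                                                                                   // NN is 0-indexed: count how many releases already exist this month.
                                                                                   n := 0
                                                                                   for _, tag := range existingTags() {
                                                                                       if strings.HasPrefix(tag, prefix) {
                                                                                           n++
                                                                                       }
                                                                                   }
                                                                                   fmt.Printf("%s%d\n", prefix, n) // e.g. "2018.05.2" if run in May 2018
                                                                               }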

                                                                              1. 2

                                                                                 It’s also not a big deal for software that’s too big or complex for “major change” to be meaningful. If a tiny, rarely used Debian package removes support for a command-line flag, that’s a major change (in the SemVer sense), and since Debian includes that package, it’s therefore technically a major change in Debian. But if Debian followed SemVer that strictly, its version number would soon leave Chrome and Firefox in the dust, and version numbers would cease being a useful indicator of change.

                                                                                1. 7

                                                                                   Isn’t Debian’s solution to this to not include major-version changes in updates to an existing release? So it usually does wait for the next major version of Debian to be included.

                                                                                  1. 1

                                                                                     Yep, and this is where the -backports or vendor repos are really useful: newer packages built against the stable release.

                                                                                  2. 2

                                                                                     It’s why we have to make “stable” releases. Otherwise everyone goes crazy. If someone is bumping their SemVer major version too often, they have a bad design or do not care about their developers.

                                                                                  3. 2

                                                                                    There’s a discussion of CalVer and breaking changes here: https://github.com/mahmoud/calver/issues/4

                                                                                     Short version: a “public” API is a bit of a pipe dream, and there’s no replacement for reading (and writing) the docs :)

                                                                                    1. 2

                                                                                       The concept of breaking changes in a public API isn’t really related to ‘read the docs’, except when it comes to compiled-in/statically linked libraries.

                                                                                      If you have dependency management at the user end (i.e. via apt/dpkg dependencies that are resolved at install time), you can’t just say “well, install a version that will work, having read the docs and understood what changes when”.

                                                                                       You instead say “I require major version X of package Foo”, because no matter what the developer does, Foo version X.*.* will always be backwards compatible: new features might be added in an X.Y release, but that’s not a problem if they’re added in a backwards-compatible manner (either they’re new functionality that has to be opted into, or, e.g., they don’t require extra options to work).
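
                                                                                       To illustrate that rule with a toy compatibility check (the version struct and function below are mine, not how apt/dpkg actually model versions):

                                                                                       package main

                                                                                       import "fmt"

                                                                                       type version struct{ major, minor, patch int }

                                                                                       // compatible reports whether "have" can satisfy a dependency that was
                                                                                       // declared against "want" under SemVer rules: same major version, and
                                                                                       // at least the minor/patch level the dependent was built against.
                                                                                       func compatible(want, have version) bool {
                                                                                           if have.major != want.major {
                                                                                               return false // a new major version may break callers
                                                                                           }
                                                                                           if have.minor != want.minor {
                                                                                               return have.minor > want.minor // newer minors only add features
                                                                                           }
                                                                                           return have.patch >= want.patch
                                                                                       }

                                                                                       func main() {
                                                                                           need := version{2, 1, 0}
                                                                                           fmt.Println(compatible(need, version{2, 3, 0})) // true: still major 2
                                                                                           fmt.Println(compatible(need, version{3, 0, 0})) // false: major bump
                                                                                       }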

                                                                                       Yes, I know that things like Composer and NPM have the concept of a ‘lock’ file to pin versions to specific ones, but that’s not a solution for anything but internal projects. If you’re using NPM or Composer to install tools that you aren’t directly developing on yourself, you’re doing it wrong.

                                                                                      1. 1

                                                                                        I really don’t see what that has to do with the linked thread. In the very first line, you mention a “public” API. The point is that there’s much less consensus on what constitutes a public API than developers assume. So, you end up having to write/read the docs about what would constitute a “semantic” version change. (Not that docs are a silver bullet, they’re just a necessary part of healthy software development.)

                                                                                        1. 1

                                                                                          The point is that there’s much less consensus on what constitutes a public API than developers assume.

                                                                                           A comment by you making that same claim on GitHub isn’t really evidence of a lack of consensus. What possible definition is there for “public API” besides “something that will be consumed by someone outside the project”?

                                                                                          So, you end up having to write/read the docs about what would constitute a “semantic” version change.

                                                                                           The decision tree for SemVer is two questions, with 3 possible outcomes. And you’ve still ignored my point: adherence to SemVer means you can automatically update dependencies independently of the developer.

                                                                                           So, for instance, if the developer depended on a shared library that happens to have a security vulnerability, then when the library author/project releases a new patch version, end-users can get the fix regardless of what the tool/app developer is doing that week.
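
                                                                                           Spelled out as a toy function (the names are mine, but the logic is just those two questions):

                                                                                           package main

                                                                                           import "fmt"

                                                                                           // nextBump encodes the SemVer decision tree: two questions, three outcomes.
                                                                                           func nextBump(breaksExistingUsers, addsFunctionality bool) string {
                                                                                               if breaksExistingUsers {
                                                                                                   return "major" // incompatible change: dependents must opt in
                                                                                               }
                                                                                               if addsFunctionality {
                                                                                                   return "minor" // new, backwards-compatible functionality
                                                                                               }
                                                                                               return "patch" // fixes only, safe for automatic updates
                                                                                           }

                                                                                           func main() {
                                                                                               // The security-fix case above: nothing breaks, nothing new is added.
                                                                                               fmt.Println(nextBump(false, false)) // "patch"
                                                                                           }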

                                                                                          1. 1

                                                                                             The automatic updates work until they don’t. Here is a recent example where Python’s pip broke with many assumptions about public APIs. Your point has not been ignored; I’ve written about it extensively, in the linked thread and on the linked site (and links therein).

                                                                                             As for your closing comment, I’m noticing an important assumption I’m working to uproot: the current date is not the only possible date in a release version. manylinux2010 came out in 2018, and is named that way because it’s backwards compatible to 2010.

                                                                                            The Teradata example on the CalVer site also highlights maintaining multiple releases, named after their initial release date. At the consumer level, Windows 98 got updates for years after 2000 came out.

                                                                                            1. 1

                                                                                               That isn’t a failing of SemVer; it’s a failing of the developers, who didn’t properly identify that they had a breaking change.

                                                                                               The same thing would have happened under CalVer: they would have marked it as a patch release with compatibility to the previous version, regardless of the date component.

                                                                                              Expecting people to just forget about the possibility of automatic dependency updates is like suggesting people forget that coffee exists after they’ve had it daily for 10 years.

                                                                                  1. 1

                                                                                     TIL there’s a Python package manager that creates virtualenvs automatically. I’d never heard of it before. But PyPA instead of PyPI? Or is PyPA a layer on top of PyPI? Python packaging systems change so fast.

                                                                                    1. 6

                                                                                       PyPA is the Python Packaging Authority, the people who maintain PyPI and tools like pip and setuptools, and make sure they all work nicely together.