1. 2

    This may have some impact on Windows (because they have core architectural mistakes that make processes take up to 100 milliseconds to spin up)

    Do you happen to have a link with more details on this? I’ve heard that Windows is slow for processes/IO several times and I’d be curious to know why (and why they can’t fix it in a backwards-compatible way).

    1. 5

      There are a number of highly upvoted answers here but it’s hard for me to distill anything. It may be that these aren’t good answers.


      1. 6

        I think these are good answers, but I’ve had a lot of exposure to Windows internals.

        What they’re saying is that the barebones NT architecture isn’t slow to create processes. But the vast majority of people are running Win32+NT programs, and Win32 behavior brings in a fair amount of overhead. Win32 has a lot of expected “startup” behavior which Unix is not obliged to do. In practice this isn’t a big deal because a native Win32 app will use multithreading instead of multiprocessing.

        1. 3

          I don’t think that is strictly correct. WSLv1 processes are NT processes without Win32 but still spawn relatively slowly.

          1. 3

            Hm, I remember WSLv1 performance issues mostly being tied to the filesystem implementation. This thread says WSLv1 process start times were fast, but they probably mean relative to Win32.

            I suspect an optimized pico process microbenchmark would perform competitively, but I’m just speculating. The vast majority of Win32 process-start slowdown comes from reloading all those DLLs, reading the registry, etc. That is the “core architectural mistakes” I believe the OP is talking about.

      2. 4

        I don’t remember for sure where I saw this, but it may have been in the WSL1 devlogs? Either way I may have been wrong about the order of magnitude but I remember that Windows takes surprisingly long to spin up new processes compared to Linux.

      1. 1

        I hang out in a couple programming language development chat channels

        Can you share these channels if they are public and accept new people?

        1. 2

          Probably they mean #lang-dev on the Rust discord (https://discord.gg/rust-lang-community) and the Programming Languages discord, which I unfortunately no longer have a link to.

          1. 2

            Correct on both counts. The latter is https://discord.gg/yqWzmkV ; it’s actually an offshoot of Reddit r/programminglanguages , which I’d forgotten. The Rust discord is more fun to actually talk to people in, but the Programming Languages discord occasionally has outright deluges of interesting and/or useful information.

        1. 3

          I’m continuing to turn a shell (dash) into a library so that I can directly integrate it into an experimental terminal.

          As part of this I’m moving all the global state into a struct and passing it around, and as part of that I’m currently boggling at dash’s use of errno - it’s redefining it for glibc and I don’t understand why. If any C experts are interested in taking a look, see the following lines of code: main.h:45-50, main.c:67-69, main.c:97-99. My current best guess is that it’s somehow related to the fact that it longjmps out of its interrupt handler.

          1. 2

            Sounds interesting, what’s the benefit of linking it with the terminal? I can think of a few things, but I’m curious what you want to do with it.

            On a cursory glance, it doesn’t look to me like the dash_errno thing does much. Like it’s not really used anywhere?

            I would be inclined to take it out and see if the shell still works :) Not sure if dash has tests, but I have a whole bunch of shell tests if you like …

            1. 2

              I’m still working out how to best explain this, but basically I’m trying to create a more cohesive/interactive interface between the user and the shell. Things like the ability to filter output based on command/stdout/stderr after the fact. Ability to inspect the output of subshells after the fact. Live monitoring of background jobs/generally better support for running things in the background while still using the shell. An easy interface for taking commands you’ve run and turning them into a shell script. Support for running a shell script “interactively” (similar to in a debugger with step-in/step-over/step-to + live editing), etc.

              This is largely inspired by the good parts of blue ocean (jenkins) and xl-release.

              The cost of this is that I’m going to abandon using a tty - and therefore compatibility with a lot of programs that expect a tty. Possibly I’ll eventually implement a way to run an individual program in a tty, but it’s always going to be a second class citizen/compatibility mode.

              errno within the shell is defined to be *dash_errno - and is used two or three dozen times. After looking at glibc source code as well I’m pretty sure it is just a historical artifact, and yes it appears to still work when removed.

              A bunch of tests will definitely be useful in the near future. Looking at your work - is this the best/current documentation on where to find and how to use your tests?

              1. 2

                OK cool I’ve been thinking of things along the same lines. Although I think there could be a way not to abandon the TTY. What about a GUI that HAS a terminal for commands, so then:

                • the shell itself talks to the GUI ?
                • the child processes talk to the terminal

                For example, someone suggested that the shell’s input line is always at the top and separate from the terminal.

                So that input line could get sent to the shell, and it will need a hook for autocomplete. However, child processes of the shell like ps need to run in a terminal.

                With the current architecture, the shell and the child process share a terminal, but if the shell were factored as a library, then that wouldn’t need to be the case.

                Some links here: Shell as an engine for TUI or GUI

                (it even links to a thread about debuggers)

                I haven’t seen XL release but it seems related. If you have any screenshots or videos of what you’re thinking of, I’d be interested.

                Oil is eventually supposed to be embeddable C++ code with a C interface to support use cases like this. In fact I had a long discussion with the maintainer of the fish shell about embedding Oil in fish. For one, Oil uses almost no global variables, unlike bash.

                One problem I see with dash is that it doesn’t have any autocompletion support? I think that could limit things UI-wise.

                Fish has some of the cleaner code I’ve seen for a shell, although of course it’s not a POSIX shell.

                As far as tests, you can get a feel like this:

                oil$ test/spec.sh smoke  ~/src/languages/dash-0.5.8/src/dash
                case    line    dash    bash    mksh    osh     osh_ALT dash
                  0       4     pass    pass    pass    pass    pass    pass    builtin
                  1       8     pass    pass    pass    pass    pass    pass    command sub

                Notice the extra dash column which is running against a second version of dash.

                It’s not fully automated for running against, say, dash, but it could be. But just running those smoke tests and a few other suites could be useful, and I think it’s quite easy to set up (as long as you have a Debian-/Ubuntu-ish machine).



                Sample results here: https://www.oilshell.org/release/0.8.5/test/spec.wwz/survey/osh.html

                And feel free to ask me any questions… !

                1. 2

                  Wow, yes we’re thinking along very similar lines I think.

                  Your description of how to use tty’s sounds like my plan to eventually implement them, but I’m not keen on it being the default way to run commands. I’d like each command running in a tty to be in its own tty (so I can filter output by which command it came from), and I’d like to have running multiple commands in parallel be a very first class experience. The main problem with tty’s in my opinion is that as soon as you are doing anything that actually needs a tty - you basically need exclusive control over the output buffer or you get weird effects. I.e. interleaving output lines from different tty’s will often not make much or any sense (despite this being approximately what happens by default in terminals).

                  I’m definitely not attached to dash as the shell. I would like something posix-like to start with because I think changing the terminal UI has already more or less exhausted my strangeness budget if I hope to get anyone to actually use it. The other thing is that (like you suggest in the github issue) I want to convert the shell I’m using to be async because of signals (and because it makes the api for using it cleaner). My current somewhat crazy plan for that is to:

                  1. Rewrite everything global to be in a struct.
                  2. Transpile the C source code to rust using c2rust.
                  3. Apply the async keyword everywhere that blocks, which causes the compiler to turn the code into a giant state machine (one of the undersold reasons for using Rust - which has nothing whatsoever to do with safety).
                  4. Replace blocking calls with async calls.
                  5. Ideally also change all fork calls to just be making a copy of the global shell state and keep subshells running in the same process - but I’m not sure how simple that will actually be.

                  Autocompletion support is an interesting point, my naive assumption was that I could handle that separately and have the interface to dash basically be execstring(shell, output_related_callbacks, "thing the user typed") - but I can see how that might be problematic. You know a lot more about shells than I do, so I’m curious if you think I need to start planning for how to handle this now, or alternatively that my original plan of “worry about this later” is going to be ok?

                  My frontend work isn’t far enough along that screenshots are particularly instructive, nevertheless here’s a super early stage image. I don’t have access to xl-release anymore and can’t find screenshots/video showing what I want - but the short version of it is that it’s just about the only scripting system I’ve seen that had editing future steps while a script (pipeline) was running, restarting a script from an arbitrary point, and so on as first class features. The interface was pretty clunky, but it had a lot of good ideas.

                  1. 2

                    Yes I imagine that the GUI would have multiple TTYs for running multiple commands in parallel. This requires some cooperation from the shell as far as I can tell – it can’t really be done with bash or dash as is.

                    The async part is interesting and tricky. wait() is a blocking call, and I wonder if you have to resort to the self-pipe trick to make it async. I feel like it changes the structure of the interpreter in nontrivial ways.

                    BTW in Oil the interpreter <-> OS interface is mostly in a single file:


                    This file has:

                    • class Executor
                      • RunSimpleCommand
                      • RunBackgroundJob
                      • RunPipeline
                      • RunSubshell
                      • ….

                    And actually we have a NullExecutor which I want to use for “pure” config file evaluation, and maybe we should have a WindowsExecutor, since Windows doesn’t have fork() or exec(), etc.

                    But yeah making this async is an interesting problem. I will be interested in the results of that.

                    Somehow I’m a little skeptical of the c2rust approach… it seems more direct just to prototype a mini shell in Rust, i.e. first the synchronous version, and then see how it integrates with the GUI.

                    One place to start might be the xv6 shell. It’s only a few hundred lines but it has pipelines and redirects!


                    Trying to make this async would be an interesting exercise (and it’s not clear to me how).

                    To me the issue with autocompletion is that it should appear in the GUI and not in the terminal? At least the way I was thinking of it. Ditto for command history.

                    I guess what I was warning about is that if you do a ton of work with dash, and then it doesn’t have ANY autocomplete support, then you might have to switch to something else. But xv6 doesn’t either – although it’s much less work, and I think it exposes the core issues of shell more cleanly. In my mind what you’re doing is a fairly ambitious experiment, so it makes sense to try to surface the issues as quickly as possible!

                    But it looks like you already have something working so maybe my concern about dash is overblown.

                    What kind of app is it? Is it like a Linux desktop app?

                    FWIW I created a #shell-gui Zulip channel if you want to chat about it:


                    I seeded it with this older wiki page, which nonetheless might inspire some ideas:


                    1. 2

                      Yeah actually I went through the wiki and a lot of these have cool screenshots/animations:




                      Though I don’t think anyone has done anything quite like what we’re discussing, probably because you have to hack up a shell to do it !! And that’s hard. But I hope Oil can become a library for such experiments. It seems like an “obvious” next step for shell.

                      I use shell because I want a fast, responsive, and automatable interface… but I want to do things in parallel, with UI that’s a little nicer than tmux, and I want some GUI support.

              2. 1

                I was going to suggest looking at the commit history, but it looks like that’s present in the very first commit … Maybe ask on the mailing list? https://git.kernel.org/pub/scm/utils/dash/dash.git/commit/src/main.h?id=05c1076ba2d1a68fe7f3a5ae618f786b8898d327

                The weird thing is that I wouldn’t expect this to actually do anything - isn’t main.h included after the standard library headers? That means that only dash would see the different copy of errno, but it never seems to use it!

                1. 1

                  Dash does use its redefined errno sometimes (note the #define errno (*dash_errno)), but as far as I can tell after looking at the glibc source it is functionally identical to glibc’s errno. My best guess for now is that it’s a historical artifact.

                  Asking on the mailing list would probably work - but I don’t really want to bother maintainers with “so I’m curious about this part of the code” questions when it’s unlikely to be particularly important.

              1. 2

                No language extensions, nice!

                1. 2

                  Haha! Like Stroustrup says, stability is a feature.

                  I’m curious that you don’t count intra-doc links as a language extension though; they’re a whole mini-language in the docs, sort of like aggregate initializers in C.

                1. 6

                  @andyc Congratulations! One of the main reasons I stopped using osh was because of the performance (especially for tab-completion), so it’s great to hear that’s mostly been solved. I might try osh again soon :)

                  I do hope that OSH gets better support for job control, when I stopped contributing it seemed like I was the only one using it, so no one noticed when it broke. There’s also quite a few issues that have been open for a while. I know it’s hairy but I do find it really useful.

                  RE “writing it in Rust would be too much boilerplate” - I’m actually currently in the process of rewriting my parser in Rust, so I’ll let you know how it goes. I plan to use Pratt parsing, not recursive descent, which should cut down on the amount of code: so far the most boilerplate has been the pretty-printing (about 200 lines of code that could probably have been autogenerated). I think this would have been similarly long in any language, although I challenge others to prove me wrong. This of course will change as I make more progress with the parser. Right now I’ve only implemented binary expressions and postfix operators, but the hardest bit is parsing typedefs and function pointer declarations.

                  1. 2

                    I plan to use pratt parsing

                    I agree that Pratt parsing is nice for expressions; but is it any better than plain recursive descent for statements?

                    but the hardest bit is parsing typedefs and function pointer declarations

                    I struggled with that, too. The Right Left Rule might be useful for you: http://cseweb.ucsd.edu/~ricko/rt_lt.rule.html

                    1. 1

                      Yes I’m optimistic Oil will be fast. So far we’ve translated 16K lines of code but that doesn’t include tab completion at the moment. For a variety of reasons like using yield that might be the last thing translated, but we can talk about it.

                      I remember you had a problem with job control but I can’t find the bug right now. I know there are some other unresolved bugs like:


                      Basic job control works for me – I just tested the latest 0.8.pre2 release with vim -> Ctrl-Z -> fg. But there are other hairy parts that aren’t implemented, and probably won’t be without help, because I’m a tmux user :-/ But I also may not have encouraged help there because I knew we were going to translate to C++. The code right now is best viewed as a prototype for a production-quality shell. I expect it will be 3-5K lines of hand-written C++ and 30-50K lines of translated C++.

                      We can talk about it on Zulip maybe, but I don’t think Pratt parsing is great for most full languages like Python or JS, only for the “operator grammar” subset with precedence. And that’s despite my having written a whole series on it!


                      If the language is “normal” I don’t think Rust is a bad idea – after all plenty of parsers are written in Rust. Shell is an especially large language syntactically. It’s much smaller than Python and C++ in general, but the syntax is much larger and requires a big parser.

                    1. 1

                      Switches must be exhaustive. Because one of my main personal projects deals with strings, I’m often matching against them. With IRC, you only have a small number of message types you probably want to match on, but Rust forces you to cover all cases.

                      How does this work in Go? Will Go allow you to write a match statement that only encompasses a finite number of exact strings? What happens if the value to match on is a string that isn’t in the set, does it crash?

                      Idiomatic Rust would suggest creating an enum type to represent every IRC command you care about, converting the raw string to that type early on (or failing if the input string isn’t a valid IRC command), and then using that type in the rest of the code.

                      1. 1

                        Switches in Go have a ‘default’ case (which is optional / ‘noop if not specified’).

                        There are type switches too, but I don’t think you could use subtyping with string enums the way you could in Rust (I could be wrong, but I’ve done quite a bit of Go and have never seen that sort of technique used).

                        1. 6

                          The reason why Rust cannot have an implicit default case is that match in Rust is an expression while switch is a statement in Go. That means it is possible to do something like

                          let foo = match value {
                            Foo => 1,
                            Bar => 2,
                          };

                          In Go this would need to be written:

                          var foo int
                          switch value {
                          case "foo":
                            foo = 1
                          case "bar":
                            foo = 2
                          }
                          1. 4

                            in Rust is an expression while it is a statement in Go.

                            That’s an interesting observation. I’m about to expand it; this is mostly for my own understanding, I don’t think I’m about to write anything you don’t already realize.

                            Go’s switch constructs are imperative and each branch contains statements, which means every branch’s type is effectively Any-with-side-effects-on-the-scope.

                            Rust’s match constructs are expressions, which means (in Rust) that every match arm is also an expression, and all must have the same type T-with-no-side-effects-on-the-scope.

                            (Both languages are free to perform side effects on the world inside their match arms.)

                            Then, if I understand what you’re getting at, statement blocks have an ‘obvious’ null/default value of ‘no-op, do nothing’, which is why Go’s compiler can automatically add a default handler of ‘do nothing if no match’. If the programmer knows that is the wrong default action, they must explicitly specify a different one.

                            Types, on the other hand, have no notion of a default value. Which is why the Rust programmer must match exhaustively, and specify a correct value for each match arm. The compiler can’t add `_ => XXX_of_type_T`, because it cannot know what XXX should be for any type T.

                            1. 3

                              Yes, in theory it could use Default::default() where defined, but it is not defined for all types, so it would be confusing for users. Also, forcing exhaustive matching reduces the number of bugs; in the end you can always write:

                              match value {
                                1 => foo(),
                                2 => bar(),
                                _ => unreachable!()
                              }

                              to tell the compiler that value should be 1 or 2, so it can optimise the rest assuming this is true (and you get a meaningful panic otherwise). unreachable!() returns the bottom type !, which matches any type (as it is meant for functions that do not return at all).

                              1. 3

                                Small nit: unreachable!() doesn’t allow for optimizations, that’s unreachable_unchecked. On the other hand, _unchecked can cause undefined behavior if your expression can actually be reached.

                      1. 2

                        Very informative article. I have a question about this though!

                        + ?Sized - The size of B can be unknown at compile time. This isn’t relevant for our use case, but it means that trait objects may be used with the Cow type.

                        But isn’t B a str which doesn’t have a size known at compile time?

                        1. 2


                          1. 2

                            str doesn’t have a known size, but &str does: it’s the size of a (ptr, len) pair.

                            1. 3

                              Yes, but that’s &B. The statement quoted is that the “B: ?Sized” bound doesn’t matter for the thing described. It does, as B is str.

                          1. 13

                            I wouldn’t say that strong typing removes the need for unit tests. You can have a well-typed function, and a language that enforces that the function is only called with well-typed inputs, and still have that function perform the wrong business logic for your application without a unit test checking this.

                            Can all the historical versions of all the events be deserialized correctly from the event store, getting converted to newer versions (or to completely different event types) as needed?

                            Let’s assume that your event store stores values in some very generic format, like a list of bytes, and that your event type is some kind of complicated enum. Your deserialization function is then something like List byte -> Optional EventType - Optional, of course, because it doesn’t make sense that for every possible list of bytes there will be a valid event value in your program. The bytes comprising the ASCII-encoded Declaration of Independence are a well-typed input to this function, just as the actual bytes in your event store are. So you still need some way to check that you’re doing the business logic of decoding your bytes the right way. A unit test seems perfectly appropriate here. You might even want to have the ASCII-encoded version of the Declaration of Independence in your unit test, to be sure that your function returns None instead of trying to parse it as an event in your system for some reason.

                            1. 4

                              So, I do agree that type systems can never fully replace unit/integration tests; they can catch type errors but not logic errors.

                              However, a good type system can turn logic errors into type errors. A great example of this is returning a reference to a local variable in a language without automatic memory management: in C or C++ it’s completely legal (maybe with a warning), while in Rust it’s a compile error. This isn’t unique to memory management: in C, char + float is a float; in Haskell (and most other functional languages, including Rust), adding a Char to a Double is a type error. One last example: I’m writing a Rust wrapper for a C library. Every C function I call can return PARAM_INVALID if the passed buffer is null. The Rust function doesn’t even document that error, because it’s not possible to have a null reference in Rust’s type system (also not unique to Rust; C++ has references too).

                              My long-winded point is that even though you always need tests, if you have a good type system there are fewer things to test.

                              1. 4

                                Curry-Howard Correspondence says that types ARE logic, so they definitely DO catch logic errors.

                                1. 3

                                  That requires a type system powerful enough to represent a proof. Theoretically this is possible, and there definitely is value in using dependent typing and formal verification tools. But at the moment, with typical programming languages, only limited parts of the business logic can be represented with types.

                                  Even today, with a bit of discipline, it is possible to make certain states impossible to represent in the program. This allows you to eliminate some unit tests, which is definitely a good thing, but we’re still far from the point of proving all of a program’s logic with the type system.

                                  1. 1

                                    I understand they don’t catch all errors because there are some you can’t encode in your logic, I’m just pointing out that “type errors are not logic errors” is totally incorrect!!

                                  2. 1

                                    I’m not terribly familiar with Curry-Howard, but Wikipedia says it states that

                                    a proof is a program, and the formula it proves is the type for the program

                                    I don’t see how that means that types can catch logic errors: the type is a theorem, not a proof. Furthermore, just because you can formalize code as a proof doesn’t mean it’s correct; mathematicians find wrong proofs all the time.

                                    1. 3

                                      If you declare some argument is a String and your code compiles, then the type checker has proven that the argument can only possibly be a String. Passing a non-String argument, as you could do in a dynamic language, is a logic error. You violate a precondition of your function, which only works on strings.

                                      1. 1

                                        Type checkers are logic checkers so you can’t really screw up your proof, only the theorems. Yes, this happens sometimes, but it IS a logic system.

                                        1. 3

                                          I think a better phrasing of skepticism is to ask what there is that can check whether you proved the right things.

                                          Whether it’s done with tests or with types, at some point you are relying on the programmer to provide a sufficiently-correct formal specification of the problem to a machine, and if you declare that it should be done via types because the programmer is fallible and types catch things the programmer will mess up, you potentially trigger an infinite regression of types that check the code, and meta-types that check the types, and meta-meta-types that check the meta-types, and so on.

                                          (of course, this is all very well-trod ground in some fields, and is ultimately just a fancier version of the old “Pray, Mr. Babbage…”, but still a question worth thinking about)

                                    2. 2

                                      See also https://www.destroyallsoftware.com/talks/ideology which goes much more into depth on this.

                                  1. 2

                                    Some Java GUI IDEs have something like this: you can create a component, add various event handlers, then copy-paste the whole component into another menu and it will recreate all the handlers and layout. Excel will also let you copy-paste rows/columns/grids within a spreadsheet. It would be really cool to make that a standardized format so you could use it across applications.

                                    1. 5

                                      I’m looking forward to the rest of the series, as I’m a fan of the author and everything they’ve done for Rust. However, with only the first article out thus far, which merely discusses the components that may cause slow compilation, it leads the reader in an overly negative direction, IMO.

                                      Rust compile times aren’t great, but I don’t believe they’re as bad as the author is letting on thus far. Unless your dev cycle relies on CI and full test suite runs (which require full rebuilds), the compile times aren’t too bad. A project I was responsible for at work used to take ~3-5ish minutes for a full build, if I remember correctly. By removing some unnecessary generics, feature-gating some derived impls, feature-gating esoteric functionality, and reworking some macros as well as our build script, compile times were down to around a minute, which meant partial builds were mere seconds. That, along with test filtering, meant the dev-test-repeat cycle was very quick. Now, it could also be argued that feature gates increase test path complexity, but that’s what our full test suite and CI are for.

                                      Granted, I know our particular anecdote isn’t indicative of all workloads, or even representative of large Servo style projects, but for your average medium sized project I don’t feel Rust compile times hurt productivity all that much.

                                      …now for full re-builds or CI reliant workloads, yes I’m very grateful for every iota of compile time improvements!

                                      1. 7

                                        It is also subjective. For a C++ developer 5 minutes feels ok. If you are used to Go or D, then a single minute feels slow.

                                        1. 5

                                          Personally, slow compile times are one of my biggest concerns about Rust. This is bad enough for a normal edit/compile/run cycle, but it’s twice as bad for integration tests (cargo test --tests) which have to link a new binary for each test.

                                          Of course, this is partly because I have a slow computer (I have a laptop with an HDD), but I don’t think I should need the latest and greatest technology just to get work done without being frustrated. Anecdotally, my project with ~90 dependencies is ~8 seconds for an incremental rebuild, ~30 seconds just to build the integration tests incrementally, and over 5 minutes for a full build.

                                        1. 7

                                          It’s not a majority opinion, but I believe some things should just be kept forever.

                                          Sometimes they were deprecated in Python 2, like deprecating Threading.isAlive in favour of Threading.is_alive, to be removed in Python 3

                                          Like this, for example. Is changing the spelling of something really worth breaking this for everyone?

                                          1. 6

                                            Yeah or just provide an alias and in the docs note that the snake case version is preferred or something.
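
                                            That alias-plus-deprecation pattern is cheap to support; here’s a minimal Rust sketch of the same idea (the names mirror the Python example and are purely illustrative):

                                            ```rust
                                            // The preferred, documented name.
                                            pub fn is_alive() -> bool {
                                                true
                                            }

                                            // The legacy alias: kept forever, but flagged so new code migrates.
                                            #[allow(non_snake_case)]
                                            #[deprecated(note = "use is_alive() instead")]
                                            pub fn isAlive() -> bool {
                                                is_alive()
                                            }

                                            fn main() {
                                                // Old callers keep working; the compiler only emits a warning.
                                                #[allow(deprecated)]
                                                let alive = isAlive();
                                                assert!(alive);
                                            }
                                            ```

                                            The old spelling still compiles (with a warning), so nothing breaks while the docs steer people toward the new name.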

                                            I really, really want to like Python. From a sysadmin perspective it’s a fantastic language: one-file scripts where you don’t have to deal with virtualenvs, and it’s not gonna change much.

                                            From a DevOps perspective (package creation and management, versioning, virtualenv, C based packages not playing well with cloud-oriented distros, stuff like this in doing language version upgrades) I’ve always found it to be a nightmare, and this kind of thing is just another example of that. I tried to install Ansible and am basically unable to do it on a Mac because no version of Python or Pip can agree that I have installed it and it should be in my path.

                                            I don’t begrudge anyone who uses it or think it’s a bad language, that would be pretty obtuse, but I always avoid it when I can personally.

                                            1. 7

                                              This is what we do in Mercurial. Aliases stay forever but are removed from the documentation.

                                              Git does this too. git diff --cached is now a perpetual alias for git diff --staged because the staging area has been variously called the cache and the index. Who cares. Aliases are cheap. Just keep them forever.

                                              1. 2

                                                I didn’t even realize --cached was deprecated, I use that all the time.

                                                1. 3

                                                  And that’s how it should be. You shouldn’t even notice it changed.

                                              2. 4

                                                Yeah or just provide an alias and in the docs note that the snake case version is preferred or something.

                                                That’s exactly what was done: https://docs.python.org/2.7/library/threading.html - unfortunately, not everybody spots this stuff, and people often ignore deprecation notices.

                                                The real problem is that it’s difficult to write a reliable tool to flag and fix this stuff if you can’t reliably do type inference to figure out the provenance of stuff.

                                                I tried to install Ansible and am basically unable to do it on a Mac because no version of Python or Pip can agree that I have installed it and it should be in my path.

                                                Some of the things Homebrew does makes that difficult. You’re right: it’s a real pain. :-( I’ve used pipx in the past to manage this a bit better, but upgrades of Python can break the symlinks, sometimes for no good reason at all.

                                                As far as Ansible goes, I stick to the version installed by Homebrew and avoid any dependencies on any of Ansible’s internals.

                                                1. 1

                                                  From someone who uses Ansible on a daily basis: the best way to use it is by creating a virtualenv for your Ansible project and keeping everything you need in there. My ‘infra’ project has a requirements.txt and a bootstrap.sh that creates the virtualenv and installs all dependencies. If you try to install Ansible at the system level, you are going to hate your life.

                                                2. 6

                                                  Yeah, or at least bring in some compatibility libraries. Deprecating and removing things that aren’t actually harmful seems like churn for the sake of it.

                                                  1. 3

                                                    That (removing cruft, even if not harmful) was basically the reason for Python 3. And everyone agreed with it 10 years ago. And most people using the language now probably came to it after all these decisions had been made, and the need to upgrade to Python 3 was talked about for all this time. Now is simply not the time to question it. Also, making most of those fixes is easy (yes, even if you have to formally fork a library to do a sed over it).

                                                    1. 1

                                                      Those breaking changes came with a major version bump. Why not just wait until Python 4 to remove the cruft?

                                                      1. 3

                                                        There should ideally never be a Python 4: none of those deprecated bits are meant to be used in Python 3 code. They were only present to ease transition in the short term, and it’s been a decade.

                                                        1. 2

                                                          While there are people who prefer a semver-esque approach of bumping the major every time a deprecation cycle finishes, AFAIK Python has no plans to adopt such a process, and intends to just continue doing long deprecation cycles where something gets marked for deprecation with a target release (years in the future) for actually removing it.

                                                          1. 1

                                                            Python’s releases are never perfectly backwards compatible. Like most programming languages, old crufty things that have been deprecated for years are sometimes removed.

                                                            1. 1

                                                              That’s a shame. A lot of languages provide some mechanism for backwards compatibility, either by preserving it at the source or ABI level, or by allowing some kind of indication as to what language version or features the code expects. It’s nice to be able to pick up a library from years ago without having to worry about bit rot.

                                                              1. 2

                                                                It’s a library compatibility issue, not a language compatibility issue. It’s been deprecated for a decade; honestly, there’s been plenty of time to fix it.

                                                                1. 1

                                                                  This particular library is part of the language. A decade is an awfully short time for a language.

                                                                  1. 2

                                                                    Python has never promised that it will eternally support every thing that’s ever been in the language or the standard library. Aside from the extended support period of Python 2.7, it’s never even promised to maintain something as-is on a time scale of a decade.

                                                                    Python remains a popular and widely-adopted language despite this, which suggests that while you personally may find it a turn-off that the language deprecates things and removes them over time, there are other people who either do not, or are willing to put up with it for sake of having access to a supported version of the language.

                                                                    This is, incidentally, the opposite of what happens in, say, Java, where the extreme backward-compatibility policy and glacial pace of adding even backwards-compatible new features tends to split people exactly the same way.

                                                                    1. 2

                                                                      In a semver-esque world, anything deprecated in 2 is fair game for removal in 3, of course (and if this particular thing was, then I concede). In that way Python 3 is a different language to Python 2, which I believe is how most folks consider it anyway. It’s just a shame that, apparently, you can’t write a Python 3 program and expect it to work with Python 3 in 10 years with no programmatic way of specifying which Python 3 it works in. Nobody would be any worse off if they just waited for Python 4 to clean up.

                                                                      1. 2

                                                                        If I write something today, that raises deprecation warnings today, I don’t expect to be able to walk away from it for ten years and then have it continue to work on the latest version. I expect that those deprecation warnings really do mean “this is going to change in the future”, and that I either should clean it up now to get rid of the warnings, or I’ll have to clean it up later if I want it to keep working.

                                                                        That’s the kind of situation being discussed here – people who wrote code that already raised deprecation warnings at time of writing, and are surprised to find out that those warnings really did mean “this is going to change in the future”.

                                                                      2. 1

                                                                        Everyone who wants a Python that doesn’t change has been (and probably still is) using Python 2.7. I expect that we will see more pushback in the community against these sorts of changes as that becomes less tenable.

                                                                      3. 2
                                                          2. 4

                                                            It would be nice if Python had an equivalent of go fix. It’s just a pity things like that are difficult with dynamic languages.

                                                          1. 10

                                                            Objective reasons:

                                                            • nulls are checked
                                                            • resources are safe(r)
                                                            • strongly typed
                                                            • reasonably efficient in my hands, incredibly efficient in skilled hands
                                                            • cargo is awesome, along with the other dev tools (is this subjective? I objectively don’t think so)
                                                            • small runtime
                                                            • wasm

                                                            Subjective reasons:

                                                            • it’s not C++
                                                            • it’s not Haskell
                                                            • I like the community
                                                            • I like the lobster
                                                            1. 2

                                                              One more for the list: Result instead of exceptions makes it easier to see what can go wrong in a function. Compare that to Python or even Java, where any function can throw any exception and your only hope is to read the documentation.

                                                              1. 1

                                                                Java does have checked exceptions though, which should have the same benefit. They’re not mandatory though.

                                                                1. 2

                                                                  Java has checked exceptions but they’re annoying to use. I see a lot of try { return Integer.parseInt(s); } catch (NumberFormatException e) { throw new RuntimeException(e); }, especially in prototyping. In Rust, errors are much easier to work with: Result provides combinators like .unwrap_or() and .and_then().
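
                                                                  For contrast, here’s a small self-contained Rust sketch of those combinators on std’s str::parse (which returns a Result):

                                                                  ```rust
                                                                  fn main() {
                                                                      // unwrap_or: supply a fallback instead of writing a catch block.
                                                                      let n: i32 = "42".parse().unwrap_or(0);
                                                                      assert_eq!(n, 42);

                                                                      // A failed parse short-circuits to the fallback; no exception in sight.
                                                                      let bad: i32 = "not a number".parse().unwrap_or(-1);
                                                                      assert_eq!(bad, -1);

                                                                      // and_then: chain a second step that only runs on success.
                                                                      let doubled = "21".parse::<i32>().and_then(|v| Ok(v * 2));
                                                                      assert_eq!(doubled, Ok(42));
                                                                  }
                                                                  ```

                                                                  The error path stays visible in the types, so the compiler tells you what can fail instead of the documentation.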

                                                              1. 5
                                                                1. 3

                                                                  Lol he forgot case 6:

                                                                          case 5:
                                                                              //Demo over
                                                                              advancetext = true;
                                                                              hascontrol = false;
                                                                              state = 6;
                                                                          case 7:

                                                                  Seriously though I can’t imagine writing 4099 cases.

                                                                  1. 2

                                                                    It skips tons of numbers all over the place, e.g. it goes from 2514 to 3000. Seems like much of it was intentional? Either way, there are way fewer than 4099 cases… not that that makes it much better :).

                                                                  2. 3

                                                                    If that’s not “Doing Things The Hard Way”, I don’t know what is :-)

                                                                    1. 3

                                                                      Haha, reminds me of the thousand-case switch statement in Undertale’s source. Game code really can be scary – it seems like that’s especially true for 2D games for some reason…

                                                                  1. 1

                                                                    I’m working on course materials for a new IoT course at my university, as well as hacking on side projects when I have the time.

                                                                    I’m also trying to get back into a good work/sleep schedule for 2020, which is difficult, given that some of my most productive work hours seem to occur after midnight.

                                                                    1. 2

                                                                      Hey Philip! Let me know when you get those resources together. Charles said the project is going to be a webserver implemented in assembly, is that right? Sounds like a lot of fun.

                                                                      I’ve found that my most productive time is either way early in the morning (before 8) or after dinner (after 8). During the day I don’t seem to get as much done, I couldn’t tell you why.

                                                                    1. 1

                                                                      Two projects, both in Rust:

                                                                      For work, implementing a Rust wrapper around a NoSQL database API. The primary API is in C, so I’m learning a lot about unsafe and FFI. If anyone wants to try it out, I’m looking for feedback on how easy it is to use :)

                                                                      For fun, I’ve been working on docs.rs. I tracked down a bug that’s been bugging me ;) for a month and a half, and it only took me an hour of relearning SQL!

                                                                      1. 16

                                                                        Even though I love Rust, I am terrified every time I look at the dependency graph of a typical Rust library: I usually see dozens of transitive dependencies written by Internet randos whom I have zero reason to trust. Vetting all those dependencies takes far too much time, which is why I’m much less productive in Rust than Go.

                                                                        I try to also use the same level of scrutiny when bringing in dependencies in Rust. It can be a challenge and definitely uses up time. This is why the crev project exists, so that the effort can be distributed through a web of trust. I don’t think it has picked up a critical mass yet, but I’m hopeful.

                                                                        Some projects (including my own) have also been taking dependencies more seriously, and in particular, by providing feature flags to turn off things to decrease the dependency count.
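
                                                                        For example, a hypothetical Cargo.toml sketch of that feature-gating pattern (the dependency and feature names here are made up for illustration):

                                                                        ```toml
                                                                        [dependencies]
                                                                        # Heavy dependencies are optional; they only build when a feature pulls them in.
                                                                        serde = { version = "1", optional = true }
                                                                        regex = { version = "1", optional = true }

                                                                        [features]
                                                                        # Lean by default; consumers opt in to the extra functionality they need.
                                                                        default = []
                                                                        serialization = ["serde"]
                                                                        pattern-matching = ["regex"]
                                                                        ```

                                                                        Consumers who don’t enable those features never compile the optional dependencies at all.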

                                                                        1. 9

                                                                          Also, more direct tools like lichking, which can help you search for deps with licenses you don’t like.

                                                                          1. 2

                                                                            Indeed. I regularly use that on my projects with more than a handful of dependencies as a sanity check that there is zero copyleft in my tree.

                                                                          2. 2

                                                                            Some projects (including my own) have also been taking dependencies more seriously

                                                                            One of my biggest pet peeves in Rust is duplicate dependencies. Docs.rs is the worst offender I build regularly (we currently compile 4 different versions of syn!) but it’s a problem throughout the ecosystem. I’ve opened a few different bugs but it usually gets marked as low priority or ‘nice to have’.

                                                                            Part of the problem is that so few crates are 1.0 (looking at you, rand), but another part is that IMO people aren’t very aware of their dependency tree. I regularly see crates with 150+ dependencies and it boggles my mind.

                                                                            Hopefully tools like cargo tree, cargo audit, and cargo outdated will help but there still has to be some effort from the maintainers.

                                                                            1. 3

                                                                              Works fine now.

                                                                              1. 1

                                                                                Maybe it’s blocked by geolocation? Your profile says you’re from Russia :/

                                                                                1. 0

                                                                                  And so? Why should Microsoft block Russians?

                                                                                  1. 1

                                                                                    Not saying they should, just that they may have decided to.

                                                                                    1. 1

                                                                                      Still a weird first assumption. Also, the post prior to yours was from a Russian saying it works now.

                                                                                  1. 1

                                                                                    That’s weird. Works for me, both that website and the linked PDF.

                                                                                    1. 1

                                                                                      Works here too.

                                                                                    1. 3

                                                                                      A more interesting question is, does the test code shown in the article (comparison of an uninitialized value to itself) invoke undefined behavior?

                                                                                      1. 1

                                                                                        Yes, it does. In this case it’s dereferencing an uninitialized pointer. If we made the array global/static, it would be dereferencing a null pointer.

                                                                                        1. 1

                                                                                          This is not correct. Uninitialized static arrays have every element initialized to zero, it’s fine to access any element of them: http://port70.net/~nsz/c/c99/n1256.html#6.7.8p10

                                                                                          10 If an object that has automatic storage duration is not initialized explicitly, its value is indeterminate. If an object that has static storage duration is not initialized explicitly, then: …

                                                                                          • if it is an aggregate, every member is initialized (recursively) according to these rules;

                                                                                          and furthermore

                                                                                          21 If there are fewer initializers in a brace-enclosed list than there are elements or members of an aggregate, or fewer characters in a string literal used to initialize an array of known size than there are elements in the array, the remainder of the aggregate shall be initialized implicitly the same as objects that have static storage duration.

                                                                                          Aggregate means struct or array (6.2.5, http://port70.net/~nsz/c/c99/n1256.html#6.2.5p21):

                                                                                          21 Arithmetic types and pointer types are collectively called scalar types. Array and structure types are collectively called aggregate types.

                                                                                          1. 3

                                                                                            The example code shows a non-static (automatic) array with a declaration and no initializer. You’ll note that the sections you quoted refer to what happens when an array (or other aggregate) has fewer items specified in their initializer than the aggregate has members. In this case, there is no initializer. Indeed, this is undefined behavior.

                                                                                            All the best,


                                                                                            1. 1

                                                                                              You appear to only have read the 2nd quote in my comment. Here’s the first one again:

                                                                                              If an object that has static storage duration is not initialized explicitly, then: …

                                                                                              • if it is an aggregate, every member is initialized (recursively) according to these rules;

                                                                                              I agree this is UB for an array with automatic storage, though.

                                                                                              1. 1

                                                                                                @jyn514, the array in the example code is not declared static.

                                                                                            2. 2

                                                                                              I stand corrected. In the case that the array has static duration, all elements are indeed initialized to zero. Thank you.

                                                                                              Edit: and my whole sentence is wrong. a, the pointer to the head of the array, is actually initialized; it is only the array elements that aren’t initialized in the case of automatic duration. So the UB is caused by using the elements, not by dereferencing.

                                                                                            3. 1

                                                                                              It has nothing to do with pointers. I’m talking about the uninitialized memory. Would the following code invoke UB?

                                                                                              int x;
                                                                                              if (x == x)

                                                                                              I haven’t mastered the definition of UB, but it seems that if int can have a trap representation (highly unlikely in current implementations), it would be reasonable for this code to crash.

                                                                                              1. 1

                                                                                                Yes, this is UB. As far as the standard is concerned, the value of x is indeterminate, and using that value in any form is undefined behaviour.

                                                                                                from http://port70.net/~nsz/c/c99/n1256.html#J.2:

                                                                                                The value of an object with automatic storage duration is used while it is indeterminate

                                                                                          1. 7

                                                                                            People are all very quick to point out how great reducing friction is when they’re using a faster compiler, faster tests, when they’ve achieved fluency in vim or emacs or whatever, but everyone always jumps to tell you to just think harder if you say typing fast matters. I don’t get it.

                                                                                            I would like to hear an “it doesn’t matter” post from someone who went from slow typing to fast.

                                                                                            1. 3

                                                                                              But all the ones you mention above are challenged! The Rust community has tons of people that hold the opinion that compiler speed doesn’t hold them back!

                                                                                              And the point is also not “slow to fast”, the point is “if you are already mediocre at this, should you invest more time”?

                                                                                              1. 1

                                                                                                The Rust community has tons of people that hold the opinion that compiler speed doesn’t hold them back!

                                                                                                I’m surprised to hear that, it’s been my experience that the slow compile times are very frustrating for rapid iteration because running the tests takes so long. I’m talking specifically about cargo test and similar, things that cargo check doesn’t help with because it only detects compile time errors.

                                                                                                I mentioned this a while back when cargo -Z timings came out, a surprising amount of the compile time after the first initial build comes from link times: https://internals.rust-lang.org/t/exploring-crate-graph-build-times-with-cargo-build-ztimings/10975/7

                                                                                                1. 2

                                                                                                  Yep, link time is non-trivial. lld can help here if you’re on a supported platform. E.g. here’s a 50% reduction in debug build time on a game. Here’s an example by the Embark game studio of how they speed up compile times with lld on Windows.
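
                                                                                                  If anyone wants to try it, here’s a minimal sketch of opting into lld via .cargo/config.toml (assuming a Linux target with clang and lld installed; adjust the target triple to match your machine):

                                                                                                  ```toml
                                                                                                  # .cargo/config.toml at the workspace root
                                                                                                  [target.x86_64-unknown-linux-gnu]
                                                                                                  # Use clang as the linker driver and tell it to invoke lld.
                                                                                                  linker = "clang"
                                                                                                  rustflags = ["-C", "link-arg=-fuse-ld=lld"]
                                                                                                  ```

                                                                                                  Cargo picks this up automatically on the next build, and -Z timings is a handy way to confirm the link-time win.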

                                                                                                  1. 1

                                                                                                    Some people just don’t care about rapid iteration that much. If you do, yes, it’s a problem.

                                                                                                2. 1

                                                                                                  This is an excellent point that I wish I had detailed more clearly in the post. It certainly feels odd that people will optimize so many of these other areas of friction, but then say that improving typing is not valuable.

                                                                                                  1. 2

                                                                                                    I’d rewrite it with those things in mind. There are tangible benefits to improving typing and it is a valuable skill, but giving some guidance on what should trigger you to improve your typing would be great!