Threads for jyn514

  1. 5

    Why make this a preprocessor feature? To me it sounds like a job for the linker — “please resolve this symbol to the contents of this binary file.”

    Putting it in the preprocessor means the binary file is going to be converted to a huge ASCII list of comma-separated numbers, then parsed back down to a byte array, then written to the .o file. I’m sure that expansion can be cleverly optimized away with some work, but why is it even there?
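    For context, the workaround this replaces is exactly that expansion: what xxd -i emits. A rough Python sketch of such a generator (function and array names are mine, purely illustrative):

```python
# Sketch of the xxd -i style workaround that #embed is meant to replace:
# read a binary file and emit a C array initializer, which the compiler
# must then lex and parse back into bytes.
def xxd_i(path, name="embedded_thing"):
    with open(path, "rb") as f:
        data = f.read()
    body = ", ".join(str(b) for b in data)
    return (f"const unsigned char {name}[] = {{{body}}};\n"
            f"const unsigned int {name}_len = {len(data)};\n")
```

    A 10 MB file becomes tens of megabytes of source text this way, which is the cost being pointed at here.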

    1. 27

      The whole point of this feature is that you don’t have to parse it. If you read the article, the author says he had to convince compiler authors that “a sufficiently clever compiler” is never going to be faster than just copying a file. The intention is for implementors to turn this into some platform-specific linker directive.

      1. 9

        Well, any linker could support it without the help of the C standard, which pretty much only covers compilation. But having it dealt with by the preprocessor allows the compiler to be more intelligent about optimization and the like. I imagine that if it were a feature of the linker, the C standard would be hesitant to require that the C code could obtain the object’s size, for example, fearing that some linkers wouldn’t be able to easily provide more than a bare pointer. If it’s in the preprocessor, the compiler already knows its size (if the programmer wants that).

        1. 6

          To add to a good point: the compiler knows the size (very useful for some optimisations), and also the contents. The constexpr evaluations shown in the article aren’t possible otherwise.

          1. 8

            Even beyond optimization, being able to do sizeof(embedded_thing) is very useful!

          2. 5

            I didn’t really follow the standardisation process but IMHO this is, if not the, at least a correct answer. The linker approach is available in some compilers (if not all – I’ve “just included binary data” via the linker script countless times) but all it gives you is the equivalent of a void * (or, optimistically, a char *). The compiler doesn’t know anything about it. Just as bad, linters and other static analysis tools don’t know anything about it. If you want to do anything with that data short of sending it to a dumb display or whatever, the only appropriate comment above the code that does something with it is /* yolo */.

          3. 6

            Putting it in the preprocessor means the binary file is going to be converted to a huge ASCII list of comma-separated numbers,

            The article mentions the “as if” rule in the standard. The compiler has to behave “as if” it did this, but it doesn’t actually have to do this.

            1. 1

              Yes, that’s what I meant by “cleverly optimized away with some work”. But it’s architecturally ugly — it means the preprocessor is overstepping its bounds and passing stuff to the parser that isn’t source code.

              1. 6

                The preprocessor already produces non-standard C. Just running echo hello | cpp gives you the following output:

                # 1 "<stdin>"
                # 1 "<built-in>"
                # 1 "<command-line>"
                # 31 "<command-line>"
                # 1 "/usr/include/stdc-predef.h" 1 3 4
                # 32 "<command-line>" 2
                # 1 "<stdin>"

                Obviously, it does this to communicate line and file information to the user for the most part. But I don’t see why it would be a much more severe layering violation to make #embed "foo.svg" preprocess to something like # embed "/path/to/foo.svg", which the compiler can then interpret, if the preprocessor already produces non-standard C with the expectation that the compiler supports the necessary extensions.

                1. 2

                  One of the compilers uses __builtin_string_embed("base64=="), which does parse as valid source code.

            1. 2

              requires a reasonably modern x86 CPU. It depends on certain performance counter features that are not available in older CPUs.

              Oh? Is this no longer dependent on having an Intel CPU? Might finally be time for me to play around with this.

              Does anyone know if you get not-entirely-unintelligible results with Rust binaries? A cursory search didn’t turn up any mentions of support or explicit statements of non-support.

              1. 6

                Yes, there’s support for Ryzen (3??)

                Also: rr -d rust-gdb

                1. 3

                  Omg I didn’t know you could use rust-gdb, thank you!!

                2. 3

                  rr (somewhat experimentally, iirc) supports M1:

                  1. 1

                    Oh, wow, that is great news. Do you know if it works in (macos-hosted) vms?

                    1. 1

                      It doesn’t work in VMs.

                1. 23

                  For the uninitiated: Style insensitivity is a unique feature of the Nim programming language that allows a code base to follow a consistent naming style, even if its dependencies use a different style. It works by comparing identifiers in a case-insensitive way, except for the first character, and ignoring underscores.

                  Another advantage of style insensitivity is that identifiers such as itemId, itemID or item_ID can never refer to different things, which prevents certain kinds of bad code. An exception is made for the first letter to allow the common convention of having a value foo of type Foo.
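                  If it helps to see it concretely, the comparison rule can be sketched in a few lines of Python (ident_key is my own name, not Nim’s actual implementation):

```python
def ident_key(ident):
    # first character is compared as-is; the rest case-insensitively,
    # with underscores ignored
    return ident[0] + ident[1:].replace("_", "").lower()

# itemId, itemID and item_ID can never refer to different things...
assert ident_key("itemId") == ident_key("itemID") == ident_key("item_ID")
# ...while foo and Foo stay distinct (a value foo of type Foo still works)
assert ident_key("foo") != ident_key("Foo")
```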

                  There’s a common misconception that this feature causes Nim programmers to mix different styles in a single codebase (which, as mentioned, is precisely the opposite of what it does), and it gets brought up every time Nim is mentioned on Lobsters/HackerNews/etc, diverting the discussion from more valuable topics.

                  1. 3

                    There’s a common misconception that this feature causes Nim programmers to mix different styles in a single codebase (which, as mentioned, is precisely the opposite of what it does)

                    But… isn’t that exactly what the feature does? If my coding habits make me write itemId and a coworker’s habits make them write item_id, style insensitivity makes it likely that I’d accidentally use a different style than my coworker for the same variable in the same codebase, right? Most languages make this impossible by making item_id a different name than itemId.

                    How is this a misconception?

                    To be clear, I’m not saying it’s a huge deal or that it warrants all the attention it’s getting (that’s a different discussion), but since you brought it up…

                    1. 3

                      Thanks for providing some context. Is this a thing that gets applied by default any time you use any library, or a feature you can specifically invoke at the point where the library is imported?

                      The former seems … real bad. The latter seems … kinda neat? but a bit silly.

                      1. 6

                        Currently it’s always on, and there’s an opt-in compile flag --styleCheck:error that makes it impossible to use an identifier inconsistently within the same file. The linked issue discusses if and how this behavior should be changed in Nim 2.

                        Personally, I wouldn’t mind if it was removed, as long as:

                        • --styleCheck:error was on by default
                        • there was a mechanism to restyle identifiers when importing a library.
                        1. 2

                          I agree. People outside the Nim community can add real value to this discussion, since otherwise it’s just speculation what they really think, based on a few loud complainers.

                          1. 5

                            I’m someone who looked at Nim, really liked it, then saw the “style insensitivity” and thought “this isn’t for me”. (Not coincidentally, I’ve been involved in a major, CEOs-getting-involved, fiasco that was ultimately due to SQL case-insensitivity.)

                            Nim occupies a nice space - compiled but relatively “high level” - with only really Go as a competitor (Zig/Rust/C++ all seem a little too low level). I personally recoil at the idea of “style insensitivity”, but hopefully in a friendly manner.

                            1. 5

                              I’ve been involved in a major, CEOs-gettting-involved, fiasco that was ultimately due to SQL case-insensitivity.

                              You can’t just say this and leave us hanging 😆 tell us the story! How did that cause a fiasco?

                              1. 5

                                Our software wouldn’t start for one customer (a large bank). The problem was unreproducible and had been going on for weeks. The customer was understandably very unhappy.

                                The ultimate cause was a “does the database version match the code” check. The database had a typical version table that looked like:

                                CREATE TABLE DB_STATE (VERSION INT, ...);

                                Which was checked at startup using something like select version from db_state. This was failing for the customer because in Turkish, the upper case of “version” is “VERSİON” (look closely). Case-insensitivity is language-specific and the customer had installed the Turkish version of SQL Server.

                                Some java to demonstrate:

                                public class Turkish {
                                  public static void main(String[] args) {
                                    // In the Turkish locale, 'i' uppercases to dotted 'İ' (U+0130)
                                    System.out.println("version".toUpperCase(new java.util.Locale("tr")));
                                  }
                                }

                                If you look at the Java documentation for toUpperCase, they specifically mention Turkish - others have been bitten by this same issue, I’m sure.

                                Which makes me wonder - how does Nim behave if compiled on a machine in Turkey, or Germany?

                                1. 6

                                  This is making me wonder if anyone has ever used this as a stack-smashing attack. Find some C code that uppercases the input in place and send a bunch of “ß”s. Did the programmer ensure the buffer is big enough?
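                                  It’s a real hazard: under the default Unicode case mappings, “ß” uppercases to the two-character “SS”, so an in-place uppercase that assumes a one-to-one mapping can outgrow its buffer. Python’s str.upper follows those mappings:

```python
# 'ß' (U+00DF) has no single-character uppercase form in the default
# Unicode case mappings; it expands to two characters.
assert "ß".upper() == "SS"
# Uppercasing can therefore grow a string: one extra character per 'ß'.
assert len("straße".upper()) == len("straße") + 1
```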

                                  1. 3

                                    I’m pretty sure Nim’s style insensitivity is not locale-specific. That would be very dumb.

                                2. 3

                                  There’s also Crystal, but it is failing to reach critical mass in my opinion.

                                  I think Crystal did a great job providing things people usually want upfront. I want a quick and direct way to make an HTTP request. I want to extract a value from a JSON string with no fuss. I want to expose functionality via CLI or a web interface with minimal effort. Crystal got these right.

                                  I agree that above-mentioned languages are too low level.

                                  1. 1

                                    Not sure about the others, but I think exposing functionality via CLI is pretty easy in Nim. cligen is not in the Nim stdlib, though.

                                    1. 1

                                      I was not comparing to Nim directly, just giving examples of the kinds of things I believe are the strongest drivers of a language’s success.

                                      But one example of something I found lacking in Nim was concurrency primitives. Crystal makes it relatively simple and direct with a fairly simple and familiar fiber API.

                                      A quick way to spin up an HTTP service was another one. It even had support for websockets.

                                  2. 2

                                    Seems plenty friendly to me. Programmers can get awfully passionate about style guides/identifier conventions.

                                    I think it tracks individual, personal history a lot - what confusions someone had, what editor/tool support they had for what sort of “what meta-kind of token is this?” questions, and so on. In extreme cases it can lead to, e.g., Hungarian-notation-like moves. It can even be influenced by font choices.

                                    Vaguely related to case insensitivity: early on, Nim only used '\l' to denote a newline character. Not sure why it wasn’t '\n' from the start, but because lower-case “l” (ell) and the numeral “1” often look so similar, '\L' was the dominant suggestion, and all special character literals were made case-insensitive. (Well, one could argue it was just tracking the general insensitivity that made them insensitive…)

                                    1. 2

                                      Personally I’d say this compares unfavorably to go, where style arguments are solved by the language shipping a single blessed formatting style.

                                      Confusions/disagreements over formatting style are - imo - a waste of the team’s engineering time, so I see the Go approach as inherently better.

                                      1. 2

                                        Go doesn’t enforce a style for identifiers. Try it online!

                                        1. 3

                                          Thanks, I hate it.

                                          Fair point to nim though!

                              2. 4

                                It is always on - even for keywords like f_Or in a loop. I was trying to perhaps help guide the group towards a usage like you articulate.

                                EDIT: The main use case cited is always “use a library with an alien convention according to your own convention”. So, being more precise and explicit about this as an import-time construct would seem to be less feather ruffling (IMO).

                              3. 3

                                Just for the record - the Z shell (Zsh) had style insensitivity for its setopt builtin waaay back in the very early 90s. They did not make the first letter sensitive, though. :-)

                                As this seems to be a very divisive issue and part of what is divisive is knowing how those outside the community (who do not love/have not made peace with the feature) feel, it might be helpful if Lobster people could weigh in.

                                1. 2

                                  As this seems to be a very divisive issue and part of what is divisive is knowing how those outside the community (who do not love/have not made peace with the feature) feel, it might be helpful if Lobster people could weigh in.

                                          I looked into Nim and was at least partially dissuaded by style insensitivity. I don’t think it’s fatal per se, but it did hit me very early in my evaluation. I would liken it to the dot calls in Elixir: something that feels wrong and makes you question other decisions in the language. That said, Elixir is a fabulous language and I powered through. I imagine others feel similarly.

                                  1. 2

                                    What specifically do you not like about style insensitivity?

                                    1. 1

                                      Here’s the thing: I haven’t used style insensitivity so I can’t really say I dislike it. However, it struck me as unnecessarily inconsistent. I don’t care about snake case or camel case. I just want code to be consistent. Of course, code I write can be consistent with style insensitivity, but code I read probably won’t be.

                                      Additionally, I imagined that working in a team could have issues: repos use the original author’s preferred styling. Of course, having a clear style guide helps, but in small teams sometimes people are intractable and resistant to change. In a way, it triggers a feeling of exasperation: a memory of all the stupid little arguments you have with other developers.

                                      So here I am kicking the tires on a new exciting language and I am already thinking about arguing with people. Kind of takes the wind out of your sails. It may be a great feature, but I imagine it’s a barrier to adoption for some neurotic types like myself. (Maybe that’s a blessing?)

                                      1. 1

                                        You have it the other way around. With style insensitivity, code is much more likely to follow a consistent style — because it can’t be forced into inconsistency by using libraries from different authors.

                                        1. 1

                                          I can see how code I write is consistent, but code I read is going to be more inconsistent. If it wasn’t then why would we need style insensitivity in the first place?

                                          1. 1

                                            I can see how code I write is consistent, but code I read is going to be more inconsistent.

                                            Can you show me a serious Nim project that uses an inconsistent style?

                                            If it wasn’t then why would we need style insensitivity in the first place?

                                            Because libraries you’re using may be written in different styles.

                                            1. 1

                                              I’m saying it’s inconsistent across projects. Not within projects. Sometimes you have to read other people’s code. Style insensitivity allows/encourages people to pick the nondefault style.

                                              Ultimately, I’m not in the nim ecosystem. I posted my comment about why style insensitivity made me less interested in nim. I can tell you that this is the exact sort of argument I was looking to avoid so you have proven my initial concerns correct.

                                              1. 1

                                                I don’t see what the problem is with reading code written in a different style, as long as it’s consistent. And in practice, most Nim projects follow NEP1.

                                2. 2

                                  Doesn’t JRuby have something similar around Java native methods?

                                  1. 2

                                    I think it’s an interesting comparison, but it’s important to keep in mind that a language based around message passing is fundamentally different from what’s going on here where the compiler itself is collapsing a bunch of different identifiers during compile-time. When you call a method in Ruby, you’re not really supposed to care how it’s resolved, but when you make a call in a language like Nim, you expect it to be resolved at compile-time.

                                    1. 3

                                      I’d disagree there. The mechanism in JRuby is that the method is made available under multiple names to the application after it’s loaded. That’s not extremely different from what Nim does, except if we go down to the level to say we can’t compare languages with different runtime and module loading models.


                                      1. 2

                                        I guess what I meant was that even if the implementation works the same way, Rubyists fundamentally have different expectations around what happens when you call; they’ve already given up on greppability for other reasons.

                                1. 10

                                  Even in userland I would like to at least know “can this panic” or “does this allocate” and things like that without having to recursively read docs/code.

                                  1. 3

                                    It seems like something that could be automated by analyzing a call graph.

                                    1. 2

                                      One difficulty with this is that Rust relies on optimization to remove panics, e.g.

                                      let a = [1,2,3];
                                      for i in 0..3 { a[i]; }

                                        Can’t panic and won’t have any panicking code in the release binary, but does have a panicking index call in its call graph.

                                      1. 1

                                        I can’t think of a reason why this is bad, but it is remarkable to see a compiler that actually corrects code.

                                    2. 1

                                      I wonder if this could be enforced or checked at compile time.

                                      1. 4

                                          There are some truly awful hacks to do it as a library. I don’t think there’s any inherent reason it couldn’t be in the compiler; it’s just that it’s a language addition and no one has written an RFC for it.

                                        1. 3

                                          It would be really nice.

                                          For example, there’s std::thread::spawn() → JoinHandle<T> which can panic, so instead you use .spawn() → Result<JoinHandle<T>> on a thread::Builder, like the docs suggest.

                                          The docs for that one say it can panic "if a thread name was set and it contained null bytes", but is that really the only condition? No, it can panic for other recoverable errors as well; the Result doesn’t capture all of them.

                                          So it gets hard quickly.

                                          1. 1

                                              Maybe there’s a flipside that’s easier: crates could declare where they panic (or guarantee that they never do). Obviously this will be easier for special crates that are built (or have been built) with those use cases in mind.

                                      1. 5

                                        The complexity of Rust’s ownership model is not added to an otherwise simple program, it is merely the compiler being extremely pedantic about your code obeying rules it had to obey anyway.

                                        Is this true?

                                      What the author calls features are, I think, actually better understood as models. Functions, structs, the borrow checker - all are inventions, reified via keywords and constraints and semantics, in the hope that they make programming easier or better or whatever. No feature is just objectively good, right? Goodness is just how well the feature has stood the test of time. And good languages — systems composed of many interacting models — aren’t just a pile of good features. Each feature interacts with the others; adding a capability can easily deliver net negative value if it doesn’t compose well with the rest of the system. Good languages are designed holistically!

                                        1. 2

                                        I take a different perspective: in many modern architectures, sharing is reduced as much as possible, and many modern programs are effectively constructed out of singly-owned values, particularly HTTP services with a shared-nothing architecture. This is only true at the outer application level; of course frameworks have shared internal state (thread pools, etc.). But garbage-collected languages, for example, make it hard to enforce unique ownership of a value (because adding a reference is hard to trace). This is the thinking behind efforts like JCoBox, which tried to make sure that values passed between actors don’t reference the memory space of the sending actor, unless immutable.

                                        Rust’s strong point is systems where full and accurate passing of ownership is a useful property. I don’t agree with “Rust’s added complexity”. Rust comes from a wholly different direction, one that needs people to rewire their brains. But it’s also a complex language that needs a deep understanding of its properties to exploit its features.

                                          1. 3

                                          Rust’s strong point is systems where full and accurate passing of ownership is a useful property.

                                            Sure! That’s one of the models baked in to Rust’s grammar, or semantics, or something like that. Very often it’s a useful and productive model! But I think not always. Maybe not even most of the time.

                                            1. 2

                                            I disagree. Getting the answer to “Which component is responsible for this piece of data?” wrong is an extremely common source of bugs.

                                              Note that Rust is a flavour of that model - currently the only one that has widespread adoption and can somewhat be seen as the most aggressive implementation of it. But other languages are growing ownership models as well.

                                          2. 2

                                          To a large extent it is true. The problem of ownership (when to call free()) is fundamental. GC is one way of tackling it (and typically only for the heap, not other resources). If you don’t have a GC, or infinite memory, you have to solve it some other way. Tracking ownership manually, without the compiler’s help, gets difficult once the program outgrows what you can fit in your head at once.

                                          There are multiple ways to model it. I used to model the problem in my head as “does it leak or crash?”, and in retrospect that was a terrible way to frame it. I feel this bad perspective was complicating it like orbits in a geocentric model. Formalizing the problem as moves and borrows adds clarity to what’s happening. It changes informal practices into types.

                                            However, Rust’s borrowing model can’t express all types of ownership. Most notably, it can’t reason about circular (self-referential) structures. When you hit a case that doesn’t fit Rust’s model, then you either need unsafe or some runtime check, and that feels disappointing compared to the level of safety and performance Rust usually gives.

                                            1. 1

                                              However, Rust’s borrowing model can’t express all types of ownership. Most notably, it can’t reason about circular (self-referential) structures. When you hit a case that doesn’t fit Rust’s model, then you either need unsafe or some runtime check, and that feels disappointing compared to the level of safety and performance Rust usually gives.

                                            The trick here is that Rust - on a type system level - models logical ownership. Box<T> is a box owning T on the heap, with a raw pointer behind it (which may be fat or not). But that doesn’t matter as long as it’s a private detail. So the criticism that Rust can’t implement self-referential structs without unsafe is minor, as long as users can rely on the internals not leaking. For that reason, Rust is strong on privacy (btw, the same approach that Ada takes).

                                              Rust is built for safe composition of large programs, while relying on the correct implementation of the components.

                                            2. 2

                                              I think it’s half-true. Some resources have an ownership model regardless of the language, like what files are open/writeable or a database connection or something. Rust goes further and applies that ownership model to all shared state. Withoutboats has an excellent post exploring what if Rust still had the borrow checker, but without treating all heap memory as a resource:

                                            1. 2

                                              This may have some impact on Windows (because they have core architectural mistakes that make processes take up to 100 milliseconds to spin up)

                                              Do you happen to have a link with more details on this? I’ve heard that Windows is slow for processes/IO several times and I’d be curious to know why (and why they can’t fix it in a backwards-compatible way).

                                              1. 5

                                                There are a number of highly upvoted answers here but it’s hard for me to distill anything. It may be that these aren’t good answers.


                                                1. 6

                                                  I think these are good answers, but I’ve had a lot of exposure to Windows internals.

                                                  What they’re saying is that the barebones NT architecture isn’t slow to create processes. But the vast majority of people are running Win32+NT programs, and Win32 behavior brings in a fair amount of overhead. Win32 has a lot of expected “startup” behavior which Unix is not obliged to do. In practice this isn’t a big deal because a native Win32 app will use multithreading instead of multiprocessing.

                                                  1. 3

                                                    I don’t think that is strictly correct. WSLv1 processes are NT processes without Win32 but still spawn relatively slowly.

                                                    1. 3

                                                      Hm, I remember WSLv1 performance issues mostly being tied to the filesystem implementation. This thread says WSLv1 process start times were fast, but they probably mean relative to Win32.

                                                    I suspect an optimized pico process microbenchmark would perform competitively, but I’m just speculating. The vast majority of Win32 process-start slowdown comes from reloading all those DLLs, reading the registry, etc. Those are the “core architectural mistakes” I believe the OP is talking about.

                                                2. 4

                                                  I don’t remember for sure where I saw this, but it may have been in the WSL1 devlogs? Either way I may have been wrong about the order of magnitude but I remember that Windows takes surprisingly long to spin up new processes compared to Linux.

                                                1. 1

                                                  I hang out in a couple programming language development chat channels

                                                  Can you share these channels if they are public and accept new people?

                                                  1. 2

                                                    Probably they mean #lang-dev on the Rust discord, and the Programming Languages discord, which I unfortunately no longer have a link to.

                                                    1. 2

                                                      Correct on both counts. The latter is ; it’s actually an offshoot of Reddit’s r/programminglanguages, which I’d forgotten. The Rust discord is more fun to actually talk to people in, but the Programming Languages discord occasionally has outright deluges of interesting and/or useful information.

                                                  1. 3

                                                    I’m continuing to turn a shell (dash) into a library so that I can directly integrate it into an experimental terminal.

                                                    As part of this I’m moving all the global state into a struct and passing it around, and as part of that I’m currently boggling at dash’s use of errno - it’s redefining it for glibc and I don’t understand why. If any C experts are interested in taking a look see the following lines of code: main.h:45-50, main.c:67-69, main.c:97-99. My current best guess is that it’s somehow related to the fact that it longjumps out of its interrupt handler.

                                                    1. 2

                                                      Sounds interesting, what’s the benefit of linking it with the terminal? I can think of a few things, but I’m curious what you want to do with it.

                                                      On a cursory glance, it doesn’t look to me like the dash_errno thing does much. Like it’s not really used anywhere?

                                                      I would be inclined to take it out and see if the shell still works :) Not sure if dash has tests, but I have a whole bunch of shell tests if you like …

                                                      1. 2

                                                        I’m still working out how to best explain this, but basically I’m trying to create a more cohesive/interactive interface between the user and the shell. Things like the ability to filter output based on command/stdout/stderr after the fact. Ability to inspect the output of subshells after the fact. Live monitoring of background jobs/generally better support for running things in the background while still using the shell. An easy interface for taking commands you’ve run and turning them into a shell script. Support for running a shell script “interactively” (similar to in a debugger with step-in/step-over/step-to + live editing), etc.

                                                        This is largely inspired by the good parts of blue ocean (jenkins) and xl-release.

                                                        The cost of this is that I’m going to abandon using a tty - and therefore compatibility with a lot of programs that expect a tty. Possibly I’ll eventually implement a way to run an individual program in a tty, but it’s always going to be a second class citizen/compatibility mode.

                                                        errno within the shell is defined to be *dash_errno - and is used two or three dozen times. After looking at glibc source code as well I’m pretty sure it is just a historical artifact, and yes it appears to still work when removed.

                                                        A bunch of tests will definitely be useful in the near future. Looking at your work - is this the best/current documentation on where to find and how to use your tests?

                                                        1. 2

                                                          OK cool I’ve been thinking of things along the same lines. Although I think there could be a way not to abandon the TTY. What about a GUI that HAS a terminal for commands, so then:

                                                          • the shell itself talks to the GUI ?
                                                          • the child processes talk to the terminal

                                                          For example, someone suggested that the shell’s input line is always at the top and separate from the terminal.

                                                          So that input line could get sent to the shell, and it will need a hook for autocomplete. However child processes of the shell like ps need to run in a terminal.

                                                          With the current architecture, the shell and the child process share a terminal, but if the shell were factored as a library, then that wouldn’t need to be the case.

                                                          Some links here: Shell as an engine for TUI or GUI

                                                          (it even links to a thread about debuggers)

                                                          I haven’t seen XL release but it seems related. If you have any screenshots or videos of what you’re thinking of, I’d be interested.

                                                          Oil is eventually supposed to be embeddable C++ code with a C interface to support use cases like this. In fact I had a long discussion with the maintainer of the fish shell about embedding Oil in fish. For one, Oil uses almost no global variables, unlike bash.

                                                          One problem I see with dash is that it doesn’t have any autocompletion support? I think that could limit things UI-wise.

                                                          Fish has some of the cleaner code I’ve seen for a shell, although of course it’s not a POSIX shell.

                                                          As far as tests, you can get a feel like this:

                                                          oil$ test/ smoke  ~/src/languages/dash-0.5.8/src/dash
                                                          case    line    dash    bash    mksh    osh     osh_ALT dash
                                                            0       4     pass    pass    pass    pass    pass    pass    builtin
                                                            1       8     pass    pass    pass    pass    pass    pass    command sub

                                                          Notice the extra dash column which is running against a second version of dash.

                                                          It’s not fully automated for running against, say, dash, but it could be. But just running those smoke tests and a few other suites could be useful, and I think it is quite easy to set up (as long as you have a Debian- / Ubuntu-ish machine).



                                                          Sample results here:

                                                          And feel free to ask me any questions… !

                                                          1. 2

                                                            Wow, yes we’re thinking along very similar lines I think.

                                                            Your description of how to use tty’s sounds like my plan to eventually implement them, but I’m not keen on it being the default way to run commands. I’d like each command running in a tty to be in its own tty (so I can filter output by which command it came from), and I’d like to have running multiple commands in parallel be a very first class experience. The main problem with tty’s in my opinion is that as soon as you are doing anything that actually needs a tty - you basically need exclusive control over the output buffer or you get weird effects. I.e. interleaving output lines from different tty’s will often not make much or any sense (despite this being approximately what happens by default in terminals).

                                                            I’m definitely not attached to dash as the shell. I would like something posix-like to start with because I think changing the terminal UI has already more or less exhausted my strangeness budget if I hope to get anyone to actually use it. The other thing is that (like you suggest in the github issue) I want to convert the shell I’m using to be async because of signals (and because it makes the api for using it cleaner). My current somewhat crazy plan for that is to:

                                                            1. Rewrite everything global to be in a struct.
                                                            2. Transpile the C source code to rust using c2rust.
                                                            3. Apply the async keyword everywhere that blocks, which causes the compiler to turn the code into a giant state machine (one of the undersold reasons for using rust - which has nothing whatsoever to do with safety).
                                                            4. Replace blocking calls with async calls.
                                                            5. Ideally also change all fork calls to just be making a copy of the global shell state and keep subshells running in the same process - but I’m not sure how simple that will actually be.
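                                                            To illustrate step 3 for anyone who hasn’t seen it: the compiler rewrites each async fn into an enum over its suspension points plus a resume function. A hand-rolled sketch of that shape (the names and the simplified Poll type here are stand-ins for illustration, not what rustc actually emits):

```rust
// A blocking version might look like:
//   fn greet() -> String { let name = read_name(); format!("hi {}", name) }
// After the async transformation, the body becomes a state machine:
// each .await is a state where execution can pause and later resume.

enum Greet {
    // Parked at the await point, waiting for the name to arrive.
    AwaitingName,
    // Ran to completion; polling again is a bug.
    Done,
}

enum Poll<T> {
    Ready(T),
    Pending,
}

impl Greet {
    // Each call to `step` resumes the machine; `input` stands in for
    // whatever the I/O source hands back once it is ready.
    fn step(&mut self, input: Option<&str>) -> Poll<String> {
        match self {
            Greet::AwaitingName => match input {
                // The "await": nothing available yet, stay parked.
                None => Poll::Pending,
                Some(name) => {
                    let out = format!("hi {}", name);
                    *self = Greet::Done;
                    Poll::Ready(out)
                }
            },
            Greet::Done => panic!("polled after completion"),
        }
    }
}

fn main() {
    let mut task = Greet::AwaitingName;
    // First poll: the name isn't available, so the task parks.
    assert!(matches!(task.step(None), Poll::Pending));
    // Second poll: input arrived, the machine runs to completion.
    match task.step(Some("world")) {
        Poll::Ready(s) => assert_eq!(s, "hi world"),
        Poll::Pending => unreachable!(),
    }
}
```

The payoff is that every blocking call site becomes a place where control can return to the caller, which is exactly what you want for integrating a shell interpreter into a GUI event loop.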

                                                            Autocompletion support is an interesting point, my naive assumption was that I could handle that separately and have the interface to dash basically be execstring(shell, output_related_callbacks, "thing the user typed") - but I can see how that might be problematic. You know a lot more about shells than I do, so I’m curious if you think I need to start planning for how to handle this now, or alternatively that my original plan of “worry about this later” is going to be ok?

                                                            My frontend work isn’t far enough along that screenshots are particularly instructive, nevertheless here’s a super early stage image. I don’t have access to xl-release anymore and can’t find screenshots/video showing what I want - but the short version of it is that it’s just about the only scripting system I’ve seen that had editing future steps while a script (pipeline) was running, restarting a script from an arbitrary point, and so on as first class features. The interface was pretty clunky, but it had a lot of good ideas.

                                                            1. 2

                                                              Yes I imagine that the GUI would have multiple TTYs for running multiple commands in parallel. This requires some cooperation from the shell as far as I can tell – it can’t really be done with bash or dash as is.

                                                              The async part is interesting and tricky. wait() is a blocking call, and I wonder if you have to resort to the self-pipe trick to make it async. I feel like it changes the structure of the interpreter in nontrivial ways.

                                                              BTW in Oil the interpreter <-> OS interface is mostly in a single file:


                                                              This file has:

                                                              • class Executor
                                                                • RunSimpleCommand
                                                                • RunBackgroundJob
                                                                • RunPipeline
                                                                • RunSubshell
                                                                • ….

                                                              And actually we have a NullExecutor which I want to use for “pure” config file evaluation, and maybe we should have a WindowsExecutor, since Windows doesn’t have fork() or exec(), etc.

                                                              But yeah making this async is an interesting problem. I will be interested in the results of that.

                                                              Somehow I’m a little skeptical of the c2rust approach… it seems more direct just to prototype a mini shell in Rust, i.e. first the synchronous version, and then see how it integrates with the GUI.

                                                              One place to start might be the xv6 shell. It’s only a few hundred lines but it has pipelines and redirects!


                                                              Trying to make this async would be an interesting exercise (and it’s not clear to me how).

                                                              To me the issue with autocompletion is that it should appear in the GUI and not in the terminal ? At least the way I was thinking of it. Ditto for command history.

                                                              I guess what I was warning about is that if you do a ton of work with dash, and then it doesn’t have ANY autocomplete support, then you might have to switch to something else. But xv6 doesn’t either – although it’s much less work, and I think it exposes the core issues of shell more cleanly. In my mind what you’re doing is a fairly ambitious experiment, so it makes sense to try to surface the issues as quickly as possible!

                                                              But it looks like you already have something working so maybe my concern about dash is overblown.

                                                              What kind of app is it? Is it like a Linux desktop app?

                                                              FWIW I created a #shell-gui Zulip channel if you want to chat about it:


                                                              I seeded it with this older wiki page, which nonetheless might inspire some ideas:


                                                              1. 2

                                                                Yeah actually I went through the wiki and a lot of these have cool screenshots/animations:




                                                                Though I don’t think anyone has done anything quite like what we’re discussing, probably because you have to hack up a shell to do it !! And that’s hard. But I hope Oil can become a library for such experiments. It seems like an “obvious” next step for shell.

                                                                I use shell because I want a fast, responsive, and automatable interface… but I want to do things in parallel, with UI that’s a little nicer than tmux, and I want some GUI support.

                                                        2. 1

                                                          I was going to suggest looking at the commit history, but it looks like that’s present in the very first commit … Maybe ask on the mailing list?

                                                          The weird thing is that I wouldn’t expect this to actually do anything - isn’t main.h included after the standard library headers? That means that only dash would see the different copy of errno, but it never seems to use it!

                                                          1. 1

                                                            Dash does use its redefined errno sometimes (note the #define errno (*dash_errno)), but as far as I can tell after looking at the glibc source it is functionally identical to glibc’s errno. My best guess for now is that it’s a historical artifact.

                                                            Asking on the mailing list would probably work - but I don’t really want to bother maintainers with “so I’m curious about this part of the code” questions when it’s unlikely to be particularly important.

                                                        1. 2

                                                          No language extensions, nice!

                                                          1. 2

                                                            Haha! Like Stroustrup says, stability is a feature.

                                                            I’m curious that you don’t count intra-doc links as a language extension though; they’re a whole mini-language in the docs, sort of like aggregate initializers in C.

                                                            1. 2


                                                          1. 6

                                                            @andyc Congratulations! One of the main reasons I stopped using osh was because of the performance (especially for tab-completion), so it’s great to hear that’s mostly been solved. I might try osh again soon :)

                                                            I do hope that OSH gets better support for job control, when I stopped contributing it seemed like I was the only one using it, so no one noticed when it broke. There’s also quite a few issues that have been open for a while. I know it’s hairy but I do find it really useful.

                                                            RE “writing it in Rust would be too much boilerplate” - I’m actually currently in the process of rewriting my parser in Rust, so I’ll let you know how it goes. I plan to use pratt parsing, not recursive descent, which should cut down on the amount of code: so far the most boilerplate by far has been the pretty printing (about 200 lines of code that could probably have been autogenerated). I think this would have been similarly long in any language, although I challenge others to prove me wrong. This of course will change as I make more progress with the parser. Right now I’ve only implemented binary expressions and postfix operators, but the hardest bit is parsing typedefs and function pointer declarations.
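                                                            For anyone who hasn’t seen pratt parsing, the core really is tiny - the whole precedence hierarchy collapses into one binding-power table and one loop. A minimal sketch (evaluating inline for brevity where a real parser would build an AST, with single-digit operands):

```rust
// A minimal Pratt expression evaluator over single-digit operands
// with +, -, *, / and parentheses. A real parser would return an AST
// node instead of an i64, but the control flow is identical.

fn infix_binding_power(op: char) -> Option<(u8, u8)> {
    // (left, right) binding powers; higher binds tighter.
    // left < right gives left associativity.
    match op {
        '+' | '-' => Some((1, 2)),
        '*' | '/' => Some((3, 4)),
        _ => None,
    }
}

fn expr(tokens: &mut std::iter::Peekable<std::str::Chars<'_>>, min_bp: u8) -> i64 {
    // Parse a primary: a digit or a parenthesized subexpression.
    let mut lhs = match tokens.next() {
        Some('(') => {
            let v = expr(tokens, 0);
            assert_eq!(tokens.next(), Some(')'));
            v
        }
        Some(c) => c.to_digit(10).expect("expected digit") as i64,
        None => panic!("unexpected end of input"),
    };

    // The one loop that replaces a tower of per-precedence grammar rules.
    loop {
        let op = match tokens.peek() {
            Some(&c) => c,
            None => break,
        };
        let (l_bp, r_bp) = match infix_binding_power(op) {
            Some(bp) => bp,
            None => break,
        };
        if l_bp < min_bp {
            break; // operator binds looser than our caller; hand it back
        }
        tokens.next(); // consume the operator
        let rhs = expr(tokens, r_bp);
        lhs = match op {
            '+' => lhs + rhs,
            '-' => lhs - rhs,
            '*' => lhs * rhs,
            '/' => lhs / rhs,
            _ => unreachable!(),
        };
    }
    lhs
}

fn eval(s: &str) -> i64 {
    expr(&mut s.chars().peekable(), 0)
}

fn main() {
    assert_eq!(eval("1+2*3"), 7);   // precedence
    assert_eq!(eval("(1+2)*3"), 9); // grouping
    assert_eq!(eval("9-3-2"), 4);   // left associativity
}
```

Adding an operator is one table entry rather than a new grammar rule, which is where the code savings over classic recursive descent come from.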

                                                            1. 2

                                                              I plan to use pratt parsing

                                                              I agree that pratt parsing is nice for expressions, but is it any better than plain recursive descent for statements?

                                                              but the hardest bit is parsing typedefs and function pointer declarations

                                                              I struggled with that, too. The Right Left Rule might be useful for you:

                                                              1. 1

                                                                Yes I’m optimistic Oil will be fast. So far we’ve translated 16K lines of code but that doesn’t include tab completion at the moment. For a variety of reasons like using yield that might be the last thing translated, but we can talk about it.

                                                                I remember you had a problem with job control but I can’t find the bug right now. I know there are some other unresolved bugs like:


                                                                Basic job control works for me – I just tested the latest 0.8.pre2 release with vim -> Ctrl-Z -> fg. But there are other hairy parts that aren’t implemented, and probably won’t be without help because I’m a tmux user :-/ But I also may not have encouraged help there because I knew we were going to translate to C++. The code right now is best viewed as a prototype for a production quality shell. I expect it will be 3-5K lines of hand-written C++ and 30-50K lines of translated C++.

                                                                We can talk about it on Zulip maybe but I don’t think pratt parsing is great for most full languages like Python or JS, only for “operator grammar” subset with precedence. Even despite writing a whole series on it!


                                                                If the language is “normal” I don’t think Rust is a bad idea – after all plenty of parsers are written in Rust. Shell is an especially large language syntactically. It’s much smaller than Python and C++ semantically, but the syntax is much larger and requires a big parser.

                                                              1. 1

                                                                Switches must be exhaustive. Because one of my main personal projects deals with strings, I’m often dealing with matching against them. With IRC, you only have a small number of message types you probably want to match on, but Rust forces you to cover all cases.

                                                                How does this work in Go? Will Go allow you to write a match statement that only encompasses a finite number of exact strings? What happens if the value to match on is a string that isn’t in the set, does it crash?

                                                                Idiomatic rust would suggest creating an enum type to represent every IRC command you care about, converting the raw string to that type early on (or failing if the input string isn’t a valid IRC command), and then using that type in the rest of the code.
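                                                                A sketch of that pattern, with invented Command variants rather than a full IRC grammar:

```rust
// Parse the raw command string into an enum once, at the edge of the
// program; everything downstream matches exhaustively on the enum.

#[derive(Debug, PartialEq)]
enum Command {
    Privmsg,
    Join,
    Part,
}

fn parse_command(raw: &str) -> Option<Command> {
    // Unknown strings are rejected here, once, instead of needing a
    // default arm in every match later on.
    match raw {
        "PRIVMSG" => Some(Command::Privmsg),
        "JOIN" => Some(Command::Join),
        "PART" => Some(Command::Part),
        _ => None,
    }
}

fn describe(cmd: Command) -> &'static str {
    // Exhaustive: adding a variant to Command makes this a compile
    // error until every use site handles the new case.
    match cmd {
        Command::Privmsg => "message",
        Command::Join => "join",
        Command::Part => "part",
    }
}

fn main() {
    assert_eq!(parse_command("JOIN"), Some(Command::Join));
    assert_eq!(parse_command("NOTACOMMAND"), None);
    assert_eq!(describe(Command::Privmsg), "message");
}
```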

                                                                1. 1

                                                                  Switches in Go have a ‘default’ case (which is optional / ‘noop if not specified’).

                                                                  There are type switches too, but I don’t think you could use subtyping with string enums the way you could in rust (I could be wrong, but I’ve done quite a bit of go and have never seen that sort of technique used).

                                                                  1. 6

                                                                    The reason why Rust cannot have an implicit default case is that match in Rust is an expression while it is a statement in Go. That means that it is possible to do something like

                                                                    let foo = match value {
                                                                      Foo => 1,
                                                                      Bar => 2,
                                                                    };

                                                                    In Go this would need to be written:

                                                                    var foo int
                                                                    switch value {
                                                                    case "foo":
                                                                      foo = 1
                                                                    case "bar":
                                                                      foo = 2
                                                                    }
                                                                    1. 4

                                                                      in Rust is an expression while it is a statement in Go.

                                                                      That’s an interesting observation. I’m about to expand it; this is mostly for my own understanding, I don’t think I’m about to write anything you don’t already realize.

                                                                      Go’s switch constructs are imperative and each branch contains statements, which means every branch’s type is effectively Any-with-side-effects-on-the-scope.

                                                                      Rust’s match constructs are expressions, which means (in Rust) that every match arm is also an expression, and all must have the same type T-with-no-side-effects-on-the-scope.

                                                                      (Both languages are free to perform side effects on the world inside their match arms.)

                                                                      Then, if I understand what you’re getting at, statement blocks have an ‘obvious’ null/default value of ‘no-op, do nothing’, which is why Go’s compiler can automatically add a default handler ‘do nothing if no match’. If the programmer knows that is the wrong default action, they must explicitly specify a different one.

                                                                      Types, on the other hand, have no notion of a default value. Which is why the Rust programmer must match exhaustively, and specify a correct value for each match arm. The compiler can’t add `_ => XXX_of_type_T`, because it cannot know what XXX should be for any type T.

                                                                      1. 3

                                                                        Yes, in theory it could use Default::default() where defined, but it is not defined for all types, so it would be confusing for users. Also, forcing exhaustive matching reduces the number of bugs; in the end you can always write:

                                                                        match value {
                                                                          1 => foo(),
                                                                          2 => bar(),
                                                                          _ => unreachable!(),
                                                                        }

                                                                        To tell the compiler that value should be 1 or 2 and that it can optimise the rest assuming this is true (and it will fail with a meaningful error otherwise). unreachable!() returns the bottom type !, which matches any type (as it is meant for functions that do not return at all).

                                                                        1. 3

                                                                          Small nit: unreachable!() doesn’t allow for optimizations, that’s unreachable_unchecked. On the other hand, _unchecked can cause undefined behavior if your expression can actually be reached.

                                                                1. 2

                                                                  Very informative article. I have a question about this though!

                                                                  + ?Sized - The size of B can be unknown at compile time. This isn’t relevant for our use case, but it means that trait objects may be used with the Cow type.

                                                                  But isn’t B a str which doesn’t have a size known at compile time?

                                                                  1. 2


                                                                    1. 2

                                                                      str doesn’t have a known size, but &str does: it’s the size of a (ptr, len) pair.

                                                                      1. 3

                                                                        Yes, but that’s &B. The statement quoted is that the “B: ?Sized” bound doesn’t matter for the thing described. It does, as B is str.
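                                                                        Concretely, B = str is exactly why the relaxed bound matters for Cow<'a, str>: the enum stores either a &'a B or a <B as ToOwned>::Owned, and the borrowed arm only type-checks because B is allowed to be unsized. A small sketch:

```rust
use std::borrow::Cow;

// Here B = str (unsized); Cow stores either &'a str (Borrowed) or
// String (Owned, because <str as ToOwned>::Owned = String). Without
// the B: ?Sized relaxation, Cow<'_, str> wouldn't type-check at all.
fn sanitize(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        // Must allocate: return the owned variant.
        Cow::Owned(input.replace(' ', "_"))
    } else {
        // No change needed: borrow the input, no allocation.
        Cow::Borrowed(input)
    }
}

fn main() {
    assert!(matches!(sanitize("fine"), Cow::Borrowed(_)));
    assert!(matches!(sanitize("a b"), Cow::Owned(_)));
    assert_eq!(sanitize("a b"), "a_b");
}
```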

                                                                    1. 13

                                                                      I wouldn’t say that strong typing removes the need for unit tests. You can have a well-typed function and a language that enforces that the function is only called with well-typed inputs; and still have that function perform the wrong business logic for your application without a unit test checking this.

                                                                      Can all the historical versions of all the events be deserialized correctly from the event store, getting converted to newer versions (or to completely different event types) as needed?

                                                                      Let’s assume that your event store stores values as some very generic format, like a list of bytes, and that your event type is some kind of complicated enum. Your deserialization function is then something like List byte -> Optional EventType - Optional, of course, because it doesn’t make sense that every possible list of bytes corresponds to a valid event value in your program. The bytes comprising the ASCII-encoded declaration of independence are a well-typed input to this function, just as the actual bytes in your event store are. So you still need some way to check that you’re doing the business logic of decoding your bytes the right way. A unit test seems perfectly appropriate here. You might even want to include the ASCII-encoded version of the declaration of independence in your unit test, to be sure that your function returns None instead of trying to parse it as an event in your system.
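                                                                      A sketch of that shape in Rust rather than the pseudo-Haskell above (the wire format here - one tag byte plus a payload - is invented purely for illustration):

```rust
// Deserialization as `&[u8] -> Option<Event>`: every byte slice is a
// well-typed input, so the type system can't tell us whether the
// decoding logic is right - only a test can.
// Invented wire format: [tag, payload].

#[derive(Debug, PartialEq)]
enum Event {
    UserCreated(u8),
    UserDeleted(u8),
}

fn decode(bytes: &[u8]) -> Option<Event> {
    match bytes {
        [1, id] => Some(Event::UserCreated(*id)),
        [2, id] => Some(Event::UserDeleted(*id)),
        // The Declaration of Independence, or any other junk, lands here.
        _ => None,
    }
}

fn main() {
    assert_eq!(decode(&[1, 42]), Some(Event::UserCreated(42)));
    assert_eq!(decode(&[2, 7]), Some(Event::UserDeleted(7)));
    // A unit test for the "garbage in" case the type signature allows:
    assert_eq!(decode(b"When in the Course of human events"), None);
    assert_eq!(decode(&[]), None);
}
```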

                                                                      1. 4

                                                                        So, I do agree that type systems can never fully replace unit/integration tests; they can catch type errors but not logic errors.

                                                                        However, a good type system can turn logic errors into type errors. A great example of this is returning a reference to a local variable in a language without memory management: in C or C++ the compiler accepts it (maybe with a warning), in Rust it’s a compile error. This isn’t unique to memory management: in C, char + float is a float; in Haskell (and most other functional languages, including Rust), adding Char to Double is a type error. One last example: I’m writing a Rust wrapper for a C library. Every C function I call can return PARAM_INVALID if the allocated buffer is null. The Rust function doesn’t even document the error because it’s not possible to have a null reference in Rust’s type system (also not unique to Rust, C++ has references too).

                                                                        My long winded point is that even though you always need tests, if you have a good type system, there are fewer things to test.

                                                                        1. 4

                                                                          Curry-Howard Correspondence says that types ARE logic, so they definitely DO catch logic errors.

                                                                          1. 3

                                                                            That requires a powerful enough type system to represent a proof. Theoretically this is possible, and there definitely is value in using dependent typing and formal verification tools. But at the moment, with typical programming languages, only limited parts of the business logic can be represented with types.

                                                                            Even today, with a bit of discipline, it is possible to make certain states impossible to represent in the program. This allows you to eliminate some unit tests, which is definitely a good thing, but we’re still far from the point of proving all of a program’s logic with the type system.

                                                                            1. 1

                                                                              I understand they don’t catch all errors because there are some you can’t encode in your logic, I’m just pointing out that “type errors are not logic errors” is totally incorrect!!

                                                                            2. 1

                                                                              I’m not terribly familiar with Curry-Howard, but Wikipedia says it states that

                                                                              a proof is a program, and the formula it proves is the type for the program

                                                                              I don’t see how that means that types can catch logic errors: the type is a theorem, not a proof. Furthermore, just because you can formalize code as a proof doesn’t mean it’s correct; mathematicians find wrong proofs all the time.

                                                                              1. 3

                                                                                If you declare some argument is a String and your code compiles, then the type checker has proven that the argument can only possibly be a String. Passing a non-String argument, as you could do in a dynamic language, is a logic error. You violate a precondition of your function, which only works on strings.

                                                                                1. 1

                                                                                  Type checkers are logic checkers so you can’t really screw up your proof, only the theorems. Yes, this happens sometimes, but it IS a logic system.

                                                                                  1. 3

                                                                                    I think a better phrasing of skepticism is to ask what there is that can check whether you proved the right things.

                                                                                    Whether it’s done with tests or with types, at some point you are relying on the programmer to provide a sufficiently-correct formal specification of the problem to a machine. If you declare that it should be done via types because the programmer is fallible and types catch things the programmer will mess up, you potentially trigger an infinite regress of types that check the code, meta-types that check the types, meta-meta-types that check the meta-types, and so on.

                                                                                    (of course, this is all very well-trod ground in some fields, and is ultimately just a fancier version of the old “Pray, Mr. Babbage…”, but still a question worth thinking about)

                                                                              2. 2

                                                                                See also which goes into much more depth on this.

                                                                            1. 2

                                                                              Some Java GUI IDEs have something like this: you can create a component, add various event handlers, then copy-paste the whole component into another menu and it will recreate all the handlers and layout. Excel will also let you copy-paste rows/columns/grids within a spreadsheet. It would be really cool to make that a standardized format so you can use it across applications.

                                                                              1. 5

                                                                                I’m looking forward to the rest of the series, as I’m a fan of the author and everything they’ve done for Rust. However, with only the first article out thus far, which merely discusses components that may cause slow compilation, it leads the reader in an overly negative direction, IMO.

                                                                                Rust compile times aren’t great, but I don’t believe they’re as bad as the author is letting on thus far. Unless your dev cycle relies on CI and full test-suite runs (which require full rebuilds), the compile times aren’t too bad. A project I was responsible for at work used to take ~3-5ish minutes for a full build, if I remember correctly. By removing some unnecessary generics, feature-gating some derived impls, feature-gating esoteric functionality, and re-working some macros as well as our build script, the compile times were down to around a minute, which meant partial builds were mere seconds. That, along with test filtering, meant the dev-test-repeat cycle was very quick. Now, it could also be argued that feature gates increase test-path complexity, but that’s what our full test suite and CI are for.

                                                                                Granted, I know our particular anecdote isn’t indicative of all workloads, or even representative of large Servo style projects, but for your average medium sized project I don’t feel Rust compile times hurt productivity all that much.

                                                                                …now for full re-builds or CI reliant workloads, yes I’m very grateful for every iota of compile time improvements!

                                                                                1. 7

                                                                                  It is also subjective. For a C++ developer 5 minutes feels ok. If you are used to Go or D, then a single minute feels slow.

                                                                                  1. 5

                                                                                    Personally, slow compile times are one of my biggest concerns about Rust. This is bad enough for a normal edit/compile/run cycle, but it’s twice as bad for integration tests (cargo test --tests) which have to link a new binary for each test.

                                                                                    Of course, this is partly because I have a slow computer (a laptop with an HDD), but I don’t think I should need the latest and greatest technology just to get work done without being frustrated. Anecdotally, my project with ~90 dependencies takes ~8 seconds for an incremental rebuild, ~30 seconds just to build the integration tests incrementally, and over 5 minutes for a full build.

                                                                                  1. 7

                                                                                    It’s not a majority opinion, but I believe some things should just be kept forever.

                                                                                    Sometimes they were deprecated in Python 2, like Thread.isAlive being deprecated in favour of Thread.is_alive, to be removed in Python 3

                                                                                    Like this, for example. Is changing the spelling of something really worth breaking this for everyone?

                                                                                    1. 6

                                                                                      Yeah or just provide an alias and in the docs note that the snake case version is preferred or something.

                                                                                      I really, really want to like Python. From a sysadmin perspective it’s a fantastic language: one-file scripts where you don’t have to deal with virtualenv, and it’s not gonna change much.

                                                                                      From a DevOps perspective (package creation and management, versioning, virtualenv, C-based packages not playing well with cloud-oriented distros, stuff like this when doing language version upgrades) I’ve always found it to be a nightmare, and this kind of thing is just another example of that. I tried to install Ansible and am basically unable to do it on a Mac because no version of Python or pip can agree that I have installed it and that it should be in my path.

                                                                                      I don’t begrudge anyone who uses it or think it’s a bad language, that would be pretty obtuse, but I always avoid it when I can personally.

                                                                                      1. 7

                                                                                        This is what we do in Mercurial. Aliases stay forever but are removed from the documentation.

                                                                                        Git does this too. git diff --cached is now a perpetual alias for git diff --staged because the staging area has been variously called the cache and the index. Who cares. Aliases are cheap. Just keep them forever.
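As a sketch of how cheap such an alias can be (a hypothetical Rust rendering, since the cases above are Python and git; the names are invented): the old spelling stays forever as a thin deprecated wrapper around the new one, so old callers keep working while new code gets nudged toward the new name.

```rust
pub fn is_alive() -> bool {
    true // stand-in for the real implementation
}

// The old camelCase spelling stays as a one-line wrapper. The attribute
// produces a compile-time warning instead of breaking existing callers.
#[allow(non_snake_case)]
#[deprecated(note = "use `is_alive` instead")]
pub fn isAlive() -> bool {
    is_alive()
}

fn main() {
    #[allow(deprecated)]
    let old = isAlive(); // old callers still work, just warned
    assert_eq!(old, is_alive());
    println!("{old}");
}
```

The maintenance cost is one line plus an attribute, which is the “aliases are cheap” point in miniature.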

                                                                                        1. 2

                                                                                          I didn’t even realize --cached was deprecated, I use that all the time.

                                                                                          1. 3

                                                                                            And that’s how it should be. You shouldn’t even notice it changed.

                                                                                        2. 4

                                                                                          Yeah or just provide an alias and in the docs note that the snake case version is preferred or something.

                                                                                          That’s exactly what was done: unfortunately, not everybody spots this stuff, and people often ignore deprecation notices.

                                                                                          The real problem is that it’s difficult to write a reliable tool to flag and fix this stuff if you can’t reliably do type inference to figure out the provenance of stuff.

                                                                                          I tried to install Ansible and am basically unable to do it on a Mac because no version of Python or Pip can agree that I have installed it and it should be in my path.

                                                                                          Some of the things Homebrew does make that difficult. You’re right: it’s a real pain. :-( I’ve used pipx in the past to manage this a bit better, but upgrades of Python can break the symlinks, sometimes for no good reason at all.

                                                                                          As far as Ansible goes, I stick to the version installed by Homebrew and avoid any dependencies on any of Ansible’s internals.

                                                                                          1. 1

                                                                                            From someone who uses Ansible on a daily basis: the best way to use it is by creating a virtualenv for your Ansible project and keeping everything you need in there. My ‘infra’ project has a requirements.txt and a script that creates the virtualenv and installs all dependencies. If you try to install Ansible at the system level, you are going to hate your life.

                                                                                          2. 6

                                                                                            Yeah, or at least bring in some compatibility libraries. Deprecating and removing things that aren’t actually harmful seems like churn for the sake of it.

                                                                                            1. 3

                                                                                              That (removing cruft, even if not harmful) was basically the reason for Python 3, and everyone agreed with it 10 years ago. Most people using the language now probably came to it after all these decisions had been made, and the need to upgrade to Python 3 was talked about for all this time. Now is simply not the time to question it. Also, making most of those fixes is easy (yes, even if you have to formally fork a library to do a sed over it).

                                                                                              1. 1

                                                                                                Those breaking changes came with a major version bump. Why not just wait until Python 4 to remove the cruft?

                                                                                                1. 3

                                                                                                  There should ideally never be a Python 4: none of those deprecated bits are meant to be used in Python 3 code. They were only present to ease transition in the short term, and it’s been a decade.

                                                                                                  1. 2

                                                                                                    While there are people who prefer a semver-esque approach of bumping the major every time a deprecation cycle finishes, AFAIK Python has no plans to adopt such a process, and intends to just continue doing long deprecation cycles where something gets marked for deprecation with a target release (years in the future) for actually removing it.

                                                                                                    1. 1

                                                                                                      Python’s releases are never perfectly backwards compatible. Like most programming languages, old crufty things that have been deprecated for years are sometimes removed.

                                                                                                      1. 1

                                                                                                        That’s a shame. A lot of languages provide some mechanism for backwards compatibility, either by preserving it at the source or ABI level, or by allowing some kind of indication as to what language version or features the code expects. It’s nice to be able to pick up a library from years ago without having to worry about bit rot.

                                                                                                        1. 2

                                                                                                          It’s a library compatibility issue, not a language compatibility issue. It’s been deprecated for a decade; honestly, there’s been plenty of time to fix it.

                                                                                                          1. 1

                                                                                                            This particular library is part of the language. A decade is an awfully short time for a language.

                                                                                                            1. 2

                                                                                                              Python has never promised that it will eternally support every thing that’s ever been in the language or the standard library. Aside from the extended support period of Python 2.7, it’s never even promised to maintain something as-is on a time scale of a decade.

                                                                                                              Python remains a popular and widely-adopted language despite this, which suggests that while you personally may find it a turn-off that the language deprecates things and removes them over time, there are other people who either do not, or are willing to put up with it for sake of having access to a supported version of the language.

                                                                                                              This is, incidentally, the opposite of what happens in, say, Java, where the extreme backward-compatibility policy and glacial pace of adding even backwards-compatible new features tends to split people exactly the same way.

                                                                                                              1. 2

                                                                                                                In a semver-esque world, anything deprecated in 2 is fair game for removal in 3, of course (and if this particular thing was, then I concede). In that way Python 3 is a different language to Python 2, which I believe is how most folks consider it anyway. It’s just a shame that, apparently, you can’t write a Python 3 program and expect it to work with Python 3 in 10 years with no programmatic way of specifying which Python 3 it works in. Nobody would be any worse off if they just waited for Python 4 to clean up.

                                                                                                                1. 2

                                                                                                                  If I write something today, that raises deprecation warnings today, I don’t expect to be able to walk away from it for ten years and then have it continue to work on the latest version. I expect that those deprecation warnings really do mean “this is going to change in the future”, and that I either should clean it up now to get rid of the warnings, or I’ll have to clean it up later if I want it to keep working.

                                                                                                                  That’s the kind of situation being discussed here – people who wrote code that already raised deprecation warnings at time of writing, and are surprised to find out that those warnings really did mean “this is going to change in the future”.

                                                                                                                2. 1

                                                                                                                  Everyone who wants a Python that doesn’t change has been (and probably still is) using Python 2.7. I expect that we will see more pushback in the community against these sorts of changes as that becomes less tenable.

                                                                                                                3. 2
                                                                                                    2. 4

                                                                                                      It would be nice if Python had an equivalent of go fix. It’s just a pity things like that are difficult with dynamic languages.

                                                                                                    1. 10

                                                                                                      Objective reasons:

                                                                                                      • nulls are checked
                                                                                                      • resources are safe(r)
                                                                                                      • strongly typed
                                                                                                      • reasonably efficient in my hands, incredibly efficient in skilled hands
                                                                                                      • cargo is awesome, along with the other dev tools (is this subjective? I objectively don’t think so)
                                                                                                      • small runtime
                                                                                                      • wasm

                                                                                                      Subjective reasons:

                                                                                                      • it’s not C++
                                                                                                      • it’s not Haskell
                                                                                                      • I like the community
                                                                                                      • I like the lobster
                                                                                                      1. 2

                                                                                                        One more for the list: Result instead of exceptions makes it easier to see what can go wrong in a function. Compare that to Python or even Java, where any function can throw any exception and your only hope is to read the documentation.

                                                                                                        1. 1

                                                                                                          Java does have checked exceptions though, which should have the same benefit. They’re not mandatory though.

                                                                                                          1. 2

                                                                                                            Java has checked exceptions but they’re annoying to use. I see a lot of try { return Integer.parseInt(s); } catch (NumberFormatException e) { throw new RuntimeException(e); }, especially in prototyping. In Rust errors are much easier to work with: Result has methods like .unwrap_or() or .and_then().
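A minimal sketch of the comparison (hypothetical function names, roughly mirroring the Java parseInt snippet above): the error path is handled inline with Result combinators instead of a try/catch block.

```rust
use std::num::ParseIntError;

// Equivalent of swallowing the NumberFormatException with a default:
fn parse_or_zero(s: &str) -> i32 {
    s.parse::<i32>().unwrap_or(0)
}

// Chaining a further step onto a successful parse with and_then;
// a failed parse short-circuits and the error flows through untouched.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    s.parse::<i32>().and_then(|n| Ok(n * 2))
}

fn main() {
    assert_eq!(parse_or_zero("41"), 41);
    assert_eq!(parse_or_zero("oops"), 0);
    assert_eq!(parse_and_double("21"), Ok(42));
    assert!(parse_and_double("oops").is_err());
    println!("ok");
}
```

The signature alone tells the caller whether a function can fail, which is the point being made above.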

                                                                                                        1. 5
                                                                                                          1. 3

                                                                                                            Lol he forgot case 6:

                                                                                                                    case 5:
                                                                                                                        //Demo over
                                                                                                                        advancetext = true;
                                                                                                                        hascontrol = false;
                                                                                                                        state = 6;
                                                                                                                    case 7:

                                                                                                            Seriously though I can’t imagine writing 4099 cases.

                                                                                                            1. 2

                                                                                                              It skips tons of numbers all over the place, e.g., it goes from 2514 to 3000. Seems like much of it was intentional? Either way, there are way fewer than 4099 cases… not that that makes it much better :).

                                                                                                            2. 3

                                                                                                              If that’s not “Doing Things The Hard Way”, I don’t know what is :-)

                                                                                                              1. 3

                                                                                                                Haha, reminds me of the thousand-case switch statement in Undertale’s source. Game code really can be scary; it seems like that’s especially true for 2D games for some reason…

                                                                                                            1. 1

                                                                                                              I’m working on course materials for a new IoT course at my university, as well as hacking on side projects when I have the time.

                                                                                                              I’m also trying to get back into a good work/sleep schedule for 2020, which is difficult, given that some of my most productive work hours seem to occur after midnight.

                                                                                                              1. 2

                                                                                                                Hey Philip! Let me know when you get those resources together. Charles said the project is going to be a webserver implemented in assembly, is that right? Sounds like a lot of fun.

                                                                                                                I’ve found that my most productive time is either way early in the morning (before 8) or after dinner (after 8). During the day I don’t seem to get as much done, I couldn’t tell you why.