1. 2

    I wish this were covered by anti-competition law or other laws.

    1. 7

      The actual rejection comment of Hickson has a good point about DELETE: you usually don’t want a body.

      But for PUT? Nothing except that “you wouldn’t want to PUT a form payload.” That’s a quite weak argument.

      1. 10

        It’s no problem to make a form that sends no body though…

        1. 6

          The spec says DELETE MAY have a body. And practically speaking, you’d always want a CSRF token anyway. I didn’t understand the PUT argument at all – not that it was weak, I simply didn’t understand what he was arguing – and posted another question in this thread about it.

          1. 3

            How is that a “good point”?

            1. 1

              I wanted to be somewhat generous: if you write an API, many DELETE requests don’t have bodies.

              But the sibling comment about the CSRF token is a good one.

          1. 1

            In addition, Apple doesn’t make an effort to provide the browser for testing on other platforms. So if people ask for Firefox support, that’s testable with reasonable effort and great dev tools. Asking for Safari support means asking to buy Apple hardware.

            I hope that their one browser rule falls in court.

            I really like some of Apple’s innovations, and Safari was once innovative. But their closed strategy (we create closed ecosystems fully under our control) means that they don’t get pressure to stay innovative in areas that fall out of favor with their management. Especially if they would rather push native apps.

            1. 4

              I hope that their one browser rule falls in court.

              I found it fascinating that the OP had praise for the rule that WebKit is the only allowed rendering engine on iOS:

              This paints a bleak picture. The one saving grace today is that Apple blocks use of any non-WebKit engine on iOS, which protects that one environment, and the iOS market (in the US at least) is large enough that this means Safari must be prioritized.

              He sees it as a stopgap against the total domination of Blink. The viewpoints are kind of like, “Apple is a big bully” vs. “Apple is a big bully that is at least protecting us all from an even more harmful bully” in the form of Google.

              1. 2

                Apple doesn’t say this anywhere officially, but you basically can test on any WebKit browser like GNOME Web (Epiphany). You’ll even see the exact same devtools UI as Safari.

                1. 1

                  Cool tip!

              1. 58

                Safari isn’t the new IE, Chrome is. From the market share, to features only available on Chrome, to developers writing for Chrome only. Some of the points in this article even clearly show that.

                1. 22

                  The “Widely accepted standards” in the OP made me laugh…

                  When I file bug reports/support tickets to websites saying your website doesn’t work with X or Y browser, the answer I almost always get back is: “Use Chrome.” Occasionally (and more and more rarely) I’ll get back, “Oh right, we should fix that.” Clearly nobody even bothers to test their stuff in the “other” browsers.

                  I keep filing tickets, anyway.

                  1. 2

                    But that’s only part of what IE did.

                    On the upside, the core of Chrome is open source. That it enables Microsoft Edge to basically just rebrand it is good, in my opinion, because if Chrome got bad, they could immediately increase pressure by doing some little things better. (Disclaimer: I haven’t used Edge.)

                    What I mostly object to with Chrome is that Google pushed it so hard, and probably unfairly, through its other businesses. And that is somewhat similar to what Microsoft did with IE and Windows.

                    1. 4

                      Most Google websites work way better in Chrome than in Firefox. And most of the time, that’s a decision from Google, not technical limitations in Firefox.

                      • Google Meet lets you blur out your background. This feature only uses web features (like WebGPU) which are supported perfectly fine by Firefox - but it’s disabled if you’re in a non-Chrome browser. It used to be that you could just change your user agent in Firefox, and the feature would work perfectly, but then Google changed their browser sniffing methods and changing the UA string doesn’t work anymore.
                      • YouTube uses (used?) a pre-standard version of the Shadow DOM standard, which is implemented in fast C++ in Chrome, but Firefox only implements the actual final Shadow DOM standard, so YouTube uses (used?) an extremely slow JavaScript polyfill for non-Google browsers.

                      Those are only the cases I know of where Google explicitly sabotages Firefox through running different code paths based on the browser. Even when they’re not intentionally sabotaging Firefox, I’m certain that Google optimizes their websites exclusively for Chrome without caring about other browsers. Firefox and Chrome are both extremely fast browsers, but they’re fast and slow at different things - and Google will make sure to stay within what Chrome does well, without caring about what Firefox does well or poorly. Optimizing for non-Google browsers seems like something that’s extremely far down Google’s priority list.

                  1. 22

                    I’m honestly appalled that such an ignorant article has been written by a former MEP. This article completely ignores the fact that the creation of Copilot’s model itself is a copyright infringement. You give GitHub a license to store and distribute your code from public repositories. You do not give GitHub permission to use it or to create derivative works. And as Copilot’s model is created from various public code, it is a derivative of that code. Some may try to argue that training machine learning models is ‘fair use’, yet I doubt that something that can regurgitate the entire meaningful portion of a file (an example taken from GitHub’s own public dataset of exact generated code collisions) is anything but a derivative work.

                    1. 13

                      In many jurisdictions, as noted in the article, the “right to read is the right to mine” - that is the point. There is already an automatic exemption from copyright law for the purposes of computational analysis, and GitHub don’t need to get that permission from you, as long as they have the legal right to read the code (i.e. they didn’t obtain it illegally).

                      This appears to be the case in the EU and Britain - https://www.gov.uk/guidance/exceptions-to-copyright - I’m not sure about the US.

                      Something is not a derivative work in copyright law simply due to having a work as an “input” - you cannot simply argue “it is derived from” therefore “it is a derivative work”, because copyright law, not English language, defines what a “derivative work” is.

                      For example, Markov chain analysis done on SICP is not infringing.
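
                      As a toy illustration of what such an analysis produces (a hypothetical Kotlin sketch, nothing GitHub actually ships): a word-level Markov model of a text is just a table of transition counts, not a copy of the text.

                        fun markovCounts(text: String): Map<String, Map<String, Int>> {
                            // Count how often each word is followed by each other word.
                            val words = text.split(Regex("\\s+")).filter { it.isNotBlank() }
                            val counts = mutableMapOf<String, MutableMap<String, Int>>()
                            for ((current, next) in words.zipWithNext()) {
                                val row = counts.getOrPut(current) { mutableMapOf() }
                                row[next] = (row[next] ?: 0) + 1
                            }
                            return counts
                        }

                        fun main() {
                            // The "model" of the input is only these statistics, not the input itself.
                            println(markovCounts("the cat sat on the mat the cat ran")["the"]) // {cat=2, mat=1}
                        }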

                      Obviously, there are limits to this argument. If Copilot regurgitates a significant portion verbatim, e.g. 200 LOC, is that a derivative? If it is 1,000 lines where not one line matches, but it is essentially the same with just variables renamed, is that a derivative work? etc. I think the problem is that existing law doesn’t properly anticipate the kind of machine learning we are talking about here.

                      1. 3

                        Dunno how it is in other countries, but in Lithuania I cannot find any exception for using my works without my agreement that fits what GitHub has done. The closest one could be citation, but they do not comply with the requirement of mentioning my name and the work from which the citation is taken.

                        I gave them the license to reproduce, not to use or modify - these are two entirely different things. If they weren’t, then Github has the ability to use all AGPL’d code hosted on it without any problems, and that’s obviously wrong.

                        There is no separate “mining” clause. That is not a term in copyright. Notice how research is quite explicitly “non-commercial” - and I very much doubt that what GitHub is doing with Copilot is non-commercial in nature.

                        The fact that similar works were done previously doesn’t mean that they were legal. They might have been ignored by the copyright owners, but this one quite obviously isn’t.

                        1. 8

                          There is no separate “mining” clause. That is not a term in copyright. Notice how research is quite explicitly “non-commercial” - and I very much doubt that what GitHub is doing with Copilot is non-commercial in nature.

                          Ms. Reda is referring to a copyright reform adopted at the EU level in 2019. This reform entailed the DSM directive 2019/790, which is more commonly known for the regulations regarding upload filters. This directive contains a text and data mining copyright limitation in Art. 3 ff. The reason why you don’t see this limitation in Lithuanian law (yet) is probably that Lithuania has not yet transposed the DSM directive into its national law. This should probably follow soon, since Art. 29 mandates transposition into national law by June 7th, 2021. Germany has not yet completed the transposition either.

                          That is, “text and data mining” now is a term in copyright. It is even legally defined on the EU level in Art. 2 Nr. 2 DSM directive.

                          That being said, the text and data mining exception in Art. 3 ff. DSM directive does not – at first glance, I have only taken a cursory look – allow commercial use of the technique, but only permits research.

                          1. 1

                            Oh, huh, here it’s called an education and research exception, has been in law for way longer than that directive, and doesn’t mention anything remotely translatable as “mining”. It didn’t even cross my mind that she could have been referring to that. I see that she pushed for that exception to be available to everyone, not only research and cultural heritage institutions, but it is careless of her to mix up what she wants the law to be and what the law is.

                            Just as a preventative answer: no, Art. 4 of the DSM directive does not allow GitHub to do what it does either, as it applies to works whose use “has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online.”, and GitHub was free to get the content in an appropriate manner for machine learning. It is using the content for machine learning that infringes the code owners’ copyright.

                          2. 5

                            I gave them the license to reproduce, not to use or modify - these are two entirely different things. If they weren’t, then Github has the ability to use all AGPL’d code hosted on it without any problems, and that’s obviously wrong.

                            An important point is also that the copyright owner is often a different person from the one who signed a contract with GitHub and uploaded the code there (git commit vs. git push). The uploader might agree to whatever terms and conditions, but the copyright owner’s rights must not be disrupted in any way.

                            1. 3

                              Nobody is required to accept terms of a software license. If they don’t agree to the license terms, then they don’t get additional rights granted in the license, but it doesn’t take away rights granted by the copyright law by default.

                              Even if you licensed your code under “I forbid you from even looking at this!!!”, I can still look at it, and copy portions of it, parody it, create transformative works, use it for educational purposes, etc., as permitted by copyright law exceptions (details vary from country to country, but the gist is the same).

                          3. 10

                            Ms. Reda is a member of the Pirate Party, which is primarily focused on the intersection of tech and copyright. She has a lot of experience working on copyright-related legislation, including proposals specifically about text mining. She’s been a voice of reason when the link tax and upload filters were proposed. She’s probably the copyright expert in the EU parliament.

                            So be careful when you call her ignorant and mistaken about basics of copyright. She may have drafted the laws you’re trying to explain to her.

                            1. 16

                              It is precisely because of her credentials that I am so appalled. I cannot in good conscience call this statement anything but ignorant.

                              The directive about text mining very explicitly applies only to “research institutions” and “for the purposes of scientific research”. GitHub and its Copilot don’t fall into that classification at all.

                              1. 3

                                Indeed.

                                Even though my opinion of Copilot is near-instant revulsion, the basic idea is that information and code is being used to train a machine learning system.

                                This is analogous to a human reviewing and reading code, and learning how to do so from lots of examples. And someone going through higher ed school isn’t “owned” by the copyright owners of the books and code they read and review.

                                If Copilot is violating, so are humans who read. And that… that’s a very disturbing and disgusting precedent that I hope we don’t set.

                                1. 6

                                  Copilot doesn’t infringe, but GitHub does, when they distribute Copilot’s output. Analogously to humans, humans who read do not infringe, but they do when they distribute.

                                  1. 1

                                    Why is it not the human that distributes Copilot’s output?

                                    1. 1

                                      Because Copilot first had to deliver the code to the human. Across the Internet.

                                  2. 4

                                    I don’t think that’s right. A human who learns doesn’t just parrot out pre-memorized code, and if they do they’re infringing on the copyright in that code.

                                    1. 2

                                      The real question, which I think people are missing, is: is learning itself a derivative work?

                                      How that learning happens can be either with a human or with a machine learning algorithm. And with the squishiness of, and lack of insight into, human brains, a human can claim they insightfully invented it, even if it was derived. The ML we’re seeing here is doing a rudimentary version of what a human would do.

                                      If Copilot is ‘violating’, then humans can also be ‘violating’. And I believe that is a dangerous path, laying IP based claims on humans because they read something.

                                      And as I said upthread, as much as I have a kneejerk that Copilot is bad, I don’t see how it could be infringing without also doing the same to humans.

                                      And as an underlying idea: copyright itself is a busted concept. It worked for the time before mechanical and electrical duplication brought the cost of copying to near zero. Now? Not so much.

                                      1. 3

                                        I don’t agree with you that humans and Copilot learn in somewhat the same way.

                                        The human may learn by rote memorization, but more likely, they are learning patterns and the why behind those patterns. Copilot also learns patterns, but there is no why in its “brain.” It is completely rote memorization of patterns.

                                        The fact that humans learn the why is what makes us different and not infringing, while Copilot infringes.

                                        1. 2

                                          Computers learn syntax, humans learn syntax and semantics.

                                          1. 1

                                            Perfect way of putting it. Thank you.

                                        2. 3

                                          No, I don’t think that’s the real question. Copying is treated as an objective question (and I’m willing to be corrected by experts in copyright law), i.e. similarity or its lack determines copying regardless of intent to copy, unless the creation was independent.

                                          But even if we address ourselves to that question, I don’t think machine learning is qualitatively similar to human learning. Shoving a bunch of data together into a numerical model to perform sequence prediction doesn’t equate to human invention, it’s a stochastic copying tool.

                                      2. 3

                                        It seems like it could be used to shirk the effort required for a clean room implementation. What if I trained the model on one and only one piece of code I didn’t like the license of, and then used the model to regurgitate it, can I then just stick my own license on it and claim it’s not derivative?

                                      3. 2

                                        Ms. Reda is a member of the Pirate Party

                                        She left the Pirate Party years ago, after having installed a potential MEP “successor” who was unknown to almost everyone in the party; she subsequently published a video telling people not to vote Pirate because of him, as he was allegedly a sex offender (which was proven untrue months later).

                                        1. 0

                                          Why exactly do you think someone from the ‘pirate party’ would respect any sort of copyright? That sounds like they might be pretty biased against copyright…

                                          1. 3

                                            Despite a cheeky name, it’s a serious party. Check out their programme. Even if the party is biased against copyright monopolies, DRM, frivolous patents, etc. they still need expertise in how things work currently in order to effectively oppose them.

                                        2. 3

                                          Have you read the article?

                                          She addresses these concerns directly. You might not agree with her, but you claimed she “ignores” this.

                                          1. 1

                                            And as Copilot’s model is created from various public code, it is a derivative of that code.

                                            Depends on the legal system. I don’t know what happens if I am based in Europe but the guys doing this are in the USA. It probably just means that they can do whatever they want. The article makes a ton of claims about various legal aspects of all of this, but as far as I know Julia is not actually a lawyer, so I think we can ignore this article.

                                            In Poland maybe this could be considered a “derivative work”, but then work which was merely “inspired” by the original is not covered (so maybe the output of the network is “inspired”?), and then there is a separate section about databases, so maybe this is a database in some weird way of understanding it? If you are not a lawyer, I doubt you can properly analyse this. The article tries to analyse the legal aspect and the moral aspect at the same time, while those are completely different things.

                                          1. 2

                                            In our company, we have started to generate the k8s resources with the language that we use in our backends: Kotlin.

                                            We check in the generated resources as YAML files. The YAML files are applied with fluxcd. (A rough sketch of the generation idea follows below the list.)

                                            This feels incredibly nice, e.g.:

                                            • you can easily inspect the resulting resources,
                                            • you can easily diff what you deploy,
                                            • you can make quick emergency adjustments by editing the generated files directly (haven’t needed that yet but nice to have),
                                            • you can easily unit test your resources.
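
                                            To give a rough idea of the generation side (a heavily simplified, hypothetical sketch rather than our actual code; a real Deployment also needs selector/labels, and you’d likely use a proper YAML library):

                                              import java.io.File

                                              // Hypothetical, simplified sketch: model a resource as plain Kotlin data
                                              // and render it to YAML that gets checked in and applied by fluxcd.
                                              data class Deployment(
                                                  val name: String,
                                                  val image: String,
                                                  val replicas: Int = 1,
                                              )

                                              fun Deployment.toYaml(): String = buildString {
                                                  appendLine("apiVersion: apps/v1")
                                                  appendLine("kind: Deployment")
                                                  appendLine("metadata:")
                                                  appendLine("  name: $name")
                                                  appendLine("spec:")
                                                  appendLine("  replicas: $replicas")
                                                  appendLine("  template:")
                                                  appendLine("    spec:")
                                                  appendLine("      containers:")
                                                  appendLine("        - name: $name")
                                                  appendLine("          image: $image")
                                              }

                                              fun main() {
                                                  val backend = Deployment("backend", "registry.example.com/backend:1.2.3", replicas = 3)
                                                  File("k8s").mkdirs()
                                                  // Checked into the repo; fluxcd applies it from there.
                                                  File("k8s/backend-deployment.yaml").writeText(backend.toYaml())
                                              }
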
                                            1. 4

                                              In my experience, TCO is often relied on for correct behavior as opposed to just being bonus performance. That means the following is a pretty significant downside!

                                              But since it is only applied under certain circumstances, the downside is that when it is not applied, we won’t be made aware of it unless we check for it.

                                              Are there any plans to add some sort of syntax to indicate to the compiler that it should error if it can’t perform TCO? OCaml has @tailcall for this purpose and Scala has @tailrec, although in Scala’s case the compiler won’t even try to do TCO unless you request it.
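
                                              (Not Elm, but for comparison: Kotlin’s tailrec modifier is the same idea as Scala’s @tailrec; the compiler only applies the optimization where you ask for it, and complains if the marked recursive call isn’t actually in tail position. A minimal sketch:)

                                                // `tailrec`: the compiler guarantees the recursion becomes a loop,
                                                // and reports the call if it is not actually a tail call.
                                                tailrec fun gcd(a: Long, b: Long): Long =
                                                    if (b == 0L) a else gcd(b, a % b) // tail position: compiled to a loop

                                                // tailrec fun factorial(n: Long): Long =
                                                //     if (n <= 1) 1 else n * factorial(n - 1)
                                                //     // not a tail call (the multiplication happens after the call returns),
                                                //     // so the compiler flags it instead of silently skipping the optimization

                                                fun main() {
                                                    println(gcd(48, 18)) // 6
                                                }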

                                              Also: how does Elm handle source maps for TCO functions? As I recall, increased debugging difficulty was one of the reasons V8 backed out of automatically doing TCE (and switched to backing explicit tail calls instead).

                                              1. 2

                                                The article buries the lede, but it’s exactly that: an announcement of a tool that checks Elm code for TCO.

                                                1. 1

                                                  Maybe my coffee hasn’t fully kicked in yet, or maybe it’s been too long since I’ve programmed in a proper functional language, but how or when would TCO change behavior?

                                                  1. 4

                                                    One example that comes to mind: TCO can be the difference between “calling this function with these arguments always returns the correct answer” and “calling this function with these arguments sometimes returns the correct answer, and sometimes crashes with a stack overflow.”

                                                    1. 1

                                                      Put slightly differently, TCO makes some partial functions total.

                                                      1. 3

                                                        If running out of stack frames and/or stack memory counts as making a function partial, then does the OS possibly running out of memory mean that no functions are ever total?

                                                        Right? Since “the stack” isn’t an explicit abstraction in most programming languages, I don’t think it’s quite correct/useful to say that a recursive function is partial when it can’t be TCO’d.

                                                        1. 3

                                                          I don’t think it’s out of bounds to say that. It really depends on the conceptual model that your language is providing. For example, it seems to be an operating principle of Zig: every function in the stdlib that allocates takes an allocator, so you can handle out-of-memory conditions intelligently.

                                                          But, I get your point: it isn’t an explicit part of the conceptual model of most languages, so it’s shifting the window a bit to refer to non-TCO functions as partial. I think it’s a potentially useful perspective and, for what it’s worth, most languages don’t really describe their functions as total/partial anyway.

                                                    2. 2

                                                      Recursion in a language without TCO feels like a fool’s errand. Source: I tried to implement some recursive algorithms in C…. on 16-bit Windows. On a modern CPU, you can probably get away with recursion even if it eats stack, because you have virtual memory and a shitload of address space to recurse into. Not so much on a 286….

                                                      1. 1

                                                        I definitely agree! I never, ever, write recursive code in a language that doesn’t have a way to at least opt-in to recursion optimization.

                                                        But to me, that’s still “performance” and not really “behavior”. But maybe I’m defining these things a little differently.

                                                      2. 1

                                                        Not sure if @harpocrates meant this but:

                                                        Often, if the recursion depth is large enough, the unoptimized version uses a lot of stack space, even potentially an unbounded amount, where the optimized version uses constant space.

                                                        So the unoptimized version is not only slower but actually crashes if the stack is used up.

                                                        1. 1

                                                          Hmm. I assumed that “bonus performance” would include memory usage. And I would’ve lumped the resulting stack overflow in with “performance” concern, but I guess I can see how that might actually be considered behavior since the “optimized” version will complete and an unoptimized version might not.

                                                          It’s just weird because I don’t think anybody would tell me that putting an extra private field that I never use on a class would be a “behavior change” even though that makes my class use more memory and therefore will OOM on some algorithms when it might not OOM if I remove the field.

                                                          1. 1

                                                            An additional private field in a class that is used a million times on the stack might be similar, true. The heap may often be bigger (citation needed).

                                                            With recursive calls, you can use up a lot of memory for innocent looking code that e.g. just sums up a list of integers.
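
                                                            A small sketch of that (Kotlin rather than Elm, and counting down from n instead of walking a list, to keep it short; the exact depth at which the naive version crashes depends on the stack size):

                                                              // Naive recursion: every step adds a stack frame, so a large enough n
                                                              // dies with a StackOverflowError.
                                                              fun sumNaive(n: Long): Long =
                                                                  if (n == 0L) 0L else n + sumNaive(n - 1)

                                                              // Tail-recursive version: the call is in tail position, so the compiler
                                                              // rewrites it into a loop that uses constant stack space.
                                                              tailrec fun sumTail(n: Long, acc: Long = 0L): Long =
                                                                  if (n == 0L) acc else sumTail(n - 1, acc + n)

                                                              fun main() {
                                                                  println(sumTail(1_000_000))  // 500000500000
                                                                  println(sumNaive(1_000_000)) // typically crashes with StackOverflowError
                                                              }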

                                                    1. 4

                                                      This is really exciting: For Android C++ FFI, Rust is a surprisingly good match. I honestly would have expected a larger API share to be problematic.

                                                      This is obviously dependent on the code base. The stats for the chrome base library are worse.

                                                      I guess Android works better because it is already designed as an FFI and not an internal library.

                                                      1. 29

                                                        Sun Microsystems tries to sell Brendan Gregg’s own software back to him, with the GPL and author credit stripped (circa 2005).

                                                        Great article.

                                                        1. 15

                                                          Hey, spoiler alert missing ;)

                                                          1. 11

                                                            I think we should have a spoiler tag

                                                            1. 5

                                                              Or maybe just add the TLDR acronym

                                                            2. 1

                                                              Of course by replying we help keep this comment at the top. (I’m doing it as well, argh.)

                                                          1. 28

                                                            2019 Intel i9 16” MacBook Pro

                                                          Apple reinforced Intel’s “contextless tier name” marketing trick so much. It was always just “i9”, which sounds impressive, doesn’t it? The actual chip name is i9-9880H. That’s Coffee Lake, a mild refresh of a mild refresh of Skylake, still on the 14nm process, which has been around since 2014.

                                                            1. 19

                                                              That is important info.

                                                            That said, they kind of deserved this with their abstruse product names. Up till the Pentium 4 or so, I could easily follow which model was the new one. But a newer i5 being faster than an old i9? I hate that. The important info is then a cryptic number.

                                                              1. 16

                                                                But the i9-9880H was launched in Q2 2019. It’s not like Apple is putting old chips in their laptops; the i9-9880H was about the best mobile 45W chip Intel offered at the time.

                                                                It’s just both the best Intel had to offer in 2019 and a refresh of Skylake on their 14nm process.

                                                                There’s a reason Apple and AMD are both surpassing Intel at the moment.

                                                                1. 3

                                                                Note that it’s an i9-9880H inside a MacBook, which has a cooling system that heavily prioritises quietness and slimness over performance. This is advantageous for the M1, since Intel chips are heavily dependent on thermals in order to reach and maintain boost clocks.

                                                              1. 34

                                                                I have seen him stepping into heated discussions and making them better.

                                                                I haven’t reviewed all his comments, obviously, but this just makes me sad.

                                                                1. 2

                                                                  The incomplete process isolation and Spectre mitigation are really scary. One tab can read the data of other tabs, slowly, but still. That’s game over for many things, isn’t it?

                                                                  1. 2

                                                                    It’s the web, web security from a cynic’s point of view has always been broken. I mean you are literally executing RANDOM code on your computer, with zero hope it is safe to do so. Web browsers have thrown a bunch of stuff around trying to make sure at least the code that’s run for a given website is only useful for that given website.

                                                                    Web security has been game over, technically, for basically ever. It hasn’t changed adoption, or feature creep, to the point that a web browser is basically an entire OS hiding as a friendly user application, with essentially zero privacy.

                                                                    We have things like sub-resource integrity for HTTP now, but nobody uses it, because practically every website in existence loads gobs and gobs of 3rd-party code they have zero hope of ever getting verified, because no 3rd party will ever knowingly shoot themselves in the foot by giving up the ability to update code whenever they feel like it…

                                                                    Until websites STOP willy-nilly allowing 3rd party code to run on their website(which they can control with CSP headers), there is little hope.

                                                                    Of course basically every popular website fails terribly @ HTTP security headers(see https://securityheaders.com/ and type in your fav. website for proof)

                                                                    Of course the optimist perspective is, it’s getting BETTER, and most of the time it’s generally possible to at least be sure the code you are randomly running was authorized/approved by the website you visited, provided the server hasn’t been hacked.

                                                                    But right now, we are still trying to get past the low-hanging security fruit of things like XSS(cross site scripting) protection, which is now technically possible to fix, but … well lots of websites still mostly suck at it.

                                                                    1. 2

                                                                      Web security is hard and untrusted code execution is the hardest part.

                                                                      A lot of the things that you mention are true, but at least they are fixable if the website provider cares to do so. Not being able to protect their users’ secrets from other sites even if they do everything right is even worse.

                                                                      It means that even if I create a minimal website carefully, private data can be stolen from another tab by known techniques if the user uses Firefox. There are probably hundreds of thousands of programmers who would be able to exploit that, given public resources on how to do it. That is scary to me.

                                                                      A lot of exploits at least aren’t public knowledge for months or years before being fixed.

                                                                      1. 1

                                                                        Sure, but that is not remotely limited to only Firefox. Browser security around running untrusted code is improving and FF might be a bit behind, but it’s not like Chromium is somehow immune.

                                                                  1. 6

                                                                    I’m curious how people think of morph vis a vis the nixops tool - is one obviously better than the other for managing several NixOS machines?

                                                                    1. 11

                                                                      All of them suck. Morph is at least tolerable for my needs.

                                                                      EDIT: the main reason I don’t like nixops is that it’s built on Python 2 and doesn’t really scale well to machines managed by multiple humans. Morph is a simpler tool without as much persistent state and it’s made with a language toolchain that isn’t obsolete.

                                                                      1. 4

                                                                          I started using nixops. It has some support for provisioning machines that I liked. When that broke, I used morph. It can’t do provisioning, but then maybe it also has less chance of breaking?

                                                                        I had some trouble making Auto-updates possible with morph - I ended up with a pretty weird hack.

                                                                        1. 3

                                                                          I’ve been looking at doing some horrifying hack in locally hosted CI to get auto-updates working on my servers, so that seems par for the course! What hack are you using?

                                                                          1. 2

                                                                            To be honest, when I look at the code that I used, I am not fully confident that it still works :)

                                                                              The idea is to set up a “normal” NixOS configuration (in configuration.nix) and use NixOS itself (system.autoUpgrade) for auto-updating, plus separating some bigger derivations out of that into their own “source” derivations.

                                                                            In my morph config, I am using configuration.nix in a pretty, ahem, rough way by merging it with my other morph config (health checks etc) like this: ...} // (import ./configuration.nix { inherit config pkgs; })

                                                                            I hope that makes sense.

                                                                              My supposed auto-update functionality (minimally censored to exclude the host/blog name), which I import via imports in configuration.nix:

                                                                            {pkgs, lib, ...}:
                                                                            {
                                                                              system.autoUpgrade = {
                                                                                enable = true;
                                                                                channel = https://nixos.org/channels/nixos-unstable;
                                                                                allowReboot = true;
                                                                                dates = "20:25";
                                                                              };
                                                                            
                                                                              environment.etc = {
                                                                                "nixos/configuration_nix" =
                                                                                let myPath = ../.;
                                                                                    configPath =
                                                                                      if lib.canCleanSource myPath
                                                                                      then lib.sourceByRegex myPath [ ''^nix(/.*)?$'' ''^subdir(/.*\.nix)?$'' ]
                                                                                      else myPath;
                                                                            
                                                                                    # By separating the sources into a separate derivation, we do not need to copy
                                                                                    # the whole blog over whenever any nix config changes.
                                                                                    config = pkgs.runCommandNoCCLocal "config-with-sources" {} ''
                                                                                      mkdir $out
                                                                                      cp -R ${configPath}/* $out
                                                                                      chmod +w $out/subdir
                                                                                      ln -s ${./blog} $out/subdir/blog
                                                                                    '';
                                                                                in
                                                                                {
                                                                                  target = "nixos/configuration.nix";
                                                                                  text = "import ${builtins.trace "using ${config}" config}/subdir/configuration.nix";
                                                                                };
                                                                              };
                                                                            }
                                                                            

                                                                            So here you go, there is probably a much better simpler way.

                                                                    1. 1

                                                                      I am, at heart, a technologist.

                                                                        I feel similarly. Like the author, I also think that most startups mostly need something else, and that scares me sometimes.

                                                                      1. 3

                                                                        This talk is nice to watch and listen to. It obviously inspires and convinces people.

                                                                        Sigh

                                                                          Therefore, it makes me sad that it contains bits like the following as a basis for why you need concurrent programming:

                                                                        “Concurrency because of real world” argument

                                                                        I don’t want to understand this other world. That other world is a very strange world [meaning sequential programming]. The real world is parallel. […]

                                                                        At about https://youtu.be/cNICGEwmXLU?t=224

                                                                          If you wanted to simulate a group of humans, maybe. But for an ordinary business app? If you take that as a basis, why don’t you also say: “Well, humans also run on bio-chemical processes, why don’t we use that for computing? Let’s replicate brains!”

                                                                        Programming is ultimately about idealized models which are not reality. One of the most important things about programming is to model

                                                                        1. something useful
                                                                        2. in a way that humans can understand and modify.

                                                                        Concurrency is incredibly hard for humans to understand.

                                                                        Highly Available Data is Hard

                                                                          Further down, Joe basically says that highly available data is really, really hard and that you should normally not code something like a consensus protocol on your own. I totally agree. Joe dives into the topic of highly available (HA) data quite a bit, just to drive this point home. I wish he would spend more time on things that would actually help people write better software, but hey. At least we agree.

                                                                        The Architecture Erlang Should Be Benchmarked Against

                                                                        But if you combine the facts that

                                                                        1. Concurrency is hard for humans to reason about and
                                                                          2. You normally shouldn’t do HA data yourself anyway,

                                                                        I arrive at a quite different architecture, at least for typical business software:

                                                                        1. Use lots of stateless processes written in pretty much any programming language.
                                                                        2. Use a highly available data layer for most of the concurrency.
                                                                        3. Limit your own code that deals with real concurrency to the bare minimum.

                                                                        If you say, Erlang isn’t made for these kind of apps, that’s OK. I’d like to see that clearly spelled out. If something can tell me why Erlang is better than that, or whatever is better than this for the common case, I am seriously interested.

                                                                        Unreliable message passing

                                                                          In my mind, sequential programming combined with a highly capable and available database is going to be so much simpler than having a programming model with unreliable message sending. The few projects I have been in that used actors tended to just implicitly assume that the messages would indeed arrive, because they mostly did. Not saying it is impossible to write this in a better way, but intelligent people everywhere will fail to do so, normally.

                                                                        If you want to deal with unreliable message passing, you have a few options:

                                                                          1. Implement resending messages and waiting for acks in every actor. Maybe use a library for that. If so, it makes you wonder why this is not part of the standard stack.
                                                                          2. For 1, if it is any effort at all and people take correctness seriously, they will try to restrict the number of actors to simplify reasoning. Because, guess what, reasoning about a sequential system is tons easier for humans. But then you lose the parallelizability, and your failures are isolated but in larger chunks.
                                                                          3. Adopt a reconciling-state programming model where you basically sync your state periodically, papering over any lost messages. This results in most messages being sent unnecessarily and hides efficiency errors because they are repaired.
                                                                        4. magic solution nobody ever talks about that I really would like to hear about finally
                                                                        Let it crash

                                                                        Then let’s go over “let it crash”. I like the idea in principle but he doesn’t even touch the problem of crash loops etc. If you restart things, things do not magically start working.

                                                                        Somehow, the talks always stop at that point and don’t go into details.

                                                                          If restarting things helps, that also means that things are not reproducible which usually means you’ll have a bad debugging and testing experience. If you chunked your app into small pieces/actors, then at least you have very restricted state to reason about which is nice. But that is probably hardly useful without considering the state of the other actors.

                                                                        1. 1

                                                                          There’s a lot to unpack here. I think the most important thing for me to say is that the actor model is not only about concurrency or scalability; it can (and does) help with fault tolerance and the maintainability of code.

                                                                          Programming is ultimately about idealized models which are not reality. One of the most important things about programming is…

                                                                          I thought programming was primarily about getting computers to perform the desired calculations. Everything above coding directly in machine code is to make programming easier for humans, but they are a means to the end, not the end itself.

                                                                          Concurrency is incredibly hard for humans to understand.

                                                                          Agreed, but it’s a programming luxury to get to pretend that concurrency doesn’t exist. We’ve approached a point of diminishing returns on single-core clock speed, and it’s more cost effective to work with multiple cores, multiple CPUs, multiple computers/servers, multiple datacenters, etc.

                                                                          What’s nice about the actor model is that it lets you keep thinking about single-threaded execution a lot of the time; instead of thinking about mutexes, semaphores and critical sections, in Erlang, you’re dealing mainly with isolated process state and with the transmission of immutable data between processes.

                                                                          1. Use lots of stateless processes written in pretty much any programming language.
                                                                          2. Use a highly available data layer for most of the concurrency.
                                                                          3. Limit your own code that deals with real concurrency to the bare minimum.

                                                                          This model seems to take for granted that an HTTP server, a relational DBMS, or an operating system, is written to execute code concurrently. While you correctly point out that some people might say “Erlang is not for these kinds of apps”, others might say “Erlang is for making the substrate on top of which these apps run”.

                                                                          With respect to point #1, your state has to live somewhere. Of course, that state can exist in a HA database, but there’s a cost associated with such an architecture that might not be appropriate in all situations. Joe talks a lot about the merits of isolated state in the linked talk, which is a very powerful idea that is reflected in languages like Rust as well as in Erlang.

                                                                          Point #2 is widely practiced among Erlang programmers. It’s perfectly possible for an Erlang application to speak to a traditional RDBMS, or there are solutions that are written in Erlang, such as mnesia, or Riak, or CouchDB.

                                                                          Point #3 is also widely practiced among Erlang programmers. Many of the people writing web apps that run on the BEAM are not always directly dealing with the task of concurrently handling requests or connections; they’re writing callback functions to perform the appropriate business logic, and then these callbacks are executed concurrently by a library such as cowboy.

                                                                            sequential programming combined with a highly capable and available database is going to be so much simpler than having a programming model with unreliable message sending

                                                                          Again, there’s no arguing which of these is simpler, but the simplicity you’re talking about is a luxury. What happens when your single-threaded program has more work to do than a single CPU core can handle? What happens when your single-threaded program has to communicate over an unreliable network with a program running on another computer?

                                                                          Then let’s go over “let it crash”. I like the idea in principle but he doesn’t even touch the problem of crash loops etc. If you restart things, things do not magically start working.

                                                                          A shockingly large amount of the time, however, restarting things does work. Transient failures are all over the place. What’s nice about “let it crash” is that it can help eliminate a huge amount of defensive programming to handle these transient failures. Unable to connect to a server? The result of some request is not what you’re expecting? Instead of handling an exception and retrying some number of times, Erlang programmers have the choice to let the process crash, and delegate the responsibility of trying again to a supervisor process. This helps keep business logic code clean of fault-handling code.
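
                                                                            (To make the division of labour concrete, here is a caricature of the pattern in Kotlin; it only shows the idea of keeping retry policy out of the business logic. Real OTP supervisors are much richer, with restart strategies, intensity limits and supervision trees.)

                                                                              // "Let it crash" caricature: the worker holds only business logic and simply
                                                                              // throws on transient failures; a tiny "supervisor" owns the restart policy.
                                                                              fun fetchAndProcess(): String {
                                                                                  if (Math.random() < 0.5) throw RuntimeException("transient network error")
                                                                                  return "ok"
                                                                              }

                                                                              fun supervise(maxRestarts: Int, work: () -> String): String {
                                                                                  repeat(maxRestarts) { attempt ->
                                                                                      try {
                                                                                          return work()
                                                                                      } catch (e: Exception) {
                                                                                          println("worker crashed (${e.message}), restart #${attempt + 1}")
                                                                                      }
                                                                                  }
                                                                                  error("giving up after $maxRestarts restarts")
                                                                              }

                                                                              fun main() {
                                                                                  println(supervise(maxRestarts = 5, work = ::fetchAndProcess))
                                                                              }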

                                                                            If restarting things helps, that also means that things are not reproducible which usually means you’ll have a bad debugging and testing experience

                                                                          Over long periods of execution, you’re inevitably going to encounter situations that are not reproducible. There’s a point of diminishing returns when it comes to testing, and it’s usually a long way away from simulating the kinds of conditions your app might experience over months or years of uptime.

                                                                          If you chunked your app into small pieces/actors, then at least you have very restricted state to reason about which is nice. But that is probably hardly useful without considering the state of the other actors.

                                                                          My experience using Erlang daily for the last 5 years is that I might have to consider the state of 2-3 actors when debugging an Erlang application, but I have almost never had to consider processes outside of the application I’m debugging, let alone the state of all the running actors. For the most part, decomposing applications into communicating actors helps greatly with writing and debugging concurrent code, rather than hindering it.

                                                                          TL;DR: Yes, concurrent programming is hard, but it’s a luxury to pretend like concurrency doesn’t exist. When it comes to writing concurrent code, I find that Erlang’s actor model helps more than it hinders. Erlang’s actor model helps with more than just concurrency; it helps with fault tolerance and with code maintenance, to an almost greater degree than it helps when writing concurrent code.

                                                                        1. 12

                                                                          It’s nice to bring some nuance to the discussion: some languages and ecosystems have it worse than others.

                                                                          To add some more nuance, here’s a tradeoff about the “throw it in an executor” solution that I rarely see discussed. How many threads do you create?

                                                                          Well, first, you can either have it be bounded or unbounded. Unbounded seems obviously problematic because the whole point of async code is to avoid the heaviness of one thread per task, and you may end up hitting that worst case.

                                                                          But bounded has a less obvious issue in that it effectively becomes a semaphore. Imagine having two sync tasks, A and B, where the result of B ends up unblocking A (think mutex), and a thread pool of size 1. If you attempt to throw both on the thread pool, and A ends up scheduled and B doesn’t, you get a deadlock.

                                                                          You don’t even need dependencies between tasks, either. If you have an async task that dispatches a sync task that dispatches an async task that dispatches a sync task, and your threadpool doesn’t have enough room, you can hit it again. Switching between the worlds still comes with edge cases.

                                                                          This may seem rare and it probably is, especially for threadpools of any appreciable size, but I’ve hit it in production before (on Twisted Python). It was a relief when I stopped having to think about these issues entirely.
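
                                                                          To make the failure mode concrete, here is a minimal sketch of the size-1 scenario described above (my own construction, using only the standard library; all names are made up): task A occupies the only worker while waiting on task B, which is queued behind it on the same pool.

                                                                              use std::sync::mpsc;
                                                                              use std::thread;
                                                                              use std::time::Duration;

                                                                              // A job is just a boxed closure the single worker runs to completion.
                                                                              type Job = Box<dyn FnOnce() + Send>;

                                                                              fn main() {
                                                                                  let (pool_tx, pool_rx) = mpsc::channel::<Job>();

                                                                                  // "Thread pool" of size 1: one worker draining the queue, one job at a time.
                                                                                  thread::spawn(move || {
                                                                                      for job in pool_rx {
                                                                                          job();
                                                                                      }
                                                                                  });

                                                                                  let (result_tx, result_rx) = mpsc::channel::<u32>();
                                                                                  let pool_tx_inner = pool_tx.clone();

                                                                                  // Task A: occupies the only worker and blocks on task B's result.
                                                                                  pool_tx
                                                                                      .send(Box::new(move || {
                                                                                          // Task B is queued on the same pool, but can never run:
                                                                                          // the single worker is busy running task A right now.
                                                                                          pool_tx_inner
                                                                                              .send(Box::new(move || {
                                                                                                  let _ = result_tx.send(42);
                                                                                              }))
                                                                                              .unwrap();
                                                                                          // This is the deadlock in miniature; the timeout exists only so
                                                                                          // the sketch terminates instead of hanging forever.
                                                                                          match result_rx.recv_timeout(Duration::from_secs(1)) {
                                                                                              Ok(v) => println!("got {}", v),
                                                                                              Err(_) => println!("deadlock: B never ran while A held the worker"),
                                                                                          }
                                                                                      }))
                                                                                      .unwrap();

                                                                                  // Give the worker time to demonstrate the problem before main exits.
                                                                                  thread::sleep(Duration::from_secs(2));
                                                                              }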

                                                                          1. 3

                                                                            Imagine having two sync tasks, A and B, where the result of B ends up unblocking A (think mutex)

                                                                            Isn’t this an antipattern for async in general? Typically you’d either a) make sure to release the mutex before yielding, or b) change the interaction to “B notifies A”, right?

                                                                            1. 4

                                                                              Changing the interaction to “B notifies A” doesn’t fix anything because presumably A waits until it is notified, taking up a threadpool slot, making it so that B can never notify A. Additionally, it’s not always obvious when one sync task depends on another, especially if you allow your sync tasks to block on the result of an async task. In my experience, that sort of thing happens when you have to bolt the two worlds together.

                                                                              1. 2

                                                                                It’s a general problem. It can happen whenever you have a threadpool, no matter whether it’s sync or async.

                                                                              2. 3

                                                                                But bounded has a less obvious issue in that it effectively becomes a semaphore. Imagine having two sync tasks, A and B, where the result of B ends up unblocking A (think mutex), and a thread pool of size 1. If you attempt to throw both on the thread pool, and A ends up scheduled and B doesn’t, you get a deadlock.

                                                                                I’ve never designed a system like this or worked on a system designed like this before. I’ve never had one task depend on the value of another task while both tasks were scheduled simultaneously. As long as your tasks spawn dependent tasks and, transitively, one of those dependent tasks does not have to wait on another task, we can ensure that the entire chain of tasks will finish [1]. That said, running out of threads in a thread pool is a real problem that plagues lots of thread-based applications. There are multiple strategies here. Sometimes we try to acquire a thread from the pool with a deadline, retrying a few times and eventually failing the computation if we just cannot get one. Other times we just spawn a new thread, but this can lead to scheduler thrashing if we end up spawning too many threads. Another common solution is to create multiple thread pools and allocate different pools to different workloads, so that you can make large thread pools for long-running tasks and smaller thread pools for short-running tasks.

                                                                                Thread-based work scheduling can, imo, be just as complicated as async scheduling. The biggest difference is that async scheduling makes you pay the cost in code complexity (through function coloring, concurrency runtimes, etc) while thread-based scheduling makes you pay the cost in operational and architectural complexity (by deciding how many thread pools to have, which tasks should run on which pools, how large each pool should be, how long we should wait before retrying to grab a thread from the pool, etc, etc). While shifting the complexity to operations and architecture might seem to move the work up to operators or some dedicated operationalizing phase, in practice the context lost by lifting decisions up to this level can make tradeoffs for pools and tasks non-obvious, making it harder to make good decisions. Also, as workloads change over time, new thread pools may need to be created, and these new pools necessitate rebalancing of other pools, which requires a lot of churn. Async has none of these drawbacks (though to be clear, it has its own unique drawbacks).

                                                                                1. 8

                                                                                  I’ve never designed a system like this or worked on a system designed like this before. I’ve never had one task depend on the value of another task while both tasks were scheduled simultaneously.

                                                                                  Here’s perhaps a not-unreasonable scenario: imagine a cache with an API to retrieve some value for a key if it exists, and otherwise compute, store, and return it. The cache exports an async API, and the callback it runs to compute the value ends up dispatching a sync task to a threadpool (maybe it’s a database query using a sync library). We want the cache to be accessible from multiple threads, so it is wrapped in a sync mutex.

                                                                                  Now imagine that an async task tries to use the cache, which is backed by a threadpool of size 1. The task dispatches a thread which acquires the sync mutex and calls to get some value (waiting on the returned future); assuming the value doesn’t exist, the cache blocks forever because it cannot dispatch the task to produce the value. The size of 1 isn’t special: this can happen with any bounded-size thread pool under enough concurrent load.

                                                                                  One may object to the sync mutex, but you can have the same issue if the cache is recursive in the sense that producing a value may depend on the cache populating other values. I don’t think that’s very far fetched either. Alternatively, the cache may be a library used as a component of a sync object that is expected to be used concurrently and that is the part that contains the mutex.

                                                                                  In my experience, the problem is surprisingly easy to accidentally introduce when you have a code base that frequently mixes async and sync code dispatching to each other. Once I started really looking for it, I found many places where it could have happened in the (admittedly very wacky) code base.

                                                                                  1. 3

                                                                                    Fair enough, that is a situation that can arise. In those situations I would probably reach for either adding an expiry to my threaded tasks or separating thread pools for DB or cache threads from general application threads. (Perhaps an R/W lock would help over a regular mutex, but I realize that’s orthogonal to the problem at hand here and probably a pedagogical simplification.) The reality is that mixing sync and async code can be pretty fraught if you’re not careful.

                                                                                2. 2

                                                                                  I have seen similar scenarios without a user-visible mutex: you get deadlocks if a thread on a bounded thread pool waits for another task scheduled on the same thread pool.

                                                                                  Of course, there are remedies, e.g. never schedule subtasks on the same thread pool. Timeouts help, but still lead to abysmal behavior under load because your threads just idle around until the timeout triggers.

                                                                                  1. 1

                                                                                    Note that you can also run async Rust functions with zero (extra) threads, by polling it on your current thread. A threadpool is not a requirement.
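
                                                                                    For example, a minimal sketch (it assumes the futures crate’s simple single-threaded executor, which is one common way to do this):

                                                                                        // An async fn compiles to a state machine; nothing here needs a thread pool.
                                                                                        async fn add(a: u32, b: u32) -> u32 {
                                                                                            a + b
                                                                                        }

                                                                                        fn main() {
                                                                                            // block_on polls the future to completion right here, on the current thread.
                                                                                            let sum = futures::executor::block_on(add(1, 2));
                                                                                            assert_eq!(sum, 3);
                                                                                        }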

                                                                                    1. 3

                                                                                      Isn’t that equivalent to either a threadpool of size 1 or going back to epoll style event loops? If it’s the former, you haven’t gained anything, and if it’s the latter, you’ve thrown out the benefits of the async keyword.

                                                                                      1. 3

                                                                                        Async has always been syntactic sugar for epoll-style event loops. The number of threads has nothing to do with it, e.g. tokio can switch between single- and multi-threaded execution, but so can nginx.

                                                                                        Async gives you higher-level composability of futures, and the ease of writing imperative-like code to build state machines.

                                                                                  1. 46

                                                                                    FWIW, this is how you express conditional arguments / struct fields in Rust. The condition has to encompass the name as well as the type, not just the type as was first attempted.
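
                                                                                    For illustration, here is a minimal sketch of that shape (my own example; the “verbose” feature name is made up). The #[cfg] attribute has to sit on the whole field, name and type together:

                                                                                        #[allow(dead_code)]
                                                                                        struct Config {
                                                                                            path: String,
                                                                                            // The whole field (name and type together) sits behind the cfg;
                                                                                            // you can't make only the type conditional.
                                                                                            #[cfg(feature = "verbose")]
                                                                                            log_level: u8,
                                                                                        }

                                                                                        fn main() {
                                                                                            // Construction has to be gated the same way.
                                                                                            #[cfg(not(feature = "verbose"))]
                                                                                            let cfg = Config { path: String::from("/tmp/example") };
                                                                                            #[cfg(feature = "verbose")]
                                                                                            let cfg = Config { path: String::from("/tmp/example"), log_level: 3 };
                                                                                            println!("{}", cfg.path);
                                                                                        }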

                                                                                    I feel like Rust has definitely obliterated its complexity budget in unfortunate ways. Every day somebody comes to the sled discord chat with confusion over some async interaction. The fixation on async-await, despite it slowing down almost every real-world workload it is applied to, and despite it adding additional bug classes and compiler errors that simply don’t exist unless you start using it, has been particularly detrimental to the ecosystem. Sure, the async ecosystem is a “thriving subcommunity” but it’s thriving in the Kuhnian sense where a field must be sufficiently problematic to warrant collaboration. There’s no if-statement community anymore because they tend to work and be reasonably well understood. With async-await, I observed that the problematic space of the overall community shifted a bit from addressing external issues through memory safety bug class mitigation to generally coping with internal issues encountered due to some future not executing as assumed.

                                                                                    The problematic space of a community is a nice lens to think about in general. It is always in-flux with the shifting userbase and their goals. What do people talk about? What are people using it for? As the problematic space shifts, at best it can introduce the community to new ideas, but there’s also always an aspect of it that causes mourning over what it once represented. Most of my friends who I’ve met through Rust have taken steps to cut interactions with “the Rust community” down to an absolute minimum due to it tending to produce a feeling of alienation over time. I think this is pretty normal.

                                                                                    I’m going to keep using Rust to build my database in and other things that need to go very fast, but I see communities like Zig’s as being in some ways more aligned with the problematic spaces I enjoy geeking out with in conversations. I’m also thinking about getting a lot more involved in Erlang since I realized I haven’t felt that kind of problem space overlap in any language community since I stopped using it.

                                                                                    1. 31

                                                                                      I was surprised to see the Rust community jump on the async-await bandwagon, because it was clear from the beginning it’s a bandwagon. When building a stable platform (e.g. a language) you wait for the current fashion to crest and fall, so you can hear the other side of the story – the people who used it in anger, and discovered what the strengths and weaknesses really are. Rust unwisely didn’t do that.

                                                                                      I will note though that the weaknesses of the async-await model were apparent right from the beginning, and yet here we are. A lesson for future languages.

                                                                                      1. 28

                                                                                        This hits me particularly hard because I had experienced a lot of nearly-identical pain around async when using various flavors of Scala futures for a few years before picking up Rust in 2014. I went to the very first Rust conference, Rust Camp in 2015 at Berkeley, and described a lot of the pain points that had caused significant issues in the Scala community to several of the people directly working on the core async functionality in Rust. Over the years I’ve had lots of personal conversations with many of the people involved, hoping that sharing my experiences would somehow encourage others to avoid well-known painful paths. This overall experience has caused me to learn a lot about human psychology - especially our inability to avoid problems when there are positive social feedback loops that lead to those problems. It makes me really pessimistic about climate apocalypse and rising authoritarianism leading us to war and genocides, and the importance of taking months and years away from work to enjoy life for as long as it is possible to do so.

                                                                                        The content of ideas does not matter very much compared to the incredibly powerful drive to exist in a tribe. Later on when I read Kuhn’s Structure of Scientific Revolutions, Feyerabend’s Against Method, and Ian Hacking’s Representing and Intervening, which are ostensibly about the social aspects of science, I was blown away by how strongly their explanations of how science often moves in strange directions that may not actually cause “progress” mapped directly to the experiences I’ve had while watching Rust grow and fail to avoid obvious traps due to the naysayers being drowned out by eager participants in the social process of Making Rust.

                                                                                        1. 7

                                                                                          Reminds me of the theory that Haskell and Scala appeal because they’re a way for the programmer to nerd-snipe themselves.

                                                                                          1. 5

                                                                                            Thanks for fighting the good fight. Just say “no” to complexity.

                                                                                            Which of those three books you mentioned do you think is most worthwhile?

                                                                                            1. 10

                                                                                              I think that Kuhn’s Structure of Scientific Revolutions has the broadest appeal and I think that nearly anyone who has any interaction with open source software will find a tremendous number of connections to their own work. Science’s progressions are described in a way that applies equally to social-technical communities of all kinds. Kuhn is also the most heavily cited thinker in later books on the subject, so by reading his book, you gain deeper access to much of the content of the others, as it is often assumed that you have some familiarity with Kuhn.

                                                                                              You can more or less replace any mention of “paper citation” with “software dependency” without much loss in generality while reading Kuhn. Hacking and Feyerabend are more challenging, but I would recommend them both highly. Feyerabend is a bit more radical and critical, and Hacking zooms out a bit more and talks about a variety of viewpoints, including many perspectives on Kuhn and Feyerabend. Hacking’s writing style is really worth experiencing, even by just skimming something random by him, by anyone who writes about deep subjects. I find his writing to be enviably clear, although sometimes he leans a bit into sarcasm in a way that I’ve been put off by.

                                                                                            2. 4

                                                                                              If you don’t mind, what’s an example of async/await pain that’s common among languages and not to do with how Rust uniquely works? I ask because I’ve had a good time with async/await, but in plainer, application-level languages.

                                                                                              (Ed: thanks for the thoughtful replies)

                                                                                              1. 12

                                                                                                The classic “what color is your function” blog post describes what is, I think, such a pain? You have to choose in your API whether a function can block or not, and it doesn’t compose well.
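
                                                                                                A tiny illustrative sketch of the split (my own, assuming the futures crate just to have an executor): the sync function cannot .await, so it either becomes async itself or has to block on an executor, and that choice propagates to its callers.

                                                                                                    // "Async-colored": can only be awaited from another async context.
                                                                                                    async fn fetch_name() -> String {
                                                                                                        String::from("alice") // pretend this awaited a network call
                                                                                                    }

                                                                                                    // "Sync-colored": `.await` is not allowed here, so we block on an
                                                                                                    // executor (or turn this function, and its callers, async as well).
                                                                                                    fn greet() -> String {
                                                                                                        let name = futures::executor::block_on(fetch_name());
                                                                                                        format!("hello, {}", name)
                                                                                                    }

                                                                                                    fn main() {
                                                                                                        println!("{}", greet());
                                                                                                    }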

                                                                                                1. 3

                                                                                                  I read that one, and I took their point. All this tends to make me wonder if Swift (roughly, Rust minus borrow checker plus Apple backing) is doing the right thing by working on async/await now.

                                                                                                  But so far I don’t mind function coloring as I use it daily in TypeScript. In my experience, functions that need to be async tend to be the most major steps of work. The incoming network request is async, the API call it makes is async, and then all subsequent parsing and page rendering aren’t async, but can be if I like.

                                                                                                  Maybe, like another commenter said, whether async/await is a net positive has more to do with adapting the language to a domain that isn’t otherwise its strong suit.

                                                                                                  1. 16

                                                                                                    You might be interested in knowing that Zig has async/await but there is no function coloring problem.

                                                                                                    https://kristoff.it/blog/zig-colorblind-async-await/

                                                                                                    1. 3

                                                                                                      Indeed this is an interesting difference at least in presentation. Usually, async/await provides sugar for an existing concurrency type like Promise or Task. It doesn’t provide the concurrency in the first place. Function colors are then a tradeoff for hiding the type, letting you think about the task and read it just like plain synchronous code. You retain the option to call without await, such that colors are not totally restrictive, and sometimes you want to use the type by hand; think Promise.all([…]).

                                                                                                      Zig seems like it might provide all these same benefits by another method, but it’s hard to tell without trying it. I also can’t tell yet if the async frame type is sugared in by the call, or by the function definition. It seems like it’s a sort of generic, where the nature of the call will specialize it all the way down. If so, neat!

                                                                                                      1. 7

                                                                                                        It seems like it’s a sort of generic, where the nature of the call will specialize it all the way down. If so, neat!

                                                                                                        That’s precisely it!

                                                                                                        1. 2

                                                                                                          I’ve been poking at Zig a bit since this thread; thank you for stirring my interest. :)

                                                                                                2. 6

                                                                                                  Well, I think that async/await was a great thing for javascript, and generally it seems to work well in languages that have poor threading support. But Rust has great threading support, and Rust’s future-based strategy aimed from the beginning at copying Scala’s approach. A few loud research-oriented voices in the Rust community said “we think Scala’s approach looks great” and it drowned out the chorus of non-academic users of Scala who had spent years of dealing with frustrating compilation issues and issues where different future implementations were incompatible with each other and overall lots of tribalism that ended up repeating in a similar way in the Rust async/await timeline.

                                                                                                  1. 5

                                                                                                    I am somewhat surprised that you say Rust’s futures are modeled after Scala’s; I assume you mean the ones that ended up in the standard library. As for commonalities: they also offer combinators on top of a common futures trait, and you need explicit support in libraries - that’s pretty much all that is similar to Rust’s.

                                                                                                    In Scala, futures were annoying because of exceptions and meaningless stack traces. In Rust, you get the right stack traces and error propagation.

                                                                                                    In Rust, Futures sucked for me due to error conversions and borrowing being basically unsupported until async await. Now they are still annoying because of ecosystem split (sync vs various partially compatible async).

                                                                                                    The mentioned problem of competing libraries is basically unpreventable in fields without wide consensus and would have happened with ANY future alternative. If you get humans to agree on sensible solutions and not fight about irrelevant details, you are a wizard.

                                                                                                    Where I agree is that it was super risky to spend language complexity budget on async/await, even though solving the underlying generator/state machine problem felt like a good priority. While async/await feels a bit too special-cased and hacky to be part of the language, it could be worse. If we find a better solution for async in Rust, we won’t have to teach the current way anymore.

                                                                                                    Other solutions would just have different pros and cons. E.g. Go’s or Zig’s approach pushes the solution even deeper into the language, with the pro of setting a somewhat universal standard for the language.

                                                                                                    1. 3

                                                                                                      It was emulating Finagle from the beginning: https://medium.com/@carllerche/announcing-tokio-df6bb4ddb34 but then the decision to push so much additional complexity into the language itself so that people could have an easier time writing strictly worse systems was just baffling.

                                                                                                      Having worked in Finagle for a few years before that, I tried to encourage some of the folks to aim for something lighter weight since the subset of Finagle users who felt happy about its high complexity seemed to be the ones who went to work at Twitter where the complexity was justified, but it seemed like most of the community was pretty relieved to switch to Akka which didn’t cause so much type noise once it was available.

                                                                                                      I don’t expect humans not to fragment now, but over time I’ve learned that it’s a much more irrational process than I had maybe believed in 2014. Mostly I’ve been disappointed about being unable to communicate with what was a tiny community about something that I felt like I had a lot of experience with and could help other people avoid pain around, but nevertheless watch it bloom into a huge crappy thing that now comes back into my life every day even when I try to ignore it by just using a different feature set in my own stuff.

                                                                                                  2. 3

                                                                                                    I hope you will get a reply from someone with more Rust experience than me, but I imagine that the primary problem is that even if you don’t have to manually free memory in Rust, you still have to think about where the memory comes from, which tends to make lifetime management more complicated, occasionally requiring you to forcefully move things onto the heap (Box) and to use identity semantics (Pin). All of this adds up to a lot of additional complexity needed to bake into the application the extra dynamism that async/await enables, while still maintaining the safety assurances of the borrow checker.

                                                                                                    Normally, in higher level languages you don’t ever get to decide where the memory comes from, so this is a design dimension that you never get to explore.
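
                                                                                                    For instance, a hedged sketch of how that shows up (my own example, assuming the futures crate for the executor): when the concrete future type cannot be named, you end up choosing the heap for it (Box) and pinning it (Pin).

                                                                                                        use std::future::Future;
                                                                                                        use std::pin::Pin;

                                                                                                        // The concrete future type can't be named, so the memory placement decision
                                                                                                        // (heap, via Box) and the identity guarantee (Pin) become the caller's problem.
                                                                                                        fn make_task(x: u32) -> Pin<Box<dyn Future<Output = u32>>> {
                                                                                                            Box::pin(async move { x * 2 })
                                                                                                        }

                                                                                                        fn main() {
                                                                                                            let answer = futures::executor::block_on(make_task(21));
                                                                                                            assert_eq!(answer, 42);
                                                                                                        }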

                                                                                                  3. 2

                                                                                                    I’m curious if you stuck around in Scala or pay attention to what’s going on now because I think it has one of the best stories when it comes to managing concurrency. Zio, Cats Effect, Monix, fs2 and Akka all have different goals and trade offs but the old problem of Future is easily avoided

                                                                                                  4. 6

                                                                                                    I was surprised to see the Rust community jump on the async-await bandwagon, because it was clear from the beginning it’s a bandwagon.

                                                                                                    I’m not surprised. I don’t know how async/await works exactly, but it definitely has a clear use case. I once implemented a 3-way handshake in C. There was some crypto underneath, but the idea, from the server’s point of view, was to receive a first message, respond, then wait for the reply. Once the reply comes and is validated, the handshake is complete. (The crypto details were handled by a library.)

                                                                                                    Even that simple handshake was a pain in the butt to handle in C. Every time the server gets a new message, it needs to either spawn a new state machine, or dispatch it to an existing one. Then the state machine can do something, suspend, and wait for a new message. Note that it can go on after the handshake, as part of normal network communication.

                                                                                                    That state machine business is cumbersome and error prone, I don’t want to deal with it. The programming model I want is blocking I/O with threads. The efficiency model I want is async I/O. So having a language construct that easily lets me suspend & resume execution at will is very enticing, and I would jump to anything that gives me that —at least until I know better, which I currently don’t.
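
                                                                                                    To illustrate the shape this takes, here is a hedged sketch in Rust rather than C, with made-up stand-in types and no real crypto or sockets: each await is a suspension point that the hand-written state machine would otherwise have to encode explicitly.

                                                                                                        // Hypothetical stand-ins so the sketch compiles; a real server would
                                                                                                        // sit on a socket and a crypto library.
                                                                                                        #[allow(dead_code)]
                                                                                                        struct Connection {
                                                                                                            step: u8,
                                                                                                        }
                                                                                                        struct Session;
                                                                                                        #[derive(Debug)]
                                                                                                        struct Error;

                                                                                                        impl Connection {
                                                                                                            async fn recv(&mut self) -> Result<Vec<u8>, Error> {
                                                                                                                self.step += 1;
                                                                                                                Ok(vec![self.step]) // pretend a message arrived from the peer
                                                                                                            }
                                                                                                            async fn send(&mut self, _msg: &[u8]) -> Result<(), Error> {
                                                                                                                Ok(()) // pretend the bytes were written
                                                                                                            }
                                                                                                        }

                                                                                                        // The handshake reads top to bottom; each .await is a suspension point
                                                                                                        // instead of a separate state in a manually dispatched state machine.
                                                                                                        async fn handshake(conn: &mut Connection) -> Result<Session, Error> {
                                                                                                            let hello = conn.recv().await?;  // wait for the first message
                                                                                                            conn.send(&hello).await?;        // respond (crypto elided)
                                                                                                            let _reply = conn.recv().await?; // wait for the reply and "validate" it
                                                                                                            Ok(Session)
                                                                                                        }

                                                                                                        fn main() {
                                                                                                            let mut conn = Connection { step: 0 };
                                                                                                            // Assumes the futures crate's single-threaded executor.
                                                                                                            assert!(futures::executor::block_on(handshake(&mut conn)).is_ok());
                                                                                                        }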

                                                                                                    I’d even go further: given the performance of our machines (high latencies and high throughputs), I believe non-blocking I/O at every level is the only reasonable way forward. Not just for networking, but for disk I/O, filling graphics card buffers, everything. Language support for this is becoming as critical as generics themselves. We laughed “lol no generics” at Go, but now I do believe it is time to start laughing “lol no async I/O” as well. The problem now is to figure out how to do it. Current solutions don’t seem to be perfect (though there may be one I’m not aware of).

                                                                                                    1. 2

                                                                                                      The whole thing with async I/O is that process creation is too slow, and then thread creation was too slow, and some might even consider coroutine creation too slow [1]. It appears that concerns that formerly were of the kernel (managing I/O among tasks; scheduling tasks) are now being pushed out to userland. Is this the direction we really want to go?

                                                                                                      [1] I use coroutines in Lua to manage async I/O and I think it’s fine. It makes the code look like non-blocking, but it’s not.

                                                                                                      1. 2

                                                                                                        I don’t think it’s unreasonable to think that the kernel should have as few concerns as possible. It’s a singleton, it doesn’t run with the benefit of memory protection, and its internal APIs aren’t as stable as the ones it provides to userland.

                                                                                                        … and, yes, I think a lot of async/await work is LARPing. But that’s because a lot of benchmark-oriented development is LARPing, and probably isn’t special to async/await specifically.

                                                                                                        1. 1

                                                                                                          I’m not sure what you’re getting at. I want async I/O to avoid process creation and thread creation and context switches, and even scheduling to some extent. What I want is one thread per core, and short tasks being sent to them. No process creation, no thread creation, no context switching. Just jump the instruction pointer to the relevant task, and return to the caller when it’s done.

                                                                                                          And when the task needs to, say, read from the disk, then it should do so asynchronously: suspend execution, return to the caller, wait for the response to come back, and when it does resume execution. It can be done explicitly with message passing, but that’s excruciating. A programming model where I can kinda pretend the call is blocking (but in fact we’re yielding to the caller) is much nicer, and I believe very fast.

                                                                                                      2. 5

                                                                                                        Agreed. I always told people that async/await will be just as popular as Java’s synchronized a few years down the road. Some were surprised, some were offended, but sometimes reality is uncomfortable.

                                                                                                      3. 29

                                                                                                        Thank you for sharing. Zig has been much more conservative than Rust in terms of complexity, but we too have splurged a good chunk of the budget on async/await. Based on my experience producing introductory materials for Zig, async/await is by far the hardest thing to explain and is probably going to be my biggest challenge to tackle for 2021. (That said, it’s continuations; these things are confusing by nature.)

                                                                                                        On the upside

                                                                                                        despite it slowing down almost every real-world workload it is applied to

                                                                                                        This is hopefully not going to be a problem in our case. Part of the complexity of async/await in Zig is that a single library implementation can be used in both blocking and evented mode, so in the end it should never be the case that you can only find an async version of a client library. That assumes authors are willing to do the work, but even if they aren’t, support can be added incrementally by contributors interested in having their use case supported.

                                                                                                        1. 17

                                                                                                          I feel like Rust has definitely obliterated its complexity budget in unfortunate ways.

                                                                                                          I remember the time I asked one of the more well-known Rust proponents “so you think adding features improves a language” and he said “yes”. So it was pretty clear to me early on that Rust would join the feature death march of C++, C#, …

                                                                                                          Rust has many language features and they’re all largely disjoint from each other, so knowing some doesn’t help me guess the others.

                                                                                                          That’s so painfully true.

                                                                                                          For instance, it has different syntax for struct creation and function calls; their poor syntax choices also mean that structs/functions won’t get default values any time soon.

                                                                                                          ; is mandatory (what is this, 1980?), but you can leave out , at the end.

                                                                                                          The severe design mistake of using <> for generics also means you have to learn 4 different syntax variations, and when to use them.

                                                                                                          The whole module stuff is way too complex and only makes sense if you programmed in C before. I have basically given up on getting to know the intricacies, and just let IntelliJ handle uses.

                                                                                                          Super weird that both if and switch exist.

                                                                                                          Most of my friends who I’ve met through Rust have taken steps to cut interactions with “the Rust community” down to an absolute minimum due to it tending to produce a feeling of alienation over time.

                                                                                                          Yes, that’s my experience too. I have some (rather popular) projects on GitHub that I archive from time to time to avoid having to deal with Rust people. There are some incredibly toxic ones, which seem to be – for whatever reason – close to some “core” Rust people, so they can do whatever the hell they like.

                                                                                                          1. 6

                                                                                                            For instance, it has different syntax for struct creation and function calls

                                                                                                            Perhaps they are trying to avoid the C++ thing where you can’t tell whether foo(bar) is struct creation or a function call without knowing what foo is?

                                                                                                            The whole module stuff is way too complex and only makes sense if you programmed in C before. I have basically given up on getting to know the intricacies, and just let IntelliJ handle uses.

                                                                                                            It only makes sense to someone who has programmed in C++. C’s “module” system is far simpler and easier to grok.

                                                                                                            Super weird that both if and switch exist.

                                                                                                            Would you have preferred

                                                                                                            match condition() {
                                                                                                                true => {
                                                                                                                
                                                                                                                },
                                                                                                                false => {
                                                                                                            
                                                                                                                },
                                                                                                            }
                                                                                                            

                                                                                                            I think that syntax is clunky when you start needing else if.

                                                                                                            1. 1

                                                                                                              Perhaps they are trying to avoid the C++ thing where you can’t tell whether foo(bar) is struct creation or a function call without knowing what foo is?

                                                                                                              Why wouldn’t you be able to tell?

                                                                                                              Even if that was the issue (it isn’t), that’s not the problem C++ has – it’s that foo also could be 3 dozen other things.

                                                                                                              Would you have preferred […]

                                                                                                              No, I prefer having one unified construct that can deal with both usecases reasonably well.

                                                                                                              1. 2

                                                                                                                Why wouldn’t you be able to tell?

                                                                                                                struct M { };
                                                                                                                void L(M m);
                                                                                                                
                                                                                                                void f() {
                                                                                                                    M(m); // e.g. M m;
                                                                                                                    L(m); // function call
                                                                                                                }
                                                                                                                

                                                                                                                The only way to tell what is going on is if you already know the types of all the symbols.

                                                                                                                No, I prefer having one unified construct that can deal with both usecases reasonably well.

                                                                                                                Ok, do you have an example from another language which you think handles this reasonably well?

                                                                                                                1. 2

                                                                                                                  The only way to tell what is going on is if you already know the types of all the symbols.

                                                                                                                  Let the IDE color things accordingly. Solved problem.

                                                                                                                  Ok, do you have an example from another language which you think handles this reasonably well?

                                                                                                                  I’m currently in the process of implementing it, but I think this is a good intro to my plans.

                                                                                                                  1. 1

                                                                                                                    Let the IDE color things accordingly. Solved problem.

                                                                                                                    The problem of course is for the writer of the IDE :)

                                                                                                                    Constructs like these in C++ make it not only harder for humans to parse the code, but for compilers as well. This turns into real-world performance decreases which are avoided in other languages.

                                                                                                                    I’m currently in the process of implementing it, but I think this is a good intro to my plans.

                                                                                                                    That’s interesting, but I think there’s a conflict with Rust’s goal of being a systems-level programming language. Part of that is having primitives which map reasonably well onto things that the compiler can translate into machine code. Part of the reason that languages like C have both if and switch is because switch statements of the correct form may be translated into an indirect jump instead of repeated branches. Of course, a Sufficiently Smart Compiler could optimize this even in the if case, but it is very easy to write code which is not optimizable in such a way. I think there is value to both humans and computers in having separate constructs for arbitrary conditionals and for equality. It helps separate intent and provides some good optimization hints.

                                                                                                                    Another reason why this exists is for exhaustiveness checks. Languages with switch can check that you handle all cases of an enum.
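
                                                                                                                    For instance, a small illustrative sketch of that exhaustiveness check:

                                                                                                                        #[allow(dead_code)]
                                                                                                                        enum State {
                                                                                                                            Idle,
                                                                                                                            Running,
                                                                                                                            Done,
                                                                                                                        }

                                                                                                                        fn describe(s: State) -> &'static str {
                                                                                                                            // `match` must cover every variant; deleting an arm (or adding a new
                                                                                                                            // variant elsewhere) is a compile-time error, not a silent fall-through.
                                                                                                                            match s {
                                                                                                                                State::Idle => "idle",
                                                                                                                                State::Running => "running",
                                                                                                                                State::Done => "done",
                                                                                                                            }
                                                                                                                        }

                                                                                                                        fn main() {
                                                                                                                            println!("{}", describe(State::Running));
                                                                                                                        }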

                                                                                                                    The other half of this is that Rust is the bastard child of ML and C++. ML and C++ both have match/switch, so Rust has one too.


                                                                                                                    I think you will have a lot of trouble producing good error messages with such a syntax. For example, say someone forgets an = or even both ==s. If your language does false-y and truth-y coercion, then there may be no error at all here. And to the parser, it is not clear at all where the error is. Further, this sort of extension cannot be generalized to one-liners. That is, you cannot unambiguously parse if a == b then c == d then e without line-breaks.

                                                                                                                    On the subject, in terms of prior-art, verilog allows expressions in its case labels. This allows for some similar syntax constructions (though more limited since functions are not as useful as in regular programming languages).

                                                                                                            2. 3

                                                                                                              For instance, it has different syntax for struct creation and function calls; their poor syntax choices also mean that structs/functions won’t get default values any time soon.

                                                                                                              This is a good thing. Creating a struct is a meaningfully different operation from calling a function, and there’s no problem with having there be separate syntax for these two separate things.

                                                                                                              The Rust standard library provides a Default trait, with examples of how to use it and customize it. I don’t find it at all difficult to work with structs with default values in Rust.
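
                                                                                                              For instance, a small sketch of the usual pattern (implement or derive Default, then use struct update syntax to override only the fields you care about):

                                                                                                                  #[derive(Debug)]
                                                                                                                  struct Settings {
                                                                                                                      retries: u32,
                                                                                                                      verbose: bool,
                                                                                                                      name: String,
                                                                                                                  }

                                                                                                                  impl Default for Settings {
                                                                                                                      fn default() -> Self {
                                                                                                                          Settings {
                                                                                                                              retries: 3,
                                                                                                                              verbose: false,
                                                                                                                              name: String::from("app"),
                                                                                                                          }
                                                                                                                      }
                                                                                                                  }

                                                                                                                  fn main() {
                                                                                                                      // Override one field; `..Default::default()` fills in the rest.
                                                                                                                      let s = Settings {
                                                                                                                          verbose: true,
                                                                                                                          ..Default::default()
                                                                                                                      };
                                                                                                                      println!("{:?}", s);
                                                                                                                  }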

                                                                                                              The whole module stuff is way too complex and only makes sense if you programmed in C before. I have basically given up on getting to know the intricacies, and just let IntelliJ handle uses.

                                                                                                              I don’t understand this comment at all. Rust’s module system seems fairly similar to module systems in some other languages I’ve used, although I’m having trouble thinking of other languages that allow you to create a module hierarchy within a single file, like you can do with Rust’s mod keyword (C++ allows nested namespaces, I think, but that’s it). I don’t see how knowing C has anything to do with understanding Rust modules better. C has no module system at all.
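
                                                                                                              For what it’s worth, a tiny sketch of such a hierarchy declared in a single file:

                                                                                                                  mod outer {
                                                                                                                      pub mod inner {
                                                                                                                          pub fn hello() -> &'static str {
                                                                                                                              "hello from outer::inner"
                                                                                                                          }
                                                                                                                      }
                                                                                                                  }

                                                                                                                  fn main() {
                                                                                                                      println!("{}", outer::inner::hello());
                                                                                                                  }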

                                                                                                              1. 2

                                                                                                                I’m having trouble thinking of other languages that allow you to create a module hierarchy within a single file

                                                                                                                Lua can do this, although it’s not common.

                                                                                                                1. 1

                                                                                                                  This is a good thing.

                                                                                                                  I guess that’s why many Rust devs – immediately after writing a struct – also define a fn to wrap their struct creation? :-)

                                                                                                                  Creating a struct is a meaningfully different operation from calling a function

                                                                                                                  It really isn’t.

                                                                                                                  The Rust standard library provides a Default trait, with examples of how to use it and customize it. I don’t find it at all difficult to work with structs with default values in Rust.

                                                                                                                  That’s clearly not what I alluded to.

                                                                                                                  I don’t see how knowing C has anything to do with understand Rust modules better. C has no module system at all.

                                                                                                                  Rust’s module system only makes sense if you keep in mind that its main goal is to produce one big ball of compiled code in the end. In that sense, Rust’s module system is a round-about way to describe which parts of the code end up being part of that big ball.

                                                                                                                  1. 3

                                                                                                                    Putting an OCaml hat on:

                                                                                                                    • Struct creation and function calls are quite different. In particular it’s good to have structure syntax that can be mirrored in pattern matching, whereas function call has no equivalent in match.
                                                                                                                    • Multiple modules in one file is also possible in ML/OCaml. Maybe in some Wirth language, though I’m not sure on that one.

                                                                                                                    its main goal is to produce one big ball of compiled code in the end.

                                                                                                                    What other goal would there be? That’s what 100% of compiled languages aim at… Comparing rust to C which has 0 notion of module is just weird.

                                                                                                                    1. 1

                                                                                                                      Struct creation and function calls are quite different. In particular it’s good to have structure syntax that can be mirrored in pattern matching, whereas function call has no equivalent in match.

                                                                                                                      In what sense would this be an obstacle? I would expect that a modern language lets you match on anything that provides the required method/has the right signature. “This is a struct, so you can match on it” feels rather antiquated.

                                                                                                                      What other goal would there be? That’s what 100% of compiled languages aim at… Comparing rust to C which has 0 notion of module is just weird.

                                                                                                                      It feels like it was built by someone who never used anything but C in his life, and then went “wouldn’t it be nice if it was clearer than in C which parts of the code contribute to the result?”.

                                                                                                                      The whole aliasing, reexporting etc. functionality feels like it exists as a replacement for some convenience C macros, and not something one actually would want. I prefer that there is a direct relationship between placing a file somewhere and it ending up in a specific place, without having to wire up everything again with the module system.

                                                                                                                      1. 1

                                                                                                                        There is documented inspiration from OCaml by Rust’s original creator. The first compiler was even written in OCaml, and a lot of names stuck (like Some/None rather than the Haskell Just/Nothing). It also has obvious C++ influences, notably the namespace syntax being :: and <> for generics. The module system most closely reminds me of a mix of OCaml and… Python, with special file names (mod.rs, like __init__.py or something like that?), even though it’s much, much simpler than OCaml’s. Again, not just “auto-wiring” files in is a net benefit (another lesson from OCaml, I’d guess, where the build system has to clarify what’s in or out of a specific library). It makes the build more declarative.

                                                                                                                        As for the matching: Rust doesn’t have active patterns or Scala-style deconstruction. In this context (matching against values) you can pre-compile pattern matching very efficiently to decision trees and constant-time access to fields by offset. This would be harder to do efficiently with “just call this deconstruct method”. This is more speculation on my side, but it squares with Rust’s efficiency concerns.

                                                                                                                        1. 1

                                                                                                                          I see your point, but in that case Rust would need to disallow match guards too (because what else are guards, but less reusable unapply methods?).
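
                                                                                                                           For reference, a guard is just an arbitrary boolean expression bolted onto a pattern; a minimal example of my own:

                                                                                                                             fn classify(n: i32) -> &'static str {
                                                                                                                                 match n {
                                                                                                                                     // The guard runs arbitrary code during matching, like an unapply would.
                                                                                                                                     x if x < 0 => "negative",
                                                                                                                                     0 => "zero",
                                                                                                                                     _ => "positive",
                                                                                                                                 }
                                                                                                                             }

                                                                                                                             fn main() {
                                                                                                                                 println!("{}", classify(-5)); // negative
                                                                                                                             }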

                                                                                                                      2. 1

                                                                                                                        Comparing rust to C which has 0 notion of module is just weird.

                                                                                                                        Well there are translation units :) (though you can only import using the linker)

                                                                                                                    2. 1

                                                                                                                      I’m having trouble thinking of other languages that allow you to create a module hierarchy within a single file,

                                                                                                                      Perl can do this.

                                                                                                                      1. 3

                                                                                                                        Elixir also allows you to create a module hierarchy within a single file.

                                                                                                                        1. 2

                                                                                                                          And Julia. Maybe this isn’t so rare.

                                                                                                                    3. 1

                                                                                                                      ; is mandatory (what is this, 1980?), but you can leave out , at the end.

                                                                                                                      Ugh this one gets me every time. Why Rust, why.

                                                                                                                      1. 2

                                                                                                                        Same in Zig? Curious to know Zig rationale for this.

                                                                                                                        1. 10

                                                                                                                          In almost all languages with mandatory semicolons, they exist to prevent multi-line syntax ambiguities. The designers of Go and Lua both went to great pains to avoid such problems in their language grammars. Unlike, for example, JavaScript. This article about semicolon insertion rules causing ambiguity and unexpected results should help illustrate some of these problems.
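
                                                                                                                           For a Rust-flavoured illustration of the kind of ambiguity meant here (my own example, not from the linked article): without a statement terminator, a leading - or ( on the next line could either continue the previous expression or start a new one.

                                                                                                                             struct Counter(i32);

                                                                                                                             impl Counter {
                                                                                                                                 fn value(&self) -> i32 {
                                                                                                                                     self.0
                                                                                                                                 }
                                                                                                                             }

                                                                                                                             fn main() {
                                                                                                                                 let b = 10;
                                                                                                                                 let c = Counter(3);

                                                                                                                                 // With mandatory semicolons there is exactly one reading:
                                                                                                                                 let a = b          // the `let` continues onto the next line...
                                                                                                                                     - c.value();   // ...and the `;` is what ends it here.
                                                                                                                                 println!("{a}");   // prints 7

                                                                                                                                 // Without the `;`, a grammar would have to decide whether `- c.value()`
                                                                                                                                 // continues the `let` above or starts a new unary-minus expression
                                                                                                                                 // statement. That is the multi-line ambiguity being described.
                                                                                                                             }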

                                                                                                                          1. 3

                                                                                                                            Pointing out Javascript isn’t a valid excuse.

                                                                                                                            Javascript’s problems are solely Javascript’s. If we discarded every concept that was implemented poorly in Javascript, we wouldn’t have many concepts left to program with.

                                                                                                                            I want semicolon inference done right, simple as that.

                                                                                                                            1. 4

                                                                                                                               That’s not what I’m saying. JavaScript is merely an easy example of some syntax problems that can occur. I simply assume that Rust, which has many more features than Go or Lua, decided not to try to maintain an unambiguous grammar without semicolons.

                                                                                                                              1. 2

                                                                                                                                Why would the grammar be ambiguous? Are you sure that you don’t keep arguing from a JavaScript POV?

                                                                                                                                Not needing ; doesn’t mean the grammar is ambiguous.

                                                                                                                                1. 4

                                                                                                                                  ~_~

                                                                                                                                  Semicolons are an easy way to eliminate grammar ambiguity for multi-line syntax. For any language. C++ for example would have numerous similar problems without semicolons.

                                                                                                                                  Not needing ; doesn’t mean the grammar is ambiguous.

                                                                                                                                  Of course. Go and Lua are examples of languages designed specifically to avoid ambiguity without semicolons. JavaScript, C++, and Rust were not designed that way. JavaScript happens to be an easy way to illustrate possible problems because it has janky automatic semicolon insertion, whereas C++ and Rust do not.

                                                                                                                                  1. 0

                                                                                                                                    I’m completely unsure what you are trying to argue – it doesn’t make much sense. Has your triple negation above perhaps confused you a bit?

                                                                                                                                    The main point is that a language created after 2000 simply shouldn’t need ;.

                                                                                                                                    1. 5

                                                                                                                                      ; is mandatory (what is this, 1980?), but you can leave out , at the end.

                                                                                                                                      Same in Zig? Curious to know Zig rationale for this.

                                                                                                                                      The rationale for semicolons. They make parsing simpler, particularly for multi-line syntax constructs. I have been extremely clear about this the entire time. I have rephrased my thesis multiple times:

                                                                                                                                      In almost all languages with mandatory semicolons, they exist to prevent multi-line syntax ambiguities.

                                                                                                                                      Semicolons are an easy way to eliminate grammar ambiguity for multi-line syntax.

                                                                                                                                      Many underestimate the difficulty of creating a language without semicolons. Go has done so with substantial effort, and maintaining that property has by no means been effortless for them when adding new syntax to the language.

                                                                                                                                      1. 0

                                                                                                                                        Yeah, you know, maybe we should stop building languages that are so complex that they need explicitly inserted tokens to mark “previous thing ends here”? That’s the point I’m making.

                                                                                                                                        when adding new syntax to the language

                                                                                                                                        Cry me a river. Adding features does not improve a language.

                                                                                                                                        1. 1

                                                                                                                                          Having a clear syntax where errors don’t occur 15 lines below the missing ) or } (as would unavoidably happen without some separator — trust me, it’s one of OCaml’s big syntax problems for toplevel statements) is a net plus and not bloat.

                                                                                                                                          What language has no semicolon (or another separator, or parenthesis, like lisp) and still has a simple syntax? Even python has ; for same-line statements. Using vertical whitespace as a heuristic for automatic insertion isn’t a win in my book.

                                                                                                                                          1. 2

                                                                                                                                             Both Kotlin and Swift have managed to make a working, unambiguous C-like syntax without semicolons.

                                                                                                                                            1. 2

                                                                                                                                              I didn’t know. That involves no whitespace/lexer trick at all? I mean, if you flatten a whole file into one line, does it still work? Is it still in LALR(1)/LR(1)/some nice fragment?

                                                                                                                                              The typical problem in this kind of grammar is that, while binding constructs are easy to delimit (var/val/let…), pure sequencing is not. If you have a = 1 b = 2 + 3 c = 4 d = f(a) semicolons make things just simpler for the parser.

                                                                                                                                              1. 1

                                                                                                                                                Why are line breaks not allowed to be significant? I don’t think I care if I can write an arbitrarily long program on one line…

                                                                                                                                            2. 0

                                                                                                                                              Using vertical whitespace as a heuristic for automatic insertion isn’t a win in my book.

                                                                                                                                              I agree completely. I love Lua in particular. You can have zero newlines yet it requires no semicolons, due to its extreme simplicity. Lua has only one ambiguous case: when a line begins with a ( and the previous line ends with a value.

                                                                                                                                              a = b
                                                                                                                                              (f or g)() -- call f, or g when f is nil
                                                                                                                                              

                                                                                                                                              Since Lua has no semantic newlines, this is exactly equivalent to:

                                                                                                                                              a = b(f or g)()
                                                                                                                                              

                                                                                                                                              The Lua manual thus recommends inserting a ; before any line starting with (.

                                                                                                                                              a = b
                                                                                                                                              ;(f or g)()
                                                                                                                                              

                                                                                                                                              But I have never needed to do this. And if I did, I would probably write this instead:

                                                                                                                                              a = b
                                                                                                                                              local helpful_explanatory_name = f or g
                                                                                                                                              helpful_explanatory_name()
                                                                                                                                              
                                                                                                                            2. 3

                                                                                                                              Also curious, as well as why Zig uses parentheses in ifs etc. I know what I’ll say is lame, but those two things frustrate me when looking at Zig’s code. If I could learn the rationale, it might hopefully at least make those a bit easier for me to accept and get over.

                                                                                                                              1. 3

                                                                                                                                 One reason for this choice is to remove the need for a ternary operator without greatly harming ergonomics. Having the parentheses means that the blocks may be made optional, which allows, for example:

                                                                                                                                const foo = if (bar) a else b;
                                                                                                                                
                                                                                                                                1. 9

                                                                                                                                   There’s a blog post by Graydon Hoare that I can’t find at the moment, where he enumerates features of Rust he thinks are clear improvements over C/C++ that have nothing to do with the borrow checker. Forcing if statements to always use braces is one of the items on his list, which I completely agree with. It’s annoying that in C/C++, if you want to add an additional line to a block of a brace-less if statement, you have to remember to go back and add the braces; and there have been major security vulnerabilities caused by people forgetting to do this.

                                                                                                                                  1. 6
                                                                                                                                  2. 6

                                                                                                                                    The following would work just as well:

                                                                                                                                    const foo = if bar { a } else { b };
                                                                                                                                    

                                                                                                                                     I’ve written an expression-oriented language, where the parentheses were optional, and the braces mandatory. I could use the exact same syntactic construct in regular code and in the ternary operator situation.

                                                                                                                                    Another solution is inserting another keyword between the condition and the first branch, as many ML languages do:

                                                                                                                                    const foo = if bar then a else b;
                                                                                                                                    
                                                                                                                                    1. 2

                                                                                                                                      I don’t get how that’s worth making everything else ugly. I imagine there’s some larger reason. The parens on ifs really do feel terrible after using go and rust for so long.

                                                                                                                                      1. 1

                                                                                                                                        For what values of a, b, c would this be ambiguous?

                                                                                                                                        const x = if a b else c
                                                                                                                                        

                                                                                                                                        I guess it looks a little ugly?

                                                                                                                                        1. 5

                                                                                                                                          If b is actually a parenthesised expression like (2+2), then the whole thing looks like a function call:

                                                                                                                                          const x = if a (2+2) else c
                                                                                                                                          

                                                                                                                                          Parsing is no longer enough, you need to notice that a is not a function. Lua has a similar problem with optional semicolon, and chose to interpret such situations as function calls. (Basically, a Lua instruction stops as soon as not doing so would cause a parse error).

                                                                                                                                          Your syntax would make sense in a world of optional semicolons, with a parser (and programmers) ready to handle this ambiguity. With mandatory semicolons however, I would tend to have mandatory curly braces as well:

                                                                                                                                          const x = if a { b } else { c };
                                                                                                                                          
                                                                                                                                          1. 4

                                                                                                                                            Ah, Julia gets around this by banning whitespace between the function name and the opening parenthesis, but I know some people would miss that extra spacing.

                                                                                                                                          2. 3
                                                                                                                                            abs() { x = if a < 0 - a else a }
                                                                                                                                            
                                                                                                                                            1. 1

                                                                                                                                              Thanks for the example!

                                                                                                                                              I think this is another case where banning bad whitespace makes this unambiguous.

                                                                                                                                              a - b => binary
                                                                                                                                              -a => unary
                                                                                                                                              a-b => binary
                                                                                                                                              a -b => error
                                                                                                                                              a- b => error
                                                                                                                                              - a => error
                                                                                                                                              

                                                                                                                                              You can summarise these rules as “infix operators must have balanced whitespace” and “unary operators must not be followed by whitespace”.

                                                                                                                                              Following these rules, your expression is unambiguously a syntax error, but if you remove the whitespace between - and a it works.

                                                                                                                                              1. 1

                                                                                                                                                Or you simply ban unary operators.

                                                                                                                                                1. 1

                                                                                                                                                  Sure, seems a bit drastic, tho. I like unary logical not, and negation is useful sometimes too.

                                                                                                                                                  1. 1

                                                                                                                                                    Not sure how some cryptic operator without working jump-to-declaration is better than some bog-standard method …

                                                                                                                                                    1. 1

                                                                                                                                                       A minus sign before a number is probably recognizable as indicating a negative number to most people in my country. I imagine most would recognise -x as “negative x”, too. Generalising that to other identifiers is not difficult.

                                                                                                                                                      An exclamation mark for boolean negation is less well known, but it’s not very difficult to learn. I don’t see why jump-to should fail if you’re using a language server, either.

                                                                                                                                                      More generally, people have been using specialist notations for centuries. Some mathematicians get a lot of credit for simply inventing a new way to write an older concept. Maybe we’d be better off with only named function calls, maybe our existing notations are made obsolete by auto-complete, but I am not convinced.

                                                                                                                                2. 9

                                                                                                                                  My current feeling is that async/await is the worst way to express concurrency … except for all the other ways.

                                                                                                                                  I have only minor experience with it (in Nim), but a good amount of experience with concurrency. Doing it with explicit threads sends you into a world of pain with mutexes everywhere and deadlocks and race conditions aplenty. For my current C++ project I built an Actor library atop thread pools (or dispatch queues), which works pretty well except that all calls to other actors are one-way so you now need callbacks, which become painful. I’m looking forward to C++ coroutines.

                                                                                                                                  1. 3

                                                                                                                                    except for all the other ways

                                                                                                                                     I think people are complaining about the current trend to just always use async for everything, which ends up sounding like a complaint about rust having async at all.

                                                                                                                                  2. 8

                                                                                                                                     This is amazing. I had similar feelings (looking previously at JS/Scala futures) when the plans for async/await were floating around but decided to suspend my disbelief because of how good previous design decisions in the language were. Do you think there’s some other approach to concurrency fit for a runtime-less language that would have worked better?

                                                                                                                                    1. 17

                                                                                                                                       My belief is generally that threads as they exist today (not as they existed in 2001 when the C10K problem was written, which nevertheless keeps existing as zombie perf canon that no longer refers to living characteristics) are the nicest choice for the vast majority of use cases, and that Rust-style executor-backed tasks are inappropriate even in the rare cases where M:N pays off in languages like Go or Erlang (pretty much just a small subset of latency-bound load balancers that don’t perform very much CPU work per socket). When you start caring about millions of concurrent tasks, having all of the sources of accidental implicit state and interactions of async tasks is a massive liability.

                                                                                                                                       I think the Ada Ravenscar profile (see chapter 2 for “motivation” which starts at pdf page 7 / marked page 3) and its successful application to safety-critical hard real-time systems is worth looking at for inspiration. It can be broken down to this set of specific features if you want to dig deeper. Ada has a runtime but I’m kind of ignoring that part of your question since it is suitable for hard real-time. In some ways it reminds me of an attempt to get the program to look like a pretty simple Petri net.

                                                                                                                                      I think that message passing and STM are not utilized enough, and when used judiciously they can reduce a lot of risk in concurrent systems. STM can additionally be made wait-free and thus suitable for use in some hard real-time systems.

                                                                                                                                      I think that Send and Sync are amazing primitives, and I only wish I could prove more properties at compile time. The research on session types is cool to look at, and you can get a lot of inspiration about how to encode various interactions safely in the type system from the papers coming out around this. But it can get cumbersome and thus create more risks to the overall engineering effort than it solves if you’re not careful.

                                                                                                                                       A lot of the hard parts of concurrency become a bit easier when we’re able to establish maximum bounds on how concurrent we’re going to be. Threads have a little bit more of a forcing function to keep this complexity minimized, because spawning is fallible, often due to under-configured system thread limits. Having fixed concurrency avoids many sources of bugs and performance issues, and enables a lot of relatively unexplored wait-free algorithmic design space that gets bounded worst-case performance (while still usually being able to attempt a lock-free fast path and only falling back to wait-free when contention picks up). Structured concurrency often leans into this for getting more determinism, and I think this is an area with a lot of great techniques for containing risk.

                                                                                                                                      In the end we just have code and data and risk. It’s best to have a language with forcing functions that pressure us to minimize all of these over time. Languages that let you forget about accruing data and code and risk tend to keep people very busy over time. Friction in some places can be a good thing if it encourages less code, less data, and less risk.
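
                                                                                                                                       As a rough sketch of the “fixed concurrency plus message passing” style described above (my own example using only std, not something from this comment): a bounded set of worker threads is spawned up front and fed work over a channel, so the amount of concurrency never grows with the number of tasks.

                                                                                                                                         use std::sync::{mpsc, Arc, Mutex};
                                                                                                                                         use std::thread;

                                                                                                                                         fn main() {
                                                                                                                                             let (tx, rx) = mpsc::channel::<u64>();
                                                                                                                                             // A Receiver can’t be shared directly, so wrap it for the workers.
                                                                                                                                             let rx = Arc::new(Mutex::new(rx));

                                                                                                                                             // Concurrency is fixed here, instead of growing with the workload.
                                                                                                                                             let workers: Vec<_> = (0..4)
                                                                                                                                                 .map(|id| {
                                                                                                                                                     let rx = Arc::clone(&rx);
                                                                                                                                                     thread::spawn(move || loop {
                                                                                                                                                         // Hold the lock only while taking the next job off the queue.
                                                                                                                                                         let job = rx.lock().unwrap().recv();
                                                                                                                                                         match job {
                                                                                                                                                             Ok(n) => println!("worker {id} handled job {n}"),
                                                                                                                                                             Err(_) => break, // channel closed: no more work
                                                                                                                                                         }
                                                                                                                                                     })
                                                                                                                                                 })
                                                                                                                                                 .collect();

                                                                                                                                             for n in 0..20 {
                                                                                                                                                 tx.send(n).unwrap();
                                                                                                                                             }
                                                                                                                                             drop(tx); // close the channel so the workers shut down

                                                                                                                                             for w in workers {
                                                                                                                                                 w.join().unwrap();
                                                                                                                                             }
                                                                                                                                         }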

                                                                                                                                      1. 17

                                                                                                                                        I like rust and I like threads, and do indeed regret that most libraries have been switching to async-only. It’s a lot more complex and almost a new sub-language to learn.

                                                                                                                                        That being said, I don’t see a better technical solution for rust (i.e. no mandatory runtime, no implicit allocations, no compromise on performance) for people who want to manage millions of connections. Sadly a lot of language design is driven by the use case of giant internet companies in the cloud and that’s a problem they have; not sure why anyone else cares. But if you want to do that, threads start getting in the way at 10k threads-ish? Maybe 100k if you tune linux well, but even then the memory overhead and latency are not insignificant, whereas a future can be very tiny.

                                                                                                                                         Ada’s tasks seem awesome but to the best of my knowledge they’re for very limited concurrency (i.e. the number of tasks is small, or even fixed beforehand), so it’s not a solution to this particular problem.

                                                                                                                                        Of course async/await in other languages with runtimes is just a bad choice. Python in particular could have gone with “goroutines” (for lack of a better word) like stackless python already had, and avoid a lot of complexity. (How do people still say python is simple?!). At least java’s Loom project is heading in the right direction.

                                                                                                                                        1. 12

                                                                                                                                          Just like some teenagers enjoy making their slow cars super loud to emulate people who they look up to who drive fast cars, we all make similar aesthetic statements when we program. I think I may write on the internet in a way that attempts to emulate a grumpy grey-beard for similarly aesthetic socially motivated reasons. The actual effect of a program or its maintenance is only a part of our expression while coding. Without thinking about it, we also code as an expression of our social status among other coders. I find myself testing random things with quickcheck, even if they don’t actually matter for anything, because I think of myself as the kind of person who tests more than others. Maybe it’s kind of chicken-and-egg, but I think maybe we all do these things as statements of values - even to ourselves even when nobody else is looking.

                                                                                                                                          Sometimes these costumes tend to work out in terms of the effects they grant us. But the overhead of Rust executors is just perf theater that comes with nasty correctness hazards, and it’s not a good choice beyond prototyping if you’re actually trying to build a system that handles millions of concurrent in-flight bits of work. It locks you into a bunch of default decisions around QoS, low level epoll behavior etc… that will always be suboptimal unless you rewrite a big chunk of the stack yourself, and at that point, the abstraction has lost its value and just adds cycles and cognitive complexity on top of the stack that you’ve already fully tweaked.

                                                                                                                                          1. 3

                                                                                                                                            The green process abstraction seems to work well enough in Erlang to serve tens of thousands of concurrent connections. Why do you think the async/await abstraction won’t work for Rust? (I understand they are very different solutions to a similar problem.)

                                                                                                                                            1. 4

                                                                                                                                               Not who you’re asking, but the reason why rust can’t have green threads (as it used to have pre-1.0, and it was scrapped), as far as I understand:

                                                                                                                                              Rust is shooting for C or C++-like levels of performance, with the ability to go pretty close to the metal (or close to whatever C does). This adds some constraints, such as the necessity to support some calling conventions (esp. for C interop), and precludes the use of a GC. I’m also pretty sure the overhead of the probes inserted in Erlang’s bytecode to check for reduction counts in recursive calls would contradict that (in rust they’d also have to be in loops, btw); afaik that’s how Erlang implements its preemptive scheduling of processes. I think Go has split stacks (so that each goroutine takes less stack space) and some probes for preemption, but the costs are real and in particular the C FFI is slower as a result. (saying that as a total non-expert on the topic).

                                                                                                                                              I don’t see why async/await wouldn’t work… since it does; the biggest issues are additional complexity (a very real problem), fragmentation (the ecosystem hasn’t converged yet on a common event loop), and the lack of real preemption which can sometimes cause unfairness. I think Tokio hit some problems on the unfairness side.

                                                                                                                                              1. 4

                                                                                                                                                 The biggest problem with green threads is literally C interop. If you have tiny call stacks, then whenever you call into C you have to make sure there’s enough stack space for it, because the C code you’re calling into doesn’t know how to grow your tiny stack. If you do a lot of C FFI, then you either lose the ability to use small stacks in practice (because every “green” thread winds up making an FFI call and growing its stack) or you implement some complex “stack switching” machinery (where you have a dedicated FFI stack that’s shared between multiple green threads).

                                                                                                                                                Stack probes themselves aren’t that big of a deal. Rust already inserts them sometimes anyway, to avoid stack smashing attacks.

                                                                                                                                                In both cases, you don’t really have zero-overhead C FFI any more, and Rust really wants zero-overhead FFI.

                                                                                                                                                I think Go has split stacks (so that each goroutine takes less stack space)

                                                                                                                                                No they don’t any more. Split Stacks have some really annoying performance cliffs. They instead use movable stacks: when they run out of stack space, they copy it to a larger allocation, a lot like how Vec works, with all the nice “amortized linear” performance patterns that result.

                                                                                                                                              2. 3

                                                                                                                                                Two huge differences:

                                                                                                                                                • Erlang’s data structures are immutable (and it has much slower single threaded speed).
                                                                                                                                                • Erlang doesn’t have threads like Rust does.

                                                                                                                                                That changes everything with regard to concurrency, so you can’t really compare the two. A comparison to Python makes more sense, and Python async has many of the same problems (mutable state, and the need to compose with code and libraries written with other concurrency models)

                                                                                                                                          2. 4

                                                                                                                                            I’d like to see a good STM implementation in a library in Rust.

                                                                                                                                        2. 6

                                                                                                                                          The fixation on async-await, despite it slowing down almost every real-world workload it is applied to, and despite it adding additional bug classes and compiler errors that simply don’t exist unless you start using it, has been particularly detrimental to the ecosystem.

                                                                                                                                           I’m curious about this perspective. The number of individual threads available on most commodity machines even today is quite low, and if you’re doing anything involving external requests on an incoming-request basis (serializing external APIs, rewriting HTML served by another site, reading from slow disk, etc.) and these external requests take anything longer than a few milliseconds (which is mostly anything assuming you have a commodity connection in most parts of the world, or on slower disks), then you are better off with some form of “async” (or some other lightweight concurrent model of execution). I understand that badly-used synchronization can absolutely tank performance with this many “tasks”, but in situations where synchronization is low (e.g. making remote calls, storing state in a db or separate in-memory cache), performance should be better than threaded execution.

                                                                                                                                          Also, if I reach for Rust I’m deliberately avoiding GC. Go, Python, and Haskell are the languages I tend to reach for if I just want to write code and not think too hard about who owns which portion of data or how exactly the runtime schedules my code. With Rust I’m in it specifically to think about these details and think hard about them. That means I’m more prone to write complicated solutions in Rust, because I wouldn’t reach for Rust if I wanted to write something “simple and obvious”. I suspect a lot of other Rust authors are the same.

                                                                                                                                          1. 5

                                                                                                                                            The number of individual threads available on most commodity machines even today is quite low

                                                                                                                                             I don’t agree with the premise here. It depends more on the kernel than on the “machine”, and Linux in particular has very good threading performance. You can have 10,000 simultaneous threads on vanilla Linux on a vanilla machine. async may be better for certain specific problems, but that’s not the claim.

                                                                                                                                            Also a pure async model doesn’t let you use all your cores, whereas a pure threading model does. If you really care about performance and utilization, your system will need threads or process level concurrency in some form.
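
                                                                                                                                             As a rough way to check that claim yourself (my own sketch, and it may need raised thread/ulimit settings on some setups): spawn 10,000 OS threads that each just sleep, which is roughly the shape of an IO-bound workload.

                                                                                                                                               use std::thread;
                                                                                                                                               use std::time::Duration;

                                                                                                                                               fn main() {
                                                                                                                                                   // 10,000 mostly idle threads, standing in for 10,000 blocked IO waits.
                                                                                                                                                   let handles: Vec<_> = (0..10_000)
                                                                                                                                                       .map(|_| thread::spawn(|| thread::sleep(Duration::from_secs(1))))
                                                                                                                                                       .collect();

                                                                                                                                                   for h in handles {
                                                                                                                                                       h.join().unwrap();
                                                                                                                                                   }
                                                                                                                                                   println!("all threads finished");
                                                                                                                                               }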

                                                                                                                                            1. 4

                                                                                                                                              I don’t agree with the premise here. It depends more on the kernel, not the “machine”, and Linux in particular has very good threading performance. You can have 10,000 simultaneous threads on vanilla Linux on a vanilla machine. async may be better for certain specific problems, but that’s not the claim.

                                                                                                                                              I wasn’t rigorous enough in my reply, apologies.

                                                                                                                                              What I meant to say was, the number of cores available on a commodity machine is quite low. Even if you spawn thousands of threads, your actual thread-level parallelism is limited to the # of cores available. If you’re at the point where you need to spawn more kernel threads than there are available cores, then you need to put engineering into determining how many threads to create and when. For IO bound workloads (which I described in my previous post), the typical strategy is to create a thread pool, and to allocate threads from this pool. Thread pools themselves are a solution so that applications don’t saturate available memory with threads and so you don’t overwhelm the kernel with time spent switching threads. At this point, your most granular “unit of concurrency” is each thread in this thread pool. If most of your workload is IO bound, you end up having to play around with your thread pool sizes to ensure that your workload is processed without thread contention on the one hand (too few threads) or up against resource limits (too many threads). You could of course build a more granular scheduler atop these threads, to put threads “to sleep” once they begin to wait on IO, but that is essentially what most async implementations are, just optimizations on “thread-grained” applications. Given that you’re already putting in the work to create thread pools and all of the fiddly logic with locking the pool, pulling out a thread, then locking and putting threads back, it’s not a huge lift to deal with async tasks. Of course if your workload is CPU bound, then these are all silly, as your main limiting resource is not IO but is CPU, so performing work beyond the amount of available CPU you have necessitates queuing.

                                                                                                                                              Moreover the context with which I was saying this is that most Rust async libraries I’ve seen are async because they deal with IO and not CPU, which is what async models are good at.

                                                                                                                                            2. 3

                                                                                                                                              Various downsides are elaborated at length in this thread.

                                                                                                                                              1. 2

                                                                                                                                                Thanks for the listed points. What it’s made me realize is that there isn’t really a detailed model which allows us to demonstrate tradeoffs that come with selecting an async model vs a threaded model. Thanks for some food for thought.

                                                                                                                                                My main concern with Rust async is mostly just its immaturity. Forget the code semantics; I have very little actual visibility into Tokio’s (for example) scheduler without reading the code. How does it treat many small jobs? Is starvation a problem, and under what conditions? If I wanted to write a high reliability web service with IO bound logic, I would not want my event loop to starve a long running request that may have to wait longer on IO than a short running request and cause long running requests to timeout and fail. With a threaded model and an understanding of my IO processing latency, I can ensure that I have the correct # of threads available with some simple math and not be afraid of things like starvation because I trust the Linux kernel thread scheduler much more than Tokio’s async scheduler.
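
                                                                                                                                                 For what it’s worth, the “simple math” I would reach for here is the usual Little’s-law style estimate (my framing, not the commenter’s, and the numbers are made up): requests in flight ≈ arrival rate × average latency, which bounds how many threads you need.

                                                                                                                                                   fn main() {
                                                                                                                                                       let requests_per_second = 2_000.0; // assumed target load
                                                                                                                                                       let avg_latency_seconds = 0.05;    // assumed time each request is in flight
                                                                                                                                                       let threads_needed = (requests_per_second * avg_latency_seconds).ceil();
                                                                                                                                                       println!("~{threads_needed} requests in flight, so ~{threads_needed} threads");
                                                                                                                                                   }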

                                                                                                                                            3. 3

                                                                                                                                              There’s no if-statement community

                                                                                                                                              That had me laughing out loud!

                                                                                                                                              1. 2

                                                                                                                                                probably because it’s if-expressions 🙃

                                                                                                                                              2. 2

                                                                                                                                               I hope I’m not opening any wounds or whacking a bee-hive for asking but… what sort of problematic interactions occur with the Rust community? I follow Rust mostly as an intellectual curiosity and therefore am not in deep enough to see the more annoying aspects. When I think of unprofessional language community behavior my mind mostly goes to Rails during the aughts when it was just straight-up hostile towards Java and PHP stuff. Is Rust doing a similar thing to C/C++?

                                                                                                                                              1. 8

                                                                                                                                                 I used to work at a company that distributed a Linux appliance that was basically a heavily customized CentOS server, and I used to regularly get “Security Reports” that were usually a rebranded Nessus scan with inflated severities. I had a spiel and a list of CVEs that was very close to this article.

                                                                                                                                                After 10 years of this, I lost all respect for “IT Security consultants”. They’re basically fear-mongers that try and scare you into paying them.

                                                                                                                                                Oh, my absolute favourite was the “scan” that claimed that we had a bunch of server-side CGI scripts from Matt’s Script Archive (bonus points to those who can remember Matt’s Script Archive), and we were all scratching our heads trying to figure out where they were getting this from.

                                                                                                                                                After a bunch of digging and head-scratching we discovered that they were calling GET https://server:10000/cgi-bin/random-script-name and they got back … something encrypted, so they decided the script was there and marked it as a security flaw.

                                                                                                                                                1. 9

                                                                                                                                                  I get the frustration, but I don’t think:

                                                                                                                                                  I lost all respect for “IT Security consultants”

                                                                                                                                                   is a good position. Imagine I told you I had been spammed by various medication offers for years and because of that lost all respect for healthcare professionals.

                                                                                                                                                   There are a lot of security consultants who are not scammers, and there is a tiny percentage of cold-call reports which are valid.

                                                                                                                                                  1. 2

                                                                                                                                                    I live in Canada, where pharmaceutical advertising is heavily regulated, so your argument is not as convincing as it would be in the US. If your doctor is constantly hawking a certain drug manufacturer, then I would look for a different doctor if I were you…

                                                                                                                                                    1. 2

                                                                                                                                                      I meant the ads about various enhancement pills in my spam folder.

                                                                                                                                                  2. 3

As an IT security consultant, I feel like I’ve gotta chime in here - you’re not wrong that there are tons of ambulance chasers looking for a quick payout, but there’s also valid work done. Some of us will do our own testing, weed out false positives, and write our own reports instead of just running scans. Then again, I speak from the perspective of somebody who works on pre-agreed contracts, which sounds like it’s a bit different from what you’ve experienced.

                                                                                                                                                    I am legitimately sorry folks in my profession have wasted so much of your time though.

                                                                                                                                                    1. 6

Years ago, when I was wearing both the sysadmin and netadmin hats at a small web hosting company, I got the results of a PCI compliance report. It was (seriously) 500 pages of nothing but “OH MY GOD YOU HAVE PING ENABLED! SOME HACKER MIGHT FIND YOUR NETWORK! OH MY GOD YOU HAVE DNS ENABLED! SOMEBODY MIGHT KNOW YOUR SERVER NAMES! OH MY GOD YOU HAVE WEBSERVERS RUNNING! SOMEBODY MIGHT DOWNLOAD SOMETHING!” For. Every. Web. Site. We. Hosted. My god, it was annoying.

                                                                                                                                                      1. 8

What I hate even more about it is that this kind of “security” through making your systems less discoverable harms debuggability.

How much more security do you seriously get from DROP vs REJECT in your firewall? But it can turn a 5-minute debug session into one that takes hours.

                                                                                                                                                        1. 7

Strong agreement here. Even just from a security perspective, obscurity has costs. For instance, it’s common to point out “Your app doesn’t do root/jailbreak detection” as a security risk in a mobile test, but since security testing is typically done on a rooted/jailbroken device, we now typically have to bake into our scopes that we need two versions of the app: one with that feature (to test the detection) and one without (to test literally everything else). It’s a mess, so while I have to make the client aware of the option, I do try to communicate that it’s not cost-free and that the decision should be made with care, not according to a blind policy.

                                                                                                                                                          If you can find a security vendor that actually presents tradeoffs and options though, treasure them. We exist, I swear.

                                                                                                                                                          1. 2

Root detection really just causes frustration, and it is futile overall, because Magisk can hide itself :D

                                                                                                                                                            TBH I would argue that the possibility of root detection is a problem with the Android security model. Why the hell are apps even allowed to probe the existence of other apps, system binaries, etc?!

                                                                                                                                                          2. 4

A lot of this comes from people remembering rules but not remembering why the rules exist. Dropping traffic gives slightly less information to an attacker than rejecting it. The idea behind this rule is that you want to prevent an attacker from being able to probe your network. You can ping every IPv4 address in a /16 in a couple of seconds and then focus on the ones that exist for further attacks. This actually made sense as a (weak) mitigation back when:

                                                                                                                                                            • If you were on the Internet, you probably had a sparsely used /8.
                                                                                                                                                            • Most machines were listening on at least one port with an insecure service, often on an obscure port.
                                                                                                                                                            • Every machine on your network had a publicly routable IP address.
                                                                                                                                                            • Doing a ping of every machine on a /8 was just about feasible but doing a port scan of every machine on a /8 wasn’t.
                                                                                                                                                            • Most machines were not discoverable through DNS.
                                                                                                                                                            • Scanning all ports (or even all well-known ports) on an entire /8 was infeasible.

Almost none of those are the case any longer. Even if your network is pure IPv6 in a /48 and every machine has a publicly routable address, it doesn’t make sense, because the only machines running potentially insecure services that are not blocked at the router are typically those with publicly discoverable addresses. The secrecy of the IP addresses of machines on your network is not something that any sane security policy would depend on.

                                                                                                                                                      2. 3

                                                                                                                                                        My experience is that you get crap when you pay crap with these things. If you’re paying a pittance to Honest Bob’s Code Review Shack in order to be able to claim that your stuff has been audited by a third party in the marketing fluff (*), you’ll get handed the output of a crap automated scanner. The money you paid didn’t cover anything else.

                                                                                                                                                        Not that spending a lot is any guarantee of good service either though. :)

(* I have worked somewhere that did this; the reports always contained obviously wrong feedback.)

                                                                                                                                                        1. 2

                                                                                                                                                          Not that spending a lot is any guarantee of good service either though. :)

                                                                                                                                                          The example I cited above was by a division of one of the most wealthy and prestigious technology firms in the world. It was definitely not Bob’s Code Review and Bait Store.

                                                                                                                                                          1. 2

I’ve actually had better experiences with Bob’s shop than with the big consulting corps. YMMV though. Find recommendations from friends. Read public reports where available. Check references.

                                                                                                                                                      1. 26

                                                                                                                                                        After performing over 100 interviews: interviewing is thoroughly broken. I also have no idea how to actually make it better.

                                                                                                                                                        yep

                                                                                                                                                        1. 2

Maybe Amazon’s interview is broken. This data-structure bullshit doesn’t help at all if the applicant doesn’t know shit about real work, system design, soft skills, security, teamwork, etc.

                                                                                                                                                          1. 4

                                                                                                                                                            As much as I dislike FAANG interviews, every attempt I’ve seen to fix them is also fraught with problems

                                                                                                                                                            1. 6

I’d love to work for one of those big FAANG companies, but I don’t know off the top of my head how to do a BFS on a tree. So fuck it, my 20 years of development experience is garbage to them.
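
For what it’s worth, the thing being asked for is less exotic than it sounds: a breadth-first traversal of a tree is just a loop over a FIFO queue. A minimal sketch in Rust (the `Node` type here is made up purely for illustration, not taken from any particular interview):

```rust
use std::collections::VecDeque;

// A made-up tree node type, just enough to demonstrate the traversal.
struct Node {
    value: i32,
    children: Vec<Node>,
}

// Breadth-first traversal: visit nodes level by level using a FIFO queue.
fn bfs(root: &Node) -> Vec<i32> {
    let mut visited = Vec::new();
    let mut queue = VecDeque::new();
    queue.push_back(root);
    while let Some(node) = queue.pop_front() {
        visited.push(node.value);
        for child in &node.children {
            queue.push_back(child);
        }
    }
    visited
}

fn main() {
    let tree = Node {
        value: 1,
        children: vec![
            Node { value: 2, children: vec![Node { value: 4, children: vec![] }] },
            Node { value: 3, children: vec![] },
        ],
    };
    // Level order: 1, then 2 and 3, then 4.
    assert_eq!(bfs(&tree), vec![1, 2, 3, 4]);
}
```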

                                                                                                                                                              1. 4

There are many books and courses to prep candidates for FAANG interviews. For senior engineers it might be daunting, but to join a FAANG some drilling is to be expected.

Any company with a big pool of candidates will end up in a similar situation: assessing things that are largely irrelevant to the day-to-day job.

The real drama is that very smart people who could use their brainpower to improve society at large instead spend years on low-utility projects.

                                                                                                                                                                1. 4

                                                                                                                                                                  You’re doing yourself a disservice by having this mindset

                                                                                                                                                                  1. 1

                                                                                                                                                                    Why?

                                                                                                                                                                    1. 1

                                                                                                                                                                      Because you don’t get to work at FAANG

                                                                                                                                                                  2. 3

I seriously believe that if you are a good programmer, went to uni or similar, spend two weekends with “Cracking the Coding Interview”, and do one or two mock interviews to train the communication style, you have a good chance. Provided, that is, that you aren’t anxious or have other such problems during the interview.

                                                                                                                                                                    Without preparation most would be lost.

You can, of course, still think this is fucked, but it’s not unpassable for good programmers who don’t have anxiety problems, have reasonably good communication skills, and have time to prepare.

                                                                                                                                                                    If you are interested, I can do a mock interview with you.

                                                                                                                                                                    1. 16

I don’t think the problem for most people is the details of playing the game. The game is learnable, and if one has gotten anywhere in this field it’s because one can learn things. The problem people have is that they question why the game, which everyone knows has no bearing on the ability to do the job at hand, needs to be played at all.

                                                                                                                                                                      If we put our cynicism hat on (mine is pretty worn-out by now), we can answer that question by saying that what the game is about is testing people’s willingness to jump through arbitrary hoops. In that sense, it may actually accurately test their ability to function within the organization at hand, and thus may in fact be very good at its job of filtering out candidates who would not work out.

                                                                                                                                                                      1. 5

but it’s not unpassable for good programmers who don’t have anxiety problems, have reasonably good communication skills, and have time to prepare.

                                                                                                                                                                        It’s not, but good programmers with 20 years of experience can always get a job someplace where they don’t have to jump through these silly hoops.

It works surprisingly well for both parties. It’s not as if recruitment heads at Big Corp don’t already know this puts off experienced programmers; everyone’s been aware of that for a long time now. They just don’t want that many experienced programmers. If you’re recruiting for senior and lead positions, it’s much more efficient to go through recommendations (or promote from within), in which case the interview is… somewhat more relaxed, so to speak. The interviews are designed for something else.

(Edit: I’m with @gthm on what they’re designed for. The main aim is to select young graduates and mid-career developers who will put up with arbitrary requirements and don’t mind spending some of their free time on them every once in a while.)

                                                                                                                                                                        1. 2

Having been through the Google interview gauntlet a few years ago, I can say there’s quite a bit more to it than just whiteboarding algorithms.

I was completely unprepared for the ‘scale this data query service’ chunk, which I didn’t even know was going to be part of the interview (frankly, a failure of the Google recruiter), but which I now know is pretty standard amongst FAANG company interviews for SRE-type roles. It didn’t help that the interviewer was a jerk who refused to answer any of my questions, but that’s hardly unusual!

                                                                                                                                                                          1. 2

That part is also covered in “Cracking the Coding Interview”.

Not to invalidate your experience, but the vast majority of my interviewing experience was pleasant. Maybe you’ve had bad luck, or I’ve had good luck, or our standards are different.

                                                                                                                                                                            1. 3

One grumpy jerk who clearly didn’t want to be there, two decent guys, and a third who was OK but stonewalled me when I asked questions about the problem he posed. Which was a little weird, but there it is.

                                                                                                                                                                              (CtCI has 5 pages on system design & about 100 pages on data structures, algorithms & all the rest. When a quarter of the interview is system design, that’s not going to help you much. There are some good online resources around these days though.)

                                                                                                                                                                1. 6

                                                                                                                                                                  When I was playing around with Rocket, I felt that Rust was an easy way to make a web API/website.

Compared to Django or Rails or similar frameworks, it seemed to me like there was significantly less to worry about (a rough sketch of what I mean is below).

                                                                                                                                                                  But I haven’t built anything serious with it, so maybe I would change my mind if I did.

                                                                                                                                                                  Rust is not my favourite language (that would be Zig) but I would use it if I had to build a web app.
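
To give a flavour of what felt easy: a complete Rocket route fits in a few lines. This is only a rough sketch of the 0.4-era API as I remember it, and the feature gates at the top are exactly the nightly-only part discussed further down the thread:

```rust
// Rocket 0.4-style handler; the feature gates below are why the
// framework required a nightly compiler at the time.
#![feature(proc_macro_hygiene, decl_macro)]

#[macro_use] extern crate rocket;

// A single attribute turns a plain function into a typed route handler.
#[get("/hello/<name>")]
fn hello(name: String) -> String {
    format!("Hello, {}!", name)
}

fn main() {
    rocket::ignite().mount("/", routes![hello]).launch();
}
```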

                                                                                                                                                                  1. 2

I think the article makes some good points on why writing APIs might be easy, but proper websites / web applications not so much once you factor in things like form validation or CSRF tokens. And where middleware for those things exists, it is not standardized across web frameworks or toolkits.

                                                                                                                                                                    1. 1

I’ve used Rocket for some internal tooling at my company. I liked the API, but the requirement of a nightly Rust compiler build is a real dealbreaker: I regularly had to spend time fighting with the compiler just to get the library to build, and even once it finally compiled, a few months and a few rustups later, it broke again.

                                                                                                                                                                      I’ve switched to Actix, which requires a stable compiler version.
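
For comparison, roughly the same route in Actix builds on the stable toolchain. Again just a sketch, using what I believe is the actix-web 4-style API, so details may differ from the version in use at the time:

```rust
use actix_web::{get, web, App, HttpServer, Responder};

// The same "hello" route as a Rocket handler, but on stable Rust.
#[get("/hello/{name}")]
async fn hello(name: web::Path<String>) -> impl Responder {
    format!("Hello, {}!", name.into_inner())
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(hello))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```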

                                                                                                                                                                      1. 1

I had an argument with a person who said that the Rust compiler changes too much and that they need to update their code constantly.

I was really surprised, because my experience was the opposite, with very rare exceptions.

They had failed to mention that they used the nightly compiler. When we discovered that this was the cause of our different experiences, they basically said that, in their opinion, you had to use the nightly compiler to do anything in their niche (ML).

I wonder for how many people this is the case, and whether popular libraries like Rocket, which depend on nightly, reinforce that impression.

                                                                                                                                                                        1. 1

                                                                                                                                                                          Yeah.

                                                                                                                                                                          That’s one reason why I haven’t done anything serious with it.

It should work on stable in the next release, it seems.

                                                                                                                                                                          1. 1

It should work on stable in the next release, it seems.

If I’d known that, I’d have left those tools running on Rocket, dammit! ;)