1. 14
  1. 19

    I really dislike it when people make posts telling developers they should do something. How about instead “Rust: Several reasons why you might like to learn it in 2019”.

    Rust’s low overhead is a good fit for embedded programming

    ≠ you should learn it

    Rust can build powerful web apps

    ≠ you should learn it

    Rust is good for building distributed online services

    ≠ you should learn it

    Rust is suited to creating powerful, cross-platform command-line tools

    ≠ you should learn it

    Rust now has new developer tools and better IDE support

    ≠ you should learn it

    That said, I’ve really enjoyed learning Rust, it’s a great language.

    P.S. Autoplay on videos WITH sound? So annoying.

    1. 14

      I rarely just post to confirm, but I’d like to stress that this is also very much the official project stance. It’s condescending, and we don’t think that helps.

      We solve problems and prefer speaking to people who actually have these problems. Also, there are alternatives, and we believe people should check them out for themselves.

    2. 5

      I honestly tried several times to get into it but I don’t know, it just didn’t click yet.

      I find golang much nicer to work with if I need to do what rust promises to be best at.

      I won’t give up on it yet but that’s just my feeling today.

      1. 12

        I’ll likely take some heat for this but my mental model has been:

        • Go is the Python of 2019
        • Rust is the C++ of 2019

        Go has found its niche in small-to-medium web services and CLI tools where “server-side scripting languages” were the previous favorite. Rust has found its niche in large (or performance-sensitive) interactive applications where using GC is untenable. These aren’t strict boundaries, of course, but that’s my impression of where things have ended up.

        1. 9

          I agree with the mental model, although I usually think of Go as the Java for (current year).

          1. 5

            The tooling is light years behind though

            1. 3

              “go fmt” offers a standard way to format code, which removes noise from diffs and makes code other people have written more readable.

              “go build” compiles code faster than javac.

              The editor support is excellent.

              In which way is the Java tooling better than Go, especially for development or deployment?

              1. 8

                How is the debugger these days?

                When I was doing go a few years ago, the answer was “it doesn’t work”, whereas java had time-travel debugging.

                1. 1

                  Delve is a pretty great debugger. VSCode, Atom etc all have good debugging support for Go, through the use of Delve. Delve does not have time-travel, but it works.

                  Packaging Java applications for Arch Linux is often a nightmare (with ant downloading dependencies in the build process), while with Go, packaging does not feel like an afterthought (but it does require setting the right environment variables, especially when using the module system that was introduced in Go 1.11).

                  Go has some flaws, for example it’s a horrible language to write something like the equivalent of a 3D Vector class in Java or C++, due to the lack of operator overloading and multiple dispatch.
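                  As an illustration of the point above (not from the original comment), here is a minimal Rust sketch of the operator overloading that Go omits; the `Vec3` type and its fields are hypothetical:

                  ```rust
                  use std::ops::Add;

                  // Hypothetical 3D vector type. Rust lets us overload `+`
                  // by implementing the Add trait; Go's design deliberately
                  // leaves operator overloading out.
                  #[derive(Debug, Clone, Copy, PartialEq)]
                  struct Vec3 {
                      x: f64,
                      y: f64,
                      z: f64,
                  }

                  impl Add for Vec3 {
                      type Output = Vec3;
                      fn add(self, other: Vec3) -> Vec3 {
                          Vec3 {
                              x: self.x + other.x,
                              y: self.y + other.y,
                              z: self.z + other.z,
                          }
                      }
                  }

                  fn main() {
                      let a = Vec3 { x: 1.0, y: 2.0, z: 3.0 };
                      let b = Vec3 { x: 4.0, y: 5.0, z: 6.0 };
                      // In Go this would have to be spelled a.Add(b).
                      println!("{:?}", a + b); // Vec3 { x: 5.0, y: 7.0, z: 9.0 }
                  }
                  ```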

                  If there are two things I would point out as one of the big advantages of Go, compared to other languages, it’s the tooling (go fmt, godoc, go vet, go build -race (built-in race detector), go test etc.) and the fast compilation times.

                  In my opinion, the tooling of Go is not “light years behind” Java, but ahead (with the exception of time-travel when debugging).

                2. 2

                  My three favourite features when developing Java:

                  • The refactoring support and IDE experience for plain Java is outstanding. Method extraction (with duplicate detection), code inlining, and rearranging classes (extract classes, move methods, extract/collapse hierarchies) makes it very easy to re-structure code.
                  • Java Flight Recorder is an outstandingly good tool for insight into performance and behaviour, at virtually no overhead.
                  • Being able to change code live, drop frames, and restart the system in the debugger is a life-saver when debugging hard-to-reach issues. That the process being debugged can be essentially anywhere is a wonderful bonus.

                  Sure, it would be nice if there was a single Java style, but pick one and use that, and IDEs generally reformat well. Also, the compile times can be somewhat long, but for plain Java they are usually OK.

                  Note that I have never had to work in a Spring/Hibernate/… project with lots of XML-configurations, dependency injections, and annotation processing. The experience then might be very different.

                  1. 1

                    Just the other day I connected the debugger in my IDE to a process running in a datacenter across the ocean and I could step through everything, interactively explore variables etc. etc. There is nothing like it for golang.

              2. 5

                “I’ll likely take some heat for this but my mental model has been:”

                All kinds of people say that. Especially on HN. So, not likely. :)

                “These aren’t strict boundaries, of course, but that’s my impression of where things have ended up.”

                Yup. I would like to see more exploration of the middle in Rust. As in, the people who couldn’t get past the borrow checker just try to use reference counting or something. They get other benefits of Rust with performance characteristics of a low-latency GC. They still get borrow-checker benefits from other people’s code that does borrow-check. They can even study it to learn how it’s organized. Things click gradually over time while they still reap some benefits of the language.

                This might not only be for new folks. Others who know Rust might occasionally do this for non-performance-sensitive code that’s not borrow-checking for some reason. They just skip it because the performance-critical part is an imported library that does borrow-check. They decide to fight with the borrow checker later if it’s not worth their time within their constraints. Most people say they get used to avoiding problems, though, so I don’t know if this scenario can play out in regular use of the language.
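                A minimal sketch of the fallback described above: using `Rc` (runtime reference counting) to share data instead of satisfying the borrow checker with lifetimes. The `Config` type and variable names are illustrative, not from the comment.

                ```rust
                use std::rc::Rc;

                // Illustrative config type shared by several consumers.
                struct Config {
                    verbose: bool,
                }

                fn main() {
                    let config = Rc::new(Config { verbose: true });

                    // Rc::clone bumps the reference count instead of requiring
                    // lifetime annotations; each handle co-owns the data.
                    let for_logger = Rc::clone(&config);
                    let for_parser = Rc::clone(&config);

                    println!("count = {}", Rc::strong_count(&config)); // 3
                    assert!(for_logger.verbose && for_parser.verbose);
                }
                ```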

                1. 6

                  I agree, for 99% of people, the level of strictness in Rust is the wrong default. We need an ownership system where you can get by with far fewer errors for non-performance-sensitive code.

                  The approach I am pursuing in my current language is to essentially default to “compile time reference counting”, i.e. it does implement a borrow checker, but where Rust would error out, it inserts a refcount increase. This is able to check pretty much all code which previously used runtime reference counting (but with 10x or so less runtime overhead), so it doesn’t need any lifetime annotations to work.

                  Then, you can optionally annotate types or variables as “unique”, which will then selectively get you something more like Rust, with errors you have to work around. Doing this ensures that a) you don’t need space for a refcount in those objects, and b) you will not get unwanted refcount increase ops in your hot loop.

                  1. 2

                    Ha ha, just by reading this comment I was thinking ‘this guy sounds a bit like Wouter van Oortmerssen’, funny that it turns out to be true :-) Welcome to lobste.rs!

                    Interesting comment, I assume you are exploring this idea in the Lobster programming language? I would love to hear more about it.

                    1. 2

                      Wow, I’m that predictable eh? :P

                      Yup this is in Lobster (how appropriate on this site :)

                      I am actually implementing this as we speak. Finished the analysis phase, now working on the runtime part. The core algorithm is pretty simple, the hard part is getting all the details right (each language feature, and each builtin function, has to correctly declare to its children and its parent whether it is borrowing or owning the values involved, and then keep those promises at runtime). But I’m getting there, should have something to show before too long. I should definitely do a write-up on the algorithm when I finish.

                      If it all works, the value should be that you can get most of the benefit of Rust while programmers mostly don’t need to understand the details.

                      Meanwhile, happy to answer any more specific questions :)

                2. 4

                  I don’t understand, and have never understood, the comparisons between Go and Python. Ditto for former Pythonistas who are now Gophers. There is no equivalent of itertools in Go, and there can’t be due to the lack of generics. Are all these ex-Python programmers writing for loops for everything? If so, why???

                  1. 1

                    Not Go but Julia is more likely the Python of 2019.

                3. 2

                  One thing that previously put me off from learning Rust was reading a Stack Overflow comment like the following:

                  I think something changed since every answer and even the documentation uses rand::thread_rng. By the way, rand is unstable now, so you have to add #![feature(rand)] to the top of your file and use the nightly rustc. All I want to do is test something; I’m this close to just using the C rand() function through FFI and calling it a day.


                  I completely understand that this is the way it goes for new stuff. I will get through my ‘The Rust Programming Language’ book this year and take my time with learning Rust, through all of the changes!

                  1. 3

                    I think the person who wrote that comment was confused.

                    A thing to remember about Rust is that it was changing very quickly up until version 1.0 in May 2015. In particular, a lot of half-baked things that were in the standard library got turned into external libraries so that their half-baked-ness wasn’t preserved forever by backwards-compatibility requirements. A lot of those things hung around secretly for the benefit of the compiler, which couldn’t be ported to use the external libraries overnight, but the official solution is “use the external library” (or “crate”, in Rust terminology).

                    The original question you linked to was asked in 2013, well before the 2015-05 cutoff, so it talks about std::rand and uint that no longer exist.

                    The answer just above the comment you linked to is correct today, but was edited in 2018-08, so it may have been wrong or misleading before then.

                    The actual comment you link to was written in 2017-02, but the bits about #![feature(rand)] and using the nightly compiler make me suspect the author stumbled onto the secret compiler-only version of Rust’s random-number library, instead of using the official, advertised, easy-to-use version instead.
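                    For reference, the official, advertised route today is the external `rand` crate. A minimal sketch, assuming a rand 0.8-era API (`thread_rng`, `gen_range`) and a `rand` entry in Cargo.toml:

                    ```rust
                    // Assumes the external `rand` crate (the official library,
                    // not the old compiler-internal std::rand) is declared in
                    // Cargo.toml, e.g. rand = "0.8".
                    use rand::Rng;

                    fn main() {
                        let mut rng = rand::thread_rng();
                        // Inclusive range, like rolling a die.
                        let roll: u32 = rng.gen_range(1..=6);
                        println!("rolled {}", roll);
                    }
                    ```

                    No `#![feature(rand)]` attribute or nightly compiler is needed for this.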

                  2. 2

                    “To write extremely fast code with a low memory footprint previously meant using C or C++”

                    No it didn’t. Those were by far the most popular languages to do that in, but they weren’t the only ones, and never have been, since Fortran is older. Sigh.