1. 32

  2. 4

    If you’re writing a function like unwrap that may panic, you can put this annotation on your functions, and the default panic formatter will use its caller as the location in its error message.
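    As a sketch, an unwrap-like helper carrying the annotation might look like this (my_unwrap is a hypothetical name, not a std function):

    ```rust
    // #[track_caller] makes panics inside this function report the
    // location of the *call site*, not this function body.
    #[track_caller]
    fn my_unwrap<T>(opt: Option<T>) -> T {
        match opt {
            Some(v) => v,
            None => panic!("my_unwrap called on None"),
        }
    }

    fn main() {
        println!("{}", my_unwrap(Some(42)));
        // my_unwrap::<i32>(None); // the panic message would point at THIS line
    }
    ```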

    Millennials reinvent exceptions? ;)

    1. 8

      This isn’t about exceptions, but about attribution of the error. Some errors are the fault of the caller (e.g. passing parameters that are forbidden by the contract of the function), and some are the fault of the function itself (a bug).

      With access to the caller’s location you can blame the correct line of code. Exception/panic stack traces are independent of this feature, because Rust didn’t want to pull in a dependency on debug info and the relatively expensive unwinding machinery.
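      The location that gets captured is also available directly, via std::panic::Location::caller() (the function name here is my own example):

      ```rust
      use std::panic::Location;

      // A #[track_caller] function can read its caller's file/line/column
      // via std::panic::Location::caller(), without any unwinding.
      #[track_caller]
      fn where_was_i_called() -> &'static Location<'static> {
          Location::caller()
      }

      fn main() {
          let loc = where_was_i_called();
          println!("called from {}:{}", loc.file(), loc.line());
      }
      ```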

      1. 1

        Using a strong type system for error handling predates exceptions.

        1. 1

          Panic is for errors that the type system cannot catch. A pragma that adds the caller’s location to a panic message is a symptom of nostalgia for exception traces.

          I don’t know what compiler developers’ reasoning is. As a side observer, I can’t help but think “they could as well be implementing full exception traces now”.

          1. 2

            They actually do have full exception traces. You turn them on with the RUST_BACKTRACE=1 environment variable, and by making sure you don’t strip debug info.

            This is a convenience feature so that you get useful info when you’ve got traces turned off, or when you’re trying to get useful diagnostics from a released executable (if you’re stripping the executable, the whole point is to ship less data than full traces would need, but there might still be a happy medium between nothing and full traces).
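            For reference, turning traces on is just a matter of the environment variable (myapp is a hypothetical binary name):

            ```shell
            # Enable backtraces on panic for one run of a (hypothetical) binary.
            RUST_BACKTRACE=1 ./myapp
            # Even more detail, including internal frames:
            RUST_BACKTRACE=full ./myapp
            ```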

            1. 1

              There’s a substantial runtime cost to full exception traces that is much smaller to nonexistent for the track_caller annotation. For the latter, the compiler inserts some static info to be printed on panic. There will be some binary bloat and potentially some loss of instruction density, but the performance impact will be very small. To do full exception traces, you have to ~constantly maintain full unwind tables somewhere and update them on every function call/return. You can already get that info by setting RUST_BACKTRACE, but it is off by default.

        2. 1

          I’m somewhat curious how long it will take for the const fn improvements to trickle out to crates enough to see performance improvements.

          1. 2

            I love the idea of compile-time computation, but I imagine it could be fraught for Rust given its notorious compile times. It definitely leans into the existing orientation towards runtime performance over compile-time performance. That said, it’s just another arrow in the quiver.

            1. 9

              One unintuitive thing about compilers is that assessing time spent in general is really hard. E.g. early optimisations that remove code may make later optimisation passes or code generation cheaper.

              const functions are a case where you get a guaranteed and fault-free transformation at compile time (const function calls also have an upper runtime limit, compared to e.g. proc macros), without relying on additional optimisations to e.g. catch certain patterns and resolve them at compile time.
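              For instance, a const fn called in a const context is guaranteed to fold away at compile time (fib here is my own toy example):

              ```rust
              // A toy const fn: when used in a const context it is evaluated
              // entirely at compile time, so LLVM never sees the loop below.
              const fn fib(n: u64) -> u64 {
                  let mut a: u64 = 0;
                  let mut b: u64 = 1;
                  let mut i = 0;
                  while i < n {
                      let next = a + b;
                      a = b;
                      b = next;
                      i += 1;
                  }
                  a
              }

              // Computed by the compiler; the binary only contains the value.
              const FIB_20: u64 = fib(20);

              fn main() {
                  println!("{}", FIB_20); // 6765
              }
              ```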

              1. 1

                That is an interesting point! Substituting a const function for a macro could lead to an improvement in compile time.

                My initial thought was that having a new compile time facility might lead people to handle new things at compile time. It’s always hard to tell how something shakes out in a dynamic system at first glance.

              2. 9

                My expectation is that const fn will speed up Rust programs. Running snippets of Rust on Miri, which is how const fn evaluation works, is not a slow process. Running LLVM to optimize things perfectly is. Any code that LLVM doesn’t have to deal with because it was const’d is a win.

                1. 8

                  If it’s used to replace proc macros, it should improve compile times. But there are certainly use cases where it can degrade compile times, e.g. generating a perfect hash function. For cases like that, you’re probably better off using code generation.
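                  A build-script sketch of that code-generation alternative (the table contents and multiplier here are hypothetical, not a real perfect hash):

                  ```rust
                  // build.rs sketch: emit a precomputed table as Rust source at
                  // build time, so rustc only parses the finished literal instead
                  // of evaluating an expensive const fn.
                  use std::{env, fs, path::Path};

                  // Render a 256-entry multiplicative-hash table as Rust source text.
                  fn render_table() -> String {
                      let mut src = String::from("pub static TABLE: [u32; 256] = [");
                      for i in 0u32..256 {
                          src.push_str(&format!("{},", i.wrapping_mul(2654435761)));
                      }
                      src.push_str("];");
                      src
                  }

                  fn main() {
                      // Cargo sets OUT_DIR for build scripts; the crate can then
                      // include!(concat!(env!("OUT_DIR"), "/table.rs")).
                      let out_dir = env::var("OUT_DIR").unwrap_or_else(|_| ".".into());
                      fs::write(Path::new(&out_dir).join("table.rs"), render_table()).unwrap();
                  }
                  ```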

                  1. 2

                    There’s some work on providing tools for profiling compile times. I think some of the larger crates are doing so to manage their compile times and could use that to manage the trade-off.

                    1. 1

                      That seems like it would help quite a bit. Once you can measure something, it becomes much more actionable, and you can make informed decisions about trade-offs.