Threads for ohrv

    1. 2

      As an application developer: why not give me access to #![feature(const_item)] (for a subset of features) on stable? It’s pretty clear that the tradeoff is more work when you update, but this isn’t that bad? Seems like putting the maintenance burden on the Rust project is a bit of a waste. Same for non-published crates.

      As a (public) crate developer: Cool! But it’s hard to wrap my head around exposing preview features to my users, I think? What if I use preview v1 and a different dep uses preview v2?

      1. 2

        What if I use preview v1 and a different dep uses preview v2?

        It would be perfectly fine. They’d just be macros that translate a proposed syntax into something the compiler understands, and you could have an infinite number of different versions at the same time without issue.
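
        To make that concrete, here is a small, runnable illustration of the idea (the macro name and the “preview” syntax are made up for this sketch): an ordinary macro_rules! macro that rewrites a proposed syntax into code today’s compiler already accepts, which is why two crates shipping different versions of such a macro can coexist in one dependency tree.

          // Hypothetical "preview" macro: it only rewrites a proposed syntax
          // into stable Rust, so the compiler never sees anything unstable.
          macro_rules! preview_try_else {
              ($e:expr, else $fallback:expr) => {
                  match $e {
                      Ok(v) => v,
                      Err(_) => $fallback,
                  }
              };
          }

          fn parse(s: &str) -> Result<i32, std::num::ParseIntError> {
              s.parse()
          }

          fn main() {
              // The "new syntax" is just a macro invocation; a second crate
              // shipping a v2 with different expansion rules would not
              // conflict with this one.
              let n = preview_try_else!(parse("42"), else 0);
              assert_eq!(n, 42);
              println!("parsed: {n}");
          }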

      2. 12

        The outcome is worse software for everyone: less secure, more buggy, with fewer features and worse performance.

        Citation needed; this is very rare in my experience. The majority of updates to libraries involve changes that are not relevant to the application updating them, or if they are relevant, they are as likely to introduce new bugs as to fix existing ones.

        1. 12

          In Rust? Regressions absolutely happen (I’ve dealt with three in the last year) but overall Rust dependencies tend to be quite solid.

          The risk of not updating is really high in cases where a security bug is found and you’re wildly out of date. In an emergency you have to either backport the patch yourself (not being an expert on the dependency), hope the maintainers backport the patch for you (or pay them large sums of money), or update to the latest version (introducing a ton of risk at a time where risk should be minimized).

          Staying abreast amortizes this risk over non-emergency periods of time.

          1. 19

            Staying abreast amortizes this risk over non-emergency periods of time.

            This is the most important thing: if you only update when a CVE is published, then you’re doing so under duress, in a hurry, and all at once. It’s much harder to update past several versions of changes because we woke up to an active security threat than to update dependencies on a schedule, with smaller piecemeal changes needed each time.

            1. 2

              People keep saying this, but I have only ever encountered it once in the two decades I’ve been writing software. And when it did happen, it was for a subsystem we were about to delete, so if we had taken this advice we would have wasted a bunch of time updating code that would never even have used the fix.

              Maybe it’s just a thing that happens in frontend work?

              1. 6

                This happens all the time across several tech stacks in companies I’ve worked in. I mean, even if we just look at famous cases from the last few years, there have been issues with log4j, xz, and openssl. It feels like if you’re not encountering CVEs in dependencies, you’re just not looking.

                Now granted, not all CVEs are real issues. The security department when I worked at Yahoo loved to insist on urgent updates of Jackson because their security scanner said it had an RCE. The RCE required turning on the feature that lets the client choose the class to instantiate (off by default, we didn’t enable it, and it’s heavily warned about in the docs) and having on the classpath one of the classes in the JVM ecosystem that executes shell commands from its constructor; the client could then ask the server to construct that object and execute code. The Jackson project’s response is generally to add such classes to their blacklist for that feature when they’re reported, and to keep warning people not to enable the feature. That shut the CVE scanners up for a few months, until someone discovered the next library with a ShellCommand class or equivalent.

                1. 2

                  It feels like if you’re not encountering CVEs in dependencies, you’re just not looking.

                  I didn’t say I don’t encounter CVEs.

                  What I said was that the situation of “you’ll wish you had been upgrading all along because a CVE will come along suddenly where the upgrade that fixes it will take a lot of work, and you would have been better off if you had already done most of that work by doing the upgrades in between” is not something I’ve ever encountered outside code that I was going to delete anyway.

                  (And yes, of course most CVEs are actually “Curriculum Vitae Enhancers” and not legitimate problems.)

                  1. 3

                    I’m glad you’ve been so lucky! I have been in situations where it’s been necessary to update across many versions to fix a security bug. It’s terrifying, basically requiring one person to manually review everything in between while another person prepares the patch.

            2. 5

              As the source of one of those regressions: adding cargo-semver-checks to my GitHub workflow gives me peace of mind, especially since most of my crates only see a small spike of activity in a year, which makes mistakes much more likely.

            3. 4

              Agreed. I came to Rust because Rust is so much better at “fearless upgrades” than any other language I’ve ever used. I’m not sure I’ve ever had a semver-compatible GitHub Dependabot bump fail the resulting CI test run for any reason other than my cargo deny task complaining that a transitive dependency has added a new license I need to audit and whitelist.

              1. 16

                Three regressions I’ve dealt with recently are:

                • mio/Tokio on illumos stopped working (I work at Oxide where we use illumos)
                • a pretty bad memory corruption issue in libc
                • a behavior change in config-rs

                But these are rare, and the fact that they’re rare means that when they happen, it’s ok to prioritize dealing with them.

                1. 4

                  The time crate inference regression comes to mind, but that one was caused by the stdlib, not the ecosystem.

                2. 1

                  Just out of curiosity, but when did you come to Rust? In the first couple of years it seemed like every important/useful crate was 0.x and even I, with only a couple of toy projects, ran into lots of issues. (regex, date, etc).

                  I would not be surprised if people had learned this lesson early. I’m not saying it’s a good thing.

                  1. 5

                    It was sort of a gradual ramp-up. After discovering Rust via Planet Mozilla, I was lurking in /r/rust/ waiting for v1.0 and started learning the same day the syntax stabilized, but things like using Dependabot came later. (And I’m not usually the kind of person to try out a new language every year or two. Rust just filled a niche that I’d been looking for.)

                    Still, even then, Rust’s type system made it more “fearless upgrades” than Python.

                    1. 2

                      Unfortunately, 0.x doesn’t mean much in the Rust ecosystem.

                      There are of course unstable, broken, and toy crates with 0.x versions, but there are also many 0.x crates that are stable and production-ready: libc, rand, log, futures, itertools, toml, and hashbrown are all 0.x, but they’re serious packages with millions of downloads, and they’re used even in rustc and Cargo themselves, so they are as reliable as Rust is.

                      1. 2

                        There’s absolutely still an issue with critical crates being at 0.x versions. Winit, for example, is up in the brain stem of a lot of GUI work in Rust, and they changed the way their event loop works: it went from taking a closure that receives each event and dispatches it as the user sees fit to requiring you to implement a trait with methods for the different types of events, ostensibly for compatibility reasons, although I’m unsure of the details.
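
                        Here’s a library-free sketch of the shape of that change (these types are made up for illustration and are not winit’s actual API): going from one closure that receives every event to a trait with a method per kind of event.

                          // Illustrative only: not winit's real types or signatures.
                          enum Event {
                              Resumed,
                              CloseRequested,
                          }

                          // Old shape: the event loop hands every event to a single closure.
                          fn run_with_closure(events: &[Event], mut handler: impl FnMut(&Event)) {
                              for event in events {
                                  handler(event);
                              }
                          }

                          // New shape: the application implements a trait with one method per event kind.
                          trait Handler {
                              fn resumed(&mut self);
                              fn close_requested(&mut self);
                          }

                          fn run_with_handler(events: &[Event], handler: &mut impl Handler) {
                              for event in events {
                                  match event {
                                      Event::Resumed => handler.resumed(),
                                      Event::CloseRequested => handler.close_requested(),
                                  }
                              }
                          }

                          struct App {
                              running: bool,
                          }

                          impl Handler for App {
                              fn resumed(&mut self) {
                                  self.running = true;
                              }
                              fn close_requested(&mut self) {
                                  self.running = false;
                              }
                          }

                          fn main() {
                              let events = [Event::Resumed, Event::CloseRequested];

                              // Closure style: all dispatch logic lives in one place.
                              run_with_closure(&events, |e| {
                                  if let Event::CloseRequested = e {
                                      println!("close requested");
                                  }
                              });

                              // Trait style: dispatch is spread across named methods.
                              let mut app = App { running: false };
                              run_with_handler(&events, &mut app);
                              assert!(!app.running);
                          }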

                  2. 3

                    Deploying code to prod was never a bottleneck in any project I worked on, so this is optimizing (for speed) something that has a very minor effect on actual dev flow.

                    IMO, the bottleneck is always “tests” (in the broader sense of “a way to test that the code does what you think it does”), so you should optimize the testing part of the CI: make sure you have (fast) tests that can check everything you care about, and that only the right tests run for every commit/PR/release/etc.

                    1. 6

                      It’s tempting to forswear argument altogether and refuse to engage, but I have come to believe this is also a mistake. The problem is finding a way to argue productively.

                      I am reminded of one of my favorite Hegel quotes:

                      Since the man of common sense makes his appeal to feeling, to an oracle within his breast, he is finished and done with anyone who does not agree; he only has to explain that he has nothing more to say to anyone who does not find and feel the same in himself. In other words, he tramples underfoot the roots of humanity. For it is the nature of humanity to press onward to agreement with others; human nature only really exists in an achieved community of minds. The anti-human, the merely animal, consists in staying within the sphere of feelings, and being able to communicate only at that level. (Phenomenology of Spirit, §69)

                      1. 6

                        I agree that some things are worth an argument, but (1) far fewer things than we think are like that and (2) productivity and civility are two related but different things.

                        1. 4

                          far fewer things [are worth an argument] than we think are like that

                          The first step to arguing productively: ask yourself what you are trying to accomplish. What does agreement change about the world that disagreement prevents? The point is to change something. Simply honing your own understanding is a worthwhile exercise!

                          The fellow arguing about Italian food with the pissman is obviously wasting his time, but we often invest energy in debate under much more uncertain circumstances.

                          productivity and civility are two related but different things

                          Absolutely true, but I think it is far more important to recognize that form and content are also different.

                          Language is frequently used as a tool of pure negation against ideas, in a purely formal way disconnected from any truth content or theory of knowledge. This is the most important thing to recognize, because if your interlocutor possesses no theory of knowledge you are not going to get anywhere useful.

                        2. 1

                          It’s tempting to forswear argument altogether and refuse to engage, but I have come to believe this is also a mistake. The problem is finding a way to argue productively.

                          I try to assume good faith but I’m a comment section veteran and as soon as I smell a non-productive conversation I’m gonezo.

                          I responded to a comment once that said “what’s so bad about mysql?” like they hadn’t heard of the Oracle acquisition thing and the fork to MariaDB. I responded in good faith and then they started explaining why Oracle is actually a good company and I felt like a huge idiot for engaging.

                          1. 1

                            It’s perfectly reasonable to demand a standard of discourse, and peace out when you encounter bad faith. But you have to engage in the first place to determine which is which.

                            Also, when debating in a public forum, you have very little perspective on how many people are seeing the exchange. Engaging with a bad-faith interlocutor may be illuminating for others in ways you will never know. I wouldn’t discount that.

                        3. 1

                          let me know if you feel less angry now!

                          1. 2

                            The FastLanes paper defines a virtual 1024-bit SIMD register called MM1024

                            Wow

                            Also: very cool article! Can’t wait for the next part (:

                            1. 5

                              Such an awesome write-up! It’s really interesting to see how something as low-level as this is both built & debugged

                              1. 2

                                It’s a personal practice until you review a PR and ask for better tests, and now that’s too hard or time-consuming.

                                Designing for testability is hard, and TDD is not the only way to do it - but a lot of bad tests are the result of people waiting too long to write the tests, for no clear benefit.

                                1. 3

                                  (Author here) The Pin pointer type is a super interesting example of a safe-unsafe abstraction in Rust. I couldn’t wrap my head around it the first few times, so I wrote this blog post to try and fix this in my brain.
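
                                  Not from the post itself, just the classic illustration (names invented here) of the guarantee Pin is selling: a self-referential struct whose internal raw pointer is only sound because the pinned value can never be moved again.

                                    use std::marker::PhantomPinned;
                                    use std::pin::Pin;

                                    // A self-referential struct: `ptr` points back into `data`.
                                    struct SelfRef {
                                        data: String,
                                        ptr: *const String,  // set to point at `data` once the value is pinned
                                        _pin: PhantomPinned, // opt out of Unpin
                                    }

                                    impl SelfRef {
                                        fn new(data: String) -> Pin<Box<Self>> {
                                            let mut boxed = Box::pin(SelfRef {
                                                data,
                                                ptr: std::ptr::null(),
                                                _pin: PhantomPinned,
                                            });
                                            let self_ptr: *const String = &boxed.data;
                                            // SAFETY: we mutate a field in place; the pinned value is never moved.
                                            unsafe {
                                                Pin::get_unchecked_mut(Pin::as_mut(&mut boxed)).ptr = self_ptr;
                                            }
                                            boxed
                                        }

                                        fn via_ptr(self: Pin<&Self>) -> &str {
                                            // SAFETY: `ptr` points at `data`, and pinning guarantees the
                                            // value has not moved since `new` set it up.
                                            unsafe { (*self.ptr).as_str() }
                                        }
                                    }

                                    fn main() {
                                        let s = SelfRef::new("hello".to_string());
                                        // The internal pointer stays valid because the value can never move.
                                        assert_eq!(s.as_ref().via_ptr(), "hello");
                                        println!("{}", s.as_ref().via_ptr());
                                    }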

                                  1. 19

                                    The problem with this approach [with mocks] is it’s completely tautological.

                                    Yes! I once deleted an entire Python class and the tests kept passing because it was mocks all the way down. Less than worthless tests, 99% of the time.
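
                                    A minimal sketch (mine, not from the original comment) of what such a tautological test looks like in Rust: every collaborator is a test double, so the assertion only restates the call the code was written to make and never exercises any real behavior.

                                      // Collaborator the unit under test talks to.
                                      trait Mailer {
                                          fn send(&mut self, to: &str, body: &str);
                                      }

                                      // Test double that just records calls.
                                      struct MockMailer {
                                          sent: Vec<(String, String)>,
                                      }

                                      impl Mailer for MockMailer {
                                          fn send(&mut self, to: &str, body: &str) {
                                              self.sent.push((to.to_string(), body.to_string()));
                                          }
                                      }

                                      // "Unit under test": it only forwards to the collaborator.
                                      fn notify(mailer: &mut impl Mailer, user: &str) {
                                          mailer.send(user, "hello");
                                      }

                                      #[cfg(test)]
                                      mod tests {
                                          use super::*;

                                          #[test]
                                          fn notify_sends_mail() {
                                              let mut mock = MockMailer { sent: Vec::new() };
                                              notify(&mut mock, "alice@example.com");
                                              // The assertion restates the mock's own bookkeeping;
                                              // nothing about real mail delivery is ever checked.
                                              assert_eq!(mock.sent.len(), 1);
                                          }
                                      }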

                                    1. 2

                                      To be fair, that’s an issue with how the tool is used – you can do bad work with any tool, after all. If you’re doing TDD, that’s why the first step is to write the test and make sure it fails for the right reasons. It’s also why in mockist-style TDD, you should only mock interfaces you own, as otherwise you end up guessing at the semantics of the interface, rather than designing it yourself.