1. 55

  2. 39

    I don’t think this article is very useful for a general audience, and I don’t think it makes the convincing argument you think it does. I do think it’s a pretty decent article, though.

    This article is basically “rust is bad because I find its closures confusing and they’re becoming really common”. Now that’s a pretty reasonable complaint, rust closures are complex. But this article isn’t really thorough enough to make a convincing argument that this isn’t more of an issue with how the author is trying to do things, rather than the language itself.

    As someone who uses these features regularly, I have to say that

    • The closure types (Fn, FnMut, FnOnce) make sense if you understand the borrow checker; I think these fall into the category of unavoidable complexity.
    • You should almost always use move |args| body instead of |args| body, and doing this will make closures a whole lot easier to use (see the sketch after this list). I really wish this was the default (but I understand why it isn’t).
    • Callbacks are usually a mistake in rust because of the memory model, with exceptions, but this is also one of those unavoidable-complexity things.
    • Unnameable types annoy me too.
    • I think there are some legitimate complaints about async rust, but these aren’t them. I haven’t really sat down and thought about how to frame the complaints about async rust that I have, but I would be looking at subjects like Pin, Waker, lack of standardization, lack of generators, closures being too opaque, and so on.
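
    To illustrate the move point, here’s a minimal sketch (my own example, not from the article):

    ```rust
    fn main() {
        let name = String::from("world");

        // Without `move`, the closure only borrows `name`, so it is tied
        // to this scope and can't go anywhere that might outlive it.
        let greet = || println!("hello, {}", name);
        greet();

        // With `move`, the closure owns its captures, so it can e.g. be
        // sent to another thread (which requires 'static + Send).
        let name = String::from("world");
        let greet = move || println!("hello, {}", name);
        std::thread::spawn(greet).join().unwrap();
    }
    ```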

    I hope this article will be useful for the people working on rust documentation and tooling, because it’s pretty obvious that we could do a better job of helping people with this subject (if only because it is one of the harder topics in one of the harder languages out there).

    1. 12

      You’re right that the article doesn’t really go into enough depth to conclusively prove the points made – that’s partially because it was a long enough article anyway, and it’s hard to get the balance right :)

      > This article is basically “rust is bad because I find its closures confusing and they’re becoming really common”. Now that’s a pretty reasonable complaint, rust closures are complex.

      This isn’t really what I wanted to get across, although I can certainly see why you’d come to that conclusion (again, article isn’t that thorough). The point was not that I find Rust’s closures confusing (I mean, I do a little, but I’m more than used to it now ;P), but that a lot of Rust’s design decisions around ownership, borrowing, and how closures are represented (i.e. opaque types) make their ergonomics really bad.
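
      To make the opaque-types point concrete, here’s a small sketch (mine, not from the article). Every closure has its own unnameable type, so the moment you store one or return one, you’re pushed into impl Fn or Box<dyn Fn>:

      ```rust
      // The only ways to write these signatures: an opaque type...
      fn adder(x: i32) -> impl Fn(i32) -> i32 {
          move |y| x + y
      }

      // ...or a boxed trait object, which costs an allocation and
      // dynamic dispatch. The closure's real type can never be written.
      struct Callbacks {
          on_event: Box<dyn Fn(i32) -> i32>,
      }

      fn main() {
          let add_two = adder(2);
          let cbs = Callbacks { on_event: Box::new(adder(10)) };
          println!("{} {}", add_two(1), (cbs.on_event)(1));
      }
      ```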

      And then that doing asynchronous programming is fundamentally equivalent to making a bunch of closures (because async stuff ends up being some form of CPS transform). I definitely agree that this point could have been proved more conclusively – but then again, the “what colour is your function” post already talked about that a lot.

      > I think there are some legitimate complaints about async rust, but these aren’t them.

      I mean, yeah, I could definitely have talked more about the 2021 async library ecosystem specifically – the 2017 post did more of that (but, uh, for 2017). This was an attempt to try and identify some sort of root cause for all of the ecosystem instability & churn (in the spirit of this recent Lobsters comment by @spacejam), instead of fixating on specific issues that will probably go away (and be replaced with other similar issues >_<)


      I do actually write async Rust code for my day job (and have been using async stuff since it came out in 2016/7). As I attempted to say, it’s not terrible – but I do think it’s all built on rather precarious foundations, and that leads to a fair deal of instability.

      1. 3

        > I do actually write async Rust code for my day job

        Hm, that’s weird. Why do you have to do async then? Can’t you just stick with threads?

        1. 3

          It wasn’t my choice; the work codebase had already started using it in earnest before I joined :)

          (which is ok – I wouldn’t have used it myself due to the ecosystem volatility and pain points, but doing it as a team at work isn’t that bad, and it’s not just me who has to maintain it when it breaks)

          1. 2

            > Hm, that’s weird. Why do you have to do async then? Can’t you just stick with threads?

            Async and threads don’t really solve the same problems and aren’t interchangeable. The alternative to async would more likely be to hand-roll a reactor loop.

            1. 2

              To quote the original article:

              > Was spinning up a bunch of OS threads not an acceptable solution for the majority of situations?

              I tend to agree that “just use threads” would probably be a solution for many problems. Though “just” might mean that someone needs to write an epoll loop once to handle http and offload complete requests/responses to a thread pool.
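
              To give a feel for what that once-written loop involves, here’s roughly its shape with the mio crate (a sketch, not a working server; connection state and the thread-pool hand-off are elided):

              ```rust
              use mio::net::TcpListener;
              use mio::{Events, Interest, Poll, Token};

              const SERVER: Token = Token(0);

              fn main() -> std::io::Result<()> {
                  let mut poll = Poll::new()?;
                  let mut events = Events::with_capacity(128);
                  let mut listener = TcpListener::bind("127.0.0.1:8080".parse().unwrap())?;
                  poll.registry()
                      .register(&mut listener, SERVER, Interest::READABLE)?;

                  loop {
                      poll.poll(&mut events, None)?;
                      for event in events.iter() {
                          match event.token() {
                              SERVER => {
                                  // Accept and register the new connection; real code
                                  // buffers bytes into a complete request, hands it to
                                  // a thread pool, and writes the response back when
                                  // the socket is writable again.
                                  let (_conn, _addr) = listener.accept()?;
                              }
                              _ => { /* drive per-connection state machines here */ }
                          }
                      }
                  }
              }
              ```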

              1. 2

                Using OS threads in this way generally doesn’t scale well. We already know this from years of experience in many other languages. In concrete terms, to avoid terminology issues: if the codebase were an HTTP server handling 10K concurrent TCP connections on a machine with 32 hardware CPU cores, a thread per TCP connection (10K threads) isn’t a great way to scale. A thread per CPU core dedicated to async TCP connection handling (~32 threads, each handling ~312 connections with some kind of nonblocking event loop, or async/await, which is just the fashionable way to factor event-loop-driven programs now) is a great way to scale.
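
                As a sketch of that second shape, using Tokio (the crate and the numbers are mine, purely illustrative):

                ```rust
                use tokio::io::{AsyncReadExt, AsyncWriteExt};

                fn main() {
                    // A fixed worker pool, ~one thread per hardware core.
                    let rt = tokio::runtime::Builder::new_multi_thread()
                        .worker_threads(32)
                        .enable_all()
                        .build()
                        .unwrap();
                    rt.block_on(async {
                        let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
                        loop {
                            let (mut conn, _) = listener.accept().await.unwrap();
                            // Each connection becomes a cheap task multiplexed onto
                            // the fixed worker pool, not its own OS thread.
                            tokio::spawn(async move {
                                let mut buf = [0u8; 1024];
                                while let Ok(n) = conn.read(&mut buf).await {
                                    if n == 0 {
                                        break;
                                    }
                                    if conn.write_all(&buf[..n]).await.is_err() {
                                        break;
                                    }
                                }
                            });
                        }
                    });
                }
                ```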

                1. 6

                  > We already know this from years of experience in many other languages.

                  I’d like to challenge this: people have been writing web apps with Django & Rails for years, and that works. async/await can simultaneously be a tremendous help for niche use-cases and unnecessary for the majority of use-cases.

                  I haven’t seen a conclusive benchmark that says: “if you are doing HTTP with a database, async/await needs X times less CPU and Y times less RAM”. Until earlier this year, I hadn’t even seen a conclusive micro-benchmark comparing threads and async/await (there’s https://github.com/jimblandy/context-switch now). I don’t agree that there’s anything “we know” here: I don’t know, and I’ve been regularly re-researching this topic for several years.

                  EDIT: to put some numbers into discussion, 10k threads on Linux give on the order of 100MB overhead: https://github.com/matklad/10k_linux_threads.

                  1. 1

                    My point isn’t so much about the numbers themselves or some particular 10K limit. We could re-make the argument at some arbitrary, higher threshold. The point is about how the SMP machines of today work, and how synchronization primitives and context-switching and locking work, etc. In general, irrespective of programming language, thread-per-cpu scales better at the limit than thread-per-connection (or substitute “connection” for some other logical object which can get much larger than the cpu count as you try to scale up).

                    Obviously, if one doesn’t care about scalability, then none of this debate matters. You can always claim that the problems you care about or are working on simply don’t stretch current hardware’s capabilities and you don’t care, but that’s kind of a cop-out when discussing language features built to enable concurrency in the first place.

                    1. 1

                      I do think that specific thresholds matter; the words “majority of situations” / “many problems” are important.

                      Of course you can do more connections per cycle when you go down the levels of abstraction, from OS threads to stackful coroutines to stackless coroutines to manually coded state machines to custom hardware.

                      The question, for a specific application, is where the threshold lies at which this stops being important and is dominated by other performance concerns. It doesn’t make sense to make the network layer infinitely scalable if the bottleneck is the database you use.

                      The current fashion is to claim that you always need async, at least when doing web stuff, and that threads are just never good enough. That’s a discussion about culture, not a discussion about a language feature.

                      Language-feature-wise, I would lament the absence of other kinds of benchmarks: how dynamically-dispatched, match-based resumption compares to a hand-coded event loop, how thread-per-core works with mandatory synchronized wake-ups, etc.

                  2. 2

                    C10K has been solved for years. I mean, you’re not wrong that having a full thread stack for every connection is expensive, but 10k connections isn’t the point at which thread-per-connection breaks.

        2. 26

          The “What color is your function” article doesn’t really apply to Rust. Unlike JS, Rust can wait for async functions in a blocking context, and can run thread-blocking code in an async context. There are block_on(async {}) and spawn_blocking(sync).await. So the “color” of functions only requires some fiddling with runtimes to jump between sync and async, but doesn’t split the program into two separate worlds.
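
          A minimal sketch of that color-mixing, using Tokio’s API (other runtimes have equivalents):

          ```rust
          use std::time::Duration;

          fn main() {
              // Sync code can drive an async future to completion...
              let rt = tokio::runtime::Runtime::new().unwrap();
              rt.block_on(async {
                  // ...and async code can push blocking work onto a dedicated
                  // thread pool and await the result.
                  let sum = tokio::task::spawn_blocking(|| {
                      std::thread::sleep(Duration::from_millis(10)); // blocking is fine here
                      2 + 2
                  })
                  .await
                  .unwrap();
                  println!("{}", sum);
              });
          }
          ```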

          The author also dismisses async fn as merely equivalent to closures, but that’s not the case in Rust. In Rust, async fn allows self-referential types, which from a memory-management perspective is more flexible than closures. The older callback-based futures in Rust required closures to use Arc<Mutex> instead of references, which was exactly the problem the author has with “radioactive” borrowing in closures.
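
          A tiny illustration of the self-referential part (my example; sleep stands in for real I/O):

          ```rust
          use std::time::Duration;

          // The borrow of `data` lives across an .await: the compiler-generated
          // future stores both the Vec and the reference into it, i.e. it is
          // self-referential. Expressing this with futures-0.1-style closures
          // required Arc<Mutex<_>> or similar.
          async fn demo() -> usize {
              let data = vec![1, 2, 3];
              let slice = &data[..];
              tokio::time::sleep(Duration::from_millis(1)).await;
              slice.len()
          }

          fn main() {
              let n = tokio::runtime::Runtime::new().unwrap().block_on(demo());
              assert_eq!(n, 3);
          }
          ```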

          IMHO async/await in Rust is really well done. I’m talking from experience:

          • I’ve migrated a 20K-line Rust project from sync thread-blocking to async. The fact that Rust allows mixing “colors” of functions allowed a gradual change. The async version handles timeouts and aborts much better (it’s an under-appreciated aspect of Rust’s async model), and it’s easier to control concurrency of both CPU-bound and network-bound tasks.

          • I’ve migrated another 15K-line project from closures-based futures 0.1 to async/await syntax. It significantly simplified the code, avoiding the rightward drift of closures and a ton of Arc wrappers. The dance with state machines, callbacks, and Either wrapper types could be replaced with just basic for and if statements. The project is very performance-critical, and Rust has been fast and stable.

          So overall I’m very happy with async Rust. You say it doesn’t work, I say it goes brrrrrrr.

          1. 15

            This matches my experience.

            The promise was a language without non-ref-counting GC, with straightforward async support, and that hasn’t been delivered. I mean, it’s fine so long as you don’t use traits, which is like saying C is fine if you don’t use pointers.

            1. 12

              Understanding ownership is the price of admission.

              For users who are used to GC languages, it is admittedly very hard to switch the mental model. It’s a common mistake to assume Rust’s references are a substitute for pointers/pass-by-reference semantics, and it makes people completely stuck with what seems like a language that is absolutely impossible to use.

              But once you “get” single ownership and moves, it’s really not a big deal.

              1. 10

                AFAICT people who “fight” the borrow checker want to use refs for everything. If you default to move, and clone when you need something in two places, that gets you 80%+ of the way there IME (see the sketch below). Then treat refs or fancy smart-pointer types as optimizations or for advanced cases. Over time one gets comfortable using refs for short-lived cases and such.
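
                A minimal sketch of that default (types invented for illustration):

                ```rust
                #[derive(Clone)]
                struct Config {
                    name: String,
                }

                fn consume(_cfg: Config) { /* takes ownership */ }

                fn main() {
                    let cfg = Config { name: "app".into() };
                    let copy = cfg.clone(); // needed in two places? clone one
                    consume(cfg);           // move the original
                    consume(copy);
                    // Reach for references later, as an optimization for
                    // short-lived borrows like `fn len(c: &Config) -> usize`.
                }
                ```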

                1. 1

                  Maybe those people are right? As in: refs are good enough for 80-90% of the cases and you reserve the move/clone and other fancy mechanisms for advanced cases.

                  1. 8

                    No, they really aren’t. Rust refs are temporary* scope-bound borrows. If something can’t be guaranteed to be used only temporarily, they’re not good for it. If something can’t be bound to a single scope, they’re not good for it. If something doesn’t already have an owning binding, they’re not good for it.

                    * except &'static, but that is effectively a memory leak, so it’s not a solution in general.

                    In GC languages references are ownership-agnostic and their lifetime can be extended, which makes them suitable for everything. But in the land of Rust, references are a very specific tool with narrow use-cases.

                    If you find yourself fighting with the borrow checker: use references only for function arguments, and nothing else. That guidance is about 90% accurate.
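
                    In code form (names invented), that guidance looks like:

                    ```rust
                    struct App {
                        users: Vec<String>,
                    }

                    // Good: a reference as a function argument is a short-lived,
                    // scope-bound borrow that ends when the call returns.
                    fn count_users(app: &App) -> usize {
                        app.users.len()
                    }

                    // Trouble: a reference stored in a struct ties `Stats` to the
                    // lifetime of an `App` it doesn't own, and the `'a` infects
                    // everything that touches `Stats`. Owned data (or Arc) is
                    // usually what you want here instead.
                    struct Stats<'a> {
                        app: &'a App,
                    }

                    fn main() {
                        let app = App { users: vec!["a".into()] };
                        let stats = Stats { app: &app }; // fine: `app` outlives `stats`
                        println!("{}", count_users(stats.app));
                    }
                    ```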

                    1. 1

                      I guess I should have been less terse, apologies.

                      This is obviously wrong within the constraints of the current Rust design.

                      My point was that maybe those people are right and this aspect of Rust’s design is wrong.

                    2. 5

                      That is exactly the recipe for “fighting the borrow checker”. Move is what you want more than half the time. If you use move, and clone when you can’t, the main case you can’t cover is shared mutability (which you usually want to avoid anyway). You can go very far this way.

                  2. 1

                    Understanding ownership is fine, but the issue is the way that async generates types that can’t be named. The async_trait macro is a confusing, imperfect hack, but impossible to live without in an async rust world.
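
                    For anyone who hasn’t hit it, the workaround looks like this (a sketch, assuming the async-trait crate):

                    ```rust
                    use async_trait::async_trait;

                    // Plain `async fn` in a trait doesn't compile on stable Rust,
                    // because each implementation would return its own unnameable
                    // future type. The macro papers over that:
                    #[async_trait]
                    trait Store {
                        async fn get(&self, key: &str) -> Option<String>;
                    }

                    // Under the hood it rewrites `get` to return a
                    // Pin<Box<dyn Future<Output = Option<String>> + Send + '_>>,
                    // which works, but costs an allocation and hides machinery.
                    ```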

                2. 25

                  Note a couple things:

                  • With the 2021 edition, Rust plans to change closure capturing to only capture the necessary fields of a struct when possible, which will make closures easier to work with (sketched after this list).
                  • With the currently in-the-works existential type declarations feature, you’ll be able to create named existential types which implement a particular trait, permitting the naming of closure types indirectly.
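
                  A sketch of the capture change (struct and fields invented):

                  ```rust
                  struct State {
                      counter: i32,
                      log: Vec<String>,
                  }

                  fn main() {
                      let mut s = State { counter: 0, log: Vec::new() };

                      // Today (2018 edition) the closure captures all of `s`, so the
                      // later use of `s.log` is a borrow-check error. Under the
                      // planned 2021 rules it captures only `s.counter`, and this
                      // compiles.
                      let mut bump = || s.counter += 1;
                      s.log.push("hello".to_string());
                      bump();
                  }
                  ```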

                  My general view is that some of the ergonomics here can currently be challenging, but there exist patterns for writing this code successfully, and the ergonomics are improving and will continue to improve.

                  1. 12

                    I have very mixed feelings about stuff like this. On the one hand, these are really cool (and innovative – not many other languages try and do async this way) solutions to the problems async Rust faces, and they’ll definitely improve the ergonomics (like async/await did).

                    On the other hand, adding all of this complexity to the language makes me slightly uneasy – it kind of reminds me of C++, where they just keep tacking things on. One of the things I liked about Rust 1.0 was that it wasn’t incredibly complicated, and that simplicity somewhat forced you into doing things a particular way.

                    Maybe it’s for the best – but I really do question how necessary all of this async stuff really is in the first place (as in, aren’t threads a simpler solution?). My hypothesis is that 90% of Rust code doesn’t actually need the extreme performance optimizations of asynchronous code, and will do just fine with threads (and for the remaining 10%, you can use mio or similar manually) – which makes all of the complexity hard to justify.

                    I may well be wrong, though (and, to a certain extent, I just have nostalgia from when everything was simpler) :)

                    1. 9

                      I don’t view either of these changes as much of a complexity add. The first, improving closure capturing, works to me like the partial moves and disjoint borrows the language already supports, making it more logically consistent, not less. For the second, Rust already has existential types (impl Trait). This is enabling them to be used in more places. They work the same in all places, though.

                      1. 11

                        I’m excited about the extra power being added to existential types, but I would definitely throw it in the “more complexity” bin. AIUI, existential types will be usable in more places, but it isn’t like you’ll be able to treat them like any other type. They’re this special separate thing that you have to learn about for its own sake, but also in how they’re expressed in the language proper.

                        This doesn’t mean it’s incidental complexity or that the problem isn’t worth solving or that it would be best solved some other way. But it’s absolutely extra stuff you have to learn.

                        1. 2

                          Yeah, I guess my view is that the challenge of “learn existential types” is already present with impl Trait, but you’re right that making the feature usable in more places increases the pressure to learn it. Coincidentally, the next post for Possible Rust (“How to Read Rust Functions, Part 2”) includes a guide to impl Trait / existential types intended to be a more accessible alternative to Varkor’s “Existential Types in Rust.”

                          1. 6

                            > but you’re right that making the feature usable in more places increases the pressure to learn it

                            And in particular, by making existential types more useful, folks will start using them more. Right now, for example, I would never use impl Trait in a public API of a library unless it was my only option, due to the constraints surrounding it. I suspect others share my reasons too. So it winds up not getting as much visible use as maybe it will get in the future. But time will tell.

                        2. 2

                          eh, fair enough! I’m more concerned about how complex these are to implement in rustc (slash alternative compilers like mrustc), but what do I know :P

                          1. 7

                            We already use this kind of analysis for splitting borrows, so I don’t expect this will be hard. I think rustc already has a reusable visitor for this.

                            (mrustc does not intend to compile newer versions of rust)

                            1. 1

                              I do think it is the case that implementation complexity is ranked unusually low in Rust’s design decisions, but if I think about it, I really can’t say it’s the wrong choice.

                          2. 3

                            Definitely second your view here. The added complexity and the trajectory mean I don’t feel comfortable using Rust in a professional setting anymore. You need significant self-control to write maintainable Rust; that’s not a good fit for large teams.

                            What I want is Go-style focus on readability, pragmatism and maintainability, with a borrow checker. Not ticking off ever-growing feature lists.

                            1. 9

                              The problem with a Go-style focus here is: what do you remove from Rust? A lot of the complexity in Rust is, IMO, necessary complexity given its design constraints. If you relax some of its design constraints, then it is very likely that the language could be simplified greatly. But if you keep the “no UB outside of unsafe and zero cost abstractions” goals, then I would be really curious to hear some alternative designs. Go more or less has the “no UB outside of unsafe” (sans data races), but doesn’t have any affinity for zero cost abstractions. Because of that, many things can be greatly simplified.

                              > Not ticking off ever-growing feature lists.

                              Do you really think that’s what we’re doing? Just adding shit for its own sake?

                              1. 8

                                > Do you really think that’s what we’re doing?

                                No, and that last bit of my comment is unfairly attributed venting, I’m sorry. Rust seemed like the holy grail to me, I don’t want to write database code in GCd languages ever again; I’m frustrated I no longer feel confident I could corral a team to write large codebases with Rust, because I really, really want to.

                                I don’t know that my input other than as a frustrated user is helpful. But I’ll give you two data points.

                                One: I’ve argued inside my org - a Java shop - to start doing work in Rust. The learning curve of Rust is a damper. To me and my use case, the killer Rust feature would be reducing that learning curve. So, what I mean by “Go-style pragmatism” is things like append.

                                Rather than say “Go does not have generics, we must solve that to have append”, they said “lets just hack in append”. It’s not pretty, but it means a massive language feature was not built to hammer that one itch. If “all abstractions must have zero cost” is forcing introduction of language features that in turn make the language increasingly hard to understand, perhaps the right thing to do is, sometimes, to break the rule.

                                I guess this is exactly what you’re saying, and I guess - from the outside at least - it certainly doesn’t look like this is where things are headed.

                                Two, I have personal subjective opinions about async in general, mostly as it pertains to memory management. That would be fine and well, I could just not use it. But the last few times I’ve started new projects, libraries I wanted to use had abandoned their previously synchronous implementations and gone all-in on async. In other words, introducing async forked the crate community, leaving - at least from my vantage point - fewer maintained crates on each side of the fork than there were before the fork.

                                Two being there as an anecdote from me as a user, i.e. my experience of async was that it (1) makes the Rust hill even steeper to climb, regressing on the no. 1 problem I have as someone wanting to bring the language into my org; (2) it forked the crate community, such that the library ecosystem is now less valuable. And, I suppose, (3) it makes me worried Rust will continue adding very large core features, further growing the complexity and thus making it harder to keep a large team writing maintainable code.

                                1. 8

                                  > the right thing to do is, sometimes, to break the rule.

                                  I would say that the right thing to do is to just implement a high-level language without obvious mistakes. There shouldn’t be a reason for a Java shop to even consider Rust, it should be able to just pick a sane high-level language for application development. The problem is, this spot in the language landscape is currently conspicuously void, and Rust often manages to squeeze in there despite the zero-cost abstraction principle being antithetical to app dev.

                                  That’s the systems dynamic that worries me a lot about Rust: there’s a pressure to make it a better language for app development at the cost of making it a worse language for systems programming, for the lack of an actual reasonable app-dev language one can use instead of Rust. I can’t say it is bad. Maybe the world would be a better place if we had just a “good enough” language today. Maybe the world would be better if we wait until an “actually good” language is implemented.

                                  So far, Rust resisted this pressure successfully, even exceptionally. It managed to absorb “programmers can have nice things” properties of high level languages, while moving downwards in the stack (it started a lot more similar to go). But now Rust is actually popular, so the pressure increases.

                                  1. 4

                                    I mean, we are a java shop that builds databases. We have a lot of pain from the intersection of distributed consensus and GC. Rust is a really good fit for a large bulk of our domain problem - virtual memory, concurrent B+-trees, low latency networking - in theory.

                                  2. 4

                                    I guess personally, I would say that learning how to write and maintain a production quality database has to be at least an order of magnitude more difficult than learning Rust. Obviously this is an opinion of mine, since everyone has their own learning experiences and styles.

                                    As to your aesthetic preferences, I agree with them too! It’s why I’m a huge fan of Go. I love how simple they made the language. It’s why I’m also watching Zig development closely (I was a financial backer for some time). Zero-cost abstraction (or, in Zig’s case, memory safety at compile time) isn’t a necessary constraint for all problems, so there’s no reason to pay the cost of that constraint in all cases. This is why I’m trying to ask how to make the design simpler. The problem with breaking the zero-cost abstraction rule is that it will invariably become a line of division: “I would use Rust, but since Foo is not zero cost, it’s not appropriate to use in domain Quux, so I have to stick with C or C++.” It’s antithetical to Rust’s goals, so it’s really really hard to break that rule.

                                    I’ve written about this before, but just take generics as one example. Without generics, Rust doesn’t exist. Without generics (and, specifically, monomorphized generics), you aren’t able to write reusable high performance data structures. This means that when folks need said data structures, they have to go implement them on their own. This in turn likely increases the use of unsafe in Rust code and thus significantly diminishes its value proposition.

                                    Generics are a significant source of complexity. But there’s just no real way to get rid of them. I realize you didn’t suggest that, but you brought up the append example, so I figured I’d run with it.

                                    1. 2

                                      > I guess personally, I would say that learning how to write and maintain a production quality database has to be at least an order of magnitude more difficult than learning Rust.

                                      Agree, but precisely because it’s a hard problem, ensuring everything else reduces mental overhead becomes critical, I think.

                                      If writing a production db is 10x as hard as learning Rust, but reading Rust is 10x as hard as reading Go, then writing a production grade database in Go is 10x easier overall, hand wavingly (and discounting the GC corner you now have painted yourself into).

                                      One thing worth highlighting is why we’ve managed to stick to the JVM, and where it bites us: most of the database is boring, simpleish code. Hundreds of thousands of LOC dealing with stuff that isn’t on the hot-path. In some world, we’d have a language like Go for writing all that stuff - simple and super focused on maintainability - and then a way to enter “hard mode” for writing performance critical portions.

                                      Java kind of lets us do that; most of the code is vanilla Java; and then critical stuff can drop into unsafe Java, like in the virtual memory implementation. The problem with that in the JVM is that the inefficiency of vanilla Java causes GC stalls in the critical code.. and that unsafe Java is horrifying to work with.

                                      But, of course, then you need to understand both languages as you read the code.

                                  3. 4

                                    I think the argument is to “remove” async/await. Neither C nor C++ has async/await, and people write tons of event-driven code with them; they’re probably the pre-eminent languages for that. My bias for servers is to have small “manual” event loops that dispatch to threads.

                                    You could also write Rust in a completely “inverted” style like nginx (I personally dislike that, but some people have a taste for it; it’s more like “EE code” in my mind). The other option is code generation which I pointed out here:

                                    https://lobste.rs/s/rzhxyk/plain_text_protocols#c_gnp4fm

                                    Actually that seems like the best of both worlds to me. High level and event driven/single threaded at the same time. (BTW the video also indicated that the generated llparse code is 2x faster than the 10 year old battle-tested, hand-written code in node.js)

                                    So basically it seems like you can have no UB and zero cost abstractions, without async/await.

                                    1. 5

                                      After our last exchange, I don’t really want to get into a big discussion with you. But I will say that I’m quite skeptical. The async ecosystem did not look good prior to adding async/await. By that, I mean, that the code was much harder to read. So I suppose my argument is that adding some complexity to language reduces complexity in a lot of code. But I suppose folks can disagree here, particularly if you’re someone who thinks async is overused. (I do tend to fall into that camp.)

                                      1. 2

                                        Well it doesn’t have to be a long argument… I’m not arguing against async/await, just saying that you need more than 2 design constraints to get to “Rust should have async/await”. (The language would be a lot simpler without it, which was the original question AFAICT.)

                                        You also need:

                                        3. “ergonomics”, for some definition of it
                                        4. textual code generation is frowned upon

                                        Without constraint 3, Rust would be fine with people writing nginx-style code (which I don’t think is a great solution).

                                        Without constraint 4, you would use something like llparse.

                                        I happen to like code generation because it enables code that’s extremely high level and efficient at the same time (like llparse), but my experience with Oil suggests that most “casual” contributors are stymied by it. It causes a bunch of problems in the build system (build systems are broken and that’s a different story). It also causes some problems with tooling like debuggers and profilers.

                                        But I think those are fixable with systems design rather than language design (e.g. the C preprocessor and JS source maps do some limited things).

                                        On a related note, Go’s philosophy is to fix unergonomic code with code generation (“go generate” IIRC). There are lots of things that are unergonomic in Go, but code generation is sort of your “way out” without complicating the language.

                              2. 1

                                I didn’t know about the first change. That’s very exciting! This is definitely something that bit me when I was first learning the language.

                              3. 13

                              Damn, this describes very well the sentiment I’ve been having in my spine towards Rust for the last few years. It kinda seems like async was its shark-jump moment.

                                (Common Lisp is pretty nice, though. We have crazy macros and parentheses and a language ecosystem that is older than I am and isn’t showing any signs of changing…)

                                Something is subtly wrong about Common Lisp as well. On paper, it seems like a perfect ecosystem for 99% of the problems out there. It has an unsurpassed runtime, pretty good performance (on SBCL) for such a dynamic language and better stability than anything. What’s wrong with it is that nobody’s using it despite these virtues.

                                1. 5

                                  If I’d hazard a guess, it’s probably because CL never really figured out its deployment story (and it doesn’t really integrate well with its environment in other respects because the spec is so generic). Going from “runs inside my Emacs/SLIME setup” to “runs in production” is surprisingly non-trivial (as opposed to golang where you can scp a static binary).

                                  There are other reasons, though (see: the Lisp Curse, “worse is better”, etc).

                                2. 8

                                  I feel that this is particularly blistering to read because of its nature. It’s not a statement about Rust’s maturity or even ecosystem fragmentation so much as it’s an argument against design decisions the author feels to be flawed. Regarding whether the critique in the article holds water, I might need some help deciding.

                                  On a personal note, I am more excited about the Bastion and Riker projects than I am moved to optimism by async and .await.

                                  1. 6

                                    While the Rust async ecosystem will never be able to match the sort of built-in seamless ergonomics of something like Haskell, I never found it to be any worse than what I’m used to in, say, Ruby. Note that all my Rust async experience predates async/await, so I’m not sure what’s up with that side of things.

                                    1. 6

                                      async/await has changed things pretty substantially, but generally for the better.

                                    2. 2

                                      Closures make capturing a reference to a chunk of memory easy (so easy it can be accidental/unnoticed).

                                      Async code really wants ergonomic closures as callbacks.

                                      This means idiomatic async code tends to promiscuously grab references to data in scope.

                                      In rust, this causes appropriate lifetime enforcement (the “radioactivity” from the article).

                                      In a reference counted GC language, it can easily cause reference cycles (and hence memory leaks).

                                      You can break cycles with weakrefs, just as you can manage lifetimes in rust. But the combination of “easy capture” and “short, inline closures are common in async code as callbacks” is perhaps the dangerous mix, if you need to do any manual memory management (break cycles, manage lifetimes).
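
                                    A small sketch of that mix in Rc-flavoured Rust (names invented); the same shape applies in any refcounted language:

                                    ```rust
                                    use std::cell::RefCell;
                                    use std::rc::{Rc, Weak};

                                    struct Widget {
                                        on_click: RefCell<Option<Box<dyn Fn()>>>,
                                    }

                                    fn main() {
                                        let w = Rc::new(Widget { on_click: RefCell::new(None) });

                                        // The easy-capture trap: `move` a strong Rc into the callback
                                        // and widget -> callback -> widget is a cycle that never frees:
                                        // let strong = Rc::clone(&w);
                                        // *w.on_click.borrow_mut() = Some(Box::new(move || { let _ = &strong; }));

                                        // Breaking it: capture a Weak and upgrade at call time.
                                        let weak: Weak<Widget> = Rc::downgrade(&w);
                                        *w.on_click.borrow_mut() = Some(Box::new(move || {
                                            if let Some(widget) = weak.upgrade() {
                                                let _ = widget; // use the widget here
                                            }
                                        }));
                                    }
                                    ```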