The correct Rust version also looks very much like the idiomatic C++. Iterator invalidation is an annoying problem and is one of the problems that the Rust borrow checker addresses.
It also looks wrong. What happens if two people shoot the same player and the first laser strike kills them? Won’t you now respawn them and then the second laser strike will reduce their health? The correct thing, I would think, would be to build a map from players to damage: iterate over the players and accumulate the damage inflicted on each, then iterate over the map and apply that damage to each target.
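A sketch of that idea in Rust (the Player fields, the damage value, and the hit_test stand-in are all illustrative, not from the original code):

```rust
use std::collections::HashMap;

struct Player {
    health: i32,
    firing: bool,
}

// Phase 1: collect damage per target without mutating anyone.
// Phase 2: apply the accumulated damage in a single pass.
fn apply_laser_damage(players: &mut Vec<Player>) {
    let mut damage: HashMap<usize, i32> = HashMap::new();
    for (shooter_idx, shooter) in players.iter().enumerate() {
        if shooter.firing {
            for target_idx in 0..players.len() {
                if target_idx != shooter_idx && hit_test(shooter_idx, target_idx) {
                    *damage.entry(target_idx).or_insert(0) += 10; // illustrative damage value
                }
            }
        }
    }
    for (target_idx, dmg) in damage {
        players[target_idx].health -= dmg; // may go negative; death is handled later
    }
}

// Stand-in for a real collision check.
fn hit_test(_shooter: usize, _target: usize) -> bool {
    true
}
```

Because no player state changes until the second pass, a player who is shot twice dies exactly once, regardless of iteration order.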
The correct Rust version also looks very much like the idiomatic C++.
This is a good observation. In my opinion, it applies in general. Idiomatic C++ looks very much like Rust in many cases.
What happens if two people shoot the same player and the first laser strike kills them? Won’t you now respawn them and then the second laser strike will reduce their health?
It depends on how the hit test is implemented. But it’s true that the player with the lower index in the vector will score the kill if two players deal a killing blow, so there’s still an index-based quirk in there. There are layers to this problem. In the interest of brevity, I omitted a lot of details. There are also issues when a single laser beam hits two enemies (it is only supposed to hit the first obstacle).
But it’s true that the player with the lower index in the vector will score the kill if two players deal a killing blow, so there’s still an index-based quirk in there
That’s not my issue. If A and B both shoot C, who is low on health, I believe the current code will do the following:
1. A will kill C.
2. C will respawn.
3. B will damage the respawned C.
That’s almost certainly not what players expect. If the damage is calculated up front and applied once then A and B will both apply damage to C, then C will die once and respawn. This also makes it easy to ensure that both get credit (or half credit) for the kill.
I don’t see how that can happen. When C respawns, he is moved out of harm’s way. B’s laser, which is evaluated later, will miss the target.

I see. I misread the code: you’re collecting lasers, not shooters. In that case, the failure mode is the one that @proctrap indicated. A shoots at C’s respawn point, B shoots at C and kills them. C will take damage after respawn if B is processed before A, but not if A is processed before B.
This corner case is avoided if you are temporarily invulnerable after respawn, but having to think about these corner cases at all is avoided if you process hits and damage first and then process deaths. If your first loop applies damage (allowing hit points to go negative, and adding a player to a dead-players collection once the accrued damage makes their hit point counter stop being positive) and the second loop handles respawns and reports who shot the killed players, then all of the corner cases are resolved naturally and appear as obvious choices in the code.
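In sketch form (the Player fields here, including pending_damage, are assumptions, not the original code):

```rust
struct Player {
    health: i32,
    pending_damage: i32,
}

// Phase 1: apply accrued damage, letting health go negative, and
// remember who died this tick.
// Phase 2: respawn the dead (kill-credit reporting would go here too).
// Returns the indices of the players who died, for bookkeeping.
fn process_tick(players: &mut [Player]) -> Vec<usize> {
    let mut dead = Vec::new();
    for (idx, p) in players.iter_mut().enumerate() {
        p.health -= p.pending_damage;
        p.pending_damage = 0;
        if p.health <= 0 {
            dead.push(idx);
        }
    }
    for &idx in &dead {
        players[idx].health = 100; // respawn with full health
    }
    dead
}
```

A player who takes two killing blows in one tick simply accumulates both into pending_damage, dies once, and respawns once.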
What happens if two people shoot the same player and the first laser strike kills them? Won’t you now respawn them and then the second laser strike will reduce their health?
Sounds like a perfect glitch for spawncamping.

The other day I was playing a multiplayer FPS (a Quake 3 mod, incidentally) and I killed an enemy on what must have been the same frame as they switched to my team, causing them to instantly respawn somewhere not at all where I was shooting.

The UI showed me killing them while they were on my team. I’m not sure if it added a death to their score and started a respawn timer (the latter, at least, should never happen after switching teams), but it wouldn’t surprise me if it did.
While I’m a big fan of Rust and the borrow checker, this isn’t necessarily a logic bug. You might have reasonably wanted players earlier in the list to be able to kill players later in the list but not vice versa. Changing your code to satisfy the borrow checker just revealed that you didn’t want the previous behavior.
It’s also actually fairly trivial to satisfy the borrow checker in the original case, precisely because you’re using a sequentially indexed collection. Instead of:
for (shooter_idx, shooter) in players.iter().enumerate() {
    if shooter.firing {
        for (other_idx, other) in players.iter_mut().enumerate() {
You could have done this:
for shooter_idx in 0..players.len() {
    if players[shooter_idx].firing {
        for other_idx in 0..players.len() {
Then just modify other with players[other_idx] and the borrow checker will happily let you treat both as mutable. I removed the Copy derive which can mask subtle bugs, and changed it to modify both players in the inner loop just to show you can.
It’s not as idiomatic as the iterator solution, but it’s a trivial change that the borrow checker would not have prevented. Just worth keeping in mind because I have seen that kind of code before (and used it myself recently on a few advent of code problems when I didn’t feel like doing things the “right” way….).
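Filled out into a compilable version of that pattern (the Player fields and the heat bookkeeping are assumptions, added only to show both sides being mutated in the inner loop):

```rust
struct Player {
    health: i32,
    firing: bool,
    heat: u32, // e.g. the shooter's weapon heats up: both sides mutate
}

fn update(players: &mut Vec<Player>) {
    for shooter_idx in 0..players.len() {
        if players[shooter_idx].firing {
            for other_idx in 0..players.len() {
                if other_idx == shooter_idx {
                    continue;
                }
                // Each index expression is a short-lived borrow, so the
                // borrow checker lets us mutate both players in turn.
                players[other_idx].health -= 10;
                players[shooter_idx].heat += 1;
            }
        }
    }
}
```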
But that’s exactly right. This is the kind of error that often results from sloppy thinking about the interactions of mutable, aliased state. By highlighting it as an error, the user is made to realize their design is underspecified and select an algorithm with a more intentional choice about how to handle this edge case. It could be the algorithm you present if that is the desired behavior, the point is they don’t happen into this behavior by accident.
I think that’s sort of the point though. Rust guarantees no memory safety bugs. Rust helps avoid many other bugs, though of course you can write any logic bug that you want if you try hard enough. One way it helps you avoid those bugs is having the borrow checker push you towards code structures that are more likely to do what you want…
What would happen if the code were structured like that, as it would have been if I were using a language without a borrow checker? The player that comes first in the vector kills the player that comes later in the vector. After that, the player who was hit is either dead or has respawned somewhere else, thus he cannot retaliate in the same frame.
This sort of thing is the reason why some games and/or engines defer “destructive” state transitions – not necessarily to the end of the frame, but at the very least to the end of some particular “phase”.
I think the most common example, albeit at a different level and concerning a different kind of destructive state transition, is Unity’s Destroy behaviour. You can destroy an object whenever you want during an update cycle, but the actual destruction is deferred to after the update cycle so that anyone who still holds a reference to that object and needs things from it can still use it.
In this case: the inner collision test loop can avoid handling deaths and respawns immediately. E.g. it could do something like this:
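A minimal sketch of that idea, assuming a plain health field, a fixed damage constant, and a precomputed list of hit target indices:

```rust
const LASER_DAMAGE: i32 = 10;

struct Player {
    health: i32,
}

// The inner collision-test loop only records damage; it never respawns
// anyone, even if health reaches zero.
fn collision_pass(players: &mut [Player], hits: &[usize]) {
    for &idx in hits {
        // Clamp at zero so the later `health == 0` sweep finds the dead.
        players[idx].health = (players[idx].health - LASER_DAMAGE).max(0);
    }
}
```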
and then, after you’ve processed all relevant state, you start collecting dead players – most trivially with something like this, but obviously if you expect to have hundreds of them you probably want some other way to keep tabs:
for (idx, player) in players.iter().enumerate() {
    if player.health == 0 {
        // Handle death, schedule respawn for `idx`, etc.
    }
}
(edit: for a cleaner functionally equivalent variant in a modern setting, see this suggestion; my retro platform background is showing at the seams here – if I read about lasers and ships I’m going to write NES code no matter what :-D).
This lets you avoid the extra lasers vector, at least in this restricted case, although I don’t think it helps with the borrow checker :-).
In “meta” terms, both achieve practically the same thing. Reaping and respawning players after the shooting check loop means their state is reliably sampled throughout that loop. Reaping and respawning them inside the loop means you’re messing with the sampled state while depending on it being constant; saving the lasers state fixes that by keeping a copy of the state subset that you actually need.
Exactly, I ended up implementing a struct DeathEvent to handle deaths at the right point in time during tick updates. There’s quite a bit of bookkeeping to be done when a player dies.
Regarding your example with reaping the players later when their health is 0, such a solution would still lead to index-based artifacts in some cases, e.g. when you have weapons that deal your current health points as damage to the enemy. In such a case, the player who comes first in the vector has an advantage, as he will reduce the other player’s health, and thus damage, first.
Right – I wanted to mention that, but it was already kind of a long post and it was a little beside the point anyway. In your example the extra lasers vector feels artificial, but that’s mostly because, without ballistic trajectories, delays or anything of the sort, there’s basically no actual state to the lasers, so it feels like you’re duplicating some player state just to keep the borrow checker happy.
But I’d guess that’s going to be less of a concern for other projectile types, which have state of their own, which you can use to “capture” relevant state. E.g. in this case you wouldn’t look at the shooting player to see how many HPs they have and deal that amount of damage; you’d save that as part of the projectile’s state, precisely so that amending player state (which is kind of expected when they’re shooting at each other :-) ) doesn’t mess up bullet state.
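For instance (a sketch; the Projectile type and its fields are assumptions made up for illustration):

```rust
struct Player {
    health: i32,
}

// The projectile samples the relevant player state at fire time, so
// later changes to the shooter can't affect shots already in flight.
struct Projectile {
    shooter_idx: usize,
    damage: i32,
}

fn fire(players: &[Player], shooter_idx: usize) -> Projectile {
    Projectile {
        shooter_idx,
        // Capture the shooter's current health as the damage dealt
        // (the "deal your current HP as damage" weapon from above).
        damage: players[shooter_idx].health,
    }
}
```

Once the state is captured, mutating the shooter afterwards has no effect on the projectile.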
That’s one of the reasons why I mentioned sampling in my last paragraph. It’s a very common pitfall, and not restricted to gamedev at all. For some reason, our mental model of a loop’s behaviour is tied to it being a reliable state sampling, even when we’re literally changing state all over its body :-).
Deferred destruction solves the memory management problem, but it creates edge cases in the game logic.
For example, if there was an object that received huge damage and should have disintegrated, that will be deferred to the next frame/stage, so while it’s destroyed-but-not-removed it may keep taking more damage, and act as an invincible wall.
Another example: when an entity has been replaced by a new one, now you have two of them existing at the same time. If you’ve scheduled-to-delete 10 silver coins and created 1 gold coin, now some logic that counts the coins may see double for a frame.
This is even more painful when the entities form some graph/hierarchy, because the Schrödinger’s entities may break traversal of that graph or break its invariants. Bevy’s Parent/Child entities were almost unusable because of that, because depending on the order of the systems you could have “despawn all children of a despawned parent” system run before “add a new child to a modified parent” system.
That’s why I mentioned this is at a different level and concerning a different kind of destructive state transitions :). I’ve never used Bevy, so I’m not sure if it suffers from these problems, but all of these sound like symptoms of an incorrect game/world model, and are related to when you actually destroy objects in memory only insofar as your model conflates destruction from the game world with destruction from memory. That can be a good choice, I’m not here to argue against it on principle – I’m sure its trade-offs are worth it under some circumstances.
Edit: FWIW, it’s a little hard to explain bugs in hypotheticals, because the bugs (and the devil) are usually in the details. Your second example is a good illustration: why would you schedule the deletion of some coins, when you can (hopefully?) deduct or add money straight away? And regardless, if you do, why are you counting coins that are scheduled for deletion in the first place?
Instead of easily countable coins imagine traded gemstones, or used crafting materials, or quest items exchanged with an NPC.
Why count items scheduled for deletion? Because the entities still exist in the storage holding them, so you find them when you query/iterate them. The commands to delete them wait in another queue to be executed.
In a heavily multi-threaded ECS it can’t be solved by having an .is_deleted flag on an entity, because multiple systems can operate on different sets of components of the same entities at the same time. The damage-applying system may be running in parallel with the inventory-management and collision systems. Even if you used an atomic flag, it’d race between systems, and you’d have even harder non-deterministic edge cases.

To solve that reliably you must create an ordering between systems, so that the damage-applying system runs before the others. Once you have such a serialization point, you may as well execute the command queue there and delete the entities for real.
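A toy version of that pattern (the names are illustrative and not Bevy’s actual API):

```rust
// Systems push commands instead of mutating the world directly; the
// queue is drained at a single, well-ordered point in the frame.
enum Command {
    Despawn(usize),
}

struct World {
    entities: Vec<Option<&'static str>>,
    queue: Vec<Command>,
}

impl World {
    // Called from systems: only records the intent to despawn.
    fn despawn_later(&mut self, idx: usize) {
        self.queue.push(Command::Despawn(idx));
    }

    // The serialization point: execute all pending commands for real.
    fn flush(&mut self) {
        for cmd in self.queue.drain(..) {
            match cmd {
                Command::Despawn(idx) => self.entities[idx] = None,
            }
        }
    }
}
```

Until flush() runs, queries still see the scheduled-for-deletion entities, which is exactly the double-counting hazard described above.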
Like I said above: I’m sure these trade-offs are worth it under some circumstances. In a heavily multi-threaded ECS, sure, concurrent access arbitration can introduce significant cost, and maybe some correctness problems, under some circumstances, especially coupled with an incomplete understanding of the world model, or with outright incomplete/incorrect modelling. That’s why virtually every system that supports deferred destruction also supports immediate destruction. However, I’m going to go out on a limb here and guess that OP’s game, like many other games (and, of course, unlike many others), isn’t a heavily multi-threaded ECS.
Funnily enough, the title led me to believe that this is one of those complaints against the borrow checker that deems it worse than a nuisance. Good to know it’s otherwise :)