This is an interesting paper. If the authors are reading, thanks for looking into this problem; this is really cool. I’m just picking it apart because this is something that I’ve studied as well. There’s definitely something here.
For example, young players and novices may subjectively experience a great deal of depth where expert players find little.
This is interesting, but I would expect that the opposite occurs just as often. For example, if you’re a weak poker player, you probably think it’s mostly luck: you lost because you got bad cards. It takes a certain amount of subtlety to recognize the parts of the game that are skillful (and the parts that are uncontrollable and not worth worrying about).
To simplify matters we will limit ourselves throughout this paper to looking at two-player, turn-based games; the term “game” in this paper refers to this category.
It’s actually very hard to define luck vs. skill for a game with 3 or more players. You have unpredictable strategic interactions. This can happen in 2-player games with simultaneous play (a single iteration of Rock, Paper, Scissors is effectively random, even though there’s no randomizing element) but it becomes a lot more common with 3 or more players.
I created a card game, Ambition, that has almost no card-luck (it’s about 4% of the total variance in final scores, which is low for a trick-taking game unless you duplicate it, as in Bridge) but there’s still a fair amount of “luck” in the unpredictable interactions that come from others’ strategic choices (or mistakes) affecting players in unequal ways.
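One way to put a number on “card-luck” is a duplicate-style variance decomposition: replay each deal at several tables and ask how much of the score variance the deal itself explains. Here’s a minimal Monte Carlo sketch of that idea; the effect sizes are made-up stand-ins, not the real Ambition data behind the 4% figure.

```python
import random
import statistics

def simulate_scores(n_deals=2000, n_tables=8, deal_sd=0.2, other_sd=1.0):
    """Duplicate-style data: each deal is replayed at several tables.
    Score = deal effect (card luck) + everything else (skill, interaction).
    The effect sizes are invented for illustration."""
    scores = []
    for _ in range(n_deals):
        deal_effect = random.gauss(0, deal_sd)
        scores.append([deal_effect + random.gauss(0, other_sd)
                       for _ in range(n_tables)])
    return scores

def card_luck_share(scores):
    """Fraction of total score variance attributable to the deal,
    via a one-way ANOVA-style variance-components estimate."""
    n_tables = len(scores[0])
    within = statistics.fmean(statistics.pvariance(d) for d in scores)
    between = statistics.pvariance([statistics.fmean(d) for d in scores])
    deal_var = max(between - within / n_tables, 0.0)  # unbias the between-deal term
    return deal_var / (deal_var + within)

random.seed(0)
share = card_luck_share(simulate_scores())  # ~0.04 with these made-up effect sizes
```

The duplication is the key trick: without replaying the same deal at multiple tables, the deal effect and everything else are confounded.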
Knowing whether a game is in PSPACE or EXP, for example, doesn’t give us enough detail about the game to help us distinguish it from the myriad other games in the same category.
It’s surprising to me, actually, how often classic games turn out to be NP-complete (or harder) problems in disguise. Asymptotic computational complexity is irrelevant (e.g. I don’t care what O(f(N)) Bridge is when generalized to N decks, because it’s never played with more than 1) and yet there seems to be a soft connection.
The size of a game’s skill chain is the number of distinct steps in the ranking of all players, where players at each step beat all the players at lower steps some significant percentage of the time.
Another measure that works great for 2-player games but starts to be incomplete at 3+. It also occasionally exhibits non-transitivity even for 2 players (A might beat B 60% of the time, while B beats C 60% of the time, and C beats A 60% of the time), but it’s far better than nothing.
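That 60/60/60 cycle isn’t hypothetical; you can build it out of dice. This is a classic intransitive-dice construction (nothing to do with the paper), checked exactly:

```python
from itertools import product
from fractions import Fraction

# One classic set of intransitive dice (the face values are a standard construction).
dice = {
    "A": (2, 2, 4, 4, 9, 9),
    "B": (1, 1, 6, 6, 8, 8),
    "C": (3, 3, 5, 5, 7, 7),
}

def win_prob(x, y):
    """Exact P(die x rolls strictly higher than die y)."""
    wins = sum(a > b for a, b in product(dice[x], dice[y]))
    return Fraction(wins, len(dice[x]) * len(dice[y]))

# Each die beats the next with probability 5/9: A > B > C > A.
cycle = {(x, y): win_prob(x, y) for x, y in [("A", "B"), ("B", "C"), ("C", "A")]}
```

So any single ranking number (Elo included) has to flatten cycles like this one.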
Of course, it depends on emergent behavior and even cultural factors (and, more objectively, what is defined as the player population… since it’s theoretically possible that there are 28 “steps” between fresh noobs and intermediate players but only 2 between intermediate and expert players) rather than objective properties of the game.
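For what it’s worth, given a win-probability matrix over a population sorted strongest-first, counting “steps” can be sketched greedily. The 0.6 threshold and the grouping rule are my assumptions, since the paper’s “some significant percentage” is left open:

```python
def skill_chain(win, players, threshold=0.6):
    """Greedily partition players (sorted strongest-first) into steps:
    a player starts a new, lower step when everyone in the current step
    beats them at >= threshold. Toy sketch; assumes win probabilities
    are roughly monotone in playing strength."""
    steps, current = [], [players[0]]
    for p in players[1:]:
        if all(win[q][p] >= threshold for q in current):
            steps.append(current)  # everyone above clearly beats p: new step
            current = [p]
        else:
            current.append(p)
    steps.append(current)
    return steps

# Hypothetical population with Elo-style win probabilities.
ratings = {"P0": 1200, "P1": 1190, "P2": 1000, "P3": 990}
win = {a: {b: 1 / (1 + 10 ** ((rb - ra) / 400))
           for b, rb in ratings.items()}
       for a, ra in ratings.items()}
# skill_chain(win, ["P0", "P1", "P2", "P3"]) -> [["P0", "P1"], ["P2", "P3"]]
```

Note how the answer depends entirely on which players you sample: add 28 intermediate ratings between 1000 and 1200 and the chain gets longer without the game changing at all.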
A specific level of computational resources is defined in terms of three factors
I might be inclined to use an abstract machine model (say, a random-access machine with pre-determined costs for each instruction) rather than clock-time performance, which is not only variable but machine-dependent. Cache-friendliness, for one example, will matter immensely if you measure resources in real-world time, but probably doesn’t belong in a model of theoretical game complexity.
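As a sketch of what I mean (an invented toy, not any real RAM model): charge each abstract operation a fixed cost and account in those units, so the measure is identical on every machine regardless of caches or clock speed.

```python
# Made-up unit costs per abstract operation; the point is that they
# are fixed by the model, not measured from hardware.
COSTS = {"compare": 1, "move": 1, "evaluate": 10}

class CostMeter:
    """Accumulates abstract cost instead of wall-clock time."""
    def __init__(self):
        self.total = 0
    def charge(self, op, n=1):
        self.total += COSTS[op] * n

def best_move(moves, evaluate, meter):
    """Pick the highest-scoring move, charging abstract costs as we go."""
    best, best_score = None, float("-inf")
    for m in moves:
        meter.charge("evaluate")
        score = evaluate(m)
        meter.charge("compare")
        if score > best_score:
            best, best_score = m, score
    meter.charge("move")
    return best

meter = CostMeter()
chosen = best_move(range(5), lambda m: -abs(m - 3), meter)
# chosen == 3; meter.total == 5*10 + 5*1 + 1 == 56, on any machine
```

Two players given the same budget of abstract units are then comparable even if one runs on a laptop and the other on a cluster.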
What’s missing from this type of game is heuristics – the rules of thumb that players use as a shortcut through the intractable problem of fully searching the game tree.
Game design is, in a way, the opposite of cryptography. In crypto, you want to build something that is deterministic but seems completely random and patternless to an adversary: you don’t want people to discover heuristics that let them crack your code. With games, you may not know in advance what those heuristics (exploits) will be, but you want them to exist (although you also want them to be balanced, as the OP describes), because min-maxing isn’t fun for (most) human players. People want to feel like they’re playing on instinct rather than winning by min-maxing, which isn’t even socially acceptable in casual games (no one wants to be the guy who takes 10 times as long as every other player).
Maybe “semi-opposite” is a better term because, as the OP describes, those heuristics should exist but not be dominant.
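Concretely, the heuristic shortcut is exactly what depth-limited minimax does: below the search horizon, a cheap rule-of-thumb evaluation stands in for the intractable full tree. A generic sketch, where all the callbacks are placeholders for a real game’s rules:

```python
def minimax(state, depth, maximizing, moves, apply_move, heuristic, terminal):
    """Depth-limited minimax: at the horizon, a cheap heuristic stands in
    for the full (intractable) search of the game tree."""
    if terminal(state) or depth == 0:
        return heuristic(state)  # the rule-of-thumb shortcut
    child_values = [minimax(apply_move(state, m), depth - 1, not maximizing,
                            moves, apply_move, heuristic, terminal)
                    for m in moves(state)]
    return max(child_values) if maximizing else min(child_values)

# Trivial toy game: players alternately add 1..3 to a running total;
# after two plies we just score the total. Maximizer adds 3, minimizer adds 1.
value = minimax(0, 2, True,
                moves=lambda s: [1, 2, 3],
                apply_move=lambda s, m: s + m,
                heuristic=lambda s: s,
                terminal=lambda s: False)
# value == 4
```

The quality of play is then governed by the heuristic, not by raw search, which is where the balance the OP describes comes in.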
the strategy ladder model presents a method for investigating how Kolmogorov complexity of optimal strategies is related to the concept of depth
Neat idea. Of course, there is a hitch: I wouldn’t call Kolmogorov complexity “measurable” (in the way we are looking to measure depth) because it’s not even computable in the general case.
I would expect an “uncanny hill” of depth/perceived depth. Beginners will be introduced gently (“it’s like snap, but with poker chips”), then they’ll slowly be introduced to trickier concepts: the blind/double blind, the river, aces high, blah blah blah. You start to get parts of it, but also realise there is a lot more to it than you thought.
Then, after say 15 games or watching a few tournaments on TV (“Where is this river they keep talking about? Huh, why did he do that play? Ahh, that’s why”), you start to get the whole thing. Somewhere down the line you become advanced or an expert and everything comes naturally: you know the ins and outs of the whole system, and the depth melts away.
I think that is also why games with imperfect knowledge are so popular: they can add unknowns or even misinformation on top of the depth, or start replacing mechanical depth with complex human interactions.
Imperfect information limits the ability to min-max. You could theoretically run an expected min-max in your head but obviously no one actually does that.
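That “expected min-max” has a name, expectiminimax: chance nodes take probability-weighted averages in between the max and min layers. A toy sketch over an explicit tree:

```python
def expectiminimax(node):
    """node is ("max", [children]), ("min", [children]),
    ("chance", [(prob, child), ...]), or ("leaf", value).
    Chance nodes take the probability-weighted average -- the
    'expected min-max' that no one actually runs in their head."""
    kind, body = node
    if kind == "leaf":
        return body
    if kind == "max":
        return max(expectiminimax(c) for c in body)
    if kind == "min":
        return min(expectiminimax(c) for c in body)
    return sum(p * expectiminimax(c) for p, c in body)  # chance node

# Toy choice: a sure payoff of 1 vs. a 50/50 gamble between 0 and 3.
tree = ("max", [("leaf", 1),
                ("chance", [(0.5, ("leaf", 0)), (0.5, ("leaf", 3))])])
# expectiminimax(tree) == max(1, 1.5) == 1.5, so take the gamble
```

Even on this three-node example you can see why humans fall back on heuristics: the chance layer multiplies the branching factor, and the hidden-information version is worse still.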