In support of pruning non-determinism being less effective than avoiding it: if you have three concurrent agents with m, n, and p atomic steps respectively, the total number of execution orderings is (m+n+p)!/(m!·n!·p!). That gets really big really fast.
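To make the growth concrete, here's a small sketch that computes that multinomial count (the function name and the step counts are made up for illustration):

```python
from math import factorial

def interleavings(*counts):
    """Number of distinct execution orderings when interleaving
    several sequences of atomic steps, one sequence per agent:
    (sum of counts)! divided by the product of each count's factorial."""
    total = factorial(sum(counts))
    for c in counts:
        total //= factorial(c)
    return total

# Three agents with just 5 steps each already yield 756,756 orderings.
print(interleavings(5, 5, 5))    # 756756
# Bump that to 10 steps each and the count is in the trillions.
print(interleavings(10, 10, 10))
```

Even modest per-agent step counts put exhaustive exploration of orderings out of reach, which is the point about pruning being a losing race.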
Well, maybe, but you can also prune many of those orderings as either stateless or nonconflicting (e.g., agent0, whose m steps all operate on a local variable and culminate in a message send; agent1, whose n steps all operate on global state but transactionally, with a commit at the end; and agent2, whose p steps all operate on data that never escapes the agent during the lifetimes of agent0 and agent1).
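One way to see the payoff of that pruning: if only the externally visible steps matter (agent0's one message send, agent1's one commit, agent2's nothing), the orderings worth distinguishing collapse dramatically. A sketch with hypothetical step counts:

```python
from math import factorial

def orderings(counts):
    """Multinomial count of interleavings for the given step counts."""
    total = factorial(sum(counts))
    for c in counts:
        total //= factorial(c)
    return total

# Hypothetical: three agents with 5 atomic steps each.
full = orderings([5, 5, 5])   # 756,756 raw interleavings

# After pruning, only agent0's message send and agent1's commit are
# externally visible; agent2's data never escapes, contributing nothing.
pruned = orderings([1, 1])    # 2 distinguishable orderings
print(full, pruned)
```

Whether real workloads decompose this cleanly is exactly the open question in the argument above.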
Dave Ackley espouses that we need to let go of determinism to make real progress from where we are: https://www.youtube.com/watch?v=fz5aJNZOFIE
He might be right, I haven’t come to a conclusion, but his ideas are interesting, if scary. And his beard means business.
Whereas the approximate computing people say we need to let go of hardware that computes accurately to make real progress on efficiency. And the companies behind flash memory say they've been doing that for some time, with the profit just rolling in.
There’s likely a lot of overlap in those areas.