First of a few posts with my own thoughts arising from the recent New Scientist ‘Instant Expert’ event.
Games and Game Theory appear to be the ruling paradigm for the current AI top dogs. Both Irina Higgins and Simon Lucas made clear cases for the choice of gaming environments as AI training grounds, and referenced Game Theory, too.
Don’t worry, I’m not going to try to argue with them – but I do think it is worth examining the assumptions that underlie gaming approaches and Game Theory, and considering these as they relate to the problem spaces which we dearly wish that AI could help us with. As you might guess, I am not sanguine…
Let’s start by clarifying and distinguishing between the choice of games as the training ground for AI development, and the application of Game Theory to AI problems. Neither choice implies anything about the other.
Choosing games as the test-bed for ideas about how to develop and improve AI makes sense in many ways – Irina Higgins set out some excellent reasons:
- absence of testing bias (use of existing human games guarantees that the results are relevant to human ideas of intelligence),
- unlimited data available,
- parallel testing is possible,
- progress metrics are easy to determine,
- humans find games interesting – so developing AI that can play games moves them in human directions.
Choosing Game Theory as an approach for AI is perhaps less obvious – and requires (for me at least) a little research into what Game Theory actually is; because it doesn’t exactly do what one might imagine.
Game Theory doesn’t tell you how to win at games, say anything general about what games are, why or how humans play them, what makes a good game, or indeed anything anthropological in character at all. In fact it has very little to do with what the man-in-the-street might think of as ‘games’: this explanation of Game Theory states that in psychology, the term covering the same area is the ‘Theory of Social Situations’.
Game Theory is a highly sophisticated mathematical approach to competitive situations (not limited to games), which looks at minimising losses or maximising gains, rather than ‘winning’ per se. The field was crystallised by the mathematical genius and prodigy John von Neumann, and looks at the ways in which ‘rational agents’ should approach such situations for optimal outcomes (the foundation of which was a mathematical examination of ‘minimax strategy’, where the aim is to minimise the player’s maximum loss).
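The minimax idea is compact enough to sketch in a few lines of Python. Here a game tree is just nested lists, with leaves as payoffs to the maximising player; the tree and its numbers are made up purely for illustration, not taken from any real game:

```python
# Minimax over a tiny hypothetical game tree: nested lists are choice
# points, leaves are payoffs to the maximising player.
def minimax(node, maximising):
    if not isinstance(node, list):            # leaf: a payoff
        return node
    values = [minimax(child, not maximising) for child in node]
    return max(values) if maximising else min(values)

# The maximiser moves first, then the minimiser picks a leaf.
# The left branch guarantees at least 3; the right branch risks ending
# with 1, even though it contains the biggest prize (9). Minimax takes
# the guarantee: the player's maximum possible loss is minimised.
tree = [[3, 5], [1, 9]]
assert minimax(tree, True) == 3
```

Notice the character of the choice: the branch containing the best possible outcome is rejected in favour of the branch with the best guaranteed outcome.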
As you might guess, application of this sort of approach in games that are intended as entertainment might be ill-advised – playing a game with the central aim of ‘minimising your maximum loss’ is unlikely to make you an enjoyable opponent – and indeed game designers have learned not to use game-theoretic approaches for game AIs as a result of this and other limitations. Until recently, real-world applications of Game Theory have tended to be in the fields of economics (this article describes its application in the auction wars over broadcast spectrum sales) and chillingly, the doctrine of Mutual Assured Destruction which underpinned Cold War military strategy.
Nevertheless, a quick Google search for ‘AI game theory’ produces over 4 million results; this is a connection that is being extensively explored.
DeepMind themselves have begun using game-theoretic models to examine the interactions between human game players and increasingly intelligent AI players, explicitly addressing the subject of how super-intelligent AI (AGI from now on) might interact with us humans.
But before I disappear down a Game Theory rabbit-hole, let’s focus on these two paradigms and the ways in which they might influence the development of AI.
The Gaming paradigm
Humans know what games are – actually, we have fairly finely-tuned antennae that detect game-playing; games form a clear and distinct class of human activity, with very specific attributes:
- they are artificial – clearly understood to be so,
- they are bounded, both tightly by rules and more generally by social conventions – bounds that are particularly clear in the judgement of outcomes,
- they are carefully engineered to be both ‘fair’ and ‘playable’ (in her wonderful book EthnoMathematics, Marcia Ascher analyses several games from aboriginal cultures, noting the delicate balance achieved between probability and skilful play, as structured by the chosen rules).
All of these conditions clearly set the ground for a nice, neat ‘sandbox’ for the development and testing of AI approaches.
But each characteristic also sets its own neat trap – which we would do well to understand as the field races ahead, intoxicated by its successes and, equally, no doubt, by the enormous amounts of cash being thrown at it.
The first characteristic – the clarity of the distinctions humans make between games and real life – is very powerful. Consider these phrases:
‘He’s a bit of a game-player…’,
‘She thinks it’s all a game…’,
‘He thinks if he can game the system, then he’s doing well…’.
All of these are about people who are going about some real-life issue with a ‘game-playing’ mentality – and in each case, the speaker is clearly communicating antipathy to the ‘game-playing’ attitude of the subject. Humans strongly disapprove of game-playing outside clear social boundaries. AI implemented in a non-gaming environment, using a ‘game-playing’ approach, is likely to meet with surprisingly strong negative reactions.
The second characteristic – the bounded character of games – sets a more obvious trap, and one which AI researchers clearly see: there is a clear progression in the challenges AI game-playing machines have been set. From noughts and crosses, to chess, to Go, to poker, fully exploring the bounds becomes harder for an information-processing machine. But note that the bounds in question are, in all these cases, related to availability of information – not to clarity of outcome.
In the case of noughts and crosses, the information available is complete, and the maximum data-set required is small – it was possible with even primitive computers to fully map all possible game states.
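That claim is easy to check for yourself: a few lines of Python, exploring outward from the empty board and stopping whenever a game ends, enumerate every reachable position (the board encoding and function names here are mine, chosen for illustration):

```python
# Enumerate every board position reachable in noughts and crosses,
# stopping (as real play does) as soon as one side has won.
# A board is a tuple of 9 cells, row-major: 'X', 'O' or ' '.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def reachable_states():
    start = (' ',) * 9
    seen = {start}
    frontier = [start]
    while frontier:
        board = frontier.pop()
        if winner(board) or ' ' not in board:
            continue                      # game over: no further moves
        mover = 'X' if board.count('X') == board.count('O') else 'O'
        for i, cell in enumerate(board):
            if cell == ' ':
                nxt = board[:i] + (mover,) + board[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

print(len(reachable_states()))   # a few thousand states: trivially small
```

The whole state space fits comfortably in memory, which is exactly why this game fell to even primitive computers.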
Increasingly sophisticated learning and ‘exploration’ techniques, and of course the exponential increases in raw power and speed delivered on the back of Moore’s ‘Law’, have underpinned progressive victories over humans in chess, then Go, and more recently a particular form of poker. But in each of these cases it is the increasing difficulty of access to reliable information that provides the challenge – the ‘success metric’ for poker is as trivial as it is for noughts and crosses (in fact, it is simpler to see which player has all the money than it is to check for the winning condition in noughts and crosses).
Here I must not miss the opportunity to link to this brilliantly detailed set of three posts by Andrey Kurenkov on the history and development of game-playing AI.
So the increasing difficulty that AI researchers set themselves addresses only one dimension of potential difficulty – that of incomplete information – and is not preparing them in the slightest for the conditions that matter in even very simple real situations: conditions where judgement of success is unavoidably uncertain (we’ll look at an example in the next post).
Lastly, the requirement of games that they shall be fair and playable sets another obvious trap.
There is a sequence in Uderzo & Goscinny’s wonderful ‘Asterix in Britain’ (and I’m going to offer kudos to the translators as well) which illustrates this rather nicely. The Romans are invading Britain, but they have real problems with the weird rules the British have, which mandate that battles stop at ‘tea-time’ (actually ‘hot-water time’; buy the book!) – also, there is to be no fighting at the weekend.
So clever Julius Caesar decides to fight only at 5pm on weekdays, and on weekends – and conquers all Britain (except, of course, for one small village…).
The point being, as we end up telling our children far too many times, that life isn’t fair. No-one has carefully game-tested and tweaked the rules so that all players have an equal chance, and to make the game flow nicely.
This is not in the slightest an attack on the choice of game environments as fruitful ones for developing AI techniques.
What it is is a call for us to remain conscious of the implications and limitations of this choice at all times. In the same way that watching a 2 year old, utterly absorbed by swiping and pinch-zooming on a smartphone screen, ignoring the real world around it, is seriously disturbing, I am concerned about the reborn AI industry, fixated as it appears to be on the challenges of triumph in enclaves deliberately walled-off from complexity. I don’t believe that AI researchers are unaware of the issue – but I do question whether they pay it enough attention.
The question, of course, is whether developing AI techniques, strategies, strengths in the sandbox defined by the boundaries of human games will turn out to be just that – a sandbox, offering easy stepwise access to the unbounded world, or whether fine-tuning aptitudes to the world of the sandbox will turn out to be an evolutionary dead-end.
Game Theory is, for me at least, even more worrying. It ought not to be: its central premise – that the agents it models are totally self-interested and purely rational – is so clearly removed from the reality of human existence that it should be obvious to anyone that it is of purely theoretical interest outside a few, strictly limited circumstances.
Unfortunately for humanity, the opposite is true. Deriving, as it does, from the work of an undoubted giant of mathematical and engineering thought, John von Neumann, founded in unimpeachable logic, and offering itself as, as it were, a Vorpal blade at a time of utter desperation for the pseudo-science that is economics, it has been applied, with increasing insistence and decreasing caution, to ordinary people’s lives over the last 70 years – you may judge the health of the world economy for yourself.
The total inapplicability of game theory in its pure form to real-world situations is wonderfully and hilariously exemplified in this paper by Merrill Flood, a mathematician working at the Rand Corporation (a US govt.-funded think-tank tightly associated with the US military) in the late 1940s. Flood is obviously au fait with the details and mathematics of Game Theory, but is interested in testing it in practice.
He documents several situations which prima facie appear closely to match scenarios modelled by von Neumann, none of which actually work out. Then he sets up a more controlled experiment, selecting two secretaries from the office as test subjects and choosing a:
‘… two-person, non-constant-sum case, when the available strategies and personal utilities are quite definite …
- Experimenter E offers to give Subject 1 an amount m, but to give Subjects 1 and 2 together a greater amount, m+g if they can agree on sharing the greater amount together.’
The game-theoretic ‘right’ answer to this setup is that Subjects 1 and 2 agree to split the sum ‘g’ equally. But, running the experiment twice, the subjects both times arranged things so that they both ended up with exactly the same amount of money. Subject 1 had failed to behave as a “totally self-interested and purely rational” agent, accepting a payout less than they could have achieved by reneging on any prior agreement.
“The main lesson from this limited experiment is that the social relationship between the subjects can have a controlling influence on their choices. Often, as in this case, it is conceptually possible to allow for such an effect by appropriate corrections in the payoff matrices so that the assumed utilities are real and not just apparent. Usually, as in this case, it is practically impossible to take account of all of the relevant factors – or to control them. Certainly, in this particular case, the experiment was invalidated as a test of the split-the-difference principle or any other ideal theory due to lack of control of the relevant experimental variables.”
Surprise, surprise! In the real world, there are many more factors at play than can be accounted for in Game Theory without disproportionate effort.
Only in the field of economics could such a theory have survived such obvious failings for seven decades, and still be going strong.
[UPDATE: I’ve just found my way to this rapid-fire lecture on Behavioural Biology from Robert Sapolsky which charges through the developments in Game Theory, how they have been applied to real-world systems, and makes it clear how overlapping domains of interaction rapidly introduce complexity. There is a great laugh right at the very end, so keep watching!]
AI researchers would do well to read Flood’s paper (it contains many more intriguing examples, with clear mathematical workings as well as a lively interpretation of the psychology at play) with careful attention before building Game Theory approaches into their work – particularly if they are interested in approaches to true General Intelligence.
Is an Artificial General Intelligence based in game-play approaches and acting as a game-theoretic ‘self-interested, rational agent’ imaginable? Perhaps.
But is it likely that such a being would be ‘friendly’ to humans – humans who sneer at game-playing, and who make irrational decisions because of ‘social relationships’?
I am not at all confident.