

Simon Lucas is Professor of computer science at the University of Essex. His research is focused on the application of artificial intelligence and machine learning to games, evolutionary computation and pattern recognition.

This was the foundation-laying talk of this event, and it was excellent – a rapid-fire but followable overview of the history and principal themes of AI research and development, and more detail on the approach currently producing the results that have been making headlines – neural networks. There was nothing here that some general reading wouldn’t get you, but it was engagingly and thoroughly presented at speed.

It was suggested that the major theoretical groundwork for all of the approaches currently in use, neural networks included, is decades old; that current progress owes more to new implementations of previous theoretical work than to new theory, enabled by the availability of faster, cheaper processors and enormous, affordable data-sets.

[I wondered, what are the new theories that will be built on the insights that must arise from this technical flowering – the theories which will enable AI to move beyond its current limitations?]

He was straightforward about the way that, although inspired by the biological brain, neural network approaches ‘deliberately discard’ complexity to focus on [what have been selected as] the ‘essential characteristics’.

Lucas suggested that the impenetrability of the workings of neural nets has been over-emphasised, making it clear that they can be interrogated, layer by layer, and also run in ‘reverse’ to interrogate their structure and ‘weightings’.
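That kind of layer-by-layer interrogation can be sketched very simply. The network, weights, and 'reverse' projection below are all illustrative assumptions of mine, not anything Lucas presented: a tiny two-layer network whose intermediate activations are kept for inspection, plus a crude linearised back-projection in the spirit of running the net in 'reverse' (real interpretability methods use gradients, e.g. saliency maps or activation maximisation).

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer network with random weights (illustration only).
W1 = rng.normal(size=(4, 3))   # input (3 units) -> hidden (4 units)
W2 = rng.normal(size=(2, 4))   # hidden (4 units) -> output (2 units)

def forward(x):
    """Forward pass that keeps every intermediate layer for inspection."""
    h = np.tanh(W1 @ x)        # hidden activations
    y = W2 @ h                 # output (pre-activation)
    return {"input": x, "hidden": h, "output": y}

layers = forward(np.array([1.0, -0.5, 0.2]))
for name, values in layers.items():
    print(name, values.round(3))

# A crude 'reverse' interrogation: project an output unit's weights back
# through the first layer to see which inputs it is most sensitive to.
sensitivity = W1.T @ W2[0]     # linearised input sensitivity of output unit 0
print("input sensitivity of output unit 0:", sensitivity.round(3))
```

Even at this toy scale, nothing about the net is opaque: every weight and every activation is a number you can print and reason about.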

[Image: diagram of a deep neural network. Credit: neuralnetworksanddeeplearning.com]

There was an explanation of why DeepMind’s AlphaGo achievement was so impressive, showing diagrammatically the extent to which looking ahead to possible moves in a game of Go presents a problem many orders of magnitude greater than the problem presented by chess (indeed, larger than the number of particles in the known universe! [if you are really keen, look at the figure in the table for the number of possible games on a 2×2 Go board, and follow the link…]).

What was interesting was Lucas’ suggestion that although neural networks are getting all the interest at present, other approaches, also explored in previous work, should not be forgotten – that these might have complementary capabilities and that their perceived lack of success in the past might also be transformed by new capacities. He particularly drew attention to the evolutionary approach, where an analogue of biological evolution is applied to randomly generated, randomly ‘mutated’ algorithms, with selection on the basis of ‘fitness’ – defined as effectiveness in producing the desired outcome.
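The evolutionary loop Lucas described can be sketched in a few lines. Everything here is a toy of my own construction (the all-ones target, population size, mutation rate), but the shape is the classic one: random initialisation, random mutation, and selection by a fitness function defined as effectiveness in producing the desired outcome.

```python
import random

random.seed(1)

TARGET = [1] * 20              # toy goal: evolve an all-ones bit string

def fitness(genome):
    """Effectiveness at producing the desired outcome: bits matching TARGET."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Randomly flip bits - an analogue of biological mutation."""
    return [1 - g if random.random() < rate else g for g in genome]

# Randomly generated initial population
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    # Selection: keep the fittest half unchanged...
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    # ...and refill the population with mutated copies of survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]
    if fitness(population[0]) == 20:
        break

best = max(population, key=fitness)
print("best fitness:", fitness(best), "after", generation + 1, "generations")
```

No gradients and no model of the problem are needed, only a way to score candidates, which is part of why this family of methods pairs naturally with games.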

My most significant personal insight from this talk was how – in Lucas’ eyes at least – games are intentionally chosen as the underlying model for developing AI. This ran so deep that in his final comments, looking to the next challenges and ultimate aims of AI, he said that AI development aspires to ‘operate in open-ended, complex situations’ – but went straight on to ask ‘what sort of games you can develop to assess effectiveness in such endeavours?’.