This New Scientist event was aimed at a general-interest audience rather than an expert one, but assumed a relatively high level of general understanding – the presentations were light on technicalities, but not shy of discussing complex ideas. I had booked without looking into the speakers’ details, trusting New Scientist to deliver, and my trust was more than rewarded, as the presentations provided a wider range of views than I could have imagined.

These notes are provided mostly because a number of people I’ve spoken to since weren’t at the event but wished they had been – the notes are a poor substitute for having been there, but I hope they convey the key points and provide some links. I’ve split the event up into several posts – skim the headings and dip into the parts that interest you – there is no grand overarching story here, folks!

NB: despite the title of the event, I make no claim to have been transformed into an ‘Expert’ – in AI or anything else.

There were three sessions, with two speakers in each, broadly organised so that the first provided a grounding in mainstream approaches, a little history, and an illuminating insight into the development of DeepMind’s AlphaGo; the second gave us opportunities to look at the field from the perspective of embodied intelligence – robotics; and the last attempted to bring the relation of AI to society into focus.

My own thoughts after the event are that we’re further from machines that exhibit truly general intelligence than I had begun to think – but that we are closer than I had thought to the point of danger – to what I rather pompously call the ‘Zone of Hubris’.

This refers to a class of problems which, while exhibiting truly complex characteristics – multiple, inter-related feedback loops and cross-hierarchy interactions – might, to borrow a phrase, look like a nail to a man who uses a hammer all the time.

In other words, these are problems to which one can imagine applying a powerful AI that is an extension of the current kind – one based on the game-theoretic paradigm – but whose real character cannot be modelled by game theory. The risk is that we begin to trust our AIs too much, ascribe to them capacities they don’t have, and entrust them to help us solve complex problems – Climate Change being the obvious one – with potentially disastrous consequences.

More on this in a future post. Meanwhile, the next few posts will attempt to convey the key points made by each of the speakers.

Simon Lucas – an overview of the state of play in current AI research

Irina Higgins – an inside view of DeepMind’s AlphaGo triumph

Kerstin Dautenhahn – social robots, and human frailty

Pentti Haikonen – building a conscious robot

Posts with my thoughts arising from the event (updated 12 Apr ’17):

Games and Game Theory – the trouble with paradigms…

AI and the Zone of Hubris