AI and the Zone of Hubris

AI progress and a landscape of problem conditions

I’ve mentioned this ‘Zone of Hubris’ idea in a couple of earlier posts, and it’s time I made it clear what I mean by this slightly over-blown phrase.

The basic idea is that the sort of AI we are making at the moment is being developed against a range of problems with very clear success metrics, and relatively high levels of available information. Recent rapid progress is giving rise to significant confidence in our ability to begin to address really useful problems with the aid of AI (nothing in this post relates to currently imaginary super-intelligent Artificial General Intelligence).

This is likely to lead us to seek to apply our shiny new successes more ambitiously – as well we should. But we need to be aware that we have been sharpening these tools in a particular arena, and that it is not at all certain that they will work well in different circumstances.

“Well, of course…” you might say; “we’re quite aware of that – that’s exactly how we’ve been proceeding – moving into new problem domains, realising that our existing tools don’t work, and building new ones”. Well yes, but I would suggest that it hasn’t so much been a case of building new tools as it has been one of refining old ones. As earlier posts make clear, most of the building blocks of today’s AI were formulated decades ago, and on top of that, there appears to have been fairly strong selection for problem spaces that are amenable to game/game-theoretic approaches.

‘Hubris’ is defined as ‘excessive or foolish pride or self-confidence’. Continue reading “AI and the Zone of Hubris”

New Scientist Artificial Intelligence day – Session Two; Embodiment Perspectives – Kerstin Dautenhahn

Kerstin Dautenhahn is Professor of artificial intelligence at the University of Hertfordshire. Her research interests include social learning, human-robot interaction and social robotics. She is the co-creator of KASPAR, a robot designed to help children with autism develop communication skills.
Making the third presentation at this event, Kerstin explained her background in biological cybernetics, and the ways that her work revolves around the interactions between humans and robots/AI, concerned particularly with systems that aim to help people.
She was concerned to be immediately clear: robots are not people.
Elaborating, she pointed out that each robot you encounter is a de novo creation: not only does it lack a common biological heritage with humans (making it unavoidably alien), but it need share no characteristics, deep or apparent, with any other robot.
Further, now and for the foreseeable future (in her opinion), robots have no psychology – there is no ‘mind’ in there.
The term robot, then, is a moving target, without a firm definition (I was surprised that we weren’t reminded of the etymological origin of the word ‘robot’ in the Czech robota, meaning forced labour), so any conversation about robots must be particularly clear as to terms. This, however, is difficult, because of two very strong human traits;

Continue reading “New Scientist Artificial Intelligence day – Session Two; Embodiment Perspectives – Kerstin Dautenhahn”

New Scientist Artificial Intelligence day – Session One; the Mainstream – Irina Higgins


Irina Higgins is a senior research scientist at DeepMind, and has a background in neuroscience.

The second presentation at this event largely focused on telling a story about DeepMind’s development of AlphaGo – using this as a vehicle to explain DeepMind’s approach and give insights into its culture.

She told us that DeepMind now has 300 scientists, and she was keen to emphasise the high-minded aspirations of the organisation – from its mission statement;

Solve intelligence. Use it to make the world a better place.

to its ‘intentionally designed culture’, which aims to mesh the best aspects of industry and academia: the intense focus and resources of the former with the curiosity-driven, open-ended approach of the latter.

DeepMind’s operating definition of general intelligence is apparently; Continue reading “New Scientist Artificial Intelligence day – Session One; the Mainstream – Irina Higgins”

New Scientist Artificial Intelligence day – Session One; the Mainstream – Simon Lucas


Simon Lucas is Professor of computer science at the University of Essex. His research is focused on the application of artificial intelligence and machine learning to games, evolutionary computation and pattern recognition.

This was the foundation-laying talk of this event, and it was excellent – a rapid-fire but followable overview of the history and principal themes of AI research and development, and more detail on the approach currently producing the results that have been making headlines – neural networks. There was nothing here that some general reading wouldn’t get you, but it was engagingly and thoroughly presented at speed.

Continue reading “New Scientist Artificial Intelligence day – Session One; the Mainstream – Simon Lucas”

Project for a Progressive Ethics

I’ve been going to quite a few events recently which broadly come under the heading of futurism – indeed many of them have been through a reliably high quality meetup group actually called London Futurists.

These meetings deal with more-or-less mind-boggling speculations and predictions of things like robots taking all the jobs, artificial intelligences surpassing human capacities, people hacking their own or their children’s biology through genetic or prosthetic modifications, and similar subjects. Sci-fi stuff, you might think …

Continue reading “Project for a Progressive Ethics”

Semantic Scholar trawls unread scientific papers

New Scientist reports (pay barrier, sorry) that the Seattle-based Allen Institute for Artificial Intelligence (AI2) has launched a tool called Semantic Scholar, which aims to;

…read, digest and categorise findings from the estimated 2 million papers published each year.

The article goes on to say;

Up to half of these are never read by more than three people.

That’s right. One million scientific papers a year, each read by no more than three people. That’s some sort of sad exemplar of futility.

DeepMind takes baby steps, but this is significant

When, a year ago, Google bought, for a reported $400M, a small and previously little-known British artificial intelligence startup called DeepMind, the acquisition was widely reported, with the general press essentially saying; “Gosh golly, these tech-type geniuses are, like… wow! And they get rich too!” (1,2).

Continue reading “DeepMind takes baby steps, but this is significant”