New Scientist Artificial Intelligence day – Session Two; Embodiment Perspectives – Pentti Haikonen

Pentti Haikonen is adjunct professor in the philosophy department at the University of Illinois at Springfield. An electronics engineer by training, he has constructed the experimental robot XCR-1, designed to exhibit conscious-like behaviour, and has written several books on approaches to developing conscious robots.

Despite his strong Finnish accent, Pentti Haikonen's presentation – the fourth of the event – easily drew the most engaged response from the audience during the day, with videos of his XCR-1 robot.


Although decidedly non-humanoid, and exhibiting only simplistic behaviours, the directness of the robot's behaviour and the immediacy of the parallels with human behaviours – no doubt overlaid with a strong dose of the anthropomorphising and narrative tendencies identified by Kerstin Dautenhahn – produced strong reactions.

But we were quickly brought to understand that the principles on which this robot has been constructed, and the manner in which its behaviour is ‘learnt’ and ‘decided’ are radically different from those of the previous presentations.

Haikonen started from a philosophical proposition – that if you want to build ‘conscious’ robots (implying a distinction between ‘conscious’ and ‘intelligent’ in this arena – though this was not explored), you need to proceed from an idea about what consciousness is. He suggested, rather strongly, that consciousness cannot be built on the basis of digitised representations of phenomena – that a number like ’17’ does not, and can never, embody the sensation of pain, or the sound of a letter, or the quality of a note. That consciousness is not the signalised representation of a phenomenon, but the experience of an internal pattern of such signals as a perception.

This is a tricky point to convey clearly. Haikonen used a photo of an executive toy known as Pin-Art to illustrate it – an image I couldn’t find on the net and have therefore had to attempt to reproduce:


The hand is the reality being sensed, while the individual pins of the pin-art toy represent the signals produced by individual neurons. Haikonen’s contention is that it is neither the individual signal strengths, combined by any algorithmic means, nor the real hand, that is consciously ‘seen’, but the internal projection of a pattern of experience that is linked to the idea of a hand.

It’s still a hard concept to get one’s head around (it’s not called the ‘hard problem of consciousness’ for nothing!), and at the time I didn’t get far beyond the decision to ‘go along for now as if this is true’ (a very useful trick, by the way).

Since then, I’ve looked further, and thought more, discovering that, in this video, Haikonen claims to have solved the hard problem of consciousness – explaining it a little differently, as follows (I’m paraphrasing – any infelicities are my responsibility!):

  • Consciousness is characterised by reportable observation – whether of external stimuli, self-generated feelings, internal thoughts, or indeed thought processes themselves;
  • perception of the content of consciousness consists of observations;
  • despite the fact that all of consciousness is built from neural activity, our experience is not of neural activity per se, but of apparently sensed reality – even our thoughts are in the form of silent inner speech (the nub of the ‘hard problem’);
  • rather than observing the neural activity, we observe the information that it carries;
  • the neural machinery acts as a ‘transparent carrier of information’;
  • the system whereby this occurs is a mediated feedback loop, in which things we are conscious of are passed to consciousness via the sensory areas of the brain – to be experienced as if they were sensory input – as ‘virtual perceptions’;
  • in order to perceive thoughts as auditory ‘virtual perceptions’, we need to learn to implement this feedback loop – which is why little children keep up a running commentary on their activities, including what seem like thoughts: they are not yet able to internalise the process of representing their thoughts to themselves as auditory input without actual sounds being involved (this last is not a unique suggestion of Haikonen’s, but originates with Vygotsky).
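The feedback-loop idea above can be caricatured in a few lines of code – a toy sketch of my own, with invented function names, nothing like Haikonen’s actual analogue circuitry. The point it illustrates is that internally generated signals re-enter the same perception stage that handles external input, so the system experiences its own thoughts as if they were heard:

```python
# Toy sketch (my own invention, not Haikonen's circuitry) of the mediated
# feedback loop: internally generated signals are routed back through the
# same perception stage as external input, emerging as 'virtual perceptions'.

def perceive(signal, source):
    """The perception stage treats every signal identically,
    whether it originated externally or internally."""
    return {"percept": signal.upper(),          # stand-in for 'pattern as experienced'
            "experienced_as": "sensory input",
            "source": source}

def feedback_loop(external_sound, internal_thought):
    # An external stimulus enters through the sensory pathway...
    heard = perceive(external_sound, source="ears")
    # ...and a thought is fed back through the SAME pathway, so it is
    # experienced as if heard – silent inner speech.
    inner_speech = perceive(internal_thought, source="feedback")
    return heard, inner_speech

heard, inner = feedback_loop("hello", "what shall i do next")
# Both are experienced the same way; only the source differs.
```

Nothing here solves the hard problem, of course – it just makes the claimed architecture concrete enough to reason about.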

This view meshes well with several other considerations.

For one, it offers an immediately believable route for the evolution of consciousness, and also supports the notion of ‘levels’ of consciousness, as increasingly observed in other species.

Slugs?

And for another, it makes sense of one’s experience when confronted with images that are hard to parse – for the few moments before one resolves the cognitive problem, the image can seem to defy one’s eyes – weirdly mutable, as the brain ‘tries’ different conceptual resolutions, before settling on something that works. This reinforces the notion that perception is thoroughly mediated by conception.

But of more immediate relevance to the subject at hand, this view challenges directly the idea that autonomous robots with human-like characteristics of consciousness can ever be developed on the basis of algorithmic techniques with arbitrarily digitised data.

Haikonen’s robot is apparently an analogue device – with thousands, rather than tens of millions of transistors. On this basis, the fairly simple learning and response exhibited in the videos is perhaps more impressive than it at first appears – although it is of course hard to know just what one is comparing.

This challenge was obviously not missed by the other participants, and during the panel discussion that rounded off the event, Simon Lucas made a point of saying that Haikonen’s robot essentially did nothing that couldn’t be paralleled with a digital architecture. Unfortunately though, the discussion went another way, and this wasn’t debated. [I’m inclined to think that for Haikonen, there is a philosophical distinction which is fundamental, and that digitising the process would miss the point – but he might answer differently given the chance.]

In fact, the issue that did generate discussion – and a strong audience response too – was another aspect of the presentation: the demonstration of teaching the robot the concept ‘bad’.

Haikonen’s robot has only a few sensors (all analogue), and one of these is a ‘shock’ sensor. Haikonen teaches it the concept behind the word ‘bad’ by hitting the bar connected to this sensor with a stick, repetitively, while saying the word ‘bad’. Of course, anthropomorphising as we do (despite the robot having very little in the way of animalistic attributes), we in the audience winced and vocalised at the sight of this.
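The teaching procedure can be caricatured as simple associative learning – a hypothetical sketch of my own (the real robot uses analogue circuits, not code): repeated pairing of the shock signal with the heard word strengthens an association, until the word alone evokes the aversive response.

```python
# Hypothetical associative-learning sketch (invented names and numbers,
# not the robot's actual analogue circuitry).

class Associator:
    def __init__(self, learning_rate=0.25):
        self.weights = {}   # word -> strength of association with shock
        self.lr = learning_rate

    def pair(self, word, shock_active):
        # Hebbian-style update: co-activation of word and shock
        # strengthens the link, saturating towards 1.0.
        if shock_active:
            w = self.weights.get(word, 0.0)
            self.weights[word] = w + self.lr * (1.0 - w)

    def evokes_aversion(self, word, threshold=0.5):
        return self.weights.get(word, 0.0) > threshold

robot = Associator()
for _ in range(4):                    # hit the bar while saying "bad"
    robot.pair("bad", shock_active=True)

robot.evokes_aversion("bad")    # True: the word alone now signals 'bad'
robot.evokes_aversion("good")   # False: never paired with the shock
```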

During the discussion, a contributor described this as (unnecessarily) introducing the concept of violence at the root of the robot’s view of the world – in tones of strong disapproval, which were shared to varying extents by others, as far as I could tell. Even Haikonen was apologetic.

This is perhaps understandable in the context of an explicit aim to develop synthetic ‘consciousness’; when one definition of consciousness is the capacity to suffer, applying ‘bad’ stimuli intentionally must be open to question. But the same criterion, applied in reverse, suggests that without an ability to suffer, any claim to consciousness would be undermined; and going further, what on earth would an ability to suffer be for, without any experience of suffering?

If this is accepted, then building a machine with the capacity to suffer becomes a sine qua non for anyone aiming at consciousness. In a short conversation I had with Haikonen after the event, he referred to children who are born without the ability to feel pain, and the difficult learning issues this results in.

[Perhaps the issue comes down to the method of inducing the sensation. We experience suffering from a range of sources – including circumstances that aren’t intentionally inflicted. If the machine’s ‘pain’ circuitry were related to its battery level – to an analogue of ‘hunger’ – and certain environmental conditions were rigged to induce accelerated power drain, would there be the same level of disapproval?]
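To make that thought experiment concrete, here is an entirely hypothetical sketch (invented numbers and function names): ‘pain’ derived from battery deficit and drain rate, rather than from an external blow.

```python
# Hypothetical sketch of machine 'pain' as hunger: discomfort grows as
# charge falls below a threshold, scaled by how fast power is draining.

def pain_level(battery_pct, drain_pct_per_min, hunger_threshold=20.0):
    deficit = max(0.0, hunger_threshold - battery_pct)
    return deficit * (1.0 + drain_pct_per_min)

pain_level(80.0, 0.5)   # 0.0 – comfortable, no 'suffering'
pain_level(10.0, 2.0)   # 30.0 – 'hungry' and draining fast
```

The aversive signal is identical in kind to the shock-sensor case; only the means of inducing it differs – which is exactly the question the thought experiment poses.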

Judging the validity of the distinctive claims made by Haikonen is beyond me, but they certainly seem worth serious attention. I learned a huge amount from every section of the event, but this segment raised as many issues requiring thoughtful consideration as the rest put together. AI research has a history of fashionable excess, in which currently attractive avenues are pursued to the exclusion of all others, until they are beset by diminishing returns.

Thinking these thoughts, it was unsettling to see how starkly obvious the disparity in funding between DeepMind and Haikonen’s efforts was, and to see the lack of interest from the other members of the panel in his approach. Surely we need more than one lone scientist in the world to be addressing such fundamental questions – even if his work doesn’t immediately suggest ways of reducing fuel bills or targeting online advertising more accurately?

Personally, I have a strong hunch that General Intelligence – the sort that is the Holy Grail of AI research – will turn out to be unavoidably connected to consciousness – or that high intelligence without human-like consciousness will turn out to be impossible to regulate in the context of ‘friendliness’.

General Intelligence, as the crowning outcome of evolution, is the most complex single phenomenon we know of. The idea that it is going to be built with the simplistic approaches we currently focus on (particularly when the known layers of inter-related complex functionality in the brain expand with every passing year – electrical, chemical and mechanical – with quantum coming up on the rails) seems like a weak bet. If I’m even half right, then failing to fund Haikonen’s work will, in retrospect, seem very foolish.

Someone was complaining that Haikonen’s books aren’t cheap, but I have a feeling that sales matter to the progress of his project, and I urge you to follow my example and buy one! If you need some encouragement, then watch a few videos – and if you know anyone who is funding AI research, suggest that they chuck half a percent of their money in his direction, as a side bet.

4 thoughts on “New Scientist Artificial Intelligence day – Session Two; Embodiment Perspectives – Pentti Haikonen”

  1. I bought all three books on robot consciousness Haikonen wrote, and while my knowledge is still kind of limited to understand all of their content, I’m happy to at least be supporting his research somehow (although I got the books with a 15%-20% discount at the online bookstore, but that’s as much as my wallet allowed me to spend). His work is of great inspiration to me, and a push to work hard to make my own little scientific dream come true (building my own robot). I keep wondering why he doesn’t get funding for his (promising) research.

    I understand the emotions people felt when the scientist hit the robot for the experiment, but I think we should mitigate these emotions a little if we want to reach the goal of building a conscious robot, or we’d never get there. As a Sony AIBO owner, when the robot misbehaves I can hit a sensor over its head to discourage the behavior and shape its “personality” (and AIBO is quite an interesting albeit simple robot). As of now, there are no robots that are really comparable to sci-fi sentient robots, so no harm done in hitting them with a stick (even though, I’ll say it again, for someone who loves robots it can be difficult to watch; I used to cry as a kid when I saw robots being dismantled in movies or documentaries, or Robot Wars on TV).

    Anyhoo, loved the post, the whole series around the New Scientist event (wish I could have been there!) and the general blog. I’m subscribing 🙂

    – Luana


    1. Hi Luana, still haven’t got the books myself, but they’re on my list.
      I’ve been having a conversation with a friend about Haikonen; trying to get my head around his claim to have got around the ‘hard problem of consciousness’. Here are some edited highlights – I’d be interested to hear your thoughts. Obviously, not having read his books, we’ve probably missed something important!

      The hard problem of consciousness – the Haikonen video: https://youtu.be/QbDGvIl72RM

      David Chalmers (originator of the ‘hard problem’ conception):
      Pan-psychism – consciousness as a fundamental condition. Interesting for me as it relates to my guru Christopher Alexander’s contention that there is a universal characteristic that correlates with ‘fitness’, or ‘rightness’, or ‘beauty’; the tantalising idea being that, if Tononi’s notion of phi is about ‘integrated information’, then this quality of rightness might correlate with the degree of resolution, or absence of contradiction, or entropy content, of that integrated information.

      David Chalmers TED talk
      Tononi – Integrated Information Theory

      FYI, I don’t find Haikonen’s purported resolution of the hard problem fully convincing. He seems to be saying that the subjective experience of being conscious is just that – a subjective experience of constructed percepts that mirror/mimic real percepts.

      So far, so good – this appears to be what is happening – that all our ‘real’ experience is modulated re-screenings of the signals received from the sensory nerves, mediated by experiential constructs. As he has it, internal thought is processed and represented in the mind as if it were perceptual information, despite it having an internal source.

      All well and good, but still begging the question, surely?

      Somehow this modulated presentation feels like ‘me’ (’cogito’ as opposed to ‘experientio’) – and there is no percept that I ever had that was ‘me’ to model this characterisation on.

      Or was there? Backing away from pan-psychism, I can imagine building an evolutionary model that brings us from proprioception and self-preservation data establishing primitive me/not me distinctions which have the character of percepts – and then layers onto those the sensory perceptions of social interactions (increasingly complex and layered in humans), so that this prototypical ‘me’ collection of sense data can be crystallised by cultural interactions in the presence of high (human) intelligence to the point where it becomes a ‘thing’ that is experienced in a sufficiently coherent way that it becomes a useful illusion (all this happening between 18 months and 5 years), which, once established, becomes a runaway train (cf sexual selection).

      Of course, the external/internal and individual/cultural recursive loops are what drives this – and as with many recursive loops it can have an exponential characteristic – so that it took until Cervantes for the novel to be invented, but that once this had happened, the blue touch paper of second (third? fourth? nth?)-order individuation had been lit.

      And James’ reply:

      I think that in living things the “cogito” vs “experientio” schism is bridged by evolution.

      Everything is originally based on external concrete perception, if you go back far enough. But some of that “basing” is mediated by a hell of a lot of evolution, endowing our ancestors with a succession of increasingly sophisticated ready-built models (or special circuits for running such models) that feel inherent, and are indeed built-in by our DNA, perhaps with some minimal “booting” from individual experience in the current generation (e.g. babies playing with gripping things), but not carved out from scratch by that experience.

      On top of that we have stuff that is genuinely individual to each of us – stuff that is embodied in generic protein structures which our DNA tells us all to build pretty-much the same, but which absolutely wouldn’t be there in anything like that form if we hadn’t been where we’ve been and done what we’ve done as individuals since birth.

      Or so I suppose…

      A robot might have been built with light-sensitive sensors, that allow it to “see”. It may have special analogue circuits that allow it to extract certain visual features (e.g. edges). Those circuits may self-tune to some degree according to each particular robot’s experience in its environment (a la neural network). And then, given a lot more data, and much richer data, the robot might somehow learn to recognise certain people’s faces. The knowledge of the faces would be part of that robot’s set of percepts, but its ability, for example, to locate the edges of a plane surface would be more or less inherent, and if it “felt” that that ability were a “part” of it (i.e. had a model of that capability within itself somewhere), then I’d say that feeling could be a plausible candidate for consciousness (or part thereof). Awareness of the likely uniqueness of the since-birth percepts (“I can recognise Dil, but not all robots can.”) would give the robot not just consciousness but a sense of individuality (“I think this, therefore I am me”, as distinct from “I think, therefore I am.”) Humans all have such an awareness*, but I don’t think it necessarily follows that anything with consciousness would be aware of its individuality.

      – Dil

