Pentti Haikonen is adjunct professor in the philosophy department at the University of Illinois at Springfield. An electronics engineer by training, he has constructed the experimental robot XCR-1, designed to exhibit conscious-like behaviour, and has written several books on approaches to developing conscious robots.
Despite his strong Finnish accent, Pentti Haikonen – giving the fourth presentation of the event – easily achieved the most engaged response from the audience during the day, with videos of his XCR-1 robot.
Although decidedly non-humanoid, and exhibiting only simplistic behaviours, the directness of the robot’s behaviour and the immediacy of the parallels with human behaviours – no doubt overlaid with a strong dose of the anthropomorphising and narrative tendencies identified by Kerstin Dautenhahn – produced strong reactions.
But we were quickly brought to understand that the principles on which this robot has been constructed, and the manner in which its behaviour is ‘learnt’ and ‘decided’, are radically different from those of the previous presentations.
Haikonen started from a philosophical proposition – that if you want to build ‘conscious’ robots (implying a distinction between ‘conscious’ and ‘intelligent’ in this arena – though this was not explored), you need to proceed from an idea about what consciousness is. He suggested, rather strongly, that consciousness cannot be built on the basis of digitised representations of phenomena – that a number like ’17’ does not, and can never, embody the sensation of pain, or the sound of a letter, or the quality of a note. That consciousness is not the signalised representation of a phenomenon, but the experience of an internal pattern of such signals as a perception.
This is a tricky point to convey clearly. Haikonen used a photo of an executive toy known as Pin-Art to illustrate it – an image I couldn’t find on the net and have therefore had to attempt to reproduce:
The hand is the reality being sensed, while the individual pins of the pin-art toy represent the signals produced by individual neurons. Haikonen’s contention is that it is neither the individual signal strengths, combined by any algorithmic means, nor the real hand, that is consciously ‘seen’, but the internal projection of a pattern of experience that is linked to the idea of a hand.
It’s still a hard concept to get one’s head around (it’s not called the ‘hard problem of consciousness’ for nothing!), and I didn’t get far beyond the decision to ‘go along for now as if this is true’ at the time (a very useful trick, by the way).
Since then, I’ve looked further, and thought more, discovering that, in this video, Haikonen claims to have solved the hard problem of consciousness – explaining it a little differently, as follows (I’m paraphrasing – any infelicities are my responsibility!):
- Consciousness is characterised by reportable observation – whether of external stimuli, self-generated feelings, internal thoughts or indeed thought processes themselves,
- perception of the content of consciousness consists of observations,
- that despite the fact that all of consciousness is built from neural activity, our experience is not of neural activity per se, but of apparently sensed reality – even our thoughts are in the form of silent inner speech (the nub of the ‘hard problem’),
- rather than observing the neural activity, we observe the information that it carries,
- that the neural machinery acts as a ‘transparent carrier of information’,
- that the system whereby this occurs is a mediated feedback loop, where things we are conscious of are passed to consciousness via the sensory areas of the brain – to be experienced as if they were sensory input – as ‘virtual perceptions’ (see the sketch just after this list),
- that in order to perceive thoughts as auditory ‘virtual perception’, we need to learn to implement this feedback loop – which is why little children keep up a running commentary on their activities – including what seem like thoughts; they are not yet able to internalise the process of representing their thoughts to themselves as auditory input without actual sounds being involved (this last is not a unique suggestion of Haikonen’s, but originates with Vygotsky).
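To make the feedback-loop idea a little more concrete, here is a minimal toy sketch in Python – entirely my own construction, not Haikonen’s circuitry, with all names invented – in which internally generated signals re-enter the system through the same percept stage as external input, and so show up as ‘virtual perceptions’:

```python
# Toy illustration of a 'mediated feedback loop': internally generated
# signals are gated back through the same percept stage as external
# input, so the system 'experiences' its own output as a virtual percept.
# All names here are my own invention.

def percept_stage(external, feedback, gate_open):
    """A single percept point: carries external input, or, when the
    feedback gate is open, the internally generated signal instead."""
    return feedback if gate_open else external

def run(steps):
    inner = 0.0  # internally generated ('thought') signal
    for t in range(steps):
        external = 1.0 if t < 3 else 0.0  # brief external stimulus
        gate_open = external == 0.0       # feed back only when input is quiet
        percept = percept_stage(external, inner, gate_open)
        inner = 0.8 * percept             # the next 'thought' echoes the percept
        source = "virtual" if gate_open else "sensory"
        print(f"t={t} percept={percept:.2f} ({source})")

run(8)
```

Once the external stimulus stops, the loop keeps presenting a decaying echo of it as if it were sensed – a crude stand-in for silent inner speech.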
This view meshes well with several other considerations.
For one, it offers an immediately believable route for the evolution of consciousness, and also supports the notion of ‘levels’ of consciousness, as increasingly observed in other species.

And for another, it makes sense of one’s experience when confronted with images that are hard to parse – for the few moments before one resolves the cognitive problem, the image can seem to defy one’s eyes – weirdly mutable, as the brain ‘tries’ different conceptual resolutions, before settling on something that works. This reinforces the notion that perception is thoroughly mediated by conception.
But of more immediate relevance to the subject at hand, this view challenges directly the idea that autonomous robots with human-like characteristics of consciousness can ever be developed on the basis of algorithmic techniques with arbitrarily digitised data.
Haikonen’s robot is apparently an analogue device – with thousands, rather than tens of millions, of transistors. On this basis, the fairly simple learning and response exhibited in the videos is perhaps more impressive than it at first appears – although it is of course hard to know just what one is comparing.
This challenge was obviously not missed by the other participants, and during the panel discussion that rounded off the event, Simon Lucas made a point of saying that Haikonen’s robot essentially did nothing that couldn’t be paralleled with a digital architecture. Unfortunately though, the discussion went another way, and this wasn’t debated. [I’m inclined to think that for Haikonen, there is a philosophical distinction which is fundamental, and that digitising the process would miss the point – but he might answer differently given the chance.]
In fact, the issue that did generate discussion – and a strong audience response – was another aspect of the presentation: the demonstration of teaching the robot the concept ‘bad’.
Haikonen’s robot has only a few sensors (all analogue), and one of these is a ‘shock’ sensor. Haikonen teaches it the concept behind the word ‘bad’ by hitting the bar connected to this sensor with a stick, repetitively, while saying the word ‘bad’. Of course, anthropomorphising as we do (despite the robot having very little in the way of animalistic attributes), we in the audience winced and vocalised at the sight of this.
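For what it’s worth, the learning on display looks like straightforward Hebbian-style association, which can be caricatured in a few lines of Python – my own simplification, of course, since Haikonen’s actual implementation is an analogue circuit, and the class, rates and thresholds below are invented: co-occurrence of the word with the shock strengthens a weight until the word alone evokes the aversive response.

```python
# Caricature of the 'bad' lesson as Hebbian association. The class,
# thresholds and rates are all invented for illustration.

class BadConcept:
    THRESHOLD = 0.5      # association strength needed to trigger a response
    LEARNING_RATE = 0.2  # strengthening per paired presentation

    def __init__(self):
        self.weight = 0.0  # strength of the word -> shock association

    def train(self, word_active, shock_active):
        # Co-occurrence of the word and the shock strengthens the link.
        if word_active and shock_active:
            self.weight = min(1.0, self.weight + self.LEARNING_RATE)

    def aversive_response(self, word_active):
        # Once trained, the word alone evokes the avoidance response.
        return word_active and self.weight >= self.THRESHOLD

concept = BadConcept()
for _ in range(4):  # strike the sensor while saying 'bad', four times
    concept.train(word_active=True, shock_active=True)

print(concept.aversive_response(word_active=True))   # True: 'bad' alone now triggers avoidance
print(concept.aversive_response(word_active=False))  # False: silence does not
```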
During the discussion, a contributor described this as (unnecessarily) introducing the concept of violence at the root of the robot’s view of the world – in tones of strong disapproval, which were shared to varying extents by others, as far as I could tell. Even Haikonen was apologetic.
This is perhaps understandable in the context of an explicit aim to develop synthetic ‘consciousness’; when one definition of consciousness is the capacity to suffer, applying ‘bad’ stimuli intentionally must be open to question. But the same criterion, applied in reverse, suggests that without an ability to suffer, any claim to consciousness would be undermined – and, going further, what on earth would an ability to suffer be for, without any experience of suffering?
If this is accepted, then building a machine with the capacity to suffer becomes a sine qua non for anyone aiming at consciousness. In a short conversation I had with Haikonen after the event, he referred to children who are born without the ability to feel pain, and the difficult learning issues this results in.
[Perhaps the issue comes down to the method of inducing the sensation. We experience suffering from a range of sources – including circumstances that aren’t intentionally inflicted. If the machine’s ‘pain’ circuitry were related to its battery level – to an analogue of ‘hunger’ – and certain environmental conditions were rigged to induce accelerated power drain, would there be the same level of disapproval?]
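To make that bracketed speculation concrete, here is a purely hypothetical sketch – every name and threshold below is invented – of a ‘pain’ signal computed from energy state alone, so that a rigged high-drain environment would hurt without anyone wielding a stick:

```python
# Hypothetical 'pain' signal derived from battery state rather than an
# externally inflicted shock. Names and thresholds invented for illustration.

def pain_signal(battery_level, drain_rate,
                low_battery=0.2, high_drain=0.05):
    """Return a 0..1 'pain' intensity from energy state alone."""
    hunger = max(0.0, (low_battery - battery_level) / low_battery)
    strain = min(1.0, drain_rate / high_drain)
    return max(hunger, strain)

# A high-drain environment raises 'pain' without any blow being struck.
print(pain_signal(battery_level=0.8, drain_rate=0.01))  # 0.2: mild
print(pain_signal(battery_level=0.1, drain_rate=0.06))  # 1.0: acute
```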
Judging the validity of the distinctive claims made by Haikonen is beyond me, but it certainly seems that they are worth serious attention. I learned a huge amount from every section of the event, but this segment raised as many issues requiring thoughtful consideration as the rest put together. AI research has a history of fashionable excess, where currently attractive avenues are pursued to the exclusion of all others, until they are beset by diminishing returns.
Thinking these thoughts, it was unsettling to see the disparity in funding behind DeepMind’s and Haikonen’s efforts laid so starkly bare, and to see the lack of interest from other members of the panel in his approach. Surely we need more than one lone scientist in the world to be addressing such fundamental questions – even if his work doesn’t immediately suggest ways of reducing fuel bills or targeting online advertising more accurately?
Personally, I have a strong hunch that General Intelligence – the sort that is the Holy Grail of AI research – will turn out to be unavoidably connected to consciousness – or that high intelligence without human-like consciousness will turn out to be impossible to regulate in the context of ‘friendliness’.
General Intelligence, as the crowning outcome of evolution, is the most complex single phenomenon we know of. The idea that it is going to be built with the simplistic approaches we currently focus on (particularly when the known layers of inter-related complex functionality in the brain expand with every passing year – electrical, chemical and mechanical – with quantum coming up on the rails) seems like a weak bet. If I’m even half right, then failing to fund Haikonen’s work will, in retrospect, seem very foolish.
Someone was complaining that Haikonen’s books aren’t cheap, but I have a feeling that sales matter to the progress of his project, and I urge you to follow my example and buy one! If you need some encouragement, then watch a few videos – and if you know anyone who is funding AI research, suggest that they chuck half a percent of their money in his direction, as a side bet.
I bought all three books on robot consciousness Haikonen wrote, and while my knowledge is still too limited to understand all of their content, I’m happy to at least be supporting his research somehow (although I got the books with a 15%-20% discount at the online bookstore – that’s as much as my wallet allowed me to spend). His work is a great inspiration to me, and a push to work hard to make my own little scientific dream come true (building my own robot). I keep wondering why he doesn’t get funding for his (promising) research.
I understand the emotions people felt when the scientist hit the robot for the experiment, but I think we should temper these emotions a little if we want to reach the goal of building a conscious robot, or we’d never get there. As a Sony AIBO owner, when the robot misbehaves I can hit a sensor on its head to discourage the behavior and shape its “personality” (and AIBO is quite an interesting, albeit simple, robot). As of now, there are no robots that are really comparable to sci-fi sentient robots, so no harm done in hitting them with a stick (even though, I’ll say it again, for someone who loves robots it can be difficult to watch; I used to cry as a kid when I saw robots being dismantled in movies or documentaries, or on Robot Wars on TV).
Anyhoo, loved the post, the whole series around the New Scientist event (wish I could have been there!) and the general blog. I’m subscribing 🙂
– Luana
Hi Luana, still haven’t got the books myself, but they’re on my list.
I’ve been having a conversation with a friend about Haikonen; trying to get my head around his claim to have got around the ‘hard problem of consciousness’. Here are some edited highlights – I’d be interested to hear your thoughts. Obviously, not having read his books, we’ve probably missed something important!
David Chalmers TED talk
Tononi – Integrated Information Theory
And James’ reply:
– Dil