Experts, savants and complexity.

It has been my experience and observation, not once but several times, that someone I know to have ability in a domain beyond straightforward expertise, when asked honest and interested questions about how some outcome of theirs was achieved, becomes unhelpful: sometimes tetchy, sometimes incommunicative, sometimes vague.

And this can be surprising. Somehow, we can have an assumption that anyone who can produce extraordinary results in their domain has such a lucid command of that domain that they should be able to tell us just how and why they do it – and further, that they ought to be happy to explain. After all, we are in awe of them, fascinated by their capacity to produce such work – surely they’ll be happy to be listened to.

But no, not always. Quite often not, in my experience.

Continue reading “Experts, savants and complexity.”

Response to Joe Brewer

Joe recently posted a long article that resonated with me, with the rather forbidding title ‘Why I am no longer attempting to build a rigorous science of social change’. I don’t know Joe, but I have been aware of his posts and essays for some time. I connected with these on seeing the name of the organisation he has co-founded, the Center for Applied Cultural Evolution, which immediately gave me the feeling that we had followed many common thought paths – a feeling that has been reinforced by everything I have read of his since.

Briefly, what I sensed we share is a certainty that all that we humans really have is each other and our shared culture – that it is past time for us to take responsibility for our own development, and to seek to do this in the wisest, most sane way possible – using the best discoverable integration of our rationalising endeavours and our capacity for humanity. That despite full awareness of the provisional, incomplete and patchy status of both these resources, we are nevertheless at a point in our civilisational development when, for all sorts of reasons, it is both possible and necessary to begin this work.


This is not the place to rehearse this conviction. If you share it, then read on. If it resonates with you, but you want more detail, then you may wish to read elsewhere before continuing.

Continue reading “Response to Joe Brewer”

On vision, path-dependency, agility – and bears.

A friend sent me a rather wonderful description of an ideal future – one where we knew how to live well on the planet, at ease with each other and our reality, with the positive aspects of incredible technology incorporated and wisely integrated into our humanity – in short, a vision.

And I reacted against it. Certainly not in terms of the spirit, and not in terms of much of the detail. But because of the detail.

Continue reading “On vision, path-dependency, agility – and bears.”

AI and the Zone of Hubris

AI progress and a landscape of problem conditions

I’ve mentioned this ‘Zone of Hubris’ idea in a couple of earlier posts, and it’s time I made it clear what I mean by this slightly over-blown phrase.

The basic idea is that the sort of AI we are making at the moment is being developed against a range of problems with very clear success metrics, and relatively high levels of available information. Recent rapid progress is giving rise to significant confidence in our ability to begin to address really useful problems with the aid of AI (nothing in this post relates to currently imaginary super-intelligent Artificial General Intelligence).

This is likely to lead us to seek to apply our shiny new successes more ambitiously – as well we should. But we need to be aware that we have been sharpening these tools in a particular arena, and that it is not at all certain that they will work well in different circumstances.

“Well, of course,” you might say; “we’re quite aware of that – that’s exactly how we’ve been proceeding: moving into new problem domains, realising that our existing tools don’t work, and building new ones”. Well yes, but I would suggest that it hasn’t so much been a case of building new tools as it has been of refining old ones. As is made clear in some earlier posts, most of the building blocks of today’s AI were formulated decades ago, and on top of that, there appears to have been fairly strong selection for problem spaces that are amenable to game/game-theoretic approaches.

‘Hubris’ is defined as ‘excessive or foolish pride or self-confidence’. Continue reading “AI and the Zone of Hubris”

Project for a Progressive Ethics

I’ve been going to quite a few events recently which broadly come under the heading of futurism – indeed many of them have been through a reliably high quality meetup group actually called London Futurists.

These meetings deal with more-or-less mind-boggling speculations and predictions of things like robots taking all the jobs, artificial intelligences surpassing human capacities, people hacking their own or their children’s biology through genetic or prosthetic modifications, and similar subjects. Sci-fi stuff, you might think …

Continue reading “Project for a Progressive Ethics”

Discoverability, the semantic web and the power of patterns

Raoul Vaneigem said:

Everything has been said yet few have taken advantage of it. Since all our knowledge is essentially banal, it can only be of value to minds that are not.

But he wrote this before the internet existed. Today, we might add:

… and is only discoverable by those with excellent search skills and a willingness to plough through endless irrelevance.

Finding information that meets your needs, is written in a language you understand, and assumes only knowledge you already possess is not at all easy – despite the cornucopia that is the internet. Or is it, perhaps, because of the cornucopia…?

The problem applies in reverse for producers of content. How will your (my!) little gem of insight ever be seen by more than a few?

Continue reading “Discoverability, the semantic web and the power of patterns”

DeepMind takes baby steps, but this is significant

When, a year ago, Google bought a small and previously little-known British artificial intelligence startup called DeepMind for a reported $400M, the acquisition was widely reported, with the general press essentially saying: “Gosh golly, these tech-type geniuses are, like .. wow! And they get rich too!” (1,2).

Continue reading “DeepMind takes baby steps, but this is significant”