LETS marketplace from a feature ‘phone


This post follows on from a previous post: Alternative currencies – Simbi and the Flying Brick. Thinking about how alternative currencies can be designed to suit the particular circumstances they are intended to improve, I suddenly realised that it should now be possible to implement LETS in any part of the world where people generally have access to feature ‘phones – which nowadays includes many places where access to hard cash is extremely difficult.

LETS stands for Local Exchange and Trading Scheme. LETS systems exist to enable trading among groups of people who, for any reason at all, find trading with traditional ‘hard’ currencies difficult.
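A LETS ledger is, in essence, a mutual-credit account book: every member starts at zero, and local currency is created only when two members trade. A minimal sketch of such a ledger (the class and method names here are my own illustrative assumptions, not any particular scheme’s design) might look like this:

```python
# Minimal sketch of a LETS-style mutual-credit ledger.
# Names and structure are illustrative, not any real scheme's design.

class MutualCreditLedger:
    def __init__(self):
        self.balances = {}  # member name -> balance in local units

    def join(self, member):
        # Every member starts at zero; credit is created only by trading.
        self.balances.setdefault(member, 0)

    def pay(self, payer, payee, amount):
        # Debit the payer and credit the payee by the same amount,
        # so the sum of all balances always remains exactly zero.
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balances[payer] -= amount
        self.balances[payee] += amount

ledger = MutualCreditLedger()
for m in ("alice", "bob"):
    ledger.join(m)
ledger.pay("alice", "bob", 10)  # alice goes to -10, bob to +10
assert sum(ledger.balances.values()) == 0
```

Note that, unlike hard cash, nothing needs to be minted or distributed first – which is why a shared database reachable from a feature ‘phone is all the infrastructure required.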

For instance: people without enough money, people denied access to markets, people who want to be sure that the results of their efforts benefit their community as much as possible.

LETS systems must be able to do four simple things:

Continue reading

Progressive Ethics Project – Launched!


The Project for a Progressive Ethics is now a thing. At least, it’s a Meetup, we’ve had an exciting first meeting, and we have a supportive home.

The prospect we have in mind is this. A well-respected, public place (OK, a website), which you can use as a sounding board for ethical considerations of any sort – from a personal dilemma, to something in the news, to a debate in the pub, to a PhD thesis. A place where you will find a richly interconnected network of ethical propositions, easy to navigate, designed so that you can easily home in on the issues that concern you, or zoom out to get a wider view – where you can ask simple, quick questions and get simple, straightforward answers – but where you can also dive deep and wide to explore things to your satisfaction. Somewhere you can engage with – where, if you get an answer you think is wrong, or misguided, you can understand where that answer came from, and challenge it – knowing that there is a community of humans who will respond – that the underlying wish of the framework is to be deeply congruent with a reasoned, progressive viewpoint.


I had no idea what to expect from the first public discussion of this possibly insanely over-ambitious project. As I’ve written before, this is something I didn’t plan to start, which somehow came upon me, from a short forum comment after a London Futurists event.

I had no way of knowing what the people who would actually turn up might be expecting, or whether there would be much overlap between a group of strangers’ views on ethics. Truthfully, it could easily have been something of a trainwreck…

Continue reading

Alternative currencies – Simbi and the Flying Brick


Credit: danyythemartian – DeviantArt

The Flying Brick was the printed directory of the Brixton LETS Scheme (this isn’t the image we used – the original is lost in the mists of time – or a cardboard box in the attic).

LETS stood for Local Exchange and Trading Scheme. Brixton LETS was started in the second wave of alternative, local currency schemes in 1992 in Brixton, South London, and I’m proud to say I was one of the founding group, and one of the team that ran the scheme in its heyday over the following few years.

The idea was that members would trade together using our own local currency – the Brick (what else?) – which was a ‘virtual’ currency – a number in a database, with no physical existence. And that this currency would have different rules to ‘normal’ money, specifically:

Continue reading

Trust Aggregation, reputation economies and privacy


Last night I listened to this feature on the excellent BBC World Service – Hacking the Vote – pegged on claims by companies hawking their services to political parties that they know enough about a great many individuals to be able to create specific psychological profiles, and thus enable carefully crafted messages to be shown to them, to get them to vote for the candidate paying for the service.

The shocking reminder of the extent to which data is being collected on all of us and put to murky use in the shadows prompted this post.

It’s not about data privacy, particularly – although I personally make my online life stupidly difficult by using a VPN, and by installing the anti-tracking, anti-JavaScript, anti-adverts, anti-everything extensions I can find for my browsers, in an attempt to at least put some road-bumps down for those who would treat me as a statistical profit centre. With the self-defeating result that half the sites I use won’t work unless I grant them freedom to do it all anyway.

It’s about a way that we, as individuals, might be able to use that data for our own purposes. If it’s all being collected and used to manipulate us anyway, why shouldn’t it work for us, a little?

Aggregated trust scores

There have been several attempts at building tools that provide reputation metrics, trust scores – think credit ratings on steroids.

The idea is that individuals will sign up to aggregator sites and give them access to various kinds of trust/social-standing scores. The aggregator sites will then publish trust metrics on individuals, to be used by all sorts of people: employers, potential service users, lenders, contacts, dating matches.
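One plausible shape for such an aggregation – purely a sketch, with invented source names, scales and weights – is a weighted combination of normalised scores from each connected service:

```python
# Illustrative sketch of aggregating trust scores from several sources.
# Source names, scales and weights are invented for the example.

def aggregate_trust(scores, scales, weights):
    """Combine per-source raw scores into a single 0-1 metric.

    scores:  source -> raw score from that service
    scales:  source -> (min, max) of that service's scale
    weights: source -> relative importance assigned by the aggregator
    """
    total_weight = sum(weights[s] for s in scores)
    combined = 0.0
    for source, raw in scores.items():
        lo, hi = scales[source]
        normalised = (raw - lo) / (hi - lo)  # map onto 0-1
        combined += weights[source] * normalised
    return combined / total_weight

scales = {"credit": (300, 850), "ebay": (0, 100)}
weights = {"credit": 2.0, "ebay": 1.0}
score = aggregate_trust({"credit": 575, "ebay": 80}, scales, weights)
# (2 * 0.5 + 1 * 0.8) / 3 = 0.6
```

Even this toy version makes the political problem visible: whoever sets the weights decides whose version of ‘trustworthy’ wins.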

If anyone manages to crack this (it’s not easy – see this dead Indiegogo site for Peeple), then individuals will spend more effort curating these than they do on their credit rating. Lawsuits will be brought over harsh ratings, using defamation laws drafted decades before the internet was even imagined.

The trust aggregator metric that is itself trusted will be the locus of immense influence. If that doesn’t already sound scary, there’s another big problem.

Continue reading

AI and the Zone of Hubris


AI progress and a landscape of problem conditions

I’ve mentioned this ‘Zone of Hubris’ idea in a couple of earlier posts, and it’s time I made it clear what I mean by this slightly over-blown phrase.

The basic idea is that the sort of AI we are making at the moment is being developed against a range of problems with very clear success metrics, and relatively high levels of available information. Recent rapid progress is giving rise to significant confidence in our ability to begin to address really useful problems with the aid of AI (nothing in this post relates to currently imaginary super-intelligent Artificial General Intelligence).

This is likely to lead us to seek to apply our shiny new successes more ambitiously – as well we should. But we need to be aware that we have been sharpening these tools in a particular arena, and that it is not at all certain that they will work well in different circumstances.

“Well, of course…” you might say; “we’re quite aware of that – that’s exactly how we’ve been proceeding – moving into new problem domains, realising that our existing tools don’t work, and building new ones”. Well yes, but I would suggest that it hasn’t so much been a case of building new tools as it has been of refining old ones. As is made clear in some earlier posts, most of the building blocks of today’s AI were formulated decades ago, and on top of that, there appears to have been fairly strong selection for problem spaces that are amenable to game/game-theoretic approaches.

‘Hubris’ is defined as ‘excessive or foolish pride or self-confidence’.

Continue reading

Aside

Games, Game Theory, Microsoft hammers home the point


Satya Nadella’s call for AI to be collaborative with humanity turns out to mean ‘work with humans to win games designed as game-theoretic exercises’.

This might be a slight oversimplification, but it strongly suggests that the concerns raised in my last post are not far off the mark.

Games and Game-Theory – the trouble with paradigms…


First of a few posts with my own thoughts arising from the recent New Scientist ‘Instant Expert’ event.


Credit: xkcd.com

Games and Game Theory appear to be the ruling paradigm for the current AI top dogs. Both Irina Higgins and Simon Lucas made clear cases for the choice of gaming environments as AI training grounds, and referenced Game Theory, too.

Don’t worry, I’m not going to try to argue with them – but I do think it is worth examining the assumptions that underlie gaming approaches and Game Theory, and considering these as they relate to the problem spaces which we dearly wish that AI could help us with. As you might guess, I am not sanguine…

Continue reading

New Scientist Artificial Intelligence day – Session Two; Embodiment Perspectives – Pentti Haikonen


Pentti Haikonen is adjunct professor in the philosophy department at the University of Illinois at Springfield. An electronics engineer by training, he has constructed the experimental robot XCR-1, designed to exhibit conscious-like behaviour, and has written several books on approaches to developing conscious robots.

Despite his strong Finnish accent, Pentti Haikonen – giving the fourth presentation of this event – achieved easily the most engaged response from the audience during the day, with videos of his XCR-1 robot.


Continue reading

New Scientist Artificial Intelligence day – Session Two; Embodiment Perspectives – Kerstin Dautenhahn


Kerstin Dautenhahn is Professor of artificial intelligence at the University of Hertfordshire. Her research interests include social learning, human-robot interaction and social robotics. She is the co-creator of KASPAR, a robot designed to help children with autism develop communication skills.

Making the third presentation at this event, Kerstin explained her background in biological cybernetics, and the ways that her work revolves around the interactions between humans and robots/AI, concerned particularly with systems that aim to help people.

She was concerned to be immediately clear: robots are not people.

Elaborating, she pointed out that each robot you encounter is a de novo creation, not only lacking a common biological heritage with humans (making them unavoidably alien), but also lacking any necessary shared characteristics (either deep or apparent) with any other robot.

Further, now and for the foreseeable future (in her opinion), robots have no psychology – there is no ‘mind’ in there.

The term robot, then, is a moving target, without a firm definition (I was surprised that we weren’t reminded of the etymological origin of the word ‘robot’ in the Czech robota, ‘forced labour’), so that any conversation about robots must be particularly clear as to terms. This, however, is difficult, because of two very strong human traits:

Continue reading

New Scientist Artificial Intelligence day – Session One; the Mainstream – Irina Higgins



Irina Higgins is a senior research scientist at DeepMind, and has a background in neuroscience.

The second presentation at this event largely focused on telling a story about DeepMind’s development of AlphaGo – using this as a vehicle to explain DeepMind’s approach and give insights into its culture.

She told us that DeepMind now has 300 scientists, and was keen to emphasise the high-minded aspirations of the organisation – from its mission statement:

Solve intelligence. Use it to make the world a better place.

to its ‘intentionally designed culture’, which aims to mesh the best aspects of industry and academia: the intense focus and resources of the former with the curiosity-driven, open-ended approach of the latter.

DeepMind’s operating definition of general intelligence is apparently:

Continue reading