OK, I’ve come to an impasse with what I’m doing at the moment – banging my head against a particular wall for a few hours too long.

Time to move on to another topic; pull out the mental list of all the things that need to be at least thought about in order to move our startup forward.

IP, MVP, business model, data protection, regulatory environment, legal structure, read more about Scrum/Agile, marketing strategy, logo design, data structures, UX prototyping …  …  … …  (…:::!!!!)

OK, here’s one: how much money are we going to need to spend on coding to get to a scalable MVP launched?

Big question! Cost estimation is a big deal in traditional bricks-and-mortar architecture, too, so I’m aware that this is not a subject to be taken lightly or fudged. A frequent and serious pain point in construction projects is when estimated costs rise significantly AFTER the client is committed. Whatever else happens when this occurs, confidence and morale are dented, usually badly.

Construction clients want to spend as little as possible while statutory consents are at risk, and one way to spend less up-front is to do lightweight cost estimation (on the back of lightweight specification) and hope for the best. Of course, even if they have misgivings about these estimates, consultants are often unwilling to rock the boat at an early stage, not wanting to be the messenger that gets shot. Less scrupulous players have even been known to downplay cost risk until the client is committed, and then milk the situation (‘Oh, you wanted us to do the roof? Oh no, we never included for that. Yeah, yeah, I know you need a roof – rainin’ innit? Let me see what I can do for ya. Not gonna be cheap though – you wouldn’t want to skimp on a roof, wouldja?’).

So my approach to construction projects is almost always to convince clients of the value of making a larger-than-typical effort at the early stages to address all the likely risks – I’d rather have a client cancel early than go into something that is going to turn into hell for everyone. If they come up with another project in a few years perhaps they’ll remember me as that honest chap who saved them from getting burned.

So, can I do this with software?

It seems not. In fact, it seems not, big-style.

As far as I can see, there are two broad approaches to costing software projects (at the scale at which we are operating, anyway: at larger scales, there is a bewildering array of techniques which rely on a further bewildering array of jargon-ridden parameters. Brief scanning of a few Wikipedia articles makes it abundantly clear why so many large-scale software projects fail with massive cost over-runs – the planning tools are themselves so complex that it is blindingly obvious that they must introduce enormous instability into the outcomes they purport to reliably produce).

So, approach one says that you write a detailed specification of exactly what you want the software to do, and get quotes based on that – the approach recommended for construction. Only, of course, this model is completely impractical for what we are doing; inventing a new tool, without close examples to work from, with a disruptive business model, for clients who have no idea that such a tool is possible, on our first foray into this domain – I could continue this list, but you get the point; we have only the broadest idea as to how the thing will finally work – all of the details will need to be built on the basis of ‘our best intuition’, then necessarily (and valuably) subject to considerable revision once we get feedback from real users.

The idea of investing a great deal of time and effort on writing a detailed specification for something that we know will almost certainly change significantly is obviously ludicrous. Earlier this year I wrote a spec for a pretty much standard and fairly simple e-commerce website. It was 37 pages long and as granular and dense as you like. I believe it was a good piece of work, but it was only sensible to even start because I knew exactly what the system should do in every situation the client and I could think of – and we knew we had considered all the important situations because the scenario and attendant business processes were so thoroughly understood. I could even get whole chunks of the descriptive text from the ‘net to work from because it had all been done before.

This is the opposite of our current position, where most of our use-case scenarios are untested projections.

This situation in software design is of course now normal, with every bright spark and her dog aiming to invent some hitherto unimaginable but immediately ubiquitous tool, and the observation made that the combination of near-free distribution costs, rapid granular feedback of all kinds and unprecedented scalability offers an opportunity to approach product design in a way that differs radically from any previous model. The combination of Moore’s Law, the internet and cloud technologies has finally enabled software development to break free of most of the conventional approaches which it inherited from engineering.

This analysis underpins the whole agile/lean approach, whence comes the second costing method, which, boiled down, appears to be: “Oh, that’s easy. Just hire a team. Then your costs are what you are paying them each month, and they will make progress – it’s an ongoing cost, not a capital cost.”

I laughed out loud when I realised that was what was being said – entertainingly innocent and optimistic as hell, it nevertheless makes a strong point; that increasingly, software development will be a continuous activity, that the size of iterations and increments are shrinking, that it is getting less and less ‘lumpy’ all the time.

Only, this approach won’t do for us, not in relation to costing, anyway. Starting with nothing, no ongoing vehicle, and a proposition that is more complex than a one-liner, we are in an unavoidably lumpy situation. It’s a little like the difference between the macro-scale and the quantum scale. At the macro-scale, matter is smooth, continuous, statistical; but at the quantum scale, everything is individual and discrete, and radical probabilistic discontinuities abound.

We will need money from somewhere to get this thing to the point where it can be sprung on its unsuspecting market, and the people handing over the money certainly won’t be encouraged by the idea of the route to a functional product being a monthly expense figure and an open-ended list of milestones heavily caveated with a statement along the lines of ‘Of course, content and delivery date for these milestones are subject to revision by the team on a continuous basis, forever’. They won’t want us to talk like this even if they are sophisticated enough to think that that will be the truth of it – people giving out money fairly reasonably tend to want to get the impression that the recipients take it seriously.

We need a figure. A hard figure – however round, however hedged; we need a figure we can feel some confidence in, that we can discuss with potential investors without awkward silences, blushing or stammering.

Impasse.

And then I realise that what is needed is an estimate from an experienced intuition. A craftsman’s answer. Experienced intuition is what makes a real expert – accumulated knowledge, even combined with great intelligence, is weak without the unconscious integrative wisdom that experienced intuition brings.

Craftsmanship is a word relatively rarely encountered in the digital realm – it too readily conjures images of men with calloused hands, pencil behind the ear and a briar pipe redolent of Old Nut Brown in the bib pocket of their overalls. But as Richard Sennett makes clear in his 2008 book The Craftsman, craftsmanship can arise in any domain where what he calls a ‘material consciousness’ is fundamental to expertise. That is, a strong relationship between the necessary thought processes and the ineluctable realities of the material at hand, developed through the practice, always, of making real work that has quality. This applies to software development as much as it does to carpentry, and in fact Sennett’s first example in the book of a class of craftspeople is the community that produces the Linux OS.

Because the digital realm is subject to an unprecedented pace of change (and highly disruptive change at that), it might seem that coders can no sooner gain the combination of experience and domain expertise necessary to achieve a level of craftsmanship than that craft will be superseded, condemned to a backwater of legacy issues – and of course this does occur.

Nevertheless, it is clear that a craftsman-like relation to coding can be attained that escapes this trap – numerous examples exist, but I’ll simply point at Ward Cunningham, who invented the wiki (here’s the first public one, still going strong, as an example; full of informed, terse-but-relaxed, engaged discussion about coding). Ward, and others like him, exhibit an understanding of what coding is, of what computers are, of the implications of a variety of strategies that transcends languages, hardware and technology at the same time as it is deeply interested in all of them, in the most practical way possible, being always directed toward the business of producing useful work of quality.

That’s what we needed, I realised: someone like that, someone who could bring that sort of insight to the situation, should be able to:

  • understand what we are aiming to do,
  • have an immediate grasp of the rough scale of what we are attempting, and its implications for real-world coded implementation, and
  • have an idea of the investment of time – and thus money – that will be required to achieve this.

The right sort of person should be able to do all of this, to a reasonable if still ‘ball-park’ level of accuracy, simply on the basis of experienced intuition. An hour or two with someone able to do this sort of thing without really having to try has much higher value, in my experience, than weeks spent attempting to produce a thorough appraisal.

The former may only be right to within 20%, but it won’t lead you astray with the sort of false certainty that apparently thorough breakdowns do. The reality is that no breakdown not based on a fully-detailed and finalised specification is anything other than a self-confirming work of fiction, likely to lead you into all sorts of dangerous assumptions of reliability just because it consists of page after page of sensible seeming allocations and figures.

So, enter my old friend Mark van Harmelen, who over the course of a Chinese meal and a meandering walk followed by coffee and cake, took a felt-tip and scribbled a list of about eight functional elements, each of which, after some staring into the distance and sub-vocal murmurings, acquired a figure with three noughts on the end of it. Add these up, and there was my answer, along with an associated time equivalent and a short list of assumptions.
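The felt-tip arithmetic is simple enough to sketch in a few lines. The element names and round figures below are hypothetical stand-ins (not Mark’s actual list), and the ±20% band reflects the accuracy I’d credit an intuition-based estimate with:

```python
# Back-of-envelope costing: a handful of functional elements, each with a
# round intuition-derived figure. All names and numbers here are
# illustrative placeholders, not the real estimate.
estimates = {
    "user accounts & auth": 4000,
    "core data model": 6000,
    "main workflow UI": 8000,
    "admin interface": 3000,
    "search": 3000,
    "notifications": 2000,
    "reporting": 3000,
    "deployment & infrastructure": 3000,
}

total = sum(estimates.values())

# An experienced-intuition figure is only good to within roughly +/-20%,
# so it is more honest to present it as a range than a single number.
low, high = round(total * 0.8), round(total * 1.2)

print(f"Base estimate: {total}")
print(f"Range (+/-20%): {low} - {high}")
```

The point is less the arithmetic than the presentation: a single hard figure, openly bracketed with its uncertainty, is something you can put in front of an investor without blushing.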

Job done – certainly to the level we need for the moment, and with a level of confidence that no amount of spreadsheet-building would give me.

Another empty space along the upward spiral acquires some content – thanks Mark!

A forthcoming post will be a parallel to this one, on the subject of my slowly growing understanding of what ‘agile’ means for us.

Onwards and upwards!