Wednesday, 27 May 2009

80 / 20 Development

That old Pareto rule applies to software development: you can get 80% of the user's desired functionality for 20% of the effort needed to produce a proper application. One iteration of the rule means that you get 96% of the requirement for 36% of the total effort, which is pretty much as far as most people go. Especially when the application is being developed in Office.
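To make the arithmetic concrete: each pass delivers 80% of the remaining functionality for 20% of the remaining effort, so after n passes you have 1 − 0.2ⁿ of the functionality for 1 − 0.8ⁿ of the effort. A minimal sketch (in Python rather than VBA, purely to show the numbers):

```python
# Iterating the 80/20 rule: after n passes you have 1 - 0.2**n of the
# functionality for 1 - 0.8**n of the effort of a "proper" application.
def pareto(n):
    functionality = 1 - 0.2 ** n
    effort = 1 - 0.8 ** n
    return functionality, effort

for n in range(1, 4):
    f, e = pareto(n)
    print(f"{n} pass(es): {f:.0%} of the functionality for {e:.0%} of the effort")
# 1 pass(es): 80% of the functionality for 20% of the effort
# 2 pass(es): 96% of the functionality for 36% of the effort
```

The third pass buys you 99% of the requirement for about half the effort, which is why almost nobody bothers to go further.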

There's an excellent essay called From Windows to Cocoa in which Peter Bright describes three kinds of programmers. The first kind are really business analysts, the second are the guys working on your company's big programs, and the third are the real craftsmen. Here is Peter's description of the first kind:

“...basically business analysts; they're using Access or Excel or VB6 to write data analyzing/number crunching applications. These things are hugely important in the business world, totally unexciting to anyone else, and the people writing them aren't really "programmers." I mean, they are, in the sense that they're writing programs, but they're not especially interested in programming or anything like that. They don't really care about the quality of the libraries and tools they're using; they just want something simple enough that they can pick it up without too much difficulty. They'll never write the best code or the best programs in the world; they won't be elegant or well-structured or pretty to look at. But they'll work. Historically, as I said, these are the kind of people who Access is made for. Access is a great tool, quite unparalleled. Sure, it's a lousy database engine with a hideous programming language, but the power it gives these people is immense. So Access and VB6 and Excel macros are where it's at for these guys.”

That's pretty much me and all the people around me. Except there are two levels within this first group. There's the 80/20 developers and there's those of us who try to write robust applications. What's an 80/20 developer? He's the one who writes 80/20 applications.

An 80/20 application dumps you into the middle of the VBA editor with an error message like “Get Range method of current object failed”. When you call the guy (have you ever met a girl who cuts VBA?) who developed it, he says: “You should have removed the '?' from the data” or “Oh, you can't do that with this, it doesn't make sense”.

An 80/20 application has user forms where all the buttons are active and have names that don't quite describe what they do. It is possible to push one of these buttons and get obscure error messages and not be able to get back to where you were. When you call the guy he says “you can't press that button until you've loaded the csv file and entered the magic password. I did explain that.” That's not how a user interface is supposed to work.

An 80/20 spreadsheet has range names that don't describe what the range is for or has in it; it has un-commented code, functions that replicate something that there's a method in the object model to do, non-descriptive variable names, no error-trapping, no exception-handling... and it's hell to maintain.
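The contrast is easy to sketch. Here is a hypothetical pair of functions – in Python rather than VBA, and with invented names and an invented error message – showing the difference between 80/20 code and code a user can live with:

```python
# The 80/20 version: cryptic name, no validation, and it dies with an
# obscure error the moment the data contains a "?".
def proc(d):
    return sum(float(x) for x in d)

# The robust version: descriptive name, per-row validation, and an error
# message that tells the user what to fix instead of dumping them into
# the debugger.
def total_premiums(raw_values):
    total = 0.0
    for i, value in enumerate(raw_values):
        try:
            total += float(value)
        except ValueError:
            raise ValueError(
                f"Row {i + 1}: {value!r} is not a number - "
                "please clean the input data and re-run."
            ) from None
    return total
```

Both functions do the same job on clean data; only one of them survives contact with real data.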

Sure the 80/20 application does what the users want it to do. Almost. If treated with kid gloves and fed only carefully-prepared data.

For reasons I don't understand, people think that just because it's Microsoft Office, they don't have to read anything. There is more to learn about the Excel object model than there is in the usual first-year undergraduate course in mathematical methods. Learning Java with all its main libraries as well as finding your way round Eclipse or NetBeans will fill up more space in your head than a geology degree. There are 700-page books full of the tricks and annoyances of Excel, Access and Word.

You need time to develop an application – time in which to find your way round the object models. That's what you don't have. Bash this report out, run that simulation, cut that query, attend this meeting, make up that presentation. Busy work that benefits nobody. We're not here to learn things, we're here to fill in forms, work processes and make the company money.

Well, you can't do what you don't know how. You can't be innovative if you don't get time to think, read and talk with other people. You can't do things smarter if you're permanently dumb.

Steve McConnell's Code Complete is pretty much everything most people will need and want to know about good programming style. Buy, read, enjoy and digest. Even picking three things from it to improve your programming will help.

Tuesday, 26 May 2009

The Philosophy of Mistakes: Interlude

I'm going round and round in circles on the subjects of mistakes and testing. As ever, it's because I'm arguing with a voice in my head that doesn't belong there. It belongs to a previous manager I had who had a very low level of technical skills and knowledge and paranoia way into the red zone. What he wanted was the assurance that nothing would ever come out of our pricing engine that would ever cause him embarrassment. Nothing. Ever. But it all had to be black-box testing, because if he (and others in the team) looked inside the box, they would not really understand what they were looking at.

Now everything I have read on software testing says that the vast majority of mistakes and omissions are found by a thorough code review conducted by two relevantly skilled people. Black box testing is a very bad way of finding mistakes. Why? Because you don't need many input variables with a reasonable range of values before a “perm all the values” test suite becomes a billion or more records long. You can't construct it, let alone have the time to run it. So you have to run a subset – a very small subset of maybe a hundred thousand records. What are the odds that a random flaw will be in that subset? That would be less than 0.01%. This is not going to work. Plus, you have to construct the test cases, which means you need an engine to produce the “answers” for the cases – in software, since no human being is going to be able to work out the answers to more than about a hundred cases accurately – so you get stuck in an infinite regress of checking the checking.
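The combinatorics above are easy to check. With hypothetical numbers – say six input variables with forty plausible values each – a “perm all the values” suite is already past four billion cases, and a hundred-thousand-record subset covers a vanishing fraction of it:

```python
# Why "perm all the values" black-box testing collapses: a handful of
# inputs with modest ranges and the full suite is beyond running.
variables = 6              # hypothetical: six input fields
values_per_variable = 40   # hypothetical: forty plausible values each

full_suite = values_per_variable ** variables
print(f"Full suite: {full_suite:,} cases")  # Full suite: 4,096,000,000 cases

# Run the biggest subset you realistically can...
subset = 100_000
coverage = subset / full_suite
print(f"Coverage: {coverage:.4%}")  # Coverage: 0.0024% of the input space
```

A flaw sitting at one random point of the input space has the same 0.0024% chance of being in your subset, which is the whole problem.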

What, in other words, my paranoid manager needed to do was sit with the programmers and get a detailed walk-through of the code behind the pricing engine and how well they had foreseen problems and dealt with them. He would then have an idea of where specific problems might arise and what he should be testing for. But no, that was never going to happen.

Testing goes back to the days when people tried to pass off low-grade precious metal currency on each other – so you needed to test that you really were getting a genuine shekel or twenty-carat gold or whatever. Testing for that is simple. So is testing that the latest batch of cannon aren't going to blow up all over your soldiers – that's why it's called proof-firing (at the Aberdeen Proving Ground). Any gun that survives a dozen shots is unlikely to blow up later – it's in the nature of the beast. As the system gets more complicated, so does the testing. In the end, you have to accept that if you're going to have anything as complicated as a nuclear submarine or a Boeing 747, some things are going to be less perfectly finished than others and it may even sail or fly with a couple of non-life-threatening snags. That's the nature of the real world. The same goes for software – though the desired standard is more XP SP1 than Windows Vista. Perfect, error-free programs are either very few if complex, or very small if plentiful. What we get in ordinary business applications is going to be well short of perfection, unless it's fairly simple.

Testing is and always was there to prove that under normal circumstances your widget works, not that under some weird conditions it doesn't. If it's a mechanical widget, those weird circumstances may not be too hard to find – too hot, too cold, too much dust, but who foresaw “the wrong kind of snow”? If it's a software widget, those weird circumstances might be there, but may never occur. Software isn't like a fugazi coin – which shows up bad when it tries to be good – rather, software can work just fine, except when...

And no amount of black box testing will ever find it. What you need is sensible peer-review. Like that's going to happen in a modern organisation where everyone is a single point of failure.

Monday, 25 May 2009

Which Lady Is The Tramp?

The other morning on the commute I was listening to the Pure Jazz Moods 2-CD (it's the commute, you're allowed to listen to anything that puts you in the right mood) and on came Ella Fitzgerald singing The Lady Is A Tramp. And with the last lines “I'm alone when I lower my lamp / That's why the lady is a tramp” I realised what a neat bit of ambiguity Rodgers and Hart had pulled off. You're supposed to think, well, I always had thought, that it's the singer who is the tramp, because she likes the beach at Coney Island, can't eat late, doesn't play craps with barons and earls and is a “hobohemian”. Except she turns up on time for the opera, stays awake all the way through, actually “reads every line” of Walter Winchell, doesn't do bitchy gossip (“won't dish the dirt / with the rest of the girls”), is quite happy with boating on Central Park (as opposed to a yacht off Newport) and when she goes to bed there isn't a lover there. None of which is true about “ladies” - so it's the Lady who is the tramp, because her manners are fake, rude and expensive. In fact, the Lady is a moral tramp, while the singer is only a social tramp.

It's the end of the Bank Holiday - deep thoughts are suspended for a day.

Saturday, 23 May 2009

Countdown to July

It does not matter how many times I sit through a re-organisation, the waiting does not get any easier. You know they are plotting to get rid of everyone they don't like and who doesn't fit in, while lying about how it's all about organisation, skills and office locations.

I'm worried about losing the salary, of course. The possible eighteen-month job search is not something I'm looking forward to either. But I have some money – even though it's supposed to be for my pension – and I know I'm not going to lose my mind or identity being out of a job. Not out of work, my work is as a philosopher, logician, human being (washing, cooking, keeping a pleasant house – this is work) and all-round creative person. When I have a job, I'm usually out of (my) work.

What is really irritating me is the thought of having to listen to a load of patronising bullshit, while the HR department “support me in my transition” and explain that I shouldn't take it personally. This is claptrap: if it has my name on it, it's personal. The only things that aren't are circulars addressed to “The Householder”. The moment they tell me, I want out. No handover (what's to hand over – if they don't need me, they don't need what I do or what I know), no hanging around: give me the Compromise Agreement, the breakdown of the severance payment, structure it to be tax-free and I'll sign and return it as soon as my solicitor says it's okay. Otherwise, I'm outta here. Pack my case and go – right then and there. Before they can even start their box-ticking speech.

I am fifty-five years old and this is the fourth time I will have been made redundant in July. My pensions were worth not much last year and even less after what's happened to the markets since. You don't want to know the unemployment figures for men over fifty-five. They want me to sit still while they run through a speech that makes them feel good about what they're doing: they need me to sit there and nod along, or they won't feel justified in themselves. The hell I intend to give them that satisfaction.

Walk around our offices and you would not know all those people think they have a good chance of losing their jobs. Management say they think it's because we're all being “professional”. They don't think it's anything of the sort – they know it's because everyone is in denial. It hasn't happened to some of them, so they don't know it's real yet. They still think they are needed. They have yet to learn that no-one is needed, that everyone is fodder and management think that whatever you do can be done better by someone else or didn't need to be done at all. It's after it all happens that the mood will turn sour and the “professionalism” vanish. By then, I'll be gone.

While I have had myself “out there”, I haven't been that vigorous in finding something. Now I have to go through some motions – not because it will result in a job offer, but because it will make me feel like I'm being pro-active. We're going to hear in the next two to four weeks and I'd like to be running when the ground hits me. Maybe it was just about okay to believe that I had a job in the new organisation until last week, but now it isn't. The odds have swung. If I leave the serious search any longer, I will feel bad about myself.

That's what this is really all about. It has nothing to do with them. It has to do with me: I'm not acting as I need to be. I have to not “judge myself mercilessly” but to start the work. If only I didn't hate job-hunting. And it's only just hit me why. You, gentle reader, may think that the next job is going to make your life better and be fun or interesting or give you lots of travel or whatever it is rocks your boat – but I don't think that about my next job. I can't make money from my work, I can only make it, as so many of us have to, from my job.

On Being an Analyst In A Bureaucracy

For reasons beyond me, people want to be managers rather than analysts. I regard the title 'analyst' as a badge worth wearing, whereas I'm not so sure that 'manager' is really worth it – these days, managers in large organisations are little more than bag-carriers for the senior guys, “messengers, sent by grocery clerks, to deliver a bill”, as Colonel Kurtz describes Captain Willard in Apocalypse Now.

There are jobs with 'analyst' in the title which are more concerned with processing, say, prices for commercial accounts from the salesman's proposal into the computer systems – these are really administration roles. Then there are the poor bloody infantry sorting out the errors in vast databases - these are data administrators, and no less valuable for that.

The role of an analyst is to source, interpret and report information and provide an informed view on what that information means for the future. It's to dig into the numbers and then assemble a picture: which is both analysis and synthesis. If this is done at all now, it's done by top-flight investigative journalists when they write books, and maybe a handful of stock-pickers in the financial industry.

That can be difficult to do well or with integrity when the information and evaluations are contrary to the chosen aims and ambitions of the executive. CIA analysts had this problem in a big way with the Bush administration. An analyst whose interpretations don't fit the current policies is usually told to get with the program: in most organisations, raising money or gaining management support is everything, and figures and anecdotes are plucked from thin air to support the policies.

You can't be an analyst in a bureaucracy. Bureaucracies are inherently political, and politics is the opposite of honest, creative, scientific, artistic or technological endeavour. Managers in bureaucracies do one of three things: push initiatives they believe will advance their careers, resist changes pushed by career-advancers, or act as the willing servant of their senior managers. Everything they do is with a political end in mind, everything has an ulterior motive, which is why their annual appraisals are dishonest and hypocritical. Bureaucracies are not knowledge organisations: knowledge implies truth and an authority independent of rank and personality. That is incongruous in a bureaucracy: a manager does not have to be right, they just have to be senior. If your Director says that making loans to high-risk individuals so that you can book the up-front insurance premiums and a bunch of income in the next three months is the way to go, and damn the bad debts later, then by golly, that's the way all the under-managers will go and it's the song they will all sing.

Bureaucracies are not skill-organisations: skills, technical ability and expertise, have the same implication of truth and independent authority, and are therefore anathema. That's why big organisations don't train people in transferable, market-valuable skills: not because “they might leave” but because then those people would know something that could not be contradicted by management to suit some bogus policy.

An analyst needs seven things: knowledge of their industry and market; a broad general knowledge of economics and demography; a grasp of technicalities from statistical analysis through cost accounting to SQL query writing and programming; the ability to present information clearly and concisely; a clear-eyed understanding of the way the world works; a grasp of the principles of formal and informal logic and of epistemology. Oh, and a sense of humour.

Bureaucracies take analysts and reduce them to jargon-spouting, cliche-scribbling number-crunchers, because the value is in the organisation, not the people, so the people must not have unique skills and knowledge. If there is a need, such as in IT, it must be devalued as much as possible by being outsourced and off-shored. The only thing of value to the management of a bureaucracy is the ability to source and present figures that support their positions.

What I'm doing working where I am, I don't know. Oh, yes, I do. The location is one of the best in the world. No kidding.

Thursday, 21 May 2009

Three Scenes On The Way Home

Tuesday, between seven thirty and seven fifty in the evening. It's worth clicking on the photographs - there's a heap of details there.

1. Chinatown



2. Upriver from Hungerford bridge


3. Onto the South Bank



I've had the lurgi for the last couple of days. I shouldn't have gone into work on Tuesday, but I had mysterious things to do in the early evening.

(Taken with a Canon A590IS)

Wednesday, 20 May 2009

Sex, Science and Profits

I read through Terence Kealey's Sex, Science and Profits over the weekend, wearing my philosopher of science hat. Prof Kealey is the Vice-Chancellor at the University of Buckingham and a proper scientist in his own right, so his is a practitioner's view. He's arguing that science funding is best left to the private sector (the University of Buckingham is a private university) and the State should keep its nose out. He doesn't like patents and IPR much either.

The book is worth reading even if you think the State should be funding science. It's one of those books where the evidence is more interesting than the theses. Prof Kealey has two theses. First that all worthwhile advances in science and technology are made in the private sector by practitioners, and that academic theoreticians lag behind the experimentalists and industrialists, not run ahead. Second, that the idea that science is a public good, available to and benefiting all, is a serious mistake.

Let's look at the first thesis. Unless you count royalty and aristocracy as “the State” – in which case most of the best mathematics has always been funded by the State – there was no State in the modern sense of an all-pervading, intrusive, executive bureaucracy taking a sizeable chunk of earnings in taxation until after the Second World War. Hence almost all of the advances in technology and science had to have been made by people of independent means, funded by aristocrats or wealthy merchants, or working in a trade because that's all there was. “Big Science” was born in the Second World War, with the Los Alamos programme and the development of radar and associated technologies in the Rad Labs. These were the most expensive and successful research efforts in the history of mankind – and both depended on a theory so abstruse it is best understood through the mathematics: Quantum Theory and Relativity. And those were developed by a bunch of university academics with no connection to business at all (a Swiss patent clerk was a civil servant).

Since then, the record of State-funded science has been pretty dismal – especially in the countries that now form the EU: the only exception is in high-energy physics, which can only be funded by the State. By definition, high-energy physics can have no commercial applications (because there's a super-collider in every office...). The record of much European technology is also pretty dismal, and one reason might be that the major industries in Europe were until recently owned by the State. Another might be that Americans know how to organise research and Europeans don't.

A swift word about Bell Labs. This legendary research centre, part of AT&T, was the home of radio spectroscopy, the transistor, the discovery of cosmic background radiation, the Unix operating system, the C programming language and the idea of object-oriented languages, amongst others. But it was run as the best engineering department in the USA. It employed people who were bright and original and didn't expect them to fill out grant applications, objectives and progress reports. When it turned into Lucent Technologies, and did expect the progress reports, it ceased to be the legendary Bell Labs. Nothing interesting came out of Lucent, which was absorbed into the French company Alcatel in the noughties.

So it's not about public and private, it's about the way the institution is run. It just so happens that there are more private-sector people who are better at it than public-sector people (the difference between the two kinds of people is real and I swear it is genetic).

The real insight is delivered in the chapter “There is no such thing as science, only scientists”. Kealey's point is that modern science is so complex that it can only be understood by people who specialise in it. Science is not a public good because the general public have no hope of understanding it (the technical papers, not the pop science books). A senior manager or board director is not going to be able to pick up some fabulous idea from the journals because they don't speak the language. So they have to employ scientists to review the journals, attend the conferences and learned societies, because only the scientists can understand the work of other scientists and translate it into a product. Now here's the really neat bit. Why do companies let their own scientists publish? Partly because publishing scientists are happy, productive scientists - Kealey says it's something to do with status and making time with pretty girls – but mostly because if your people don't publish, no-one talks to them and they don't get many invitations to conferences. The real benefit isn't from reading the journals, it's what you pick up while networking. No-one is going to network with people from a company that only wants to learn and never contribute: free-riders not allowed.

This might do as a definition of a “knowledge industry”: one in which there is a net advantage to allowing the smart people to publish substantial research (if delayed a little to give product development a chance). Financial markets are a knowledge industry, but retail banking is not: telecoms engineering is a knowledge industry, but telecoms operations is not. The law (with one exception) is perhaps the ultimate knowledge industry – it's all published (see the Incorporated Council of Law Reporting in the UK) – while the UK's secretive local government, social services and Family Courts are probably the ultimate “ignorance industries”. (We're talking about useful research here, not the carefully PC textbooks for social workers.) By the way, Kealey mentions businessmen talking about “best practice” as a way of sharing knowledge: in my experience, the more people talk about “best practice” the further they will be from it after they've finished their seminar. “Best practice” exercises are usually futile, because there's never any budget for making the IT changes needed to support any changes – they are there to scratch an itch, not to make progress.

Returning to the thesis, by the same argument, there's no such thing as fashion, there are only designers. Because just as most people can't make an antacid after hearing a seminar about recent work on stomach acid production, they can't make a dress or suit by looking at the photographs either. Some can, and they make a good living ripping off catwalk fashion for the high street, but there are probably no more of them than there are digestion scientists. But you're not going to deny there is such a thing as “fashion” - there is, and only a few can do it. It's the same with science. There is science, but it's as accessible as, oh, Michelin-star cooking and needs about the same length of apprenticeship. But that doesn't mean it doesn't exist – just ask anyone who has to pay for either a Michelin-starred supper or the development of a new drug.

The value of any argument or book is not that it's right, it's that it makes you think. And Prof Kealey's book is well worth the time it takes to read it.