The intersection of public policy, technology and society is complex. And yes, that’s something of an understatement. A mix of politics, law, design, architecture, usability, privacy, security, accessibility, technology and ethics (amongst many other factors) all interplay in often unpredictable ways when creating and providing public services – not least when they encounter real people.
And yet this complexity is rarely understood at the top levels of most organisations, leading to inadequate comprehension of the political, human and socioeconomic impacts of their decisions. In the same way, few technologists comprehend the potential political implications of the design decisions they make, or the damaging folly of the snake-oil claims of easy technological solutions to complex problems that inevitably turn out to be nothing of the kind.
Dr. Melvin Kranzberg’s laws include the much-quoted “Technology is neither good nor bad; nor is it neutral” – a line often cited without its full context:
Technology is neither good nor bad; nor is it neutral … technology’s interaction with the social ecology is such that technical developments frequently have environmental, social, and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves.

Kranzberg, M. (1986). ‘Technology and History: Kranzberg’s Laws’, Technology and Culture, vol. 27, no. 3, pp. 544–560. The Johns Hopkins University Press and the Society for the History of Technology.
Policy decisions – and manifesto declarations, public expectations and legislation – are often made before the associated delivery project or programme is set in motion. And they usually operate almost exclusively within the existing departmental or agency silos of the current configuration of the public sector. So any genuine ability to meet the well-intended mantra of “user needs” has already been compromised (as Cassian Young and I have previously discussed in Escaping waterfall government and the myth of ‘digital transformation’).
Let’s consider a simplified example to illustrate what I mean: the adoption of a mandatory state-backed identity system. Whatever political choice is made – whether it’s one that requires all citizens to be enrolled, Napoleonic-style, on a central, state-mandated identity register; or one that requires citizens to enrol via commercial third parties; or something else – that policy decision effectively imposes a service design. That design in turn determines an architecture with specific privacy, security and ethical characteristics, the role and relative power of the various actors – government, business and citizen – within it, and the human consequences that result.
Political decisions are all too often the result of ideology rather than an objective evaluation of evidence and the best ways of achieving a desired outcome. And while they may be made with the best of intentions, they bring with them a lot of baggage: a whole series of top-down constraints on how a particular outcome might be achieved.
Going back to our example, consider the upfront decision of the UK’s 2006 Identity Cards Act, which mandated the solution – plastic cards and a central database – in a piece of legislation. In doing so, it skipped the actual requirement (a desire for the state to require citizens to “prove who you are / prove you are entitled to something”); missed the opportunity to objectively explore alternative options and whether they might achieve better policy outcomes more effectively; and rushed into an arbitrary choice of solution.
The 2006 Identity Cards Act and its resulting programme are far from exceptional. They form part of a well-documented portfolio of so-called failed “technology projects” that have often brought delays, overspends and even raw human misery – together with long, and rightly critical, National Audit Office reports. These “technology projects” are largely the by-product of top-down political choices, imbued with the same conscious and unconscious ideological biases as the decisions that created them, and with inadequate feedback mechanisms to iterate and refine the approach as the programmes proceed.
The consequences of these political decisions ripple into every corner of the resultant delivery programmes, constraining areas such as discovery, service design, procurement, architecture and privacy and security. The result is all too often not only failed outcomes, with the promised political objective never or only poorly achieved, but also an erosion of trust in the political system and those who continue to promise simple soundbite solutions to complex problems.
Policymaking generally fails to comprehend Kranzberg’s thirty-two-year-old observation, ignores the essential role of continuous feedback and improvement, and forgets the value of the iterative, experimental and methodical approach that the late Carl Sagan outlined:
“In almost all … cases, adequate control experiments are not performed, or variables are insufficiently separated. Nevertheless, to a certain and often useful degree, policy ideas can be tested. The great waste would be to ignore the results of social experiments because they seem to be ideologically unpalatable … Since there is no deductive theory of social organisation, our only recourse is scientific experiment – trying out sometimes on small scales (community, city, and state level, say) a wide range of alternatives.”

Sagan, C. (1996). The Demon-Haunted World: Science as a Candle in the Dark. Ballantine Books.
When people suggest – often with the best of intentions – ideas such as consolidating welfare processes and rules into a single unified model, changes to the balance of nationalisation / regulation / privatisation, ID cards, or a need for data portability or data sharing, they are making implicit assumptions about policy options and about the technical architecture, service design, and privacy and security issues that flow from them.
It would be far better to prevent this repeated rush into failed “solutions” by instead teasing apart, testing and refining ideas and options, rather than starting from the wrong place with potentially humiliating or degrading human consequences. But conscious or unconscious ideological or technical bias often blinds us to better ways of achieving a desired outcome.
We need a way of enabling more time to be spent on modelling and testing outcomes and how best they might be achieved – including human consequences, public policy, service design, architecture, privacy and security (and indeed other areas, such as ethics) – if we are going to move away from the broken cycle of simplistic promises and resulting failed programmes that continue to litter the landscape.
Whether our political system, and its understanding of the role and impact of technology, can evolve to the level of maturity required remains to be seen. But Sagan makes a good case for why it’s so important that we try:
“The scientific way of thinking is at once imaginative and disciplined. This is central to its success. Science invites us to let the facts in, even when they don’t conform to our preconceptions. It counsels us to carry alternative hypotheses in our heads and see which best fit the facts. It urges on us a delicate balance between no-holds-barred openness to new ideas, however heretical, and the most rigorous sceptical scrutiny of everything – new ideas and established wisdom. This kind of thinking is also an essential tool for a democracy in an age of change.”

Sagan, C. (1996). The Demon-Haunted World: Science as a Candle in the Dark. Ballantine Books.