be careful what you wish for … (part 94)


Hold on — haven’t we been here before? There’s something very familiar about the recent unveiling of new powers for the state to snoop on the UK population through a proposed new Counter-Terrorism and Security Bill.

I doubt that anyone reasonable argues with the superficial intent — to detect criminals (specifically terrorists) and bring them to justice. The much more complex issue is the question of where lines are drawn: what is the most appropriate way of achieving that outcome? How much should the state intrude into everyone’s daily lives?

We need to find a solution that is proportionate, sustainable and reasonable in our democratic state. The worst possible outcome would be that we blindly erode the very values we claim to uphold — not because we were bombed into doing so, day after day, but because we voluntarily surrendered our own freedoms, and hence our legitimacy, out of a misplaced sense of fear.

Part of the problem is that often these issues only seem to be considered through one lens: that of counter-terrorism. When in opposition, political parties tend to listen to a wide variety of expert opinion, and at least offer us the hope of developing reasonably balanced policies. Once in power however, governments seem to turn to single sources of truth, all of them peering through the same lens. Over time, it becomes almost impossible to distinguish a new government’s policy on these issues from the ones that preceded it, as their once-intended policy becomes progressively degraded.

These latest counter-terrorism proposals seem intended to work by eradicating any remaining vestiges of anonymity and privacy in our daily lives. This extinction of personal anonymity has profound implications — not just for journalists whose sources can no longer be protected, not just for MPs trying to meet in confidence with their constituents, or for NHS whistleblowers, but also for the police themselves. And indeed for the rest of us, just trying to muddle along and get on with our lives.

I do wonder which undercover criminal sources, or “double-agent” jihadists, are going to run the risk of communicating or meeting with the police or intelligence agencies when they know it’s no longer either secret or safe to do so. With that telltale, pointing-finger trail of mobile phone interactions and email exchanges left in their wake, who would risk putting their lives on the line? The risks for such essential insider informants are being multiplied by the very measures presumably intended to help.

I’m surprised we haven’t heard more from important players such as Crimestoppers — given that their assurance of anonymity for those wanting to help presumably plays a key role in encouraging people with information to come forward. Their site says they received over 100,000 pieces of useful information about crime from the public last year, and over 6,000 criminals were arrested and charged. Yet who has evaluated the impact that the removal of anonymity will have on such essential sources of information?

Without a proper debate, and a rigorous assessment of the likely real-world impacts of a further erosion of our online and electronic device privacy, who knows whether these latest, familiar proposals will actually assist — or degrade — counter-terrorism intelligence work?

Part of the debate that we need includes news programmes and journalists asking these sorts of questions: “What impact will the end of anonymity have on essential intelligence-gathering sources like Crimestoppers?”, “How will the police be able to meet with informers if all the details of who met with whom, and when, are automatically being gathered electronically?” and even (admittedly more self-serving) “How will journalists protect their sources?”. But ultimately these issues and their very real impacts are not going to go away merely because a properly informed debate doesn’t take place.

We need a much better public discussion about where these lines are drawn: what information is gathered, from whom, in what detail, where it is stored, how it is protected and how it is accessed. No computer system is 100% secure — there’s no such thing. Information will leak from the systems holding all this sensitive information. The odd rogue insider will occasionally — and inevitably — abuse their position: sources will be compromised, confidence undermined, intelligence lost. Possibly far, far worse.

If these proposals do go ahead, the controls and democratically-accountable oversight regimes put into place must be robust and demonstrably independent to counterbalance them. Those who abuse the system — and they will — must be brought promptly and publicly to trial, and those who are inadvertently exposed — police sources, journalist sources, MPs’ constituents, NHS and financial services whistleblowers — rigorously protected. Parliament needs the capability, commitment and power to ensure our (unwritten) constitution is not undermined by the drip, drip, drip of incremental responses to the fear of terrorist activities.

To answer my own opening question — yes, we have been here before. And I’m sure we’ll be here again. We’ve seen this well-meaning but one-sided perspective in the past: that’s partly what my semi-dramatised 2006 blog “when guilty men go free” was all about. When proposals such as this are put in front of us, they need to be robustly assessed by a credible public challenge rooted in the wider reality of the way our country operates and our people live their lives — and not simplistically considered through a counter-terrorism lens, darkly.


Happy 20th anniversary online government

It’s 20 years ago this month that the UK government first launched a website intended to provide a simplified, single point of access to information from across the public sector. I thought I’d add a little more detail — or at least, a few historic screenshots — to support my recent CIO column marking the anniversary.

The Government Information Service (GIS) launched in November 1994. It was intended that over 400 public sector organisations, including government departments, local authorities and police forces, would provide their information on the site, which received around 200,000 hits a day shortly after launch.


In July 1996, this screenshot summarised the state of play:

23 July 1996

By mid-1997 it was approaching 2m requests a week.


In 1999, the “Portal Feasibility Study” (PDF) set out plans for a more comprehensive approach to delivering all government services online in one place. The portal element of this architecture originally carried a different working name: below are some mockups from 2000 of how it was envisaged it might look. By the time of its launch it had become “UKonline”, which initially appeared as a beta site in November 2000, followed by a formal launch in February 2001.

UK Online

UKonline aimed to provide more integrated services, built around citizens’ “life episodes” (events that had meaning to them), rather than just projecting the departmentally-based silo services already in existence.

UK Online life episodes

The 1st March 2004 saw another rebrand and relaunch, this time as Directgov.


In May 2011, Directgov (and its sister site, BusinessLink — dedicated to meeting the needs of UK business users) began to be superseded by GOV.UK, initially as an alpha.


In October 2012, the site replaced Directgov and went fully operational as GOV.UK, celebrating its second birthday just last month.

GOV.UK, October 2014

I’ve collated below some stats on the usage of the site(s) in their various guises over the past 20 years — not helped by early stats relating to “hits” or “visits” while more recent measures relate to “unique visitors/users”. So don’t take this as the definitive or final comment on the growth of online government information and services, but as a partial snapshot at a moment in time; a short sketch after the list shows how one of the early figures converts between units. (And if any of you have additional interim dates and usage stats not shown, let me know and I’ll revise/improve the list.)

  • 1994 — 200,000 hits a day
  • 1997 — 285,000 hits a day
  • 2004 — 172,257 unique visitors a day
  • 2012 — 1m unique visitors a day
  • 2014 — 1.4m unique visitors a day
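The 1997 entry is simply the “approaching 2m requests a week” figure from mid-1997 expressed as a daily rate; a quick sketch of that conversion (illustrative arithmetic only, since “hits”, “requests” and “unique visitors” are not directly comparable measures):

```python
# Convert the mid-1997 weekly figure to the daily rate shown in the list.
# Illustrative arithmetic only: "hits", "requests" and "unique visitors"
# measure different things and can't be compared directly.

weekly_requests_1997 = 2_000_000
daily_rate = weekly_requests_1997 / 7
print(f"{daily_rate:,.0f} requests a day")  # ~285,714, matching the 285,000 shown
```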

Happy 20th anniversary!

[A more detailed narrative of the last 20 years of online government is provided in an earlier blog here]


random access memories

I’m often asked how I got into computing in a generation when neither IT nor computer science was on the school curriculum. So I’ll try to fill in a few random gaps from some cobwebby parts of my memory …

It all started as a hobby. One of my earliest memories of using a computer was thanks to the North Kent Amateur Computer Club and the host of hobbyists and enthusiasts who used to attend. It was the age of the kit computer, when the likes of the UK101 ruled supreme.


I think it was Thursday evenings after school when I’d turn up at some local venue — usually a spare school hall — to find a room full of enthusiastic, often bearded participants, all hunched over circuit boards and small TV sets acting as monitors.

From time to time one of them would pick up a soldering iron and make an adjustment to something, or unplug and restart the system or wallop a TV set to stop the picture jumping. These are the folks I’ve always thought of as “digital natives” — people who understand how to design and use digital technology — but that phrase seems to be (ab)used now merely to mean people who have grown up using digital devices as consumers.

I remember testing out some of the simple 3D green mazes and early scrolling games that tried to make it feel like I was speeding across a largely imagined landscape and into the distance. The members of the club were always unfailingly generous in letting me try out their latest hardware builds and hand-cranked software instead of merely dismissing me as another irritating spotty teenager from the local comprehensive avoiding homework with a much more interesting displacement activity.

Towards the end of my sixth form days, the first computer had turned up at the school — a Commodore PET.


Ever since, I’ve regarded this as what a “real” computer should be like — with a built-in monitor and keyboard, and its own sophisticated storage system (aka a cassette tape deck). We were able to book the PET after school to use for something like an hour at a time — just about enough time to load something from the tape, suss out how it worked and then hand it over to the next person. We seemed to spend most of our time standing around it and working out how to win at the text-based version of the Star Trek game.

I remember at home we also ended up owning one of the earliest video game consoles — a Magnavox Odyssey. My mum bought it for us secondhand (I’m not quite sure how we managed to persuade her to do that, particularly given how tight money was). It seemed great fun for its time, complete with a light rifle and a set of screen overlays that would be held in place on the TV screen courtesy of the static charge.


These imaginative static overlays turned our black and white TV into “colour” for the first time, with most of the games based on a variation of Pong. To change game, you removed one of the cards that came with the console, and inserted another one in its place. I remember using the rifle to take pot shots at the ball of light as it moved around the “haunted house” overlay, appearing momentarily at some of the “windows” cut into the static film. State of the art or what?

Like most of my early machines, sadly I’ve no idea what happened to the Odyssey over the years — lost in one of those many post-university moves between various rental properties in and around rundown parts of London.

My own first purchase was a Sinclair ZX80, then the ZX81, then the ZX Spectrum.


Like many of my generation, I owe a lot to the pioneering spirit of Sir Clive Sinclair: thanks mate.


At university, the Spectrum ended up in the communal room, used for gaming alongside our “artistic” pyramid built from dozens of empty beer cans assembled on the mantelpiece. The dusty two-bar electric fire that made this the only warm room in the house was at the end of a slightly melted extension cable which ran into the adjacent bedroom — the only room in the Stoke Newington house we all shared that had no electricity slot meter in it. (From what I remember, most of the house was wired via a spaghetti ball of cables back into that room and its free electricity supply …).

Initially, programming these early computers consisted of either copying in program listings from hobbyist magazines or writing your own code. The magazine listings always seemed to contain errors, meaning it was necessary to buy the next month’s edition as well for the errata — unless I managed to work out for myself which lines had been missed or garbled in the meantime. Later came the use of cassette decks, a rather erratic and unpredictable way of loading programs — one that often failed only after 15 or 20 minutes or more, when the Play button on the cassette player would pop up and you’d realise the program hadn’t loaded.

Later I moved on to a Commodore 64, a BBC Model B and then a “proper” computer — the Apricot F10. My programming efforts, which had started with BASIC, experimented with everything from assembly language (on both Z80 and 6502 processors) to Fortran, Prolog and Pascal.


When I started work, the IBM PC had yet to dominate — it was not unusual to find an office where almost everyone was working on a different, incompatible computer. On one desk would be an Apple II, on another a Commodore and on another an Apricot. Apricot were doing well in the UK at the time, particularly in the public sector — their machines ran faster and more effectively at a lower price than their American cousins. Not that it helped them much in fighting off the competition …

… to be continued (possibly) at some future point …

[With thanks for the assortment of images used in this blog]


more on the 1999 change of address demonstrator …

I mentioned in a previous post the work done in the late 1990s to put online a change of address service.

This service enabled citizens to inform separate government departments, via the internet and in a single transaction, of a change of address. The two departments that took part in this work were the Inland Revenue (now part of HMRC) and the Department of Social Security (now DWP). The project never really moved beyond its live demonstration phase with a limited subset of citizens.

I’ve recently managed to source some additional screen grabs from that era, as below.

Missing from these is the stage where users were authenticated by a third-party digital certificate — in this case Royal Mail’s ViaCode or Barclays Bank’s Endorse (NatWest Bank had also been involved in some earlier work). It was this signing that helped confirm the user’s identity (a federated model similar to that currently being developed by the Cabinet Office’s Identity Assurance Programme).

The smartcard authentication method required a user to have a valid, pre-initialised smartcard with a recognised digital certificate present. The smartcard was inserted into the citizen’s smartcard reader before accessing the secure website. This enabled the web browser on the PC (transparently to the user) to establish a secure session with the site using a trusted certificate. Once this secure session was established, the citizen was able to access the protected website. Then, when the user had completed the web pages, the data were signed using the digital certificate.
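For flavour, here’s a minimal modern sketch of the same pattern in Python: a client certificate presented during the TLS handshake ties the session to a verified identity before any form data is exchanged. The certificate file names and host below are hypothetical stand-ins; in the demonstrator the private key and certificate lived on the smartcard, and the session set-up and signing were handled by the browser stack of the era.

```python
# A sketch only: CERT_FILE, KEY_FILE and the host are hypothetical
# placeholders, so this won't run as-is against a real endpoint. In the
# 1999 demonstrator the private key and certificate lived on the
# smartcard rather than on disk.
import http.client
import ssl

CERT_FILE = "citizen_cert.pem"   # hypothetical client certificate
KEY_FILE = "citizen_key.pem"     # hypothetical private key

# Verify the server as usual, and also present our own certificate so
# the server can tie the session to a known identity at handshake time.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.load_cert_chain(certfile=CERT_FILE, keyfile=KEY_FILE)

conn = http.client.HTTPSConnection("secure.example.gov.uk", context=context)
conn.request("GET", "/change-of-address")
print(conn.getresponse().status)
```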

The initial welcome screen.


Next was the screen for entering personal details.


Then the old and new address pages, with the addresses automatically validated against the Post Office’s Postcode Address File (PAF).

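As an aside, PAF validation is a lookup against a licensed address dataset, so it can’t be reproduced here. What can be sketched is the cheap format check that typically precedes such a lookup, using a deliberately simplified postcode pattern (the real rules have more exceptions):

```python
# Format-only stand-in for the PAF step: a real PAF check is a lookup
# against Royal Mail's licensed address dataset, which this sketch does
# not have. A simplified regex at least catches malformed postcodes
# before any dataset lookup is attempted.
import re

# Deliberately simplified; real UK postcode rules have more exceptions.
POSTCODE = re.compile(r"^[A-Z]{1,2}[0-9][A-Z0-9]?\s*[0-9][A-Z]{2}$")

def looks_like_postcode(text: str) -> bool:
    """Cheap format check; a true PAF lookup would follow."""
    return bool(POSTCODE.match(text.strip().upper()))

print(looks_like_postcode("SW1A 1AA"))        # True
print(looks_like_postcode("not a postcode"))  # False
```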

Followed by letting users decide which departments they wanted to notify of their change of address.


And finally, there was the summary and declaration screen.


After which the user would be presented with a confirmation page and a reference number to quote in the event of any follow-on enquiries.


The summary architecture for this service is shown below.

Change of Address 1999

XML was used as part of the government’s adoption of open standards for data and interfaces via the GovTalk initiative.
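To give a flavour of the envelope idea (a routing header wrapped around a service-specific payload), here is a loose sketch in Python. The element names are illustrative only, not a faithful rendering of the actual GovTalk Envelope schema:

```python
# A loose illustration of envelope-style messaging: a routing header
# wrapped around a service-specific body. Element names are illustrative,
# not a faithful copy of the GovTalk Envelope schema.
import xml.etree.ElementTree as ET

envelope = ET.Element("Envelope")

header = ET.SubElement(envelope, "Header")
ET.SubElement(header, "MessageClass").text = "ChangeOfAddress"
ET.SubElement(header, "Sender").text = "citizen-portal"

body = ET.SubElement(envelope, "Body")
address = ET.SubElement(body, "NewAddress")
ET.SubElement(address, "Postcode").text = "SW1A 1AA"

print(ET.tostring(envelope, encoding="unicode"))
```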



Stefan Czerniawski points out that the above largely relates to the early-stage demonstrator — and that the later live pilot expanded to include more departments and was made available through third-party sites. See also his related blog here.


a tale of two countries: the digital disruption of government


My Australian colleague, Marie Johnson, and I have drafted a paper for this month’s Commonwealth Association for Public Administration and Management (CAPAM) conference being held in Putrajaya, Malaysia. It looks at government endeavours in the UK and Australia over the last 20 or so years to use technology to improve our public services.

You can download a copy of the paper (PDF) here — A Tale of Two Countries – Fishenden and Johnson.

Due to a diary conflict, I won’t be attending to co-present the paper, but Marie will be there to narrate and debate our ‘tale of two countries’.



updated Identity Assurance Principles for the UK Government

We’ve been making good progress at the Privacy and Consumer Advisory Group (PCAG) on reviewing the work of various government departments — everything from the Identity Assurance Programme (IDAP), to the “big data” work of the Office for National Statistics, to the “data sharing” proposals, to electoral transformation and other programmes.

I’d like to acknowledge the very open way that most of the government teams have engaged with PCAG — and that even where discussions may have become “full and frank” they have always remained constructive. The Minister for the Cabinet Office, Francis Maude, has also been very supportive of our work, part of the reason its scope has expanded considerably from our earlier focus on identity assurance.

Of course, as an independent advisory group we don’t have any “power” in the sense of a veto over the work of the various government departments — but in general most people we’ve engaged with have understood the sense in applying best privacy and security principles to their work, rather than leaving it full of holes or subject to large-scale public suspicion. It helps that the government’s Technology Code of Practice has as part of its Point 6 the requirement that “Users should have access to, and control over, their own personal data.” Indeed, some programmes — such as the NHS programme — might have avoided some of their problems if they’d observed this policy in the first place…

We’ve just formally submitted our updated Identity Assurance Principles (.pdf) to the UK Government’s IDAP team, who will provide a public response in due course, once they’ve had time to consider the Principles and their impact on the programme’s work. These updated Principles follow on from PCAG’s earlier work, and our subsequent open consultation.


more work required: on ‘big govt IT’, ‘transactions’ and the future of public service design

I posted online recently some headline stats comparing the relative scale of UK banking transaction volumes with UK government transaction volumes. They sparked a healthy debate about the nature of ‘transactions’ and the complexity of processing required of a transaction embedding a complex welfare claim form relative to one for a simple financial exchange.

Neither the estimate of total UK banking transactions per annum nor the estimated number of UK government transactions in my original post seems reliable (one commentator suggested that, for just one of the government services, the true figure was some 7x greater than that shown on the Govt Transactions Explorer). There is also inevitably some double-counting: something may start as an HMRC payment and then become a banking transaction, so there’s a degree of mutual interdependence in such figures.

After much internal debate, I decided to pull that original infographic — to prevent the propagation of something potentially misleading without a proper context. Irritatingly, LinkedIn removed all the subsequent comments too — if I’d known that, I’d have left it there. Mea culpa and many apologies to those who contributed. Luckily I had been keeping a summary of the comments, since they raised many useful points which I wanted to capture, so they are not entirely lost.

So, important lesson learned … and to return to my original point, which was about the relative scale of government IT compared to what is happening in other areas, comparative stats like these in my replacement graphic make a point about scale whilst still skirting the issue of comparative transactional complexity:

transaction stats

Or on the sheer scale of what the overall internet is now handling (or, at least, was handling in 2012…):

[Infographic: the scale of internet traffic in 2012; source retrieved 12.07.2014]

Usefully, within the 1.15bn HMRC transactions of the first graphic above, the Transactions Explorer lets us drill down to see what ‘transactions’ make up this total. And to drill down again into each specific service. It’s a very useful tool — we need to see much more of this transparency and insight. Letting the sunshine in should also help resolve issues of associated data quality — addressing the points raised that some of the data appears to be underestimating what actually currently happens.

My original point was not intended to be specifically about the nature of transactions (which after all range through the gamut of ISO 20022 domains of Payments, Securities, Trade services, Cards and FX to the GovTalk Envelope [PDF]), but about the scale of government IT in the digital era. Implicit was also a much bigger question about whether, with a proper data architecture and redesigned public services, many of these transactions are even necessary — combined with the question of how best we architect it all (an issue usefully discussed in this blog post by Stefan Czerniawski).

Many current government ‘transactions’ are merely automated versions from the old paper world, moving electronic versions of forms from one place to another — either literally, or by mimicking the form online in a series of interminable web pages that ape the paper world. We can throw all the tin and software we like at these ‘digital forms’, but it’s not going to do much to improve the quality, efficiency, or relevance of the services involved.

The more challenging issue is how we ensure these processes and services (and indeed the organisations behind them) are re-thought and redesigned for the digital age. In the same way that distributing a text document to multiple contributors and then trying to reconcile all their comments and changes (now scattered across multiple forked copies of the original document) is being superseded by a model where documents are collaborated on online in a single place and not sent around at all, there is enormous scope for a smarter data architecture. One that moves away from mirroring the capture or flow of online equivalents of paper documents to one oriented around data and capturing the delta for specific services — such as welfare or tax — rather than the entire data set time after time for each service and functional business silo.
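To make the “capture the delta” idea concrete, here is a minimal sketch under assumed names (it doesn’t reflect any actual government system): a change is recorded once as an event against a shared record, and each subscribing service reacts to it, rather than every service collecting the citizen’s entire dataset again through its own form.

```python
# A minimal sketch of the "capture the delta" idea, under assumed names:
# a change is recorded once against a shared record, and each subscribing
# service is notified of just that change -- rather than every service
# re-collecting the citizen's entire dataset through its own form.
from dataclasses import dataclass, field

@dataclass
class CitizenRecord:
    record_id: str
    data: dict = field(default_factory=dict)
    subscribers: list = field(default_factory=list)  # interested services

    def apply_delta(self, delta: dict) -> None:
        """Record only what changed, then notify subscribing services."""
        self.data.update(delta)
        for notify in self.subscribers:
            notify(self.record_id, delta)

# Hypothetical services registering interest in address changes.
record = CitizenRecord("citizen-42", {"address": "1 Old Street"})
record.subscribers.append(lambda rid, d: print(f"tax service saw {d} for {rid}"))
record.subscribers.append(lambda rid, d: print(f"welfare service saw {d} for {rid}"))

record.apply_delta({"address": "2 New Road"})  # one delta, many consumers
```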

So yes — to start with ‘transactions’ or technology being used to automate what is there is to start in entirely the wrong place. Many ‘government transactions’ (and potentially some government organisations and agencies) could potentially be dramatically improved, or perhaps even obsoleted entirely, with better designed public services. As I’ve commented on before, the poor design of many public service processes and associated paper forms are as socially unacceptable as the same poorly-designed services delivered on to a screen. So some of the more complex ‘transactions’ that people commented upon — such as case management work — raise the wider question about the overall design of public services, and why such things are being sent around in the first place, effectively using technology to fossilise the way things were done in the paper age.

I started from the perspective that whilst once governments were often amongst the users of ‘big’ technology (in terms of scale and speed), others now have claimed that crown. More importantly, that government could learn from the best of what has happened elsewhere — and use technology as a lever to redesign our public services, not merely to automate them in their current state. That’s part of the narrative I was discussing in my recent CIO piece ‘The birth of the composable enterprise‘.

Improving our public services requires the re-evaluation, redesign and re-engineering of the organisations behind them on every level — people, process, technology and governance. This was implicit in a comment on the LinkedIn (RIP — grrrr, lesson learned) thread: ‘Maybe the question is how many should the UK Govt be doing and not looking at what they are doing’. Those who commented that the debate about ‘transactions’ is ‘one dimensional’ and missing more important wider issues are entirely right: it’s time we had that wider and much more difficult debate — something Mark Thompson, for example, raises in his ComputerWeekly piece ‘Where is the long-term political vision for digital public services?‘.

