Happy 20th anniversary, online government

It’s 20 years ago this month that the UK government first launched a website intended to provide a simplified, single point of access to information from across the public sector. I thought I’d add a little more detail — or at least, a few historic screenshots — to support my recent CIO column marking the anniversary.

The Government Information Service (GIS), hosted at open.gov.uk, launched in November 1994. It was intended that over 400 public sector organisations, including government departments, local authorities and police forces, would provide their information on the site, which received around 200,000 hits a day shortly after launch.

GIS

In July 1996, this summarised the state of play:

23 July 1996

By mid 1997 it was approaching 2m requests a week.

GIS

In 1999, the “Portal Feasibility Study” (PDF) set out plans for a more comprehensive approach to delivering all government services online in one place. The portal element of this architecture was originally nicknamed “me.gov”: below are some mockups from 2000 showing how it was envisaged it might look.

me.gov 1 2000

me.gov 2 2000

By the time of its launch, it had become “UKonline”. UKonline initially appeared as a beta site in November 2000, followed by a formal launch in February 2001.

UK Online

UKonline aimed to provide more integrated services, built around citizens’ “life episodes” (events that had meaning to them), rather than simply projecting online the departmentally-based silo services already in existence.

UK Online life episodes

The 1st March 2004 saw another rebrand and relaunch, this time as Directgov.

DirectGov

In May 2011, Directgov (and its sister site, BusinessLink — dedicated to meeting the needs of UK business users) began to be superseded by GOV.UK, initially as an alpha.

Alphagov

In October 2012, the site replaced Directgov and went fully operational as GOV.UK, celebrating its second birthday just last month.

GOV.UK, October 2014

I’ve collated below some stats on the usage of the online site(s) in their various guises over the past 20 years — a task not helped by the fact that early stats relate to “hits” or “visits” while more recent measures relate to “unique visitors/users”. So don’t take this as the definitive or final comment on the growth of online government information and services, but as a partial snapshot at a moment in time … (and if any of you have additional interim dates and usage stats not shown, let me know and I’ll revise and improve the list).

  • 1994 — 200,000 hits a day
  • 1997 — 285,000 hits a day
  • 2004 — 172,257 unique visitors a day
  • 2012 — 1m unique visitors a day
  • 2014 — 1.4m unique visitors a day

Happy 20th anniversary!

[A more detailed narrative of the last 20 years of online government is provided in an earlier blog here]


random access memories

I’m often asked how I got into computing in a generation when neither IT nor computer science was on the school curriculum. So I’ll try to fill in a few random gaps from some cobwebby parts of my memory …

It all started as a hobby. One of my earliest memories of using a computer was thanks to the North Kent Amateur Computer Club and the host of hobbyists and enthusiasts who used to attend. It was the age of the kit computer, when the likes of the UK101 ruled supreme.

Compukit_UK101_Mainboard_s1

I think it was Thursday evenings after school when I’d turn up at some local venue — usually a spare school hall — to find a room full of enthusiastic, often bearded participants, all hunched over circuit boards and small TV sets acting as monitors.

From time to time one of them would pick up a soldering iron and make an adjustment to something, or unplug and restart the system or wallop a TV set to stop the picture jumping. These are the folks I’ve always thought of as “digital natives” — people who understand how to design and use digital technology — but that phrase seems to be (ab)used now merely to mean people who have grown up using digital devices as consumers.

I remember testing out some of the simple 3D green mazes and early scrolling games that tried to make it feel like I was speeding across a largely imagined landscape and into the distance. The members of the club were always unfailingly generous in letting me try out their latest hardware builds and hand-cranked software instead of merely dismissing me as another irritating spotty teenager from the local comprehensive avoiding homework with a much more interesting displacement activity.

Towards the end of my sixth-form days, the first computer had turned up at the school — a Commodore PET.

pet3d

Ever since, I’ve regarded this as what a “real” computer should be like — with a built-in monitor and keyboard, and its own sophisticated storage system (aka a cassette tape deck). We were able to book the PET after school to use for something like an hour at a time — just about enough time to load something from the tape, suss out how it worked and then hand it over to the next person. We seemed to spend most of our time standing around it and working out how to win at the text-based version of the Star Trek game.

I remember at home we also ended up owning one of the earliest video game consoles — a Magnavox Odyssey. My mum bought it for us secondhand (I’m not quite sure how we managed to persuade her to do that, particularly given how tight money was). It seemed great fun for its time, complete with a light rifle and a set of screen overlays that would be held in place on the TV screen courtesy of the static charge.

magnavox-odyssey-ad

These imaginative static overlays turned our black and white TV into “colour” for the first time, with most of the games based on a variation of Pong. To change game, you removed one of the cards that came with the console, and inserted another one in its place. I remember using the rifle to take pot shots at the ball of light as it moved around the “haunted house” overlay, appearing momentarily at some of the “windows” cut into the static film. State of the art or what?

Like most of my early machines, sadly I’ve no idea what happened to the Odyssey over the years — lost in one of those many post-university moves between various rental properties in and around rundown parts of London.

My own first purchase was a Sinclair ZX-80, then the ZX-81, then the ZX-Spectrum.

Screen Shot 2014-10-29 at 07.32.04

Like many of my generation, I owe a lot to the pioneering spirit of Sir Clive Sinclair: thanks mate.

sinclair_zx80_hr_1s

At university, the Spectrum ended up in the communal room, used for gaming alongside our “artistic” pyramid built from dozens of empty beer cans assembled on the mantelpiece. The dusty two-bar electric fire that made this the only warm room in the house was at the end of a slightly melted extension cable which ran into the adjacent bedroom — the only room in the Stoke Newington house we all shared that had no electricity slot meter in it. (From what I remember, most of the house was wired via a spaghetti ball of cables back into that room and its free electricity supply …).

Initially, programming these early computers consisted of either copying in program listings from hobbyist magazines or writing your own code. The magazine listings always seemed to contain errors, meaning it was necessary to buy the next month’s edition as well for the errata — unless I managed to work out for myself which lines had been missed or garbled in the meantime. Later came the use of cassette decks, a rather erratic and unpredictable way of loading programs — one that often failed only after 15 or 20 minutes or more, when the Play button on the cassette player would pop up and you’d realise the program hadn’t loaded.

Later I moved on to a Commodore 64, a BBC Model B and then to a “proper” computer — the Apricot F10. My programming efforts, which had started with BASIC, experimented with everything from assembly language (on both Z80 and 6502 processors) to Fortran, Prolog and Pascal.

act_apricot-f2_1

When I started work, the IBM PC had yet to dominate — it was not unusual to find an office where almost everyone was working at a different, incompatible computer. On one desk would be an Apple II, on another a Commodore and on another an Apricot. Apricot was doing well in the UK at the time, particularly in the public sector — its machines ran faster and more effectively, at a lower price, than their American cousins. Not that it helped them much in fighting off the competition …

… to be continued (possibly) at some future point …

[With thanks to http://www.computinghistory.org.uk and www.rewindmuseum.com for the assortment of images used in this blog]


more on the 1999 change of address demonstrator …

I mentioned in a previous post the work done in the late 1990s to put online a change of address service.

This service enabled citizens to inform separate government departments, via the internet and in a single transaction, of a change of address. The two departments that took part in this work were the Inland Revenue (now part of HMRC) and the Department of Social Security (now DWP). The project never really moved beyond its live demonstration phase with a limited subset of citizens.

I’ve recently managed to source some additional screen grabs from that era, as below.

Missing from these is the stage where users were authenticated by a third-party digital certificate — in this case Royal Mail’s ViaCode or Barclays Bank’s Endorse (NatWest Bank had also been involved in some earlier work). It was this signing that helped confirm the user’s identity (a similar federated model to the one currently being developed by the Cabinet Office’s Identity Assurance Programme).

The smartcard authentication method required a user to have a valid, pre-initialised smartcard with a recognised digital certificate present. The smartcard was inserted into the citizen’s smartcard reader before accessing the secure web site. This enabled the web browser on the PC (transparently to the user) to establish a secure session with the site using a trusted certificate. Once this secure session was established, the citizen was able to access the protected web site. Then, once the user had completed the web pages, the data were signed using the digital certificate.
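By way of illustration only (the demonstrator predated the XML Signature standard and used its own message formats, which I no longer have to hand), the essential idea was to bind the submitted change of address data to the certificate held on the citizen’s smartcard, along these lines. All element names and values below are invented:

  <!-- Illustrative sketch only: not the actual 1999 message format.
       The signature binds the submitted data to the certificate on the citizen's smartcard. -->
  <ChangeOfAddressSubmission>
    <OldAddress>...</OldAddress>
    <NewAddress>...</NewAddress>
    <Departments>
      <Department>Inland Revenue</Department>
      <Department>Department of Social Security</Department>
    </Departments>
    <Signature xmlns="http://www.w3.org/2000/09/xmldsig#">
      <SignedInfo>
        <CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
        <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
        <Reference URI="">
          <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
          <DigestValue>...</DigestValue>
        </Reference>
      </SignedInfo>
      <SignatureValue>...</SignatureValue>
      <KeyInfo>
        <X509Data><X509Certificate>...</X509Certificate></X509Data>
      </KeyInfo>
    </Signature>
  </ChangeOfAddressSubmission>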

The initial welcome screen.

coa1

Next was the screen for entering personal details.

coa2

Then came the old and new address pages, with the addresses automatically validated against the Post Office’s Postcode Address File (PAF).

coa6 coa7

Followed by letting users decide which departments they wanted to notify of their change of address.

coa3

And finally, there was the summary and declaration screen.

coa4

After which the user would be presented with a confirmation page and a reference number to quote in the event of any follow-on enquiries.

coa5

The summary architecture for this service is shown below.

Change of Address 1999

XML was used as part of the government’s adoption of open standards for data and interfaces via the GovTalk initiative.
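For a flavour of what that meant in practice, the GovTalk message envelope that the Government Gateway later standardised on looked roughly like this. It’s a simplified sketch from memory rather than a definitive schema, with placeholder values throughout:

  <GovTalkMessage xmlns="http://www.govtalk.gov.uk/CM/envelope">
    <EnvelopeVersion>2.0</EnvelopeVersion>
    <Header>
      <MessageDetails>
        <!-- the service being invoked; "ChangeOfAddress" here is purely illustrative -->
        <Class>ChangeOfAddress</Class>
        <Qualifier>request</Qualifier>
        <Function>submit</Function>
      </MessageDetails>
      <SenderDetails>
        <!-- the sender's credentials, or a reference to their digital certificate -->
      </SenderDetails>
    </Header>
    <GovTalkDetails>
      <Keys>
        <!-- routing keys identifying the citizen or business, e.g. a reference number -->
      </Keys>
    </GovTalkDetails>
    <Body>
      <!-- the service-specific XML payload, e.g. the change of address data itself -->
    </Body>
  </GovTalkMessage>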

Postscript(s)

16.10.2014

Stefan Czerniawski points out that the above largely relates to the early-stage demonstrator — and that the later live pilot expanded to include more departments and was made available through third-party sites. See also his related blog here.


a tale of two countries: the digital disruption of government

Screen Shot 2014-10-14 at 11.39.40

My Australian colleague, Marie Johnson, and I have drafted a paper for this month’s Commonwealth Association for Public Administration and Management (CAPAM) conference being held in Putrajaya, Malaysia. It looks at government endeavours in the UK and Australia over the last 20 or so years to use technology to improve our public services.

You can download a copy of the paper (PDF) here — A Tale of Two Countries – Fishenden and Johnson.

Due to a diary conflict, I won’t be attending to co-present the paper, but Marie will be there to narrate and debate our ‘tale of two countries’.

Screen Shot 2014-10-14 at 11.35.55


updated Identity Assurance Principles for the UK Government

We’ve been making good progress at the Privacy and Consumer Advisory Group (PCAG) on reviewing the work of various government departments — everything from the Identity Assurance Programme (IDAP), to the “big data” work of the Office for National Statistics, to the “data sharing” proposals, to electoral transformation and other programmes.

I’d like to acknowledge the very open way in which most of the government teams have engaged with PCAG — and that even where discussions may have become “full and frank”, they have always remained constructive. The Minister for the Cabinet Office, Francis Maude, has also been very supportive of our work, which is part of the reason its scope has expanded considerably from our earlier focus on identity assurance.

Of course, as an independent advisory group we don’t have any “power” in the sense of a veto over the work of the various government departments — but in general most people we’ve engaged with have understood the sense in applying best privacy and security principles to their work, rather than leaving it full of holes or subject to large-scale public suspicion. It helps that the government’s Technology Code of Practice has as part of its Point 6 the requirement that “Users should have access to, and control over, their own personal data.” Indeed, some programmes — such as the NHS care.data programme — might have avoided some of their problems if they’d observed this policy in the first place…

We’ve just formally submitted our updated Identity Assurance Principles (.pdf) to the UK Government’s IDAP team. They will provide their public response in due course once they’ve had time to consider them and their impact on their work. These updated Principles follow on from PCAG’s earlier work, and our subsequent open consultation.


more work required: on ‘big govt IT’, ‘transactions’ and the future of public service design

I posted online recently some headline stats comparing the relative scale of UK banking transaction volumes with UK government transaction volumes. They sparked a healthy debate about the nature of ‘transactions’ and the complexity of processing required of a transaction embedding a complex welfare claim form relative to one for a simple financial exchange.

Neither the estimate of total UK banking transactions per annum nor the estimated number of UK government transactions in my original post seems reliable (one commentator suggested that for just one of the government services the true figure was some 7x greater than that shown on the Govt Transactions Explorer). There is also inevitably some double-counting: something may start as an HMRC payment and then become a banking transaction, so there’s a degree of mutual inter-dependence in such figures.

After much internal debate, I decided to pull that original infographic — to prevent the propagation of something potentially misleading without a proper context. Irritatingly, LinkedIn removed all the subsequent comments too — if I’d known that, I’d have left it there. Mea culpa and many apologies to those who contributed: luckily I had been keeping a summary of the comments, since they raised many useful points which I wanted to capture, so they are not entirely lost.

So, an important lesson learned … and to return to my original point — which was about the relative scale of government IT compared with what is happening in other areas — comparative stats like those in my replacement graphic make a point about scale whilst still skirting the issue of comparative transactional complexity:

transaction stats

Or consider the sheer scale of what the internet as a whole is now handling (or, at least, was handling in 2012 …):

(source: mashable.com/2012/11/27/email-stats-infographic/. Retrieved: 12.07.2014)

Usefully, within the 1.15bn HMRC transactions in the first graphic above, the Transactions Explorer lets us drill down to see what ‘transactions’ make up this total — and then drill down again into each specific service. It’s a very useful tool — we need to see much more of this transparency and insight. Letting the sunshine in should also help resolve issues of associated data quality — addressing the points raised that some of the data appears to underestimate what actually happens at present.

My original point was not intended to be specifically about the nature of transactions (which after all range through the gamut of ISO 20022 domains of Payments, Securities, Trade services, Cards and FX to the GovTalk Envelope [PDF] ), but about the scale of government IT in the digital era. Implicit was also a much bigger question about whether, with a proper data architecture and redesigned public services, many of these transactions are even necessary — combined with the question of how best we architect it all (an issue usefully discussed in this blog post by Stefan Czerniawski).

Many current government ‘transactions’ are merely automated versions from the old paper world, moving electronic versions of forms from one place to another — either literally, or by mimicking the form online in a series of interminable web pages that ape the paper world. We can throw all the tin and software we like at these ‘digital forms’, but it’s not going to do much to improve the quality, efficiency, or relevance of the services involved.

The more challenging issue is how we ensure these processes and services (and indeed the organisations behind them) are re-thought and redesigned for the digital age. In the same way that distributing a text document to multiple contributors and then trying to reconcile all their comments and changes (now scattered across multiple forked copies of the original document) is being superseded by a model where documents are collaborated on online in a single place and not sent around at all, there is enormous scope for a smarter data architecture. One that moves away from mirroring the capture or flow of online equivalents of paper documents to one oriented around data and capturing the delta for specific services — such as welfare or tax — rather than the entire data set time after time for each service and functional business silo.
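As a purely hypothetical sketch of that difference: rather than re-capturing an entire claim form every time, a delta-oriented service would receive only what has actually changed, something like the fragment below (all element names are invented for illustration):

  <!-- Hypothetical delta message: only the changed data is captured and sent, not the whole form -->
  <ChangeOfCircumstances citizenRef="..." reported="2014-11-03">
    <Change item="address">
      <From>1 Old Street, Anytown</From>
      <To>2 New Road, Othertown</To>
    </Change>
    <!-- employment, dependants etc. are not re-submitted; the service already holds them -->
  </ChangeOfCircumstances>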

So yes — to start with ‘transactions’, or with technology used to automate what already exists, is to start in entirely the wrong place. Many ‘government transactions’ (and potentially some government organisations and agencies) could be dramatically improved, or perhaps even made obsolete entirely, by better-designed public services. As I’ve commented before, the poor design of many public service processes and associated paper forms is as socially unacceptable as the same poorly-designed services delivered onto a screen. So some of the more complex ‘transactions’ that people commented upon — such as case management work — raise the wider question of the overall design of public services, and why such things are being sent around in the first place, effectively using technology to fossilise the way things were done in the paper age.

I started from the perspective that whilst governments were once often amongst the leading users of ‘big’ technology (in terms of scale and speed), others have now claimed that crown. More importantly, government could learn from the best of what has happened elsewhere — and use technology as a lever to redesign our public services, not merely to automate them in their current state. That’s part of the narrative I was discussing in my recent CIO piece ‘The birth of the composable enterprise’.

Improving our public services requires the re-evaluation, redesign and re-engineering of their organisations on every level – people, process, technology and governance. This was implicit in one comment on the LinkedIn (RIP — grrrr, lesson learned) thread: ‘Maybe the question is how many should the UK Govt be doing and not looking at what they are doing’. Those who commented that the debate about ‘transactions’ is ‘one dimensional’ and misses more important wider issues are entirely right: it’s time we had that wider and much more difficult debate — something Mark Thompson, for example, raises in his ComputerWeekly piece ‘Where is the long-term political vision for digital public services?’.

 


high level cross-government architecture — 2003 style

I recently tweeted a couple of old IT architecture schematics from 2003. They provide an interesting (historical) perspective on how to bring existing government systems into the world of online services.

Following a few requests, I thought I might as well follow up by blogging in a bit more detail about those schematics here — given that the subject remains highly topical.

Back in 2003, the aim was to reach consensus on a cross-government technical architecture that would provide the right balance between centrally-provided and department-provided components. At the time, the open standards of e-GIF (the e-Government Interoperability Framework) prevailed, and the default data format for inter-system interoperability was XML (the Extensible Markup Language).

The “Vision” from 2003 is shown below.

online service vision 2003

The high level architecture behind this vision is shown below.

conceptual x-government view

It’s fairly self-explanatory at this level — with several core elements shown down the right-hand side (management and operations; data interoperability [XML]; security framework; metadata framework), and four key technical layers to the left: data sources; data access; business logic and workflow; and UI components and processes.

This is expanded a little more in the view that follows.

x-govt conceptual more detailed

The top tier makes clear the multi-channel strategy and the way government services were envisaged as being delivered through a whole range of different organisations: from government to businesses to the voluntary sector, and through a variety of devices.

Web services were to provide the common, open standards for how the various online government services would be delivered via public interfaces. At the “common services” tier, a range of modular components (from authentication to payments to notifications) was to exist, providing a flexible way of pulling common components together in the design of frontline services. Behind this sat a similar open layer of web services, bridging XML to the proprietary integration mechanisms needed to connect into the existing systems — whether public or private sector — used in the design and delivery of online services.

For this approach to work, both departments and the centre needed to agree standards for the web services to be exposed by departments. How XML would be integrated with existing systems also needed to be resolved, to ensure consistent delivery. And there needed to be a way of reliably orchestrating processes both at the central government level (for cross-government orchestration of multi-departmental services) and at the departmental level (cross-system).

At the centre, the Government Gateway (which became an umbrella term for a range of component services — primarily identity and authentication, transaction orchestration, payments, and departmental integration services) acted as a broker, ensuring all calls were made through the same common architecture and endpoints. This included defining common standards in areas such as naming conventions and error-handling responses, together with the XML schema and metadata content of the actual SOAP (Simple Object Access Protocol) methods and calls used for the interaction of content and services.
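In outline, a call brokered in this way was a SOAP message along the following lines. This is a simplified sketch in which the namespaces and element names are illustrative rather than the Gateway’s actual schema:

  <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Header>
      <!-- centrally-defined conventions: correlation IDs, routing, error-handling responses -->
    </soap:Header>
    <soap:Body>
      <SubmitRequest xmlns="urn:example:gateway">
        <!-- departmental payload expressed in the agreed XML schema -->
      </SubmitRequest>
    </soap:Body>
  </soap:Envelope>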

At what has often proved the most intractable and complex layer — that of bridging the gap between existing systems and online services — three elements came into play:

  • custom adaptors (specific integration tools for e.g. a mainframe)
  • web services interfaces surfaced via the adaptors and exposing data and methods through native XML/SOAP
  • associated process logic to ensure data and application integrity

backend integration

This approach enabled a variety of existing systems to be bridged into the open, web services world. Of course, this was meant to be a transitional stage, not something to fossilise and preserve existing systems so that they could live on forever. It was intended as a pragmatic way of taking early benefits in a massively diverse brownfield environment whilst a parallel programme could begin to re-engineer and re-architect backend processes, data, systems and their owning organisations into something better suited to twenty-first century government services.
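To make the adaptor idea concrete (with the caveat that every name below is hypothetical), a lookup on, say, a departmental mainframe could be surfaced as a simple XML request/response pair, with the adaptor translating to and from the system’s native interface:

  <!-- Hypothetical adaptor interface: the adaptor translates this request into the
       legacy system's native calls (e.g. a mainframe transaction) and back again -->
  <CustomerRecordRequest xmlns="urn:example:dept-adaptor">
    <CustomerReference>AB123456</CustomerReference>
  </CustomerRecordRequest>

  <CustomerRecordResponse xmlns="urn:example:dept-adaptor">
    <Status>OK</Status>
    <Customer>
      <Name>A N Other</Name>
      <Address>1 Example Street, Anytown</Address>
    </Customer>
  </CustomerRecordResponse>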

backend transition

This secondary stage foresaw the assessment, transformation, re-factoring and web-enablement of these older systems — with an end goal of removing older backend systems and disaggregating and componentising them to enable departments to redesign and improve their services free of the restrictions of their inherited IT estate.

The illustration below shows conceptually how this architectural approach would enable the central orchestration of a service across multiple departments, with each of those departments in turn tackling local integration across their multiple backend systems.

x-govt orchestration

An alternative view of the central components/local integration model is shown below, illustrating the role of common, re-usable components in the overall architecture and service design model.

common components

A more detailed breakdown of the layers of the model is shown below.

detailed layers

Returning to a higher level perspective, the schematic below shows how the various components comprising the Government Gateway provided the realisation of some of this vision.

GG enablers

Note the existence of a “virtual department” to the right of the picture: this was a major concept at the time. Rather than trying to fix all of the historic issues of existing systems, data, processes and organisational hierarchies, the proposition was to create ‘virtual departments’ that would provide a way of exposing new services built around citizens’ and businesses’ needs rather than merely projecting the existing departmental service structures onto the internet. These would enable the development and design of better services — but, over time, they would also build out the future of government, at which point the existing systems could be switched off. More ambitiously, they would also enable the potential reconfiguration of government itself as it would no longer be tied down by information systems that had fossilised the historic business units and hierarchical functional silos of the departments and agencies in which they had been designed and deployed.

Some 11 years on, there remains considerable healthy debate about the best architectural models for government — across business, information and technical levels, and indeed the wider organisational configuration of government itself. At the polar extremes are opposing technical perspectives on the merits of emergent solutions and approaches versus imposed blueprints, and of centralised versus federated models.

As with most of these things, neither extreme in itself will prevail: good systems, successful technology and well-designed user services tend to involve a shifting blend of models and approaches. But hopefully I’m not alone in finding it useful to document the journey we’re on — and the road already travelled.
