A (brief) history of UK Government moves towards a platform-based architecture

The vision — 1998 onwards

There’s been a long-standing vision to use technology in a way that doesn’t fossilise the old paper-based way of running the public sector — but which enables services to be redesigned and improved. Part of this vision has seen a move over the last few decades towards common services that remove duplication and enable the delivery of services designed around the citizen (rather than the owning department or agency).

A 1998 report by the Parliamentary Office of Science and Technology (POST) is the first recorded effort I’ve come across to consider the impact of technology on government and all of its services in a holistic way. The report, on “electronic government”, included the following schematic as part of a discussion about common processes across government and the need to avoid merely using technology to serve up online versions of paper forms.


Figure: the 1998 view of common services

[Source: POST Electronic Government 1998]

The UK government’s architectural approach to the development of these more joined-up services was set out in the 1999 “Portal Feasibility Study”, prepared for the Central IT Unit (CITU) in the Cabinet Office. It proposed the three-tier architecture shown below.


Figure: The 3-tier conceptual architecture, 1999

[Source: Portal Feasibility Study]

This architecture was designed to support a wide range of access channels, and to enable the existing IT estates of departments and agencies to integrate without requiring a big-bang, rip-and-replace approach. It proposed that:

A three-tier architecture can be used to insulate the access channels from the complexity of the Government Back Office with web technology providing the portal, or gateway between the channels and the individual service requested. The key concept of the three tier architecture is the use of middleware technology to provide a brokerage capability, a concept that sits well with the idea of a portal. The middleware will link components to allow them to interact without the need to have knowledge of the other component’s location, hardware platform, or implementation technology. [p.7]

It included an emphasis on the value of open standards:

The technical implementation of the three-tier architecture must provide the glue to link existing Departmental services and systems to a wide range of different access channel technologies. This means that open standards need to be prescribed and that the interface standards needed to ensure good interworking must be defined.

An open architecture will maximise the flexibility and opportunities for infrastructure provider competition. Every major interface in the architecture will need to have an interface specification defined for it. This will allow architectural components, services and supplier systems to be replaced easily and a ‘plug and play’ approach to be taken to architecture components, services and supplier systems. [p.8]

For the specific standards, reflecting some of the dominant technical models of the time, it recommended:

Tier | Element | Recommendation
Front | Web server protocol | HTTP
Front | Application server candidates | CORBA ORB (supporting services written in Java, or DCOM and Microsoft Transaction Server)
Middle | Application server architecture candidates | CORBA ORB
Back | Suggested protocol for communication with Government departments | SMTP
Back | Suggested messaging standard | S/MIME
Table: Proposed Technical Standards

[Adapted from “Portal Feasibility Study, 1999”]

Confidentiality was to be assured through the use of encryption between the desktop and the Web server, specifically SSL or HTTP/S. Public Key Infrastructure (PKI) would be used in the middle tier, with a firewall deployed to limit network access between the front and middle tiers. Messaging between the middle and back tiers would use structured messages digitally signed by the Web site to protect the integrity of the information, authenticate the source and prevent repudiation. The internal government networks (the Government Secure Intranet, GSI, and xGSI) would be used to secure the physical transmissions within government’s own estates.

From concept to engineering

The approach outlined in the “Portal Feasibility Study” — building out a single online web presence for the whole of government — was later iterated through a series of implementations. The initial site was “UKonline”, replacing the early portal efforts of the Government Information Service (GIS) at open.gov.uk, first established in 1994. UKonline soft-launched as a beta site in November 2000, providing time to gather feedback and enable refinement and improvement prior to its formal launch in February 2001. It was redesigned, rebranded and relaunched on 1st March 2004 as Directgov. The current GOV.UK first appeared in May 2011 and went officially into full service in October 2012.

However, my focus in this brief blog is not the front-end website but a range of other, associated platform components, such as those developed to secure the processing of transactions and to enable the reliable identification, verification and authorisation of citizens, businesses and intermediaries.

From late 2000 onwards, the UK government built out a series of web-service / SOA-based components. There was a clear business case for taking this “whole of government” approach: it was already evident that without some degree of common services, each part of the public sector would proceed with its own local initiatives — from websites to identity systems to payments systems and so on. The same needs would be met time and again in multiple places, resulting in pointlessly duplicated expenditure, processes and systems.

This would not only be expensive, duplicating common infrastructure in multiple places across the sector, but would also produce a poor user experience: government services would be scattered across a technological landscape that replicated the physical world of multiple overlapping providers. Leaving everyone to do their own thing would effectively fossilise existing, paper-based services in the online world rather than taking the opportunity to improve and streamline them.

Open standards were the bedrock on which these initiatives were based, using published APIs (for programmatic access) and XML (for data interchange). Although collectively referred to as the “Government Gateway” (reflecting terminology first used in the “Portal Feasibility Study”), the initial component services consisted of several discrete functional components: 

  1. Registration & Enrolment (the identification, authentication and verification of online users — citizens, businesses and intermediaries — utilising a range of credentials, from user ID and passwords to digital certificates to — later — EMV chip and PIN cards)
  2. Transaction Engine (handling the processing of transactions between citizens, businesses and departments, including the management of state and orchestration across multiple service providers where a transaction involved multiple entities)
  3. Secure Messaging (handling two-way secure communication between departments and citizens)
  4. Payments Engine (handling in-bound payments to government departments)
  5. Gateway Helpdesk (an internal service enabling departments to integrate a degree of management of these common component services within their existing helpdesk services)

These API-based components are illustrated at a high level below.


Figure: the API-based components developed from late 2000 onwards

The rationale for adopting this cross-government approach has been explained as follows:

“The Gateway was designed to simplify and accelerate the UK e-Government programme. It achieves this by ensuring that the common building-block components of e-Government services are provided once, in a flexible, modular and scalable way.”

[Source: “Delivering e-Government Services to Citizens and Businesses: The Government Gateway Concept”. Jan Sebek, p.127. Published in “Electronic Government: Second International Conference, EGOV 2003, Volume 2”. Editor Roland Traunmüller]

The services were exposed via the following open standards:

  • TCP/IP
  • HTTP
  • HTTPS (128-bit SSL / TLS 1.0)
  • HTML
  • XML
  • X.509 certificates
  • W3C XML digital signatures
  • SOAP
  • SMTP
  • SAML

The schematic below shows the relationship between these initial component services and also illustrates how the provision of common platform components avoids duplication of the same needs across multiple government services. It came about, in part, because each department was developing its own local solutions to its online service needs, with a resulting duplication of expenditure and fragmentation of services from a user perspective. Moving to a strategic, platform-based approach for common services made sense from an economic, architectural and user experience perspective.


[Source: “Delivering e-Government Services to Citizens and Businesses: The Government Gateway Concept”. Jan Sebek, pp.125-128. Published in “Electronic Government: Second International Conference, EGOV 2003, Volume 2”. Editor Roland Traunmüller]

All of the technical standards used were stipulated by government to be open, enabling interoperability regardless of the software or vendors involved across the many connecting organisations:

“A wide range of systems have interoperated with the Government Gateway since its launch, including systems running Sun’s J2EE technology, IBM technologies, Apache, Tomcat and other technologies and applications including standalone PC application software.”

[Source: Preliminary Study on Mutual Recognition of eSignatures for eGovernment applications NATIONAL PROFILE UK April 2007. p.16. IDABC European e-Government Services.]

Building on the Platform

Following on from the successful release and implementation of the initial cross-government components, in 2003 a further set of services was considered.

Figure: the additional platform components considered in 2003

Some initial work was done to prototype several of these additional platforms, but internal issues relating to funding and agreement between the multiple stakeholders involved became increasingly problematic. The recurrent issue of how much government should itself try to build in-house and what it should consume or purchase from others was as much a debating point then as it is now. Hopefully this time around the debate about continuing to develop the vision of government at the technical platform level will be informed by techniques such as Wardley mapping, which can help identify where in their evolution the various components sit and therefore how they might best be sourced (from bespoke build to rental).


Figure: Wardley Mapping

[courtesy Simon Wardley]

For the few who have not yet seen it, Mark Foden’s short but useful video on “the Gubbins of Government” is essential viewing. It succinctly sums up much of what has been intended, and what remains to be done, to implement the platform model at scale:

Current era

In 2010, “Better for Less”, which set out many of the ideas implemented in government during the 2010-2015 period, emphasised the value of the component-based approach.

The use of a common infrastructure is critical … applications must be developed to work together using open standards for data and interoperability. It is critical that the common infrastructure is available for all the identified stakeholders: officials, citizens, third sector organisations, potential and actual solution providers …

… Government IT is well placed … if it adopts this construct: service oriented architecture, open standards, re-use of components and a carefully thought-through incentivisation environment encouraging and supporting innovation and re-use. It can take advantage of a cheap, commodity platform, competitive suppliers, constant innovation, and automatic sustainability.

In 2013, GDS published in its Service Design Manual its intention to continue this journey towards “Government as a Platform”. The more detailed text available at the time appears to have been removed from the site, but is still viewable in GitHub here. It recognised the need to work smartly, including:

A move to platforms does not mean that government has to develop everything in-house: many of government’s needs can be met by existing cost-efficient utility services. However, government can help to establish best practice in areas such as personal data privacy.

Wherever appropriate, the government should use existing external platforms, such as payments services (ranging from third party merchant acquirer services to the UK’s national payments infrastructure). Deciding to develop platforms in-house will happen only where that is the best way to meet users’ needs in the most flexible and cost-effective way.

Government will draw upon the experience of other organisations that have already made this journey. Many have created platforms that initially sit across the top of existing silos to expedite agile and effective digital service delivery.

This co-existence enables the benefits of the platform model to be realised quickly, even in a brownfield environment such as government, while the silos below the waterline are gradually reduced in importance and eventually made obsolete.

This is the approach that government will follow, ensuring that it develops a well-defined schedule for switching off legacy environments as the platform model is progressively implemented.

Another announcement about this commitment was made in September 2014 by Sir Jeremy Heywood, Cabinet Secretary and Head of the Civil Service, in a blog entitled “More than just websites”. It seems clear that the continuing move towards a platform-based model is now generally accepted not only at the engineering level but, as importantly, at the business and services level too.

Where next?

We shall see after the election :-)

Technical Annex

I originally planned to include the following more technical detail in the body of my blog. However, it’s a bit distracting for those only interested in the history of efforts to realise the higher-level principles and benefits of developing a platform-based model. But for those who want to look under the bonnet in a bit more technical detail, here it is … It aims to provide insight into two of the earlier components developed from late 2000 onwards — the Transaction Engine, and Registration and Enrolment.

Transaction Engine

The Transaction Engine (TxE) was developed to provide a transaction handling and routing service. It supports the exchange of data (usually documents and business forms) between government organisations and external organisations, intermediaries and citizens, as well as between government organisations themselves. It manages state for simple point-to-point exchanges, and can also orchestrate more complex interactions involving multiple parties where state needs to be understood and managed. For example, an incoming piece of data may successfully update three of four departmental systems but not the remaining one: clear state-management rules can be applied in such circumstances, ranging from rolling back all of the updates to returning response information that indicates which systems have and have not been updated, leaving the client side to decide how to progress the interaction.
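That state-tracking behaviour can be sketched as follows. This is a minimal illustration of the general technique rather than the Gateway’s actual implementation; all names, and the idea of passing in a rollback decision to the caller, are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SubmissionState:
    """Records which departmental systems ('spokes') accepted an update."""
    updated: list = field(default_factory=list)
    failed: list = field(default_factory=list)

def submit_to_spokes(document, spokes, send):
    """Deliver a document to every target spoke, recording each outcome so
    that a state-management rule (roll back everything, or report partial
    success) can be applied afterwards."""
    state = SubmissionState()
    for spoke in spokes:
        if send(spoke, document):
            state.updated.append(spoke)
        else:
            state.failed.append(spoke)
    return state

# Example: three of four spokes accept the update; the caller can now
# decide whether to compensate or to report which systems were updated.
spokes = ["dept-a", "dept-b", "dept-c", "dept-d"]
state = submit_to_spokes("<doc/>", spokes, lambda spoke, doc: spoke != "dept-d")
print(state.failed)  # → ['dept-d']
```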

TxE provides two main methods of utilisation:

  1. for ‘ad-hoc’ use, a defined submission protocol involving the exchange of XML documents posted into a URL destination point
  2. for permanently connected government and related entities, two-way messaging via Departmental Interface Servers (DIS) between the hub and those spokes involved in the messaging exchange

Although the original TxE was built on proprietary technology, the protocols developed and used were vendor-neutral and open standards-based, rather than utilising Microsoft’s BizTalk Framework. The development of these platform components was managed within the context of the broader e-Government Interoperability Framework (e-GIF) and its associated GovTalk initiative to use open interoperability standards (which drew primarily upon IETF, W3C and WS-I interoperability standards).

XML was used as the standard data format for all messages into and through the TxE. The specific standards utilised were defined by the former GovTalk initiative. This is now defunct, with the UK Government’s Open Standards Board taking renewed and reinvigorated ownership of the open standards to be used in government for software interoperability, data and document formats.

Interoperability between the TxE and existing departmental systems is accomplished through what became known as the “Departmental Interface Server” (DIS). DIS provides reliable two-way SOAP communication between the connecting organisation and the TxE. It combines compliance with TxE’s open standards (XML, HTTP and SOAP) with reliable once-only delivery and separation of the integration requirements of the systems within the connecting organisations.


Figure: Standards

[Source: Government Gateway technical documentation]

Some departments and other organisations deployed DIS services developed by third parties rather than using the Microsoft BizTalk-based offering: Software AG, for example, supplied HMRC and others, demonstrating the practical benefit of open standards in maintaining an open market rather than restricting it to a single supplier.

TxE protocols

Three message protocols are supported by the Transaction Engine:

  • Document Submission Protocol: the interface used by ISV Applications and Department Portals. It allows ISV applications or Portals to submit business transactions/forms/documents to Department services
  • Hub and Spoke Interchange Protocol: used for Department to Department business transactions/forms/documents submission
  • Administration Message Protocol: an asynchronous, message-based version of a more recent SOAP Admin Interface

GovTalk Message Envelope and Document Submission Protocol

Since 2008, DIS has used the GovTalk Message Envelope in conjunction with the Document Submission Protocol (DSP). The DSP routes business transactions (e.g. Self Assessment tax forms) submitted from either a Department Portal (e.g. the HMRC Online service website) or directly from an ISV application (which could be hosted on a website, server-based, PC-based, smartphone-based etc.), through the TxE, to the appropriate Department (back-end) system, and retrieves the corresponding response. The TxE acts as a “hub” to numerous “spokes”, which can be websites or dedicated DIS endpoints that provide onward integration and interoperability with departmental local systems.

DSP uses the GovTalk Message Envelope to encapsulate business transaction documents. GovTalk documents are XML formatted and use the UTF-8 encoding standard. Messages are transported using the Hypertext Transfer Protocol (HTTP). Portals, ISV applications and Departments (using DIS) must be capable of generating HTTP 1.1 POST requests and receiving and interpreting HTTP 1.1 response messages.
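In outline, then, a DSP submission is simply an HTTP 1.1 POST carrying a UTF-8 encoded GovTalk envelope. The sketch below shows the shape of such a request without sending it; the endpoint URL and envelope content are invented for illustration.

```python
import urllib.request

def build_govtalk_post(url: str, envelope_xml: str) -> urllib.request.Request:
    """Package a GovTalk envelope as the DSP expects: the XML document,
    UTF-8 encoded, in the body of an HTTP POST request."""
    return urllib.request.Request(
        url,
        data=envelope_xml.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
        method="POST",
    )

# Hypothetical endpoint — real submission URLs were published by the Gateway.
req = build_govtalk_post("https://example.gov.uk/submission", "<GovTalkMessage/>")
print(req.get_method())  # → POST
```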

The transaction architecture is componentised, with applications and portals separated from the systems with which they interact. So, for example, HMRC’s portal does not talk directly with its backend, but via the DSP and GovTalk Message Envelope, routed and authenticated via the TxE. Connections with the TxE are made either over the internet or via government networks (formerly the GSI, now the PSN).


Figure: end-to-end overview of the transaction-handling system

[Source: Government Gateway technical documentation]

The typical sequence followed when a client application submits a document to a target spoke (i.e. a Department service) is shown below (assuming no errors).


Figure: overview of the submission-response protocols

[Source: Government Gateway technical documentation]

The protocol makes extensive use of the envelope portion of the GovTalk schema, requiring XML documents submitted to TxE to include a Qualifier element immediately after the Class element. Together these two elements denote the message type.
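As a rough illustration, an envelope carrying these two elements might be built as below. The element layout is a deliberate simplification of the GovTalk schema (which defines many more header fields), and the Class value is invented.

```python
import xml.etree.ElementTree as ET

# Build a skeletal GovTalk envelope: the Qualifier element follows
# immediately after the Class element, and together they denote the
# message type.
msg = ET.Element("GovTalkMessage")
details = ET.SubElement(ET.SubElement(msg, "Header"), "MessageDetails")
ET.SubElement(details, "Class").text = "HMRC-SA-SA100"  # illustrative service class
ET.SubElement(details, "Qualifier").text = "request"    # the message type

xml_bytes = ET.tostring(msg, encoding="utf-8")
print(b"<Class>HMRC-SA-SA100</Class>" in xml_bytes)  # → True
```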

Messages issued by the client application:

  • SUBMISSION_REQUEST – submit a document (or resubmit one with recoverable errors)
  • SUBMISSION_POLL – retrieve the response to a previous submission
  • DATA_REQUEST – examine the state of previous submissions
  • DELETE_REQUEST – delete a submission from the Gateway
Messages issued by the Gateway: 

  • SUBMISSION_ERROR – error detected in message received from the Client
  • SUBMISSION_RESPONSE/ERROR – business response/error from Target Spoke (i.e. Department)

When submitting any message type to TxE it’s the responsibility of the client to ensure each message conforms to the relevant syntactical rules for that particular type of message. A client application does not necessarily have to process each document sequentially as described above. Instead it could operate in a batch mode, submitting a number of documents over a period of time and then later using: 

  • DATA_REQUEST to examine the state of these submissions
  • SUBMISSION_POLL to retrieve the corresponding response for each submission
  • DELETE_REQUEST to delete each submission from the Gateway
  • SUBMISSION_REQUEST to resubmit documents with recoverable errors
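A batch-mode client along those lines might look like the following sketch. The gateway object and its method names are invented stand-ins for the four message types above; they are not a real client library.

```python
class FakeGateway:
    """Invented stand-in for a Gateway client, for illustration only."""
    def data_request(self, cid):
        return "complete" if cid == "c1" else "recoverable_error"
    def submission_poll(self, cid):
        return f"response-for-{cid}"
    def delete_request(self, cid):
        pass
    def submission_request(self, cid):
        pass

def reconcile(gateway, correlation_ids):
    """Batch reconciliation: check each outstanding submission, collect the
    responses for completed ones, and resubmit recoverable failures."""
    outcomes = {}
    for cid in correlation_ids:
        status = gateway.data_request(cid)                # DATA_REQUEST
        if status == "complete":
            outcomes[cid] = gateway.submission_poll(cid)  # SUBMISSION_POLL
            gateway.delete_request(cid)                   # DELETE_REQUEST
        elif status == "recoverable_error":
            gateway.submission_request(cid)               # SUBMISSION_REQUEST
    return outcomes

print(reconcile(FakeGateway(), ["c1", "c2"]))  # → {'c1': 'response-for-c1'}
```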

Registration and Enrolment (R&E)

The registration and enrolment (R&E) service was launched in January 2001. The initial authentication and authorisation component supported two credential types: UserID/password combinations and digital certificates. The service continued to be enhanced after launch, later expanding to include WS-Security tokens (offering the potential for other trusted issuers to federate their tokens) and EMV Chip and PIN cards (from around 2008, offering the potential for users to authenticate using a third-party card, such as one issued by their bank).

Citizens, businesses and intermediaries were therefore able to have single sign-on across all government services — national, regional and local — although the option also existed (should a citizen wish) to use a separate method of authentication for each separate government service. This was to enable citizens to maintain the segmentation of their identities across government should they desire to do so.

Part of the problem tackled by this common service is the need to associate an online identity and its associated credential with the different identifiers by which that same user is known within different parts of government.
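In data terms this is a mapping from one authenticated Gateway identity to the many departmental identifiers linked to it at enrolment. A trivial sketch of that mapping follows; all identifiers and service names are invented.

```python
# One Gateway identity, linked at enrolment to the identifiers each
# department already holds for the same user. All values are invented.
enrolments = {
    "gateway-user-4711": {
        "HMRC-SA": "UTR 9876543210",  # Self Assessment Unique Taxpayer Reference
        "DWP": "NINO QQ123456C",      # National Insurance number
        "DVLA": "ABCDE123456",        # driver number
    }
}

def known_identifier(gateway_id: str, service: str):
    """Resolve the department-specific identifier for an authenticated user,
    or None if the user has not enrolled for that service."""
    return enrolments.get(gateway_id, {}).get(service)

print(known_identifier("gateway-user-4711", "DWP"))  # → NINO QQ123456C
```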


Figure: mapping an identifier to other known identifiers

[Source: UK Government Gateway Frequently Asked Questions 2005] 

Two interfaces are provided:

  • User Interface (web pages): a series of online conversational dialogues consisting of web page interactions with end users that support the Gateway registration and enrolment process (see www.gateway.gov.uk)
  • Programmatic: APIs available over the Internet, comprising two types:
    • a secure web service interface. This is made available only to trusted users (e.g. government websites and applications), providing authentication and authorisation support, including the ability to register and enrol a user
    • a public web service interface with reduced privileges, available for any website or application to conduct routine tasks including the authentication of users


Figure: API methods exposed

[Source: UK Government Gateway Frequently Asked Questions 2005]

A Simple Object Access Protocol (SOAP) interface is required for websites and applications to interact programmatically with the R&E API-based service. The parameters included in SOAP messages are required to be well-formed XML documents conforming to the XML Schema (XSD) defined by the UK government. This enables maximum re-use across the SOAP APIs and allows both parties to validate XML documents correctly and consistently.
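The general pattern — an XML payload checked for well-formedness and carried inside a SOAP 1.1 envelope — can be sketched as below. This is a simplification: the standard library cannot validate against an XSD, and the government-defined schemas and operation names are not reproduced here, so the payload shown is invented.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def wrap_in_soap(body_xml: str) -> str:
    """Wrap an XML payload in a SOAP 1.1 envelope, first checking that the
    payload parses as well-formed XML (XSD validation not shown)."""
    ET.fromstring(body_xml)  # raises ParseError if the document is malformed
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        f"<soap:Body>{body_xml}</soap:Body>"
        "</soap:Envelope>"
    )

# Hypothetical R&E operation payload, for illustration only.
envelope = wrap_in_soap("<Authenticate><UserID>example</UserID></Authenticate>")
```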


Figure: MoD use of the Chip and PIN support for online access to shared services

[Source: “The UK Government Gateway: Remote Authentication”, Jim Purves, E-Delivery Team, Government Gateway. Department for Work and Pensions, 24/10/2008]

PS. If you’ve got this far, well done. You might also find this previous blog useful in terms of some of the underlying architectural design: high level cross-government architecture — 2003 style


digital leaders TV discussion

BBC Click’s Kate Russell hosted me, Mark Thompson and Alan Brown on a recent episode of Digital Leaders TV. Our topic? How to understand and implement new digital business models in the public sector, with questions and interventions from Kate and from those tuning in to the broadcast Hangout.

The discussion centred on our new best-selling book “Digitizing Government: Understanding and implementing new digital business models” (available in the UK from Amazon, Waterstones or your local independent bookshop, and in the US from Amazon and others).

This is the full video:

And this, for those of you pressed for time, is the 15 minute edited “highlights edition”:

One theme that emerged from the questions submitted was a fear that moving to digital public services is about putting everything online, leaving those unable or unwilling to use technology behind. Putting existing services onto an electronic screen is a minor part of what “digital” really means: online services should not simply be about the citizen being forced to use an online channel, but about improving all channels (including face-to-face services delivered in an office or in our homes) by improving the processes, systems and organisations that sit behind them.

Our discussion aimed to bring out this often overlooked perspective, mirroring much of what our book is about.


(continued) more thoughts on government in the digital age

On the back of the launch of our new book, Digitizing Government, I posted a few background thoughts in my previous blog — very imaginatively entitled, er, more thoughts on government in the digital age. I continue exploring a few more themes here.

Cloud computing ≠ shared services

“Shared services” dominated much of the discussion about government use of technology over the last decade or so. But for all the talk, little was achieved. In the UK there were the few shared, and ageing, API-based components collectively known as the “Government Gateway” (providing common cross-government services such as authentication, transaction handling and payments): but generally the whole debate became typified by the standoff “I’ll share my service with you, but your service isn’t any good for us”.

The idea that a simplistic, “one size fits all”, vertically-integrated shared services solution for functions such as HR, CMS or ERP was the magic bullet was well-intentioned but naive, given how differently organisations operate. The approach lacked a sufficiently detailed analysis of the needs of the various organisations involved and how mature, or bespoke, their requirements actually were. It failed to decompose requirements into those areas where processes and functions were common — and could potentially utilise shared services infrastructure — and those where they were unique to each organisation. It also lacked the accompanying management drive necessary to rationalise, simplify and standardise many of the existing processes and functions prior to using a shared technology platform.

So is today’s poster-child — cloud computing — just more of the same, part of the current vogue around ‘smac’ (social, mobile, analytics and cloud)? Like any technology, poorly managed and poorly applied, it’s not going to magically solve complex problems of service design any better than any other option. But as part of a meaningful strategy, such as the UK’s G-Cloud initiative, the adoption of cloud computing will have far more impact than shared services. In the USA:

“The federal government has been gradually adopting shared service business models for administrative services for nearly 30 years. Today, the buzz is all about “the cloud” and its potential to transform shared services as we know them. There’s much hype and a tendency to conflate shared services and cloud computing—things that have many similarities but are not exactly the same. As tips of the spear in an all-out war on government inefficiency, shared services and cloud computing could help drive hundreds of billions of dollars in long-term savings while enabling enormous transparency and performance improvements throughout the government.” [1]

An important recognition in this transition is the eradication of the “special” or “home-baked” processes, while the accompanying cultural and organisational challenges are eerily familiar:

“The [US] government’s most significant achievement in three decades of shared services gradualism has been elimination of scores of agency-specific payroll systems and consolidation into four centralized providers that serve the entire government today. To this day, most agencies continue to self-serve for most administrative services. Redundant shadow staffs remain scattered throughout most agencies. Inefficient legacy systems continue to operate despite faster, better, and cheaper shared service or cloud computing alternatives. Most government shared services currently operating are under-used and under-performing relative to the state-of-the-art in other sectors. The government remains stuck in an obsolete, industrial age organizational model with vast redundancies and inefficiencies. It has flat-out failed to transform with the times into a lean, high performance enterprise suitable for 21st century challenges.” [2]

We also need to be ruthlessly honest about the problems that need to be tackled, and the opportunities on offer:

“Enforcing acceptance of standardized systems throughout the government would be one of the toughest, but most critical challenges determined leaders must face. Like the tax code, government administration is rife with complexity—the byproduct of over-designed, agency-unique systems. Agencies must be forced to accept plain vanilla and give up fancy flavors with marginal business value. Moving agencies onto common platforms is fundamental to the streamlining and consolidation necessary to unlock potential savings. It would also open up the government like never before to transparency and performance management improvements.” [3]

As I mentioned above, in the UK the shared services agenda historically made little headway: few departments share common services and systems. The few cross-government exceptions include the recent publishing platform, GOV.UK, its underlying performance dashboard, GOV.UK Verify identity assurance, and the much earlier API-based platform components of the Government Gateway (currently due to reach end of life sometime in 2016).

This propagation of the “we’re special” mantra and the associated wholesale mid- and back-office bespoking and replication of processes, roles and functions arises not because of any technical constraints, but rather because of a failure to move the public sector away from its inefficient structural silos and the technology stacks that mirror them. A shared services culture and a platform-based approach won’t work in environments where the organisation is not able to re-engineer the way it operates to focus on users (in this case, citizens, business and frontline employees) and their needs rather than its own self-motivated organisational imperatives.

There remain deep-seated cultural, leadership and organisational issues in the public sector’s current configuration that need to be tackled if we’re not to continue expending precious public sector resources on internal overheads rather than on our public services. Part of the problem in the current debate is the failure to distinguish between those jobs (and their associated processes and organisations) that are much needed, and those that duplicate and repeat what is already being done multiple times over elsewhere. In this sense, as in so many other areas, government is no different from any other large-scale organisation, with the same inherent tendency towards inertia and the status quo:

“Companies will quickly recognize ideas that fit the pattern that has proved successful for them before. But they will struggle with ideas that require a very different configuration of assets, resources and positions to be successful.” [4]

Or, as Marshall McLuhan put it more succinctly back in 1967, “We look at the present through a rear-view mirror”.

Deverticalisation / Utility-Commodity

The move away from vertical structures to horizontal ones has rapidly become one of the hallmarks of the digital age. This deverticalisation has in part been enabled by open standards, commoditisation and the existence of common platforms. It’s not a new phenomenon, but part of a cycle of improvement that has moved through numerous industry segments over time. What is different now is that IT — and the essential role of open standards in technology — has brought deverticalisation to traditional “white collar” job roles, functions, processes and organisations.

In the private sector, competitive pressures drive its application. For the public sector to realise equivalent benefits, it has to generate its own “cathartic moment”, to foster an epiphany amongst both the political and public official classes that will enable resource to be moved away from the proliferation of roles, functions, processes and organisations that duplicate and replicate behind the scenes and instead enable them to be deployed to the frontline. While the technology may be different, we’ve seen such processes and impacts before:

“Something happened in the first years of the 20th century that would have seemed unthinkable just a few decades earlier: Manufacturers began to shut down and dismantle their water wheels, steam engines and electric generators. Since the beginning of the Industrial Age, power generation had been a seemingly intrinsic part of doing business, and mills and factories had had no choice but to maintain private power plants to run their machinery. As the new century dawned, however, an alternative started to emerge. Dozens of fledgling electricity producers began to erect central generating stations and use a network of wires to distribute their power to distant customers. Manufacturers no longer had to run their own dynamos; they could simply buy the electricity they needed, as needed, from the new suppliers. Power generation was being transformed from a corporate function to a utility.” [5]

The same applies now to many of the duplicated processes, functions, roles and organisations of the public sector. Yet the idea of the growth of utility or commodity IT services is hardly a new kid on the block. As Rappa commented in 2004:

“The utility business model is shaped by a number of characteristics that are typical in public services: users consider the service a necessity, high reliability of service is critical, the ability to fully utilize capacity is limited, and services are scalable and benefit from economies of scale.” [6]

The more interesting question is why this transition has proved so slow. The answer is found in part in the behavioural and cultural inertia of the organisations involved — combined with significant vested interests in holding onto and maintaining the lucrative, and failed, business models of the past, such as long-term, wholesale outsourcing and new public management (NPM) (which we discuss in some detail in our book “Digitizing Government” and which has been extensively analysed by Dunleavy et al in “Digital Era Governance”).

Many industries and organisations are beginning to grasp the scale of the change required, but government remains behind the curve and needs to catch up fast — something the recent refocus on the Government as a Platform vision that the Government Digital Service set out in mid-2013 should help expedite. Its online guidance for CTOs recognises that:

“This move to platforms does not assume that government has to develop everything in-house: many of government’s needs can be met by cost-efficient utility services that already exist. Yet government can also help establish best practice in areas such as the privacy of citizen’s personal data, helping lead by example. Wherever appropriate, the government should use existing external platforms, such as, for example, payments services (ranging from third party merchant acquirer services to the UK’s national payments infrastructure). Deciding to develop platforms in-house will happen only where that makes best sense in terms of meeting users’ needs in the most flexible and cost-effective way.”

This distinction between what government needs to do itself (its genuinely unique needs) and what it can simply consume is essential. After all:

“The global economy is in the midst of a major business-process revolution as significant as the one that occurred a century ago. As a result of a substantial decline in interaction costs, the new revolution is leading to the widespread de-verticalization of corporate business structures. De-verticalization is the process of separating functions and services from a vertically integrated business. Companies are undergoing this change because they can operate more efficiently and achieve better results by relying on partners to perform certain functions, rather than by maintaining control of these processes themselves. As de-verticalization unfolds in a given industry, supply-chain partners focused on particular aspects of the value chain emerge. Frequently, these partners develop greater economies of scale and superior skill than their in-house counterparts. The development of these partners reduces redundancy of operations in an industry and lowers the barriers to entry.” [7]

The well-managed application of the benefits of deverticalisation will produce a significant reduction in costs — and supports the tight-loose approach that the UK government has been promoting. Good use of technology aligned to effective organisational design enables improved “… internal management, monitoring and control … This can facilitate both greater centralization and paradoxically greater autonomy of decision-making downward and outward.” [8]

It’s not difficult to see how many of the following benefits might be productively applied within the organisation of our public services:

“Both the reductions in transactions costs and the timeliness of information flow expands the span of control of managers and results in the flattening of organisations. They also encourage “de-verticalization”, “globalization” and “out-sourcing”. The “Product Cycle” has been greatly shortened — reduction in “time to market” — the average product cycle has shortened from 5 years to 12-18 months.” [9]

Think of the “product cycle” or “time to market” above in terms of successful implementation of a new policy “product” — such as improved welfare services or corporation tax — to understand its potential impact on government. Deverticalisation can help bring about the creative destruction of inefficient and expensive ways of operating, enabling many processes and functions to be simplified and improved. It’s increasingly unsustainable to propagate the current operating model. Public services need to be of the highest quality and delivered as cost-effectively as possible: this requires a major change in the way both the organisations that provide them and the services they provide are designed and delivered.

This change is no longer a divisive political choice of “right” or “left”, but a moral, societal and economic necessity. What does remain far more political is the choice of which services are delivered in-house and which from external services and suppliers. Part of resolving this long-standing issue may lie in properly mapping the landscape — understanding those roles that are unique and specific to the public sector (most typically those on the frontline of service delivery) which can best be kept in-house, and those that are generic (such as many middle- and back-office functions, systems and processes) that can best be sourced externally. But for public services to operate efficiently, in whatever balance of public-private engagement a government and its electorate desires, they will still require the underlying operating model to deverticalise, to transition to one that uses open standards and platform-based commodity components. Some of the characteristics of such changes are summarised in the table below.

Table: characteristics of the move to a deverticalised operating model [10]


As organisations encourage deverticalisation to develop:

“…more fulfillment partners emerge to seize the new business opportunity. Competition among fulfillment partners forces them to improve their skills even further; often, they become more skilled in their own domain than integrated players. Eventually, however, competition also tends to force down prices and lead to abundant capacity. Therefore, once the majority of the industry adopts a deverticalized operating model, pricing often falls to commodity-like levels.” [11]

Without taking advantage of the significant user benefits of deverticalisation — in particular reduced costs and better use of resources — the public sector will face an increasingly corrosive and existential crisis. The result will be the continued, progressive weakening of our public services, accompanied by ever-inflating costs and unnecessary structural overheads.

If frontline public services degrade, quality and productivity drop, staff morale declines, and costs continue to escalate, citizens and businesses alike will not only fail to receive the services required, but will also become increasingly disillusioned with the political and leadership processes that have isolated the public sector from renewal and improvement.

This is why the move to digital has to work: the alternative is unthinkable. Those who continue to stall or block government’s long-overdue modernisation for their own narrow self-interest are playing with the very soul of our public services.


[1] Marshall, J. Shared Services and the Cloud: Seize the Opportunity. The Public Manager, Fall 2010. p.62.

[2] Marshall, J. Shared Services and the Cloud: Seize the Opportunity. The Public Manager, Fall 2010. p.66.

[3] Marshall, J. Shared Services and the Cloud: Seize the Opportunity. The Public Manager, Fall 2010. p.67.

[4] Chesbrough, H. Open Business Models: How to thrive in the new innovation landscape. 2006. Harvard Business School Press. p.4.

[5] Carr, N. The End of Corporate Computing. MIT Sloan Management Review, Spring 2005. p.57.

[6] Rappa, MA. The utility business model and the future of computing services. IBM Systems Journal, Vol 43, No 1, 2004. p.32.

[7] Raskin, A; Mellquist, N. The New Industrial Revolution: De-verticalization on a Global Scale. Research on Strategic Change, August 2005. http://www.alliancebernstein.com

[8] Lau, LJ. The New and Traditional Economies. January 2001. p.4. Retrieved from http://www.stanford.edu/~ljlau/Presentations/Presentations/011201.PDF on 26.10.2012.

[9] Lau, LJ. The New and Traditional Economies. January 2001. p.6. Retrieved from http://www.stanford.edu/~ljlau/Presentations/Presentations/011201.PDF on 26.10.2012.

[10] Raskin, A; Mellquist, N. The New Industrial Revolution: De-verticalization on a Global Scale. Research on Strategic Change, August 2005. p.2. http://www.alliancebernstein.com

[11] Raskin, A; Mellquist, N. The New Industrial Revolution: De-verticalization on a Global Scale. Research on Strategic Change, August 2005. p.9. http://www.alliancebernstein.com


more thoughts on government in the digital age

The book I’ve written with Alan Brown and Mark Thompson — Digitizing Government — is out. It’s here on Amazon UK and here as a Kindle edition: although it’s also here if you’d rather order online AND support your local independent bookshop. (The US version is due out 26th December — on Amazon US here).

We had a great open event launch party for the book last week at which a variety of distinguished panelists participated — Chi Onwurah MP, Liam Maxwell (UK Government CTO), Paul Brewer (Director for Digital Resources at Adur and Worthing Councils) and Paul Shetler (Chief Digital Officer at the Ministry of Justice), along with my fellow co-author Mark Thompson.

Our book looks at how the public sector needs to re-design itself for the digital age to help cultivate better public services. This isn’t just in terms of technology but about the behaviour, culture and re-designed services of truly digital organisations. In fact, much of what we focus on is as relevant for any large organisation struggling to make the most of “digital”.

I thought I’d set out in some occasional blogs a few background thoughts, themes and ideas that help provide additional backstory to some of the critiques, observations and recommendations we make in the book. This time I’m going to kick off by looking at “Outsourcing” and “The Wrong Debate?” — with more to come in random future blogs across a wide range of topics that play into this space …


Outsourcing

The undifferentiated outsourcing that has dominated public sector thinking has been a blunt tool often inexpertly used. This isn’t to say there’s no role for outsourcing — far from it: it can play an essential role. But it needs to be intelligently applied as only one of many possible options, and people need to understand when it’s appropriate and when it isn’t.

Traditional suppliers are understandably keen to promote the role of outsourcing in helping fix some of the public sector’s many problems. In a 2011 interview, Capita called for more outsourcing of public sector roles to the private sector, stating that “90 per cent of the UK’s 500,000 civil servants were performing back and mid-office functions, which could easily be better managed by the private sector” [1].

Such a shift from public to private sector of clerical, support and administrative roles may or may not end up being more efficient and help save costs, but doesn’t really start the discussion in the right place. Undifferentiated outsourcing of what’s already within the public sector as it’s currently configured would potentially repeat what IT outsourcing did: hand another arbitrary organisation a set of people, systems, processes and costs frozen at a single moment in time.

Outsourcing applied simplistically becomes a costly displacement activity and does little to tackle the real issue — how public sector services can best be designed and delivered in order to better meet user need. Instead, these frozen services, with perhaps some marginal but largely inconsequential savings, are then merely re-sold by the private sector, as-is, back to the public sector. Worse, it becomes far more difficult (if not impossible) to sensibly redesign the end-to-end service given that parts of it are now under entirely separate ownership and management.

I’m not sure anyone really wins in this situation: the private sector company is often frustrated by the cumbersome and micro-managed contracts that prevent it innovating, and the public sector is frustrated by the belated realisation that few of the benefits it anticipated have come to fruition. As one Dell executive complained in 2010:

“Government expects its outsourcing service provider to maintain the complexity rather than to simplify and standardise the work processes,” he said.

“Processes and people are moved to the provider in their existing state and are independently managed next to countless similar processes of other companies. Consequently, the cost and service benefits of standardisation and simplification are lost.” [2]

It’s time we moved away from starting with any “solution” — such as outsourcing — without first understanding why, how and when it might be best applied: and when it might not be appropriate at all.

The wrong debate — public v private?

We all too often seem to end up in a very binary, Christmas pantomime-like debate about the role of public and private sectors: public sector good (hurrah!), private sector bad (boo!); or, just as inane, public sector inefficient (boo!), private sector efficient (hurrah!).

In describing the transition to digital government [3], Tim O’Reilly tries to move things away from the binary, bunkered down attitudes that often seem to prevent us properly discussing how we can get the best possible publicly funded services for citizens:

“…The idea that we have to choose between government providing services to citizens and leaving everything to the private sector is a false dichotomy. Tim Berners-Lee didn’t develop hundreds of millions of websites; Google didn’t develop thousands of Google Maps mashups; Apple developed only a few of the tens of thousands of applications for the iPhone.

Being a platform provider means government stripped down to the essentials. A platform provider builds essential infrastructure, creates core applications that demonstrate the power of the platform and inspire outside developers to push the platform even further, and enforces “rules of the road” that ensure that applications work well together.” [4]

Meanwhile, in Canada, it also seems to be about far more than frontline cuts or “efficiency savings”:

“… fiscal restraint measures are driving the need to standardize, consolidate and re-engineer the way government operates and delivers services. By re-thinking how government delivers services, it will help lower the costs of services while improving the service experience.” [5]

In recent BBC coverage of the Chancellor’s Autumn Statement, the Director General of the CBI seemed to be a lone voice in raising the fundamental question of looking at how the public sector is designed, operated and maintained:

CBI director general John Cridland said the government would have to be “much more imaginative” about how it makes further spending cuts.

“Most of what we’ve done in this parliament, frankly, has been efficiency savings, cuts in head count, controls on pay,” he told BBC Radio 4’s Today programme.

“If you’re going to make the cuts we now need to make you’ve got to be far more lateral, you’ve got to re-engineer the whole model.” [6]

Our book examines these complex issues, looking at how the digital culture and practices of modern organisations can help improve the design and operation of government itself, and hence our public services.

The meaningful reform and renaissance of our public services requires us to move beyond the narrow “operational efficiencies” lens that currently dominates the political and media domains. The real task at hand is being side-tracked by the unacceptable — and unnecessary — axing of frontline services, cuts that impact some of the most vulnerable in our society. This “cut services” narrative misses the fundamental opportunity that the digital age provides: to rethink and radically improve government itself, stripping out the layers of duplication and redundancy, and to put an end to cutting the very services that the public sector is there to provide.

The opportunity that digital offers is about so much more than technology. It’s about enabling more resource to flow where taxpayers wanted it to go in the first place: the frontline.

[Update: this blog continues with a second post — (continued) more thoughts on government in the digital age]

[1] Gill Plimmer, Financial Times, August 23, 2011

[2] Kelly Fiveash, The Register, 9th July 2010. Retrieved from http://www.channelregister.co.uk/2010/07/09/dell_francis_maude_it_spending_cuts

[3] Kitsing, M. An Evaluation of E-Government in Estonia. Prepared for delivery at the Internet, Politics and Policy 2010: An Impact Assessment conference at Oxford University, UK, on September 16-17, 2010.

[4] Tim O’Reilly, Government as a Platform, 2010.

[5] International Council for IT in Government Administration (ICA). Canada Country Report for 2012. p2.

[6] BBC News, 4th December 2014. Retrieved from http://www.bbc.co.uk/news/uk-politics-30323690


Understanding and implementing new digital business models

Our new book about understanding and implementing new digital business models, Digitizing Government, is published 1st December in the UK and 26th December in the USA.

I’ve written it with Alan Brown and Mark Thompson, bringing together a range of experiences from our work with large organisations trying to adapt to the digital age, combined with some of our own academic research. All three of us blend active practice with academic roles — Alan as Professor of Entrepreneurship and Innovation at the University of Surrey’s Business School, Mark as Senior Lecturer in Information Systems at Cambridge Judge Business School, and myself as Senior Research Fellow at Bath Spa University.

Although we’ve focused on governments — since they face some particularly complex challenges in transforming into truly digital organisations — investment in digital technologies and adaptation to digital culture have become essential for success and sustainability across a whole variety of organisations.

We’ve aimed to make our book practical rather than academic in nature, sharing experiences, insights and advice for understanding and implementing digital transformation to increase business value and improve client engagement. It’s in three broad sections — a “why”, “what”, and “how” that articulate and explore the major elements of digital transformation, and offer clear steps for execution of digital strategies.

We’ve included case studies from both private and public sectors, together with a detailed chronology of current digital change efforts here in the UK government. We relate these to government efforts in the USA and elsewhere in the world.

Ultimately we hope it provides a practical and unique set of insights into organisations in the digital economy. We don’t claim to have all the answers — merely to help nudge things in a better direction and to open up a more open and constructive debate about the future of our public services.

You can order it on Amazon UK and Amazon US, amongst other places — or better, why not order through your local bookshop and show them a bit of support? If you do read it, please let me know what you think: the move to digital is very much work in progress. We all need to share and learn more effectively if we’re going to help organisations successfully adapt.


be careful what you wish for … (part 94)


Hold on — haven’t we been here before? There’s something very familiar about the recent unveiling of new powers for the state to snoop on the UK population through a proposed new Counter-Terrorism and Security Bill.

I doubt that anyone reasonable argues with the superficial intent — to detect criminals (specifically terrorists) and bring them to justice. The much more complex issue is the question of where lines are drawn: what is the most appropriate way of achieving that outcome? How much should the state intrude into everyone’s daily lives?

We need to find a solution that is proportionate, sustainable and reasonable in our democratic state. The worst possible outcome would be that we blindly erode the very values we once upheld — not because someone bombed us day after day into doing so, but because we voluntarily surrendered our own freedoms, and hence our legitimacy, out of a misplaced sense of fear.

Part of the problem is that often these issues only seem to be considered through one lens: that of counter-terrorism. When in opposition, political parties tend to listen to a wide variety of expert opinion, and at least offer us the hope of developing reasonably balanced policies. Once in power however, governments seem to turn to single sources of truth, all of them peering through the same lens. Over time, it becomes almost impossible to distinguish a new government’s policy on these issues from the ones that preceded it, as their once-intended policy becomes progressively degraded.

These latest counter-terrorism proposals seem intended to work by eradicating any remaining vestiges of anonymity and privacy in our daily lives. This extinction of personal anonymity has profound implications — not just for journalists whose sources can no longer be protected, not just for MPs trying to meet in confidence with their constituents, or for NHS whistleblowers, but also for the police themselves. And indeed for the rest of us, just trying to muddle along and get on with our lives.

I do wonder which undercover criminal sources, or “double-agent” jihadists, are going to run the risk of communicating with or meeting with the police or intelligence agencies, when they know it’s no longer either secret or safe to do so? With that telltale, pointing-finger trail of mobile phone interactions and email exchanges left in their wake, who would risk putting their lives on the line? The risks for such essential insider informants are being multiplied by the very measures presumably intended to help.

I’m surprised we haven’t heard more from important players such as Crimestoppers — given that their assurances of anonymity for those wanting to help presumably play a key role in encouraging people with information to come forward. Their site says they received over 100,000 pieces of useful information about crime from the public last year, and over 6,000 criminals were arrested and charged. Yet who has evaluated the impact that the removal of anonymity will have on such essential sources of information?

Without a proper debate, and a rigorous assessment of the likely real-world impacts of a further erosion of our online and electronic device privacy, who knows whether these latest, familiar proposals will actually assist — or degrade — counter-terrorism intelligence work?

Part of the debate that we need includes news programmes and journalists asking these sorts of questions: “What impact will the end of anonymity have on essential intelligence gathering sources like Crimestoppers?”, “How will the police be able to meet with informers if all the details of who met with whom and when are automatically being gathered electronically?” and even (admittedly more self-serving) “How will journalists protect their sources?”. But ultimately these issues and their very real impacts are not going to go away merely because a properly informed debate doesn’t take place.

We need a much better public discussion about where these lines are drawn: what information is gathered, from whom, in what detail, where it is stored, how it is protected and how it is accessed. No computer system is 100% secure. There’s no such thing. Information will leak from the systems holding all this sensitive information. The odd rogue insider will occasionally — and inevitably — abuse their position: sources will be compromised, confidence undermined, sources of intelligence lost. Possibly far, far worse.

If these proposals do go ahead, the controls and democratically-accountable oversight regimes put into place must be robust and demonstrably independent to counterbalance them. Those who abuse the system — and they will — must be brought promptly and publicly to trial, and those who are inadvertently exposed — police sources, journalist sources, MPs’ constituents, NHS and financial services whistleblowers — rigorously protected. Parliament needs the capability, commitment and power to ensure our (unwritten) constitution is not undermined by the drip, drip, drip of incremental responses to the fear of terrorist activities.

To answer my own opening question — yes, we have been here before. And I’m sure we’ll be here again. We’ve seen this well-meaning, but one-sided perspective in the past: that’s partly what my semi-dramatised 2006 blog when guilty men go free was all about. When proposals such as this are put in front of us, they need to be robustly assessed by a credible public challenge rooted in the wider reality of the way our country operates and our people live their lives — and not simplistically considered through a counter-terrorism lens, darkly.


Happy 20th anniversary online government

It’s 20 years ago this month that the UK government first launched a website intended to provide a simplified, single point of access to information from across the public sector. I thought I’d add a little more detail — or at least, a few historic screenshots — to support my recent CIO column marking the anniversary.

The Government Information Service (GIS), hosted at open.gov.uk, launched in November 1994. It was intended that over 400 public sector organisations, including government departments, local authorities and police forces, would provide their information on the site, which received around 200,000 hits a day shortly after launch.


In July 1996, this summarised the state of play:

Figure: the Government Information Service at open.gov.uk, 23 July 1996

By mid-1997 it was approaching 2m requests a week.


In 1999, the “Portal Feasibility Study” (PDF) set out plans for a more comprehensive approach to delivering all government services online in one place. The portal element of this architecture was originally nicknamed “me.gov”: below are some early mockups from 2000 of how it was envisaged it might look.

Figure: me.gov mockup 1, 2000

Figure: me.gov mockup 2, 2000

By the time of its launch, it had become “UKonline”. UKonline initially appeared as a beta site in November 2000, followed by a formal launch in February 2001.

Figure: the UKonline homepage

UKonline aimed to provide more integrated services, built around citizens’ “life episodes” (events that had meaning to them), rather than just projecting the departmentally-based silo services already in existence.

Figure: UKonline “life episodes”

The 1st March 2004 saw another rebrand and relaunch, this time as Directgov.


In May 2011, Directgov (and its sister site, BusinessLink — dedicated to meeting the needs of UK business users) began to be superseded by GOV.UK, initially as an alpha.


In October 2012, the site replaced Directgov and went fully operational as GOV.UK, celebrating its second birthday just last month.

Figure: GOV.UK, October 2014

I’ve collated below some stats on the usage of the online site(s) in their various guises over the past 20 years — a comparison not helped by early stats counting “hits” or “visits” while more recent measures count “unique visitors/users”. So don’t take this as the definitive or final comment on the growth of online government information and services, but as a partial snapshot at a moment in time … (and if any of you have additional interim dates and usage stats not shown, let me know and I’ll revise/improve the list).

  • 1994 — 200,000 hits a day
  • 1997 — 285,000 hits a day
  • 2004 — 172,257 unique visitors a day
  • 2012 — 1m unique visitors a day
  • 2014 — 1.4m unique visitors a day

Happy 20th anniversary!

[A more detailed narrative of the last 20 years of online government is provided in an earlier blog here]
