The big consultancy companies have gone into overdrive hyping up artificial intelligence (AI). They boast it’s going to help governments “become more cost efficient and increase citizen satisfaction,” and make “government agencies more efficient, improve the job satisfaction of public servants, and increase the quality of services offered.” Magic, isn’t it?

Yet similar claims about technology have been made for nearly three decades. And not just by the big consultancy companies, but by governments too:

“[Technology will] provide better and more efficient services to businesses and to citizens, improve the efficiency and openness of government administration, and secure substantial cost savings for the taxpayer.”

Government Direct. A Prospectus for the Electronic Delivery of Government Services. Cabinet Office. 1996.

“We must harness the power of ICT to modernise public services so they are as personalised, efficient and responsive as the most successful companies.”

Connecting the UK: the Digital Strategy. Prime Minister’s Strategy Unit, the Cabinet Office. March 2005.

But technology alone can’t solve complex political, social, and economic problems. And that includes AI. Its evangelists conveniently overlook significant problems with accountability and discrimination, the inherent tendency of some AI models to hallucinate and falsify, and an eye-watering environmental impact. And then add into this toxic mix the inaccurate and derivative nature of systems like ChatGPT.

Getting involved with technology that plagiarises and breaches copyright won’t be a good look for governments. The New York Times has recently announced it’s suing OpenAI and Microsoft for copyright infringement. It will be an interesting case to follow: it surely won’t be long before AI companies need to openly identify the source material they’ve used to train their models and agree a fair way to reward the original creators, perhaps via the likes of the Copyright Licensing Agency.

Along with the need for a less hyperbolic and more scientific approach to AI itself, the current state of government data isn’t exactly ideal for implementing it: AI relies on access to high-quality, accurate data and metadata. Yet the National Audit Office reports that government “data quality is poor” and “a lack of standards across government has led to inconsistent ways of recording the same data.”
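To make the NAO’s point concrete, here is a minimal Python sketch, using made-up records, of what “inconsistent ways of recording the same data” means in practice: three systems hold the same date of birth in three different formats, so the records can’t be compared until someone normalises them.

```python
from datetime import datetime

# Hypothetical examples of the same date of birth recorded three different
# ways in three different systems -- the kind of inconsistency the NAO
# describes. None of these values are real government data.
records = [
    {"system": "benefits",  "name": "SMITH, Jane", "dob": "01/02/1980"},
    {"system": "licensing", "name": "Jane Smith",  "dob": "1980-02-01"},
    {"system": "grants",    "name": "J. Smith",    "dob": "1 Feb 1980"},
]

DATE_FORMATS = ["%d/%m/%Y", "%Y-%m-%d", "%d %b %Y"]

def normalise_dob(value: str) -> str:
    """Reduce the differing date formats to a single ISO 8601 representation."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date format: {value!r}")

# Without normalisation the three records look like three different people;
# with it, at least the date of birth can be compared across systems.
for record in records:
    print(record["system"], normalise_dob(record["dob"]))
```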

Yet why is this, given the UK’s early progress towards mapping, standardising, and making better use of data?

A promising start

Data has been central to the success of modernising government from the earliest days of Whitehall digital initiatives in the mid-1990s. Open data, for example, can help deliver more democratic, transparent, and effective policymaking, and improve public administration.

But governments’ use of citizens’ personal data requires trust, security, and privacy. Which is why the UK Government committed to:

“Safeguard information collected from citizens and businesses … [they] should be able to understand how this is achieved, should have access to their own data, and should be confident that personal and other sensitive information is protected.”

Government Direct. A Prospectus for the Electronic Delivery of Government Services. Cabinet Office. 1996.

Modernising government and moving towards joined-up policymaking and administration requires the ability to access and analyse data drawn from across the public sector, not just from within a single organisational silo. And to do this, governments need to put into place:

“Standard definitions and programming tools to allow Departments to develop new systems in a consistent and standardised way, and to present the data they already hold in a common way.”

Modernising Government. Cabinet Office. March 1999.

In 2001, the UK adopted Dublin Core for its pan-government metadata framework:

“[To] help us to identify and exploit our information assets … More importantly, it will make it easier for businesses and citizens to find what they want.”

e-Government Metadata Framework. Office of the e-Envoy. May 2001.
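Dublin Core defines a small set of descriptive elements (title, creator, subject, date, format, identifier, and so on). As an illustration of how such a framework makes information assets easier to find, here is a sketch of a metadata record for a hypothetical publication; the values and the completeness check are invented for this example, not taken from the e-Government Metadata Framework itself.

```python
# A sketch of a Dublin Core-style metadata record for a hypothetical
# government publication. The element names are the fifteen standard Dublin
# Core terms; the record values and the validation rule are illustrative.
DUBLIN_CORE_ELEMENTS = {
    "title", "creator", "subject", "description", "publisher", "contributor",
    "date", "type", "format", "identifier", "source", "language", "relation",
    "coverage", "rights",
}

record = {
    "title": "Guidance on applying for a fishing licence",
    "creator": "Example Department",          # hypothetical publisher
    "subject": "licensing; fisheries",
    "date": "2001-05-31",
    "type": "guidance",
    "format": "text/html",
    "identifier": "https://www.example.gov.uk/fishing-licence",  # placeholder URL
    "language": "en",
}

# Flag elements that aren't part of the vocabulary, and core elements a
# discovery service would need but this record doesn't supply.
unknown = set(record) - DUBLIN_CORE_ELEMENTS
missing = {"title", "identifier", "date"} - set(record)
print("Unknown elements:", unknown or "none")
print("Missing discovery elements:", missing or "none")
```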

A few months later, the Public Record Office announced:

“The adoption of cross-government standards for metadata and interoperability to support greater commonality and inter-departmental working.”

Electronic Records Management. Public Record Office. July 2001.

Recognising that “Better public services tailored to the needs of the citizen and business … require the seamless flow of information across government”, the e-Government Interoperability Framework (e-GIF) and GovTalk established open standards for the cross-government use of data.
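The essence of that interoperability work was agreeing shared vocabularies, largely expressed in XML, so that any two systems could exchange the same record without bespoke translation. The sketch below illustrates the idea only; the namespace and element names are invented for this example and are not the actual GovTalk schemas.

```python
import xml.etree.ElementTree as ET

# A sketch of the e-GIF idea: if every department agrees one XML vocabulary
# for a record, any two systems can exchange it without bespoke translation.
# The namespace and element names here are hypothetical.
NS = "http://www.example.gov.uk/schemas/citizen/1"
ET.register_namespace("", NS)

def citizen_record(name: str, postcode: str) -> bytes:
    """Serialise a citizen record using the (hypothetical) shared vocabulary."""
    root = ET.Element(f"{{{NS}}}CitizenRecord")
    ET.SubElement(root, f"{{{NS}}}Name").text = name
    ET.SubElement(root, f"{{{NS}}}Postcode").text = postcode
    return ET.tostring(root, encoding="utf-8", xml_declaration=True)

# Either department can parse what the other produced, because both follow
# the same agreed schema.
print(citizen_record("Jane Smith", "SW1A 2AS").decode())
```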

The UK GovTalk site in 2001

During the early 2000s, they were used in the design and implementation of multiple government platforms to deliver secure, authenticated data submission, access, and interoperability across departments and systems. They supported both citizen- and business-to-government interactions and government-to-government exchanges.

The cross-government orchestration and interoperability of data in the early 2000s

Subsequent years saw continued work on data, including the UK Government Data Standards Catalogue. It defined the standards needed to implement “interoperable systems working in a seamless and coherent way … providing better services, tailored to the needs of the citizen and business.” But the platforms providing cross-government delivery received little investment and fell into decline and disuse.
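A data standards catalogue earns its keep by giving every system one agreed definition for each data item, enforced the same way everywhere. The sketch below shows the shape of such a check; the rules are simplified illustrations, not the catalogue’s actual entries.

```python
import re

# A sketch of what a catalogue-style data definition buys you: one agreed
# rule per data item, applied identically in every system. These patterns
# are simplified illustrations, not the real catalogue definitions.
DEFINITIONS = {
    # Simplified UK postcode pattern -- the real rules are more involved.
    "postcode": re.compile(r"[A-Z]{1,2}\d[A-Z\d]? \d[A-Z]{2}"),
    # ISO 8601 calendar date, e.g. 1980-02-01.
    "date": re.compile(r"\d{4}-\d{2}-\d{2}"),
}

def validate(item: str, value: str) -> bool:
    """Check a value against the shared definition for that data item."""
    return bool(DEFINITIONS[item].fullmatch(value))

assert validate("postcode", "SW1A 2AS")
assert not validate("date", "01/02/1980")   # rejected: not the agreed format
```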

In 2013, data standards and interoperability once again became a major policy focus. That year, the government’s Technology code of practice promised that “Users should have access to, and control over, their own personal data,” echoing the 1996 commitment.

To help deliver this policy, the Office of the Chief Technology Officer (OCTO), part of the Government Digital Service (GDS), launched a new pan-government data programme: mygovdata.

Hope renewed—mygovdata

mygovdata had three objectives:

  • To identify efficiency opportunities such as data rationalisation and de-duplication
  • To improve the use of citizen-related data in public administration and governance, ensuring appropriate use and re-use of data and data-related systems
  • To understand the size and nature of effort needed to fulfil the policy objectives of users having access to, and control over, their own personal data, as well as letting citizens see what data government holds about them

Rather than a big-bang approach, data quality would be improved by taking advantage of new policy initiatives—and updates to existing policies and systems—to rationalise and normalise data.

To test and refine the approach, joint work started between OCTO/GDS and one of the big Whitehall departments to map its entire data estate. The objective was to identify what data was held, and where; the formats and standards in use; the extent of duplication within and between systems; and how accessible the data was (whether it could be accessed, for example, via application programming interfaces, APIs).
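The output of that kind of mapping exercise is essentially an inventory: which systems hold which data items, how accessible each system is, and where the same item is duplicated. A minimal sketch, with invented systems and fields, might look like this:

```python
from collections import defaultdict
from dataclasses import dataclass

# A sketch of a data-estate inventory: which systems hold which data items
# about a citizen, and where the same item is duplicated. The systems and
# fields are invented for illustration.
@dataclass
class SystemRecord:
    system: str
    fields: set[str]
    api_accessible: bool

estate = [
    SystemRecord("benefit-claims",  {"name", "address", "date_of_birth"}, False),
    SystemRecord("appeals",         {"name", "address"},                  False),
    SystemRecord("statistics-copy", {"name", "date_of_birth"},            True),
]

# Invert the inventory: for each data item, list every system that holds it.
held_in: dict[str, list[str]] = defaultdict(list)
for record in estate:
    for field in record.fields:
        held_in[field].append(record.system)

for field, systems in sorted(held_in.items()):
    if len(systems) > 1:
        print(f"{field!r} is duplicated across: {', '.join(systems)}")
```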

What mygovdata found

The mygovdata discovery found over 120 different places where citizen data was stored in the department. This wasn’t a big surprise: government systems reflect the top-down legislation, policymaking, and organisation-centric boundaries, processes, and budgets of Whitehall.

Each system held at least one element of information about a citizen, such as their name, and the department had multiple copies of the same databases. This duplication was the result of different teams having their own copies to use—for example, for management information or statistical analysis.

The design of the department, reflected in its information systems, prevented it from taking a holistic view of citizens and their interactions. The fragmentation and duplication of data also meant it would be difficult, if not impossible, to give citizens access to, and control over, their own personal data without first improving data quality, consistency, and access.

These findings within just one department revealed the scale of the challenge, and the diverse, fragmented, and variable quality of government-held data. But they also confirmed a significant opportunity to move towards a more citizen-centred data design, and to improve government’s operational effectiveness and efficiency in the process.

Alongside improvements to data quality and management, mygovdata recognised the importance of application programming interfaces (APIs) to support more integrated, effective, and efficient policymaking and administration. Achieving consistency in the way APIs were designed and deployed would enable data to be secured, accessed, and used more effectively, mirroring the type of interoperability and integration standards achieved over a decade earlier by the pan-government machinery of the early 2000s. Citizens would be able to access their own data across multiple departments and agencies, whilst those departments and agencies retained their status as separate legal and constitutional entities.
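One way to picture the consistency mygovdata was aiming for: each department keeps its own systems but exposes the data it holds about a citizen behind the same small interface, so a citizen-facing service can assemble a cross-government view. The interface and departments below are hypothetical, a sketch of the pattern rather than anything mygovdata specified.

```python
from typing import Protocol

# A sketch of a consistent API shape: every department exposes its citizen
# data behind the same small interface, so a citizen-facing service can
# assemble a cross-government view without the departments merging their
# systems. The interface and the departments are hypothetical.
class CitizenDataAPI(Protocol):
    department: str

    def get_own_data(self, citizen_id: str) -> dict:
        """Return the data this department holds about the citizen."""
        ...

class LicensingAPI:
    department = "licensing"

    def get_own_data(self, citizen_id: str) -> dict:
        return {"licences": ["fishing"]}          # stand-in for a real lookup

class BenefitsAPI:
    department = "benefits"

    def get_own_data(self, citizen_id: str) -> dict:
        return {"claims": ["2013-0042"]}          # stand-in for a real lookup

def my_data(citizen_id: str, departments: list[CitizenDataAPI]) -> dict:
    """Assemble what each department holds, keyed by department."""
    return {d.department: d.get_own_data(citizen_id) for d in departments}

print(my_data("citizen-123", [LicensingAPI(), BenefitsAPI()]))
```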

As with the government’s earlier work on delivering data standards and interoperability, mygovdata made promising initial progress. However, subsequent changes in Whitehall meant it lacked the backing and commitment essential to sustain delivery.

The current landscape

Over nearly 30 years of digital government initiatives, we’ve seen how the use of technology in government can be both a blessing and a curse. On the positive side, it’s been used to streamline existing transactional interactions, such as applying for a passport or completing a tax return. On the negative, it’s breached the rule of law and rapidly expanded the reach of the surveillance state.

To modernise government and move towards joined-up policymaking and administration whilst preserving and enhancing democracy, rather than undermining it, government should:

  • Improve the quality, consistency, and standardisation of the data it already holds, together with the interoperability needed to access and use that data securely across departments
  • Give citizens access to, and control over, their own personal data, and let them see what data government holds about them

Both of these steps are required before automated systems, including AI, are let loose: poor quality data will only contribute to erroneous conclusions, biases, falsehoods, and hallucinations.

If its pioneering data initiatives had been nurtured and maintained over the past three decades, the UK public sector would be far better placed to cut through the hype and assess both the benefits and drawbacks of AI. It’s why government should also make time to learn the lessons of why previous good work was left to rust away instead of being consistently and continuously developed and exploited. Otherwise it risks being caught in a perpetual groundhog day, waiting in vain for a magical technology to do the hard work of making government more effective and efficient.