Westminster eForum — implementing ID cards

The text below is a copy of my speech delivered today at the Westminster eForum session on implementing ID Cards (keynote speaker Andy Burnham, MP, Home Office Minister for ID Cards).

The IT industry has learned a lot about security, identity and privacy. In fact, let me be candid here: as the most obvious, high-profile target for attacks, Microsoft has learned, and continues to learn, a lot about security, identity and privacy.

Much of this is of course to be found plastered across the front pages and headlines of the media. But the underlying lessons that underpin these headlines have wider resonance. Let’s take one simple, well known example. Microsoft Passport – one of the largest identity systems in the world, with over 1 billion authentications a day. It provides a fascinating insight into an identity system that is simultaneously a great success – and a failure. But why should this be? And what wider conclusions might we draw from this?

One of the many lessons we learned early on is that people do not want inappropriate third parties sitting between them and their services. They find it an unnecessary and undesirable intrusion. Empirical industry experiences such as this need to be factored into the design and implementation of identity systems.

Leading identity experts such as Kim Cameron have helped document such industry evidence into codified models that help ensure the delivery of successful identity systems – what I might term, with apologies to Covey, “7 habits of highly successful identity systems” – systems that balance at least three key perspectives:

  • policy – be that business or public policy
  • technical aptness – by which I mean issues such as sustainability, reliability, scalability, predictability, security and privacy
  • consumer or citizen benefit

In particular, consumer or citizen benefit is often the poor relation – but in many ways, it’s the most significant factor in the success or failure of identity systems. When you hear a local authority such as Bracknell Forest, which the Minister mentioned earlier, talk about their smartcard system, they emphasise the benefits it brings their residents – the richness, number and convenience of the services it unlocks.

Of course, identity systems and ID cards are not in themselves the end goal. They are the keys to unlock improved, more appropriate access to and use of information: and so they need to involve parallel changes in policies and processes for information handling – to minimise risk and ensure these very real potential benefits are realised in practice.

Digital rights management (DRM) raises some interesting questions here. I don’t mean the technology per se – we could argue for some time about its ability to provide the experiences we might desire – but rather the currently prevailing misconception that it’s only a technology for content providers, such as the movie and music industries. Think instead about its application across areas such as Freedom of Information (FOI) and the Data Protection Act (DPA). We hear much of citizen-centric services: so what role might DRM play in ensuring citizen data is accessed and released only under their control? In the context of the strategy set out in the Transformational Government paper, it provides an interesting model of how access and information could be truly citizen-centric.
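To make that idea a little more concrete, here is a minimal sketch – purely illustrative, with a hypothetical consent register and an invented requester name, not any particular product or programme – of what “released only under their control” could mean in practice: nothing leaves the citizen’s record unless they have granted that specific body access to that specific category of data.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentGrant:
    """One permission: a named body may see one category of the citizen's data."""
    requester: str          # e.g. the hypothetical "dvla"
    category: str           # e.g. "address"
    expires: datetime

@dataclass
class CitizenRecord:
    data: dict[str, str]
    grants: list[ConsentGrant] = field(default_factory=list)

def release(record: CitizenRecord, requester: str, category: str) -> str | None:
    """Release a field only if an unexpired grant from the citizen covers it."""
    now = datetime.now(timezone.utc)
    for grant in record.grants:
        if (grant.requester == requester
                and grant.category == category
                and grant.expires > now):
            return record.data.get(category)
    return None  # no consent on record: nothing is released

# The citizen has allowed only the hypothetical "dvla" to see their address.
record = CitizenRecord(
    data={"address": "1 Example Street", "nhs_number": "000 000 0000"},
    grants=[ConsentGrant("dvla", "address", datetime(2030, 1, 1, tzinfo=timezone.utc))],
)
assert release(record, "dvla", "address") == "1 Example Street"
assert release(record, "dvla", "nhs_number") is None
```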

We should also take advantage of the guidance made available by the worldwide technology community: ensuring, for example, that we avoid the problem of inserting inappropriate third parties, as we learned with Passport, and recognising that systems will always be compromised – whether by hackers breaking in through the front or back doors, or by insiders falling prey to social engineering. We should acknowledge, too, that privacy is an integral aspect of security – but one focused on the interests of users rather than the systems, or system owners.

When such vital systems are compromised, the damage must be contained, so that it does not lead to the type of catastrophic issues we’ve seen elsewhere, such as the public disclosure of 40 million users’ records in the USA in a single incident. Such systems should aim to hold the absolute minimum of information – so that if they are compromised, negative outcomes are limited. This problem will become much more acute as not just our biographics, but our biometrics, are stored in an increasing number of systems around the world, many under governance regimes far less rigorous than those of the UK.
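As a rough illustration of holding the absolute minimum – with the important caveat that this only works for values a person can re-present exactly, such as a reference number or PIN, and not for fuzzy raw biometrics – a system can often keep a salted digest that answers “does this match?” without retaining the value itself. The enrol and verify functions below are my own sketch, not any particular system’s API.

```python
import hashlib
import hmac
import os

def enrol(value: str) -> tuple[bytes, bytes]:
    """Store only a salt and digest - never the value itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 200_000)
    return salt, digest

def verify(value: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", value.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

# A hypothetical reference number: the database can confirm a match without
# holding anything a thief could read and reuse elsewhere.
salt, digest = enrol("AB 12 34 56 C")
assert verify("AB 12 34 56 C", salt, digest)
assert not verify("XY 98 76 54 Z", salt, digest)
```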

As an illustration of this, I think the Personal Genome Project offers some interesting insights. The PGP, which I recall is being run out of Harvard, involves volunteers openly publishing their DNA sequences – their genomes – on the Internet. This openness is designed to maximise the potential for beneficial discovery – and could offer a virtuous circle of pro-active medical interventions provided by third-party genomic software tools.

So what has this to do with identity, identity cards and biometric databases? Well, this model of openly publishing something we have generally assumed should be kept under close guard raises some interesting challenges and insights into how we think about the value of the biometric systems currently being built and proposed around the world on the back of ICAO and other standards.

It seems to me that as more and more people in more and more systems and countries store copies of our biometrics, their value atrophies – probably on an exponential scale.

What value and risk assumptions will we be able to make when we know that anyone, anywhere in the world, effectively has access to our biometrics in digital form? It seems to me that this would make it impossible to verify identity using biometrics in anything other than an intensive face-to-face environment. Online authentication, and any other form of automated model, would become well-nigh impossible, since we would have to assume our raw biometrics could be acquired and replayed by anyone. In a world in which our biometrics are stored so universally, we will have to assume they are irredeemably compromised if this prevailing orthodoxy is maintained.
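A toy sketch of why this matters follows – the template bytes and keys are invented for illustration. A check against a static copy of a template passes for anyone who holds the copy, whereas a challenge-response exchange, in which a key never leaves the holder’s token, produces an answer that is useless against the next challenge.

```python
import hashlib
import hmac
import os

# A static copy of a biometric template works like a password that can never
# be changed: whoever holds the copy passes the check.
stored_template = bytes.fromhex("a1b2c3d4e5f6")   # stands in for a raw template on file

def verify_static(presented: bytes) -> bool:
    return hmac.compare_digest(presented, stored_template)

leaked_copy = stored_template        # once copied, it can be replayed forever
assert verify_static(leaked_copy)    # the verifier cannot tell copy from original

# By contrast, a challenge-response with a key that never leaves the holder's
# token means an intercepted answer is useless against the next challenge.
device_key = os.urandom(32)          # held only on the citizen's token or card

def respond(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

challenge = os.urandom(16)
answer = respond(challenge, device_key)
assert hmac.compare_digest(answer, respond(challenge, device_key))           # genuine holder passes

new_challenge = os.urandom(16)
assert not hmac.compare_digest(answer, respond(new_challenge, device_key))   # a replayed answer fails
```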

The PGP team point out a variety of reasons why publishing genomes to public scrutiny could have a downside. I think it is at reason four that they state:

“Volunteers should be aware of the ways in which knowledge of their genome and phenotype might be used against them. For example, in principle, anyone with sufficient knowledge could take a volunteer’s genome and/or open medical records and use them to … make synthetic DNA corresponding to the volunteer and plant it at a crime scene.”

The same risk will be run with our biometrics when they are stored in a wide variety of computer databases around the world. We may well have the gold standard in the UK – I know well that we have some of the best security expertise available anywhere in the world. No-one questions that. But we live in a global community. The very programmes that aim to use biometrics to improve identity checks could end up undermining and devaluing their worth entirely.

There are of course major concerns here for all of us if we devalue biometrics – and even our DNA – to such an extent. If we undermine these invaluable tools, we could find ourselves in a far worse position than we are today in criminal investigative work, the reliability of forensic evidence and the privacy and security of our societies. I find this a deeply worrying prospect with implications I can’t even begin to quantify. The IT industry is much castigated for not bringing its expertise to the table: well, that is one accusation I hope will not be levelled in my direction.

“It is clear that the more joined-up the service, the greater the need for secure systems.” Not my words, but those of the Council for Science and Technology (the CST) in their useful paper “Better use of personal information – opportunities and risks”. They highlight the undesirability of creating a single database of identity information “because of the potential difficulties with maintaining the security and integrity of the data”. Again, their words, not mine.

We have seen this in the US and Canada, where they are making moves away from cross-indexing all their systems in a simple way, because of the risks involved. The CST states that its preference is for “a series of federated databases, each with its own identity but with common linkages”. This mirrors real-world lessons, including those learned at Microsoft, and more broadly in the US, Canada and elsewhere.
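As a sketch of what “a series of federated databases … with common linkages” might look like in code – my illustration, not the CST’s design, and the sector names and keys are invented – each sector can file its records under a sector-specific pseudonym derived from the citizen identifier, so a breach of one database yields keys that mean nothing in another, while linkage remains possible for whoever legitimately holds the sector keys.

```python
import hashlib
import hmac

# Hypothetical per-sector secrets, held by a linkage authority rather than
# embedded in every database.
SECTOR_KEYS = {
    "health":  b"health-sector-secret",
    "revenue": b"revenue-sector-secret",
}

def sector_pseudonym(citizen_id: str, sector: str) -> str:
    """Derive the key under which this sector files the citizen's record."""
    return hmac.new(SECTOR_KEYS[sector], citizen_id.encode(), hashlib.sha256).hexdigest()

# Each database is keyed by its own pseudonym, not by one universal number.
health_db  = {sector_pseudonym("citizen-123", "health"):  {"gp": "Example Surgery"}}
revenue_db = {sector_pseudonym("citizen-123", "revenue"): {"tax_code": "1257L"}}

# A breach of one database yields identifiers that mean nothing in the other ...
assert sector_pseudonym("citizen-123", "health") != sector_pseudonym("citizen-123", "revenue")
# ... yet linkage remains possible for whoever legitimately holds the sector keys.
assert health_db[sector_pseudonym("citizen-123", "health")]["gp"] == "Example Surgery"
```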

So we clearly need to work together, to crystallise these ‘best practice’ ideas – and real-world experiences from existing identity implementations – into a set of robust guiding principles that enable us to ensure the integrity, consistency and success of identity systems.

That is, the delivery of successful identity system implementations that achieve a sustainable balance of

  • policy
  • technical aptness, and
  • citizen benefit

This blog post originally appeared when I hosted NTOUK on SimpleBlog. It’s one of several I’m retrieving and posting here to bring together my posts in one place. The content, date and time shown for this post replicates the original. Many links are, inevitably, broken: where I can, I’ll substitute ones that work, particularly where the Internet Archive Wayback Machine has captured the content originally linked to.
