single sign-on and data sharing


I haven’t seen my old colleague Stefan Brands for a while, but most days I’ll find myself thinking of or mentioning his work – particularly given some of the naivety and poor thinking that still seem to be doing the rounds when it comes to “data sharing”.

Thankfully, a lot of Stefan’s excellent work in this area from a decade or more ago is still online at his Credentica website.

For anyone who wonders how you can tackle problems like proving who you are online, or proving your entitlement to something, without handing over loads of sensitive personal data too, take a look at his site and catch up with the state of the art … as it already was 10 years ago.

It’s a reminder that solutions to these issues have been around a long time, yet we still see ill-informed approaches to the topic that assume slopping data around is the way to go – an approach which seems rooted in the pre-digital mindset of carbon paper and filing cabinets.

For anyone wanting to know how to do secure, privacy-aware single sign-on and data sharing in a government context, the PowerPoint deck here (.PPT) provides a good overview. It’s animated, so you’ll need to run it in presentation mode.
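To make the principle concrete: the deck describes schemes where a service answers a question about you without releasing the underlying record. Stefan’s actual designs use far more sophisticated cryptography (minimal-disclosure credentials and zero-knowledge proofs), but a minimal Python sketch of the idea – with invented names and a simple shared-key signature standing in for the real thing – might look like this:

```python
import hashlib
import hmac
from datetime import date

# Hypothetical signing key held by the attribute provider (illustration only).
SERVICE_KEY = b"demo-key-not-for-production"

# The authoritative record never leaves the service boundary.
_RECORDS = {"alice": date(1990, 5, 1)}

def confirm_over_18(person_id):
    """Return a signed yes/no claim -- never the date of birth itself."""
    dob = _RECORDS[person_id]
    today = date.today()
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    claim = f"{person_id}:over_18={age >= 18}"
    signature = hmac.new(SERVICE_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, signature

# A relying party learns a single fact, plus evidence of who asserted it.
claim, sig = confirm_over_18("alice")
print(claim)             # alice:over_18=True
print(sig[:16], "...")   # truncated signature for display
```

The relying party gets the answer it needs – and nothing else. That, in essence, is the alternative to slopping copies of personal data around.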

What’s frustrating is that, after all this time, so few people seem to have implemented it, and instead keep banging on about the need to copy data around – apparently oblivious to the negative impact it has on fraud, security and self-esteem. Perhaps that’s because, as Bill Buxton has pointed out, it can often take around 20 years from an initial idea to its mainstream implementation and adoption.

Hopefully, with the likes of the federated trust model being implemented by Verify and the desire to make better use of data in the public sector and to strengthen cyber security, we’ll start to see some implementations of the type of approach Stefan outlined sooner rather than later.

It’s time to finally get away from all the security, fraud and privacy risks of the lazy “data sharing” mindset – and put citizens and consumers back in control of their own data.


the future shape(s) of the public sector?

The Institute for Government produced a report in 2014, the shape of the Civil Service: remaking the grade.  It looked at the breakdown of the civil service by salary grade and included this interesting visualisation:

[Figure: distribution of civil service staff by grade across departments, 2010–14]

It illustrates a wide range of organisational shapes in terms of the relative distribution of staff across the grade scales.

What interested me more, however – and what was not addressed by the report – was the balance between those directly delivering public services (generally referred to as “the frontline” in journalistic shorthand) and those focused on feeding and watering the organisations’ own internal processes and overheads (“the bureaucrats”, in the same crude shorthand).

We would all like our taxes to go directly into frontline services – to the doctors, nurses, teachers, firemen, etc. After all, I imagine hardly anyone falls asleep at night thinking “Thank goodness my taxes paid today for a bunch of well-paid middle managers to sit in a room all day drinking coffee, tweeting and talking about slide 91 of their PowerPoint deck.”

Yet most of us recognise that there’s a valid need for a supportive, enabling organisation around frontline staff – one that can ensure those on the frontline have everything they need to focus on their job. It would be nice to think that these supporting organisational structures are as efficient, well-organised and resource-light as they can be. After all, every pound that goes into them is a pound denied the frontline.

But how do we know which leaders in the public sector are doing a great job – getting the majority of resources to the frontline, slimming down and purging unnecessary overheads, office staff and broken processes – and which are less efficient, absorbing far too many resources into their own internal roles, processes and overheads at a direct cost to frontline services? How many public sector organisations have taken advantage of the digital revolution – by which I mean not just the technology but the culture, processes and practices of genuinely digital organisations? How many have redesigned the way they work and operate, rather than mistaking digital for a sideshow focused on updating their website in new fonts and colours while they carry on business as usual?

Mark Thompson, a senior lecturer in information systems at the University of Cambridge Judge Business School, has suggested that UK voters are being sold a lie – that public services must be cut – when there would be no need to cut them if organisations became truly efficient and moved precious resources away from their own internal overheads and into the frontline. And to show that this criticism is grounded in the real world, he quotes an interesting example:

The other day I read a case study from Holland in Reinventing Organizations that made me fall off my seat. The Buurtzorg community nursing organisation has a back office of 30 people to support 7,000 frontline nurses. It has almost no middle management – no HR, legal, estates, comms, finance, IT, procurement and so on. An audit firm reported that 40% fewer hours of care are required by the organisation’s patients, and its nurses have 60% less absenteeism and a 33% lower turnover than other nursing organisations.

This organisation has been able to provide a radical increase in frontline services, of much better quality, with greater satisfaction for both patients and carers – and for much less money. No one in the UK has been able to do this.

[source: The Guardian, Thursday 12th February 2015. UK voters are being sold a lie. There is no need to cut public services]


So how far has such organisational re-thinking impacted the UK public sector? According to Mark, very little. The new ways of designing and running organisations in the digital age have barely been adopted at any scale in the public sector, despite the huge potential upside of digitising government. Mark suggests that “middle managers in both the public and private sector are skimming funds from public services – and nobody is talking about it.”

To help tackle this problem, we need an objective way of measuring how well resources are being applied within our public sector organisations – how much goes to the frontline and how much is lost feeding and watering outdated organisational roles, processes and structures – and some means of comparing how similar organisations across the public sector stack up against each other.

After all, if one local council can run its services efficiently, with minimal overheads and middle managers and the maximum resource going into frontline services – from social care to libraries to refuse collection – we might question the priorities, competence and motivation of other councils that repeatedly cut the frontline yet maintain a top-heavy organisational shape that compares poorly against their more efficient peers.

One way of enabling such comparisons might be courtesy of the W3C, which has produced a useful organisation ontology. That might sound a bit dry, but it’s a way of consistently describing organisational structures in computer-readable form. The taxonomy can be used as the basis for classifying organisations and their roles, including organisational activities, and it’s extensible to model specific needs.

In other words, it has the potential to provide what I’m suggesting: a standard way of using open, consistent data to describe public sector organisations and to show who is in what type of role, at what cost and to what value, from frontline to administration. It could also distinguish between what Mark terms ‘administration’ and ‘service’: modern digital organisations have eradicated the need for many traditional roles, processes and activities, yet perversely, many such roles are mushrooming in the public sector even as their contribution to the public good decreases.

If every public body were required to openly publish their organisation’s data in an agreed, consistent, W3C-based form, that data could be automatically accessed, read and compared (unlike much of the organisational and financial data currently put out, which is buried in obscure PDFs and uses inconsistent descriptions between different organisations). We need transparency not obscurity, and we need accurate, consistent data, not mere opinion and assumption, if we are to help encourage the right type of reforms in our public sector.
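As a hedged illustration of what machine-readable publication could enable, here’s a small Python sketch using the rdflib library and the W3C org vocabulary. The council, its posts and the frontline/administration classification property are all invented – that classification is precisely the extension the ontology would need:

```python
from rdflib import Graph, Literal, Namespace, RDF

ORG = Namespace("http://www.w3.org/ns/org#")       # W3C organisation ontology
EX = Namespace("http://example.org/somecouncil/")  # hypothetical council data

g = Graph()
g.bind("org", ORG)

# A hypothetical council publishing its posts in org-ontology form.
g.add((EX.council, RDF.type, ORG.Organization))
posts = [
    (EX.librarian,  "frontline"),
    (EX.careworker, "frontline"),
    (EX.hr_manager, "administration"),
]
for post, category in posts:
    g.add((post, RDF.type, ORG.Post))
    g.add((post, ORG.postIn, EX.council))
    # EX.category is an invented extension property: classifying posts as
    # frontline vs administration is exactly what would need to be agreed
    # and added to the standard.
    g.add((post, EX.category, Literal(category)))

# With every body publishing to the same model, comparison is mechanical.
counts = {}
for _, _, category in g.triples((None, EX.category, None)):
    counts[str(category)] = counts.get(str(category), 0) + 1
print(counts)  # e.g. {'frontline': 2, 'administration': 1}
```

Once every body publishes to the same model, the comparison becomes a mechanical query rather than an exercise in deciphering inconsistent PDFs.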

Such data would enable the type of graphic at the top of this blog to be automatically generated for all local councils, for example. It would enable us to map roles and costs to frontline services rather than internal grade structures (which tell us nothing about public service value): we would all be able to see the variations across the sector and to ask why they exist. Perhaps we would see something like this:


W3C organisation ontology used as the basis for comparative analysis

Bodies such as the National Audit Office could investigate such data more closely – at times local circumstances might well lead to justifiable variations. There’s unlikely to be any mythical “one size fits all” model for public sector organisations. But automating the generation and collection of organisational data would at least provide a starting basis on which to assess how many public sector organisations are taking advantage of the ability to modernise public services – and hence the opportunity to move resources into frontline services – and how many are zealously protecting their own administrative roles and interests to the direct detriment of the frontline.

Making such a transition won’t be easy. The obvious approach is to start small, with a pioneer group who can help define, extend and iterate the W3C organisation taxonomy, and a group of initial public sector bodies keen to become more efficient and transfer value to the frontline. We can see what works, what value it brings, whether it helps: the transition to modern organisational structures needs to be grown, tested, iterated and improved – not imposed from on high. And it needs to be owned and led by those in the public sector who best understand the very real opportunities now available to deliver radical service improvements.

I’ll leave Mark with the last word, also from his Guardian piece:

Doctors, nurses, teachers, home visitors, librarians will be [the] winners. We’re told we must learn to live with fewer of these people, but we could actually have more if we sorted out our organisational model. The other big winners are, of course, the customers: everybody who consumes public services.

Long term, the losers will be the legions of workers in hierarchical organisational structures – private or public sector – who [now] play a less important part in post-bureaucratic organising.

[Transparency declaration: Mark Thompson is a co-author of my book “Digitizing Government” and we’ve also written some academic papers together including “Digital government, open architecture and innovation: why public sector IT will never be the same again“]


Hype, Blockchain – and some Inconvenient Truths


For all the froth and hype about blockchain, you’d think it was going to bring about world peace, and simultaneously solve every problem known to mankind. There’s probably been more tosh written about it over the past year or so than all that previous guff about “big data”. Quite frankly, I’m disappointed blockchain hasn’t defeated ISIL single-handed and rebuilt the Seven Wonders of the Ancient World by now. Come on blockchain, what are you waiting for?!

Blockchain is being misunderstood and over-hyped to the point where any rational discussion of where it might be useful – and where it might not be – has become buried beneath a sea of self-serving hyperbole. It may do one or two things well – such as distributed ledgers for financial or supply-chain transactions – but it’s also socially and technologically ill-suited to many other contexts. Yet most of the people writing about its potential uses seem to have little understanding either of the technology or of the practical limitations on its use outside some well-defined and well-bounded scenarios.

For those who’ve slumbered through the tsunami of hype, a blockchain is essentially a public ledger of every transaction ever executed on a network, recorded in chronological order. The ledger is shared by all the nodes participating in the system, so that no single node is ever in a position to falsify or tamper with it.

Because every participant holds the full history, the blockchain can prove that transactions happened and cannot be tampered with, while potentially retaining a degree of anonymity – which is why it has provoked such interest with Bitcoin, the digital currency: it enables secure financial transactions to take place without necessarily revealing who is involved and without needing a single, central authority to oversee the system.
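For the mechanically minded, here’s a minimal Python sketch of the core data structure – hash-chained blocks, where each block commits to the hash of its predecessor. It deliberately leaves out everything that makes a real blockchain a distributed system (consensus, proof-of-work, peer-to-peer replication), and the names are mine, purely for illustration:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's contents (everything except the stored hash itself)."""
    payload = {k: block[k] for k in ("time", "transactions", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    """Each block commits to its transactions and to the previous block's hash."""
    block = {"time": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    """Recompute every hash; editing any earlier block breaks all later links."""
    return all(
        block["hash"] == block_hash(block) and block["prev_hash"] == prev["hash"]
        for prev, block in zip(chain, chain[1:])
    )

genesis = make_block(["genesis"], "0" * 64)
chain = [genesis, make_block(["Alice pays Bob 5"], genesis["hash"])]
print(verify(chain))                               # True
chain[1]["transactions"] = ["Alice pays Bob 500"]  # tamper with history...
print(verify(chain))                               # ...False: the chain is broken
```

The point the sketch makes is narrow but important: the immutability comes from everyone being able to recompute the hashes, not from any magic in the data structure itself.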

These characteristics – proof that a transaction, such as a purchase or payment of tax, has happened, combined with potential anonymity – make it a candidate technology for use in various scenarios. However, maintaining anonymity may require better design than that currently used in the Bitcoin network, as this paper (PDF) points out, and its vulnerability to concerted manipulation by an adversary with sufficient computational power remains a concern. This piece is also interesting on its limitations and (possible) future direction.

So let’s take just some of the problematic examples where it’s been suggested that blockchain technology will offer a miracle solution.

First up, registers of births and deaths. Well, who could argue with the idea of an immutable public register of births and deaths? Along with taxes, there’s nothing more certain than births and deaths, is there? Exactly the sort of area where absolute trust and assurance is needed.

Well – possibly. If, that is, you ignore some inconvenient truths; if you overlook some of the essential edge cases that have to be handled in the imperfect and often flawed world in which we live. There are various examples where an immutable public record could turn out to be a very bad idea.

Let’s take the example of someone entering a witness protection programme. The usual need is to provision these brave souls with an entirely new, but believable, identity. One with a credible historic footprint.

Hmm. But how would you manage to do that if there is no ability to insert a back-dated record into the public, immutable, and massively loved and trusted register built on blockchain technology?

Well, you could perhaps continually seed fake births into the immutable public register at random, just in case they might come in useful one day. But that rather defeats the point of an authoritative public record which can be trusted. It also runs the danger that such fake birth records would be used for other, less altruistic purposes by anyone else who becomes aware of their existence.

Possibly you could put a façade in front of the register, masking the real one and enabling the insertion of new records from a side-chained system, so that the end result is a composite of the original core register and a side register containing the birth and death records required for other purposes, such as witness protection.
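Here’s a deliberately crude Python sketch of that composite-register idea – entirely hypothetical, with invented names, and intended only to make the next paragraph’s objection concrete:

```python
# The "immutable" core register and a mutable side register; the façade
# merges them, so what callers see is no longer the core register alone.
CORE_REGISTER = {
    "ref-001": {"name": "John Smith", "born": "1970-01-01"},
}
SIDE_REGISTER = {}  # back-dated records inserted for e.g. witness protection

def insert_backdated(ref, record):
    """Records can be added here without touching the 'immutable' core."""
    SIDE_REGISTER[ref] = record

def lookup(ref):
    """The side register shadows the core -- the composite is what is trusted."""
    return SIDE_REGISTER.get(ref) or CORE_REGISTER.get(ref)

insert_backdated("ref-002", {"name": "Jane Doe", "born": "1975-06-15"})
print(lookup("ref-002"))  # looks exactly as authoritative as any core record
```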

Not surprisingly, this approach is not without its drawbacks – including that the “trusted register” itself is no longer what is actually directly consulted, rather undermining its supposed immutability and trustworthiness. And the second, side system becomes an interesting risk in itself – a very attractive target if you either want to track fake records being created, or want to bribe someone to create records for more nefarious purposes.

Ah well, you may think, perhaps ideas like blockchain aren’t best suited in this type of environment. But surely they can be used elsewhere, such as the land registry. After all, that needs to be reliable and we need to know it hasn’t been tampered with: it’s the authoritative record of all land ownership after all, isn’t it?

Well – perhaps. But let’s continue with our previous example, the poor old witness in the protection programme. Having (somehow) overcome the problems of creating an entirely new identity with a credible historic footprint, and reverse-engineered it into the “trusted”, immutable (cough) public record, the witness now needs to be given a new place to live. Fine. So their new name is entered on the immutable, authoritative land registry as the proud owner of a new home.

Hmm. This doesn’t seem like such a great idea. Firstly, a new name has appeared on the register for the first time at around the same time as their previous persona disappeared. Someone might find it interesting to correlate the disappearances of witnesses and the appearances of new identities around certain dates and court cases, particularly identities that have never previously appeared on the register. Secondly, the witness with their shiny new identity is meant to have a credible back history: yet there would be no scope or flexibility to amend the land registry to make it look as if that person had been resident at their new home for far longer, or to make it appear as if they had moved from somewhere else – all useful camouflage when helping to construct a credible back-story around a newly established identity, and to obfuscate simplistic correlation vulnerabilities.

So here we encounter the same problems again: how would a credible historic social footprint be created for those who need one if blockchain-powered, public immutable registers become the norm? And yes, of course those in witness protection programmes may be an edge case, but similar problems arise in other areas – battered spouses, at-risk children, undercover law enforcement and agency officers: anyone for whom there is a legitimate need to provision a new identity and a credible historic social footprint.

All of these examples can of course be airily dismissed as edge cases, as indeed they are. But it’s these very edge cases that characterise a unique problem space in the public sector. It’s these hard, life-impacting, very human issues that the public sector has to worry about. This is why the public sector cannot simply let itself be swept along by every fad and fashion that blows through the IT industry without objectively considering the implications. Copying what others are doing, riding along on the hype-wave and joining in the chorus of “Me too!”, has never been a viable strategy – least of all when it concerns matters of potential life or death.

These are complex problems, ones that are not going to be resolved by government playing mini-me to the hype merchants and ill-informed commentators swept up in their own make-believe worlds. Government needs to cut through the hype and assess both the technical and social limitations of any new technology such as blockchain and decide very carefully if and where it may play a useful role. Any less strategic approach is likely to produce unforeseen and undesirable outcomes, including the potential breakdown of trust in some of our most essential public services.


the problem with “data sharing”

The consultation closed recently on the “Better use of data in government” proposals. They have been some two years in the making and yet seem to be a superficial retread of many of the ideas repeatedly surfaced by civil servants to previous administrations – Transformational Government (2005), the Identity Card Act (2006) and the Coroners and Justice Bill (2008) amongst them.

Here’s a quick rundown of just some of the issues where I found the proposals lack rigour.

Lack of definitions

The paper provides no objective analysis of the problem(s) it aims to fix and no evidential reason as to why “data sharing” is proposed as the (only? best?) solution. Oddly there is no definition of what is meant by “data sharing”. Does it mean data duplication / copying / distribution, or data access, or alternatives such as attribute / claim confirmation? These are all quite different things with their own distinct risk profiles.

Paragraph 54 (p.15) implies some potential use of attributes (“flags”), but without detail or context, and paragraph 77 (p.21) hints at some levels of granularity in types of data access; these issues need to be at the core of the paper. Similarly, there is no definition of “public” and “private” data, and hence no quantified levels of granularity of sensitivity within those definitions (e.g. for at-risk children, protected witnesses, undercover officials, etc.).

The paper hence displays little understanding of, or reference to, current best practice. For example, there is an inadequate exploration of how technology could be mandated that enables confirmation of attributes (e.g. “this person is over 21”, “this household is entitled to a fuel discount”) without disclosure of any personal data records. The document is discursive and verbose where it needs to be analytical, evidence-based and precise.

There is an odd claim that APIs are a “new” technique (they are not). Nor do they, by themselves, “allow the access to the minimal necessary information” (p.4, para 12): any API will merely do what is specified for it – including distributing an entire sensitive personal data set for fraudsters or hostile governments to mine and exploit.

This lack of definitions is also exhibited in the “Illustrative clauses”, where references are made to “disclosure of information” without defining what that means – whether copying information to third parties; providing them with controlled one-time access; or whether it would merely be to confirm e.g. “this person is in debt” or “not eligible for benefit X”.

So what’s the problem we’re aiming to solve?

In terms of better and more efficient services for citizens, the description of the fragmentation of experiences of users of public services suggests the core problem is not data but poor service design. The fact that service “design” (and hence data) is fragmented across organisations is a reflection of services designed around organisational structures and their needs rather than citizens. The paper however contains no analysis of whether better services can be created by redesigning them around users’ needs rather than by trying to reverse engineer a solution through “data sharing”.

The idea of applying “data sharing” to problems that actually frequently derive from inadequate organisational and service design makes this paper read as if its purpose is to paper over the cracks and inefficiencies of existing public sector organisations and hence protect them and their existing poor processes rather than fixing the underlying problems. The paper appears rooted in a bureaucracy-centric viewpoint when what is required is a user / service-centric one.

In terms of better search and statistics to inform better decision-making, there is inadequate distinction between public and private data. The paper provides no specific detail on the process of de-identifying personal data, nor any reference to the known problems of achieving this successfully, although paragraph 107 (p.30) proposes putting the key criteria of the de-identification process into legislation. This will need to be about more than just removing personally identifiable information before disclosure: it must also place an obligation on the data owner to ensure that no re-identification can be made using the released data in combination with other data. That is a much more complex issue than focusing on a single data set released in isolation, and it requires co-ordination and risk assessment across and between data sets.
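To make the re-identification point concrete, here’s a small illustrative Python sketch of a k-anonymity check – one common (and by itself insufficient) test of whether supposedly de-identified rows can be singled out by combining quasi-identifiers; the data and field names are invented:

```python
from collections import Counter

# "De-identified" rows: names removed, but quasi-identifiers remain.
rows = [
    {"postcode": "SW1A", "age_band": "30-39", "sex": "F"},
    {"postcode": "SW1A", "age_band": "30-39", "sex": "F"},
    {"postcode": "EC1V", "age_band": "70-79", "sex": "M"},  # a unique combination
]

def k_anonymity(rows, quasi_identifiers):
    """Smallest group size sharing the same quasi-identifier values;
    k == 1 means at least one person can be singled out."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values())

print("k =", k_anonymity(rows, ["postcode", "age_band", "sex"]))  # k = 1
```

And even a release that passes such a check can fall to linkage with a second data set – which is exactly why the obligation has to be assessed across and between releases, rather than file by file.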

Where public rather than private data is concerned, a useful policy position would be that by default all such data should be automatically published or accessible via open APIs. Nor should it be limited to government’s own internal needs: it should provide an open, public-good resource for the wider UK economy. There need be no additional cost in doing this: the same interface can serve ONS alongside everyone else.

Lack of policy alignment

The paper does not make clear how proposals to “data share” comply with the government policy of citizens’ data being under their own control rather than civil servants’ (para 8 UK Government Technology Code of Practice). Instead, it appears to place the bureaucracy at the centre, weakening citizens’ control over their data in order for public bodies and their employees to “share” it around between their organisations rather than by improving the design of public services. It thus appears out of step with the focus on user needs and better services being pioneered by the Government Digital Service.

The paper appears unaware of, or unaligned with, other government initiatives. For example, it is notable for the complete absence of any reference to the Verify user identification programme. If Verify is to be used, why is it not included? If it is not to be used, why not?

Its absence suggests that citizens and their needs do not lie at the centre of this paper. What alternative identification, authentication and verification mechanism will be used by citizens to ensure secure and authorised access to personal data if Verify is apparently to be ignored? And what identity and access management approach is going to be used within and between public sector bodies? No such system currently exists.

What alpha or proof-of-concept work has taken place over the two years this paper has been in development to explore and validate different models and inform the policymaking process? Why not “show the thing” rather than just spending two years producing paperwork?

Illustrations are simplistic and unrealistic

The illustrations provided are well-meaning but overly simplistic. For example, the illustration given of the registration of a birth makes no mention of user identification, authentication or verification. As a result, the examples as they stand are more likely to increase fraud than to help mitigate it. Fraud often arises from poor data management and inappropriate data access and access controls (including social engineering to exploit such weaknesses), providing fraudsters – both insiders and external agents – with the ability to game the system. “Data sharing” more widely – providing an even larger pool of people and organisations with access to useful personal data – will remove that data from its current domains of control and context, exacerbating and increasing the problems of fraud.

Security is mentioned only 11 times in the entire paper, but there is no detail of the computer security techniques to be applied. In particular, the paper makes only one mention of encryption. Along with the absence of any detail about identification, authentication, access controls (authorisation), confidentiality, integrity, non-repudiation, audit, protective monitoring etc., the proposals are inadequate in determining how opening up personal/private data will reduce fraud rather than increase it. There is a risk of repeating the poor design of earlier central government programmes (e.g. see the lessons learned on New Tax Credits [1], which took a well-meaning but simplistic approach to simplifying a complex data issue).

In the absence of such details, these proposals could effectively become a fraudsters’ charter.

Drilldown into an Example

Let’s take just one example – that provided on p.17 – to explore how these proposals lack detail and an explicit understanding of the problem domain and the issues that need to be tackled:

[Screenshot: the example provided on p.17 of the consultation paper]

For a paper purporting to set out the major proposed policy on sharing citizens’ private data with more civil servants and more organisations, it is notably silent on the detail: the processes to be applied to protecting data, whether it is attributes that are being confirmed, how users are authenticated, how audit happens, and so on.

The questions lack a meaningful context

The weaknesses above undermine the questions posed in the paper, since they lack an objective basis against which they can be assessed and answered. Let’s take an example, that of Question 8:

“Should a government department be able to access birth details electronically for the purpose of providing a public service, e.g. an application for child benefit?” (p.17)

Of course government services should be more efficient and online and seamless and painless: no-one argues with that. But this is not the issue here: it is unclear how anyone will be able to answer this question with any meaning given the absence of any description of how the system would work. Details missing include:

  • how will a civil servant in the “government department” identify themselves as a person with a legitimate interest in the birth details and with an appropriate level of clearance?
  • how will their access be monitored and how will it be audited – and will this be real-time protection or retrospective? (particularly important if someone is accessing an at risk individual’s personal data)
  • will civil servants be able to trawl all records or only the specific one related to the event on which they are currently working, and how will such mapping/matching happen?
  • how will the data be “shared”? Will it be copied to their system, will they get access to the full record, will they merely view the record on the system where it is currently retained, or will the system merely confirm attributes (e.g. “this parent has a child and is eligible for child benefit”) without disclosing any details about the child or their data? (One possible shape for this last option is sketched after this list.)
  • how will data be secured, what levels of protection are being applied to data at rest and in motion? What levels of granularity are being applied to access controls to ensure more sensitive data is not disclosed without appropriate authority?
  • how does the civil servant prove they are acting on behalf of a legitimate parent or guardian and not participating in a potential or actual fraud and merely “fishing” for data?
  • how does the parent or guardian initiating the claim for child benefit prove who they are and prove that the child that they are asserting is theirs *is* theirs?
  • how is the data of those most at risk going to be “shared” whilst ensuring preservation of security and without tell-tale flags that in turn reveal that a sensitive record has been “hidden”?
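In contrast to the paper’s vagueness, here’s a deliberately simple, hypothetical sketch of the last option in the list above: an endpoint that confirms a single attribute and writes an audit record for every access, rather than handing over the underlying record. Every name and rule in it is invented for illustration:

```python
import json
import time

AUDIT_LOG = []   # in production: append-only and independently monitored
AUTHORISED = {"caseworker-17": {"confirm_child_benefit_eligibility"}}
RECORDS = {"claimant-42": {"has_child": True, "eligible": True}}  # invented

def confirm_eligibility(caller_id, claimant_id):
    """Answer one yes/no question; log who asked, about whom, and when."""
    allowed = "confirm_child_benefit_eligibility" in AUTHORISED.get(caller_id, set())
    AUDIT_LOG.append(json.dumps({
        "time": time.time(),
        "caller": caller_id,
        "subject": claimant_id,
        "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError("caller not authorised for this attribute")
    record = RECORDS[claimant_id]
    return record["has_child"] and record["eligible"]  # a flag, not the record

print(confirm_eligibility("caseworker-17", "claimant-42"))  # True
print(AUDIT_LOG[-1])  # who accessed what, and when
```

Even a toy like this has to take a position on authorisation, audit and minimal disclosure – which is precisely the level of detail the paper never reaches.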

In the absence of a definition of how this will work, how can the questions asked in this paper be answered with any credibility or meaning? Without such detail, the potential for more widespread and automated fraud, and for compromising at-risk people such as vulnerable children, will be compounded.

Summary

We all need government to become smarter in the way it works, and to play a more positive role in our economy. Better use of data must play an essential role in making this happen. The problem is that this paper provides inadequate detail about basic, fundamental areas (such as security, privacy, accountability) – or indeed any early proof points – that will determine the success of the systems it proposes to put into place.

Without these details being clearly defined, in either the paper or the draft illustrative clauses, the proposals to “data share” will expand the pool of people and organisations able to access citizens’ personal data. In an increasingly digital economy, expanding access to useful personal data is more likely to increase the risk of fraud, not reduce it. There are smarter ways of tackling these problems – from improved service design to technical measures to protect data whilst enabling it to inform decision-making.

Disappointingly, they are inadequately covered by this paper.

[1] See for example “Online tax credit system closed” and https://www.nao.org.uk/press-releases/hm-revenue-customs-2005-06-accounts-the-comptroller-and-auditor-generals-standard-report-2/


UK Government’s single Website since 1994

I tweeted four screenshots a couple of days ago showing the main UK Government Websites since 1994. A few folks have asked for better-resolution versions, so here are the best I currently have (if you have better, I’m happy to share them).

I briefly covered the history of government attempts to duplicate the single Website / portal model of AOL and CompuServe in my piece Happy 20th anniversary online government.


1994 – Government Information Service


2001 – UKonline


2004 – Directgov


2012 – GOV.UK


King Canute, diffusion and the Investigatory Powers Bill


King Canute rebuking his advisors for suggesting he could hold back the waves

We can all learn something from King Canute. At least he had the humility to know, contrary to popular misconception, that he could not hold back the waves.

The same humility is absent, however, from the Investigatory Powers Bill – which seems to imply it can hold back the waves of diffusion.

Diffusion is the way in which a new innovation – such as the car, the television or the telephone – moves from a small niche market of early adopters to being a commodity used by all. It’s best known from the classic Everett Rogers curve.

Think of any new innovation or technology – from flat screen TVs to contact lenses to electric cars – and it becomes apparent how diffusion works across multiple markets, industries and organisations.
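For the quantitatively minded, that S-shaped adoption curve has a standard mathematical form. Here’s an illustrative Python sketch of the Bass diffusion model – the coefficients are textbook-style examples, not estimates for any particular technology:

```python
import math

def bass_cumulative(t, p=0.03, q=0.38):
    """Cumulative share of eventual adopters at time t (years) under the
    Bass diffusion model; p (innovation) and q (imitation) are illustrative."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

for year in (0, 5, 10, 15, 20):
    print(f"year {year:2d}: {bass_cumulative(year):5.0%} of eventual adopters")
```

On these illustrative numbers, adoption idles for years and then sweeps from roughly a third of the market to nearly all of it within a decade – the dynamic the rest of this post argues the Bill’s authors have ignored.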

Let’s consider how diffusion will impact the IP Bill by taking a look at encryption.

Computer encryption was initially limited to those with the compute power and expertise to use it – such as the intelligence agencies. Early work on public-key cryptography was kept secret. But as with any innovation it was only a matter of time before what was available to a very limited and select few followed the diffusion curve. Encryption moved out of its niche market and into the mainstream.

The impact of this process has been good for us all – better security for our online financial and commercial transactions, and better security for devices such as laptops and mobile phones. Successive waves of technical innovation have provided the intelligence agencies with short-term advantage. But over the longer term, those advantages flow out and diffuse to us all.

There’s a big downside too, of course. This same pattern of diffusion happens in less helpful ways – such as criminal hacking.

At one time hacking was limited to those with in-depth technical capabilities. Now it is increasingly commoditised: today someone with no real technical understanding can download and run automated hacking scripts, launching potentially damaging criminal attacks. What was once niche and specialist has become mainstream.

And this is where diffusion and the IP Bill clash. Big-time. Here’s why.

The Bill talks about being able to demand “the removal of electronic protection applied by a relevant operator to any communication or data”. The Bill also seeks other significant powers, such as making it legally permissible to remotely hack computers.

So let’s assume the Bill passes. Someone creates a way to “remove electronic protection” from any communication or data. Hacking tools, too, are created and exploited so that computers can be remotely compromised and their contents accessed. So far, so good – just what the Bill’s authors wanted. Trebles all round.

Ah. But we haven’t yet considered the impact of diffusion. Unfortunately what starts today as a specialist way of compromising security and enabling remote hacking will tomorrow become a commodity, available to all. A universal way to remove “electronic protection” from every device, communication or data.

It’s hard to believe anyone considers this a good idea. Consumers will no longer be able to trust their devices or online financial and commercial transactions, or businesses their mission critical information systems. Without trust, our online commerce and financial environment will fail. Worst of all, the intelligence and law enforcement communities will find their own operations and security progressively, and fatally, compromised.

The IP Bill in its current form will lead to the very opposite outcome to the one its authors foresee.

More time is needed to get the Bill right. The wrong decisions now would prove devastating. Not just to our trust in technology – but to our personal and national security.

Just as King Canute accepted that he could not hold back the waves, the civil servants authoring the IP Bill need to recognise that they can’t hold back diffusion.


security, privacy and the Internet of Things

I had the pleasure recently of opening a Cambridge Union Society debate on the topic of “This House Fears The Large Scale Collection Of Personal Data“. This theme is partly what inspired my CIO column on “The Internet of Thieves“. The issue of enterprise and Internet security (or, more usually in my experience, the lack of it) has occupied much of my career – and unfortunately seems likely to continue to occupy much of the rest of it!


Jerry Fishenden proposing the motion at the Cambridge Union Society 2016.
Photo © Chris Williamson 2016.

Alongside me proposing the motion was Heather Brooke, the investigative journalist and freedom of information campaigner. Opposing us were solicitor and academic Professor Christopher Millard and journalist Edward Lucas. Both sides of the debate were complemented by a student speaker: supporting the proposition was Katherine Dunbar, student and competitive debater; opposing it was Katie Heard, Durham student and competitive debater – both of whom demonstrated expert familiarity with the format and a commendable ability to read up on the topic in a remarkably short time.


Edward Lucas and Katie Heard consider their response, while Professor Millard looks on.
Photo © Chris Williamson 2016.

The formality of the setting and the format of the debate seemed a very long way from the “debates” we used to have at my old south London comprehensive. It made me realise how rarely I engage in formal debate – most conference events consist of “panel discussions” and tedious broadcast-mode slideware instead.

The core of my opening proposition was that far too many digital businesses rest on a profit model centred on the relentless commercial exploitation of our personal data. Some of our personal data we may of course share voluntarily with others in return for a benefit – store loyalty discounts, for example. But a great deal is taken, analysed, manipulated, sold and exploited without our consent – often indeed without even our knowledge.


Professor Christopher Millard makes his case for opposing the motion. 
Photo © Chris Williamson 2016.

Not so long ago it was a unique, unpleasant characteristic of totalitarian states that citizens were permitted no secrets – no private, personal spaces. No freedom. We pointed wagging, righteous, critical fingers at such regimes.

So it’s ironic that a worryingly similar invasion of nearly every aspect of our personal lives has now been adopted as the routine, prevalent, business model of many Western companies and even, shamefully, some of our governments.

Let’s think about this another way. What would we make of somebody we discovered rummaging daily through our dustbins to examine our discarded letters, beer bottles and food packaging? Of somebody who stalks a few paces behind us everywhere we go to observe who we meet and to eavesdrop on and record our conversations?

We would, I think, regard such an individual as perverted. Possibly even insane. Certainly not someone you’d invite to your birthday party – somebody probably best subjected to a restraining order.


The debate in full flow at the Cambridge Union Society. 
Photo © Chris Williamson 2016.

And now imagine this person also randomly and obsessively runs up to us from time to time and shouts “I think you might want to buy this car!”, or “Are you looking for a new house?” or “You’re drinking too much!”.

Yet this invasive and obsessive behaviour is precisely how our technology behaves. Every second of every day. We should regard this use of technology – to trawl, monitor, gather and mine our personal data – as no less perverse.

Instead of using new technology to partner with us as equals to our mutual benefit, far too many organisations are obsessed with fleecing us of our personal data for short-term gain, without any regard for the consequences. All in the vainglorious hope that it will provide them with the power of precognition, the ability to understand us better than we understand ourselves – in order to take even more money from us.


Heather Brooke debating in favour of the proposition: “I believe in privacy for the private citizen going about their private business, and transparency for the public official making policy decisions which affect us all.”
Photo © Chris Williamson 2016.

“But but but!”, my critics will counter – I am concerned for no good reason. We should all just enjoy the benefits bestowed on us by this large-scale collection and use of our personal data. Where’s the harm?

Well, in response, consider the expert advice given by those who safeguard our critical national infrastructure. They warn of the grave risks of aggregating bulk personal data – creating a pool of valuable information that will be targeted, exploited and abused by everyone from foreign hostile powers to opportunist hackers.

We should heed such warnings.


Katie Heard opposing the motion.
Photo © Chris Williamson 2016.

If our bulk personal data is collected it will, without any doubt, sooner or later flow into the hands of whoever wants it. Whether by accident or design. So what? “Nothing to hide, nothing to fear.” Isn’t that what we keep being told? The same self-serving line trotted out by those totalitarian governments we once rightly criticised.

In any case, try parroting that nonsense phrase to a battered spouse, abused child, whistle-blower, informant, witness to a serious crime, journalist source, barrister and their client, or undercover law enforcement official. Do you really think they have “nothing to hide”? Of course they do, and for very good reason – this is part of the reality I was describing in my column “Securing digital public services“. Access to personal data can, quite literally, become a matter of life and death.

This abuse of our personal data threatens us all in other ways too. It undermines our everyday security. What’s the point after all of protecting an online financial account with “secret” details of your first car, favourite colour and memorable place when those very same details are being Hoovered up and sprayed around the world?


Katherine Dunbar argues passionately in favour of the motion.
Photo © Chris Williamson 2016.

The irony is that all of this sucking up of our personal data isn’t even necessary: it’s the by-product of a badly broken and ill-conceived business model. How much simpler it would be if we had better business models, ones designed to enable and secure the Internet age. Empowering technology that lets us maintain and control our own personal data, and choose with whom we wish to share it.

What a terribly brilliant, but dangerous idea that is. Rather like democracy itself. Yet we urgently need to adopt this type of imaginative new approach if we are going to end the toxic legacy of analogue thinking in the digital age. The intrusive and dangerous large scale collection of our personal data needs to end, whether by businesses or ­governments. Our democratic right to safeguard and control our own personal data must be strengthened.

Until this happens, we must do everything in our power to protect our data – by using ad blockers, virtual private networks, cookie wipers, onion routing, end-to-end encryption. Whatever it takes to keep our data, and us, secure.


Edward Lucas makes the closing case for voting against the proposition.
Photo © Chris Williamson 2016.

The large scale collection of our personal data must not be seen as some sort of ransom or blackmail we have to pay in order to enjoy the benefits of our digital age: quite the opposite, in fact. I supported the Society’s proposition that we should fear the current abuse of our personal data – because it has become the biggest risk to this emerging, amazing, exciting, digital age.

Reflecting on the debate afterwards, I think there was little significant distinction between the two sides – the underlying consensus seemed to be that we should all have better control over our own personal data. You can’t have security without privacy, and vice versa.


… relaxing after the debate. 
Photo © Chris Williamson 2016.

The post-debate press release from the Society provides a high-level summary of the debate, as does the short, edited highlights video below (I understand the “Director’s cut” full version will be available at the end of this academic term). This is an important topic that needs much more discussion and understanding – and not just in the debating hall of the Cambridge Union Society.

