Trust in an algorithmic world

We have long since entered a world increasingly reliant upon software – from the software in our mobile phones and online services, to fly-by-wire aviation systems, to the specialist medical devices in our hospitals. Many processes and decisions formerly made by humans are now being made, or assisted, by digital technology.

Much of this software is enormously beneficial and improves the quality of our daily lives, enabling us to do things with a speed, efficiency and scale that simply weren’t possible before. We are inhabiting a world shaped by computer code.

Yet such code is often riddled with bugs, human errors, foibles and bias – accidental or otherwise. Even the most sophisticated software, relying upon algorithms that assess and act upon data, is subject to the same failings. Whereas in the past most software relied upon programmers who designed in advance the way such systems would work, many computer systems, devices and sensors now also incorporate machine learning, or artificial intelligence (AI) – self-learning software that is able to update the way it works in a process of continuous improvement.

But can we trust a world increasingly based upon software? It’s already a recognised problem that some algorithms discriminate. In September, the House of Commons Science and Technology Committee published its report on “Robotics and artificial intelligence” (PDF). However, I think this is too narrow a focus: our approach to all algorithms, and indeed all code-based systems, really needs to be considered. After all, some of the current hype about “deep learning” and “machine learning” has turned out to be a Wizard of Oz facade at various events I’ve attended – with the systems being showcased turning out to use fairly old-school algorithms based on processes such as frequency analysis.

Ultimately, to people impacted by poor code – and hence by the poor decisions and outcomes arising from it – it matters little whether the software is relatively “dumb” or is genuinely using AI: we need to establish trust in software.

The House of Commons report highlights the social, ethical and legal questions being raised by rapidly developing technology. In particular, the Committee flags several areas that require more work:

These include taking steps to minimise bias being accidentally built into AI systems; ensuring that the decisions they make are transparent; and instigating methods that can verify that AI technology is operating as intended and that unwanted, or unpredictable, behaviours are not produced.

The Committee report adds to a growing body of work looking at the impact of software as it becomes ever more ubiquitous and influential. The Royal Society, for example, has ongoing work exploring machine learning, which, like the House of Commons report, is considering “the opportunities and challenges of machine learning; including the social, legal and ethical challenges”.

Last week Future Advocacy, a “social enterprise working on the greatest challenges humanity faces in the 21st Century”, published a report, “An Intelligent Future”. Echoing the other work ongoing in this area, it too found that:

The fast-moving development of AI presents huge economic and social opportunities. Over the coming years AI will drive economic productivity and growth; improve public services; and enable scientific breakthroughs. It will also bring significant risks.

It’s useful that there is finally a growing awareness of the potential upside that an increasingly code-based world could bring – alongside a recognition that there are major risks of getting this wrong, with significant ethical, social, security, privacy, regulatory, economic and legal implications. But some of the ideas that have been floated – such as being able to explore the code, algorithms and AI within the “black boxes” (the systems, devices and sensors) – seem unrealistic in a world of the internet of things. And, given that AI-based systems adapt, a one-off check would have little value in an evolving ecosystem of self-learning, adaptive and interacting systems.

The scale of work involved in examining the code in black boxes would defeat attempts to regulate or control quality in this way. Quite simply, opening and peering inside such black boxes seems infeasible – although it is perhaps technically viable in one or two high-risk environments, such as health care. Even in specific domains, as the VW emissions scandal demonstrates, it can prove very difficult for regulators to ensure software is doing what they think it is doing.

A better approach might be to agree and enforce quality and security standards for software engineering, testing and deployment, together with open APIs, so that the behaviour of such black boxes can be openly observed and analysed – and, in particular, so that this can be done on a continuing basis, capturing changes in behaviour over time. This would be of far more value than a one-off inspection process. There is also the question of how such systems might be patched and corrected remotely – whilst recognising that remote access can itself increase risk, as SCADA-related incidents and the recent massive denial of service attack using compromised devices demonstrate.
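To make that idea of continuous, API-based observation a little more concrete, here is a minimal sketch of what an external monitor might look like. It assumes a hypothetical device exposing a read-only telemetry endpoint; the URL, field names, baseline and thresholds are invented for illustration, not drawn from any real product or standard.

```python
import json
import time
import urllib.request
from statistics import mean

# Hypothetical read-only telemetry endpoint exposed by a device's open API.
# The URL and field names below are illustrative assumptions only.
DEVICE_API = "https://device.example.com/api/v1/decisions"


def fetch_snapshot():
    """Fetch the device's latest decision statistics as a dict."""
    with urllib.request.urlopen(DEVICE_API, timeout=10) as response:
        return json.load(response)


def monitor(baseline_rate, tolerance=0.05, interval_seconds=3600):
    """Poll the open API and flag drift away from an agreed baseline behaviour."""
    history = []
    while True:
        snapshot = fetch_snapshot()
        history.append(snapshot["approval_rate"])  # e.g. share of requests approved
        # Compare recent behaviour (last 24 samples) against the agreed baseline.
        drift = abs(mean(history[-24:]) - baseline_rate)
        if drift > tolerance:
            print(f"ALERT: behaviour has drifted {drift:.3f} from the agreed baseline")
        time.sleep(interval_seconds)


if __name__ == "__main__":
    # Baseline agreed when the system was assessed; monitoring then runs continuously.
    monitor(baseline_rate=0.62)
```

The point is not the particular statistic being watched, but that an agreed, externally observable interface lets regulators or auditors track behaviour continuously – and spot drift in self-learning systems – rather than relying on a single inspection of the code inside the box.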

We need to focus on trust in computer systems and trust in algorithms generally, of which AI and machine learning systems, sensors and devices are but a subset. The same principles should ideally be applied across all systems, to ensure their behaviours are as expected (and that they do not discriminate or exhibit bias).

But I’m getting ahead of myself. We first need some co-ordinated work in this area to better define the problem and evaluate the best options, including where we need to make better-directed research investments to ensure we can benefit from the upside of ever more ubiquitous software without suffering from its downside.

This area is of sufficient importance that it needs to be co-ordinated by government, ideally on a cross-party basis. But it needs the right balance of people and organisations involved – and should not be run by generalists with no domain knowledge, nor left to a niche of narrow domain specialists too abstracted from the wider social concerns.

Making headway in this complex area requires a small core team able to run a genuinely open exercise and to build and sustain public trust. This work should be open to all citizens to provide input, and should encourage the active participation of civil society groups, charities, business, scientific, academic and technical specialists, the Information Commissioner’s Office, the National Cyber Security Centre and others.

This group needs to make significant, timely progress and end the cycle of useful, but unco-ordinated, reports and concerns by evaluating, proposing and agreeing principles and approaches that will help guard against the downside of poor software, whilst still enabling innovation to thrive.

This important strand needs to be clearly defined, and included in the UK’s strategy for delivering a successful, sustainable and trustworthy digital economy.
