Social media is back in the news for all the usual negative reasons: hateful, racist, sexist, fraudulent and otherwise abusive users. None of which is surprising or new. For well over a decade, ideas about how best to tackle online abuse have been repeatedly explored, and then generally left to rot on the shelf.
As a result, we’ve had to endure the annual chorus of useful idiots with their tired refrains:
“Everyone online should prove who they are!”
“Nothing to hide, nothing to fear!”
Authoritarian regimes must be chuckling with Bond-villain-like delight as they enjoy the spectacle of fusty politicians and media ‘pundits’ clumsily contributing to democracy’s erosion from within — shutting down the ability of whistleblowers, critics, social activists, at-risk individuals, political opponents, satirical and humorous accounts, and a whole host of other essential voices, to be heard.
Rather than rush into clamping down on free speech, a more practical approach would surely be to first ask why we lack an effective regulator, one able to hold the social media companies to account in a timely and effective fashion. Instead, they and the wider AdTech industry seem to operate largely untouched —
“… research shows that social media companies allow bigots to keep their accounts open and their hate to remain online, even when human moderators are notified.”
(“Social media giants fail to curb 90% of antisemitism”, The Times, 02.08.2021)
So what’s the problem?
Instead of adequately resourcing and enforcing existing regulation to ensure the social media companies act responsibly, the long-standing policy challenge has been reduced to this question:
How do we find an effective way of tracking down and dealing with social media users who break the law, but without creating a dystopian dragnet that ensnares users expressing their lawful opinions — even if some may find those opinions objectionable or offensive?
A recap of various historical options
In attempting to answer this question, numerous policy and technical options have been proposed and explored since at least around 2012/2013 (probably far earlier) — and I remember revisiting many of them yet again inside Whitehall back in 2017/2018.
One of the most persistent policy ideas is that:
“Social media users should use their real identity — if everyone on social media has verified identity, then anyone who breaks the law can easily be identified”
We’ll see why that’s not such a good idea in a moment, even if social media companies may be salivating at the prospect of gaining lucrative access to our passports, driving licences, photos, biometrics and other personal data if we’re forced to prove our “real identity” to use social media.
Alongside the idea of “mandatory user verification”, an alternative approach is to let users voluntarily verify their identity if they wish. Users who opt to verify their identity could be given greater control over who can follow them: for example, they could remove the ability of anonymous users to engage with them in any way.
To understand a few of the ideas that are regularly lifted down from the policy library shelf, dusted off, and kicked around yet again, here’s a quick canter through some of them.
Mandate verified identity for all social media users
This is a bad idea on so many levels, and the antithesis of what any democracy that believes in free speech should be considering. It would end at a stroke the diverse opinions and voices essential to democracy, and silence anyone who has difficulty proving their “real identity”, or who could place themselves at risk by doing so.
In addition, the idea of trusting social media companies to access even more of our personal data, given their lax security and long-established cavalier attitude towards our personal data, would be funny if it were not so scary.
I suspect the implementation of mandatory user verification would see mainstream social media wiped out overnight, as most users rush to close their accounts rather than endure their most sensitive personal data being exploited, hacked and hijacked. Meanwhile, authoritarian regimes would gleefully point at our introduction of mandatory user verification as the perfect dystopian template for their own vindictive suppression and persecution of critics and opponents.
Let users choose whether to have their identity verified
An alternative that’s previously been considered is to forget mandatory verification but let users opt to verify their identity when they create a social media account (or to “upgrade” their account to verified user status at any time they choose to do so).
This does NOT mean that the user’s true identity would be revealed to the world. Details of the verified identity would be retained securely to help prevent malicious or accidental compromise, including by company insiders.
Voluntarily verified users could still operate multiple accounts, including ‘anonymous’ accounts
A user with a voluntarily verified identity could still present themselves online as anonymous (though more accurately described as ‘pseudonymous’).
For example, a verified user might run a spoof account, a humorous or satirical account, a whistleblower account, a political account critiquing an incompetent or corrupt government, or calling out an authoritarian regime. Alternatively, a user could opt to use the name that matches their verified identity if they have no concerns about being publicly identifiable, such as a celebrity or government Minister. The choice remains with the user.
Either way, the user behind the account(s) will have been verified regardless of whether they choose to present themselves online as ‘anonymous’ or openly identifiable. If the need ever arises, the verified user behind an account or accounts can be identified through a legal process of disclosure.
Details of verified users must be legally and technically private and secure
Details of the verified user behind an account or accounts must be kept secure, including by cryptographic means, protective monitoring and so on. They could only ever be disclosed to an appropriate legal entity under due legal process, such as a warrant from a regulator.
STOP — hold on a moment!
“Whoah! Hold on a moment. Have you gone mad, Jerry? You want us to give even MORE personal data to the AdTech bros?!”
Okay, okay, I hear you. That’s why I’m setting out here some of the ideas that have been kicking around for years — so we can have a more open debate about the options and their implications.
Giving all our secure and private identity-related information to the AdTech-crazy social media companies seems like a really bad idea to me. They have an unrivalled track record of dodgy parasitic practices and systemic abuse of personal data, combined with woefully inadequate regulation.
The danger in letting social media companies hold users’ verified identity data — including both biographical and biometric data — is that they will doubtless find ways of accessing and exploiting it for their own commercial purposes, helping fuel an increase in fraud and abuse and making online life even more hateful, bullying and crime-ridden than it is today.
When the idea of verified identity has previously been considered, so too has the role of ‘trusted third parties’. It’s seen as offering a potentially less risky approach than trusting social media companies.
Trusted third parties
Our identity-related information is likely to be far more secure if someone we already have a relationship with and trust — such as our bank, or the Post Office (… er, assuming they’re still trusted, of course, after the Horizon scandal) — vouches for our identity.
Trusted third parties could verify to a social media company that they know the true identity of a social media user, without revealing to the company who it is. That way the social media companies never get to see our personal details, but a link still exists between a user’s verified identity and the social media account(s) they operate.
Importantly, this approach means the social media company won’t know the true identity of a user, and the trusted third party won’t know which social media account(s) the verified user operates. It’s more private and secure than the absurdly naive idea that we can trust social media companies.
Only if the two pieces of information are combined — when requested under due legal process for example — will the linkage between the social media account(s) and the verified identity of the individual operating them be disclosed.
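The split-knowledge arrangement described above can be sketched in code. This is a minimal illustration only: the class and method names are invented for the example, and a real system would use proper cryptographic credentials and audited disclosure controls rather than an in-memory dictionary.

```python
import secrets

class TrustedThirdParty:
    """Verifies a user's identity and issues an opaque token.
    Holds identity -> token records; never learns which accounts the token is used for."""
    def __init__(self):
        self._identities = {}  # token -> real identity, held only here

    def issue_token(self, real_identity: str) -> str:
        token = secrets.token_hex(16)  # opaque: reveals nothing about the identity
        self._identities[token] = real_identity
        return token

    def resolve(self, token: str, warrant: bool) -> str:
        if not warrant:
            raise PermissionError("disclosure requires due legal process")
        return self._identities[token]

class Platform:
    """Stores account -> token; never sees the real identity behind the token."""
    def __init__(self):
        self._tokens = {}

    def register(self, account: str, token: str) -> None:
        self._tokens[account] = token

    def token_for(self, account: str, warrant: bool) -> str:
        if not warrant:
            raise PermissionError("disclosure requires due legal process")
        return self._tokens[account]

# Neither party alone can link "@satire_account" to "Jane Citizen":
ttp = TrustedThirdParty()
platform = Platform()
token = ttp.issue_token("Jane Citizen")      # TTP verifies Jane; learns no account names
platform.register("@satire_account", token)  # platform sees only the opaque token
# Only by combining both records, under due legal process, is the link revealed:
who = ttp.resolve(platform.token_for("@satire_account", warrant=True), warrant=True)
```

The design point is that the linkage requires both datasets plus legal authority, so neither the platform nor the third party can unilaterally unmask a user.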
Verified users should have greater control over who can interact with them
Verified social media users would have access to new, enhanced options to manage who can follow them online. These additional options would offer much higher granularity of control than those currently available — including being able to prevent anyone who has not verified their identity from following them, seeing their tweets, replying to them, quote tweeting, “@-ing” them etc.
They could, of course, still let anyone — verified or not — follow and engage with them, much as now, and continue to use existing features such as blocking or muting.
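The interaction controls described above amount to a simple per-user policy check. The sketch below is purely illustrative (the types and field names are invented), showing how a “verified followers only” setting would sit alongside existing blocking:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    handle: str
    verified: bool  # has this user voluntarily verified their identity?

@dataclass
class InteractionPolicy:
    """Hypothetical per-user setting governing who may follow, reply, quote or mention."""
    allow_unverified: bool = True         # default: behave exactly as social media does now
    blocked: set = field(default_factory=set)

def may_interact(actor: Account, target_policy: InteractionPolicy) -> bool:
    """Existing blocks still apply; the new check simply adds a verification gate."""
    if actor.handle in target_policy.blocked:
        return False
    return target_policy.allow_unverified or actor.verified
```

A user who leaves `allow_unverified` at its default keeps today’s behaviour; flipping it shuts out anonymous accounts entirely.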
Assuming they choose to only allow followers whose identity has also been verified, if illegal activities take place the victim would be able to report the verified account(s) involved. Under due legal process, the perpetrator’s true identity could be disclosed to the authorised legal entity, such as a regulator, and legal action taken against them as appropriate.
Truly anonymous users must also continue to be allowed
Whatever other ideas may be considered or implemented, it’s important that users who wish to remain anonymous and still use social media must remain free to do so, exactly as now. There must never be any compulsion to have an identity verified when creating or using a social media account. However, unverified users would no longer be able to engage with verified users who choose not to allow unverified accounts to interact with them.
Depending on your perspective, this would either leave anonymous haters and trolls shouting their vitriol and abuse into the abyss, unseen and unheard by those they once targeted; or it would deny verified users the ability to engage in open, democratic debate with anyone other than verified users.
No-one said any of this is easy.
Perception and implementation
Over the years, some of the common concerns that have been expressed about doing anything in this space include:
“Government will know who everyone is on social media, it’s the end of democracy and free speech. Chilling and further evidence of the push towards a national identity scheme and big brother state.”
It must always be left to users to decide if they want to verify their identity. There must never be any compulsion to do so. Even those users who do verify their identity must not have their true identity publicly revealed unless they choose to do so themselves. The true owner of an account would only be disclosed under due legal process, and only to the appropriate regulatory/legal entities.
“Authoritarian regimes will use these changes to track down, identify and punish dissidents, minorities, political opponents, etc”
Any user must be able to continue using an anonymous account exactly as now. No-one must be forced to verify their identity in order to use social media.
This should also include the many “self-asserted” accounts that exist, like my own. While my Twitter account is not anonymous, it’s also not subject to any “verification” — and neither should it be unless I want to do so.
We need to reach a sensible consensus on how any new initiative in this space operates under the rule of law and protects free speech. There are many, many good people on social media who bring us jaw-dropping talent and creativity, expose corruption, highlight important research, expose illegality, speak out against oppression — or indeed who just make us laugh. We should cherish them, not threaten to extinguish them.
We mustn’t lose the many positive upsides of social media in the rush to be “seen to be doing something” about the problems caused by a small but abhorrent criminal element.
Proof of “identity”
Identifying and verifying users online has long proved a challenge. To date there has been no consistent way of doing this in the UK when the need arises, particularly in a secure and privacy-enhancing way.
The emergent UK Digital Identity and Attributes Trust Framework from DCMS could help streamline and standardise a trusted, secure, private process for those users wishing to verify their identity, or something about themselves (for example, “I’m over 18”). It could be a good first test of the true value of the Framework — for users, approved identity service providers, trusted third parties, and the social media companies — to discover how well it helps tackle a thorny problem in the real world while guaranteeing users’ security and privacy.
Along with support for trusted third parties, the Framework also enables users to self-manage their own identity and attribute related information, including through the use of smartphone apps that function as secure, private “digital wallets” — confirming something about a person (“Over 21”, “Newcastle resident”) without disclosing unnecessary personal details (such as date of birth, or precise address).
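The “confirm an attribute without disclosing the underlying data” idea can be illustrated with a toy credential. This sketch uses an HMAC as a stand-in for the real digital-signature and credential schemes a framework like this would mandate; all names, the fixed date, and the key are invented for the example:

```python
import hmac, hashlib, json
from datetime import date

ISSUER_KEY = b"demo-issuer-secret"  # stand-in for a real issuer signing key

def issue_attribute(dob: date, attribute: str, threshold_age: int) -> dict:
    """The issuer checks the full date of birth, then signs only the derived claim."""
    today = date(2021, 8, 16)  # fixed so the example is reproducible
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    claim = {"attribute": attribute, "value": age >= threshold_age}
    sig = hmac.new(ISSUER_KEY, json.dumps(claim, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}  # note: the date of birth is never included

def verify_attribute(credential: dict) -> bool:
    """The relying party checks the signature; it learns only the yes/no claim."""
    expected = hmac.new(ISSUER_KEY, json.dumps(credential["claim"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["sig"]) and credential["claim"]["value"]

cred = issue_attribute(date(1990, 5, 1), "over_18", 18)
# The relying party sees only {"attribute": "over_18", "value": True} plus a signature.
```

The point is the asymmetry: the issuer sees the sensitive detail once, while every subsequent relying party sees only the minimal derived claim.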
So what’s next?
This is just a quick snapshot of some of the ideas and proposals discussed periodically in this policy area over the past decade plus.
What they illustrate is the need to protect anonymity and freedom of speech whilst improving the ability, under due legal process, to identify and deal appropriately with those who break the law.
Any proposed “solutions” must also tread carefully. They should be incremental and iterative: technical, policy and legal processes need to be refined and improved as we learn what works and what doesn’t. There’s a complex interplay here that needs to be carefully navigated and overseen by Parliament to ensure it doesn’t break essential pillars of democracy and free speech, handing governments and social media corporations data and powers that we’ll live to regret.
The government’s Online Safety Bill is already partly in this thorny policy area. Criticised by some as vague and incoherent, it plans to place a “duty of care” on the social media companies to better protect users from misinformation, abuse and hatred. It would be prudent for government to make time first to see how that plays out in practice before considering any additional measures — not least by making sure that regulation is far more effectively resourced and enforced than it has been to date.
Given how long we’ve waited to see anything meaningful happen in this area, government should take the time to get it right. After all:
Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.
(Article 19, Universal Declaration of Human Rights)
Postscript — 16th August 2021
On August 10th 2021, Twitter released its analysis of the racist abuse directed at England players during the Euro 2020 final. It stated that:
Our data suggests that ID verification would have been unlikely to prevent the abuse from happening — as of the permanently suspended accounts, 99% of account owners were identifiable.
(Twitter UK official account)
Given this, and the discussion over the past ten or more years, it’s worth stepping back and asking that basic, but often missing question:
“What problem is it we’re trying to fix?”