Navigating The UK’s AI Odyssey
The past couple of weeks have been a bonanza of public-sector AI news in the UK. Rather than summarize it all here, check out a blog from my colleague Enza Iannopollo and me outlining the policy decisions, partnerships, unsigned global accords, and not-so-subtle departmental name changes that UK-dot-gov has embarked upon.
A Third Way: The UK Government Wants To Fuel Public-Sector AI Innovation
The underlying signal in the recent announcements is clear: The UK government wants to fuel AI innovation. Politics aside, it seeks a third path, one that treads the line between what some see as an overregulated EU and what others see as an underregulated US — bronze-helmed, sword girded at the waist, standing a-prow of the trireme as it navigates the narrow strait between two deadly dangers: the monster Scylla and the whirlpool Charybdis. You can decide which is which in this scenario.
Classics aside, this is a noble cause — perhaps the embodiment of the Brexit promise of Singapore-on-Thames. We can look to Estonia for inspiration here. But there is a key component missing from the strategy — or, perhaps, one that is there but has been dropped into the bilges of the galley while we row for victory.
Trust Is The Killer App
The Artificial Intelligence Playbook for the UK Government does a decent job of setting out the UK government’s stance on public-sector AI adoption. It presents guidelines and principles for ethics and risk, and it calls out “high-risk” and “high-impact” use cases in its aim to “help ensure that AI technologies are deployed in responsible and beneficial ways, safeguarding the security, well-being, and trust of the public we serve.”
But it fails to:
- Define what drives, or erodes, citizen trust in AI systems. The words trust and trustworthy appear 22 times (compared with risk, which appears 176 times), but the playbook falls short of telling civil servants which system features, characteristics, or behaviors create, or destroy, citizen trust.
- Anticipate divergence between private- and public-sector AI adoption. Without wider legislation governing private-sector development of AI systems, UK firms that comply with existing relevant laws such as the GDPR or the Crime and Disorder Act’s hate-speech provisions are free to develop AI however they want, potentially ethics-free. A rise in spammy, hallucinating, biased, unexplainable bots could erode citizen trust in AI, undermining the government’s own efforts to convince citizens that the (hypothetical) Driver and Vehicle Licensing Agency license renewal bot is safe.
Take A Risk-Based Approach To Building Citizen Trust In AI
Forrester’s trust framework defines seven levers of trust, among them transparency, consistency, and dependability. Those same words crop up in both the Artificial Intelligence Playbook for the UK Government and the EU’s 2019 Ethics Guidelines for Trustworthy AI. This is not a coincidence.
The EU guidance predates, and somewhat underpins, the recent EU AI Act. But the EU AI Act does something the UK guidance doesn’t: It clearly defines levels of risk, from unacceptable (e.g., social scoring or biometric profiling that infers sensitive traits such as ethnicity or sexual orientation), through high (e.g., using AI to screen X-rays to spot cancer), to minimal (e.g., AI-powered NPCs in computer games or generative AI content creation for email personalization).
We took our trust model (the seven levers) and looked at what drives or erodes UK consumer trust in AI applications at different levels of risk. We found:
- When the risk is high, empathy is the key driver of trust. Consistency is second, and transparency is third. This makes sense: We want safety-critical use cases to be safe, consistent, and explainable, right?
- When the risk is low, dependability is the key trust driver. Consistency drops right to the bottom, yet empathy and transparency remain key. Again, this makes sense: We don’t really mind if the marketing copy for that tin of beans is different each time or if the NPC in “Baldur’s Gate” says something different each time we greet them. But we do want them to be there when we need them.
Want to know more? Over the next few months, we will publish both our AI trust findings and our Government Trust Index for the UK, as well as for a number of other European countries. Keep an eye out, and in the meantime, if you are a client, book a guidance session.