This Week In AI, UK Edition: Much Ado About Not That Much
The UK continues to pursue its ambition to become the “geographical home of global AI safety regulation.” In just the last few days, the UK government published new guidelines for secure AI system development, and the new UK Artificial Intelligence (Regulation) Bill reached its second reading in the House of Lords. But this is likely to add up to less practical impact than you might think. Here’s why.
The Guidelines For Secure AI Systems Focus Narrowly On AI Security
Last week, the UK government boldly announced what it bills as the first global set of guidelines for AI security. The guidelines, a joint effort between the UK and the US, won the support of another 15 countries, including Australia, Japan, Nigeria, and several EU member states. Compared with other guidance, such as the G7 AI guidelines, they are better structured, easier to digest, and more focused on an explicit, specific outcome: security. But they remain a high-level, voluntary tool. Organizations, which manage a constant tension between profit and corporate responsibility when it comes to AI, can choose to adopt these guidelines, adapt them, or simply ignore them. The guidelines:
- Build on “secure by design” principles and follow the software development lifecycle. The guidelines cover four areas: secure design, secure development, secure deployment, and secure operation and maintenance. Secure development is where some of the most interesting pieces are, such as: 1) securing the supply chain; 2) carefully documenting data, models, and prompts (see the sketch after this list); and 3) managing technical debt.
- Cover ML systems only but address a broad audience. The guidelines apply specifically to machine learning applications and leave aside other approaches, such as rules-based systems. They also address providers of any systems that use artificial intelligence, by which they mean every organization that uses AI in its products, services, or customer engagement channels.
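To make the documentation advice concrete, here is a minimal sketch of what a provenance record for an ML asset might look like, assuming a simple Python dataclass. The field names and structure are illustrative assumptions on our part, not a format the guidelines prescribe:

```python
from dataclasses import dataclass, field, asdict
import hashlib
import json

@dataclass
class ModelRecord:
    """Provenance record for an ML asset: what it was trained on,
    with which prompts, so later changes are auditable."""
    name: str
    version: str
    training_data_sources: list  # where the training data came from
    base_model: str              # upstream dependency, relevant to supply-chain security
    system_prompt: str           # documented verbatim so prompt changes are visible
    known_limitations: list = field(default_factory=list)

    def fingerprint(self) -> str:
        # Hash the full record so any drift in data sources, prompts, or
        # dependencies shows up as a changed fingerprint.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Hypothetical example record for a customer-facing classifier.
record = ModelRecord(
    name="support-triage-classifier",
    version="1.2.0",
    training_data_sources=["s3://example-bucket/tickets-2023q3.parquet"],
    base_model="example-vendor/foundation-model-v2",
    system_prompt="Classify the ticket as billing, outage, or other.",
    known_limitations=["Trained on English-language tickets only"],
)
print(record.fingerprint())
```

Versioning records like this alongside the model itself is one way to address the guidelines’ documentation and technical-debt points, though teams would need to adapt the fields to their own stack.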
The UK AI Bill Aims To Orchestrate AI Regulation Enforcement
AI regulation was notable by its absence from the King’s Speech on November 7, 2023, which set out the legislative agenda for the current parliamentary session. Instead, the initiative to regulate AI comes from a private member’s bill introduced in the House of Lords that is now at its second reading. Private members’ bills rarely pass into law, and we’ll have to see how far this one progresses. On the one hand, the government is known to be reluctant to introduce any form of AI regulation; on the other, publicity and public reaction to this bill may force some action.
The bill sets out a framework aimed at enabling UK regulators to enforce existing regulations effectively on AI use cases and to produce new policies where required. Crucially, it leaves key actions and decisions to the secretary of state rather than to Parliament. If passed, it would kick off multiple downstream regulatory processes. The bill:
- Asks the secretary of state to establish an AI authority as a central point of oversight. The authority would be responsible for a wide range of activities, including providing direction and harmonization for the enforcement of existing rules, monitoring compliance and enforcement, and producing new policies and rules as needed.
- Calls for all businesses to appoint an “AI officer.” Details on the job description are unclear, but the new role would be responsible for the safe, ethical, unbiased, and nondiscriminatory use of AI and for ensuring that the data AI systems use is free from bias. While the bill proposes to insert this requirement into the Companies Act 2006, it contains no equivalent provision for government and public bodies.
- Proposes setting up regulatory sandboxes. The purpose would be to let firms innovate and experiment while ensuring that they put appropriate safeguards and consumer protections in place.
- Reiterates the importance of key AI governance principles. From bias to privacy, security, and AI protection, the bill indicates the key principles that downstream policies, guidance, and safeguards must reflect. Agreeing on the principles is a no-brainer, but how and when they will come to life remains a guessing game.