Last week, the UK government published its Artificial Intelligence (AI) rulebook, which sets out plans for regulating the technology on domestic soil.
As well as offering insight into the opportunities the technology could hold for UK-based organisations, the guide features a number of principles to which it hopes businesses will adhere in order to protect consumers.
Taking the opposite stance to the EU is certainly an intriguing move from the UK government. In many ways, it’s another post-Brexit artefact. But rather than focusing on the UK being the perpetual contrarian in matters relating to our continental cousins, I’m interested in the differences between the UK’s “pro innovation regulatory framework” and the EU’s draft AI act.
To Centralise or Decentralise… That Is the Question
In a nutshell, it’s the perennial question: do we centralise or decentralise? On one hand, centralised regulation is intuitively “safer” and should enforce fairness, explainability and the like. On the other, it could stifle innovation, and in AI some of that innovation can save, and is saving, lives, especially in healthcare. One could argue that stifling innovation in AI is itself unethical.
The EU’s AIA (AI Act) will regulate through a single agency and introduce concrete legislation on a quickly growing and fast-moving field of computing. The UK’s approach, by contrast, aims to preserve innovation by giving considerable licence to both the industry and regulators – regulating usage in context rather than the technology itself. This risk-based approach encourages the industry to focus on the big challenges we face as a society, rather than erring on the side of excessive caution.
Offering Principles and Guidance
The six principles outlined in the UK paper are, in my opinion, solid from an AI ethics point of view, as well as being generic enough to be useful to all sectors:
- Ensure that AI is used safely – AI should do no harm, and in the healthcare and physical infrastructure sector, this is a prime directive.
- Ensure that AI is technically secure and functions as designed – Obviously any personal data should be secure and preserve the CIA triad (confidentiality, integrity and availability), but beyond that, the training data and methods themselves should also be secure and reliable.
- Make sure that AI is appropriately transparent and explainable – This is an oft-overlooked element of AI, being willing and able to clearly show how a decision has been made.
- Consider fairness – This feels like the weak link in the rulebook to me, at least in its wording, as bias has been shown to be a problem in AI implementation.
- Identify a legal person to be responsible for AI – This is fairly consistent with things like the GDPR, and could be an extension of a more traditional Data Protection Officer.
- Clarify routes to redress or contestability – As with existing data legislation, having a clear route to request data surrounding a decision and then contesting it will be key.
If the preceding principles are implemented well, this last one should be among the simpler to follow.
The above are reasonable, but principles are not hard rules, so it remains to be seen how many organisations subscribe to them. There is also the question of how organisations will show their workings, and how the governing bodies will “interpret and implement the principles”.
We already know that the UK regulators are a heterogeneous bunch, including Ofcom, the CMA (Competition and Markets Authority), the ICO (Information Commissioner’s Office) and the FCA (Financial Conduct Authority), so approaches and implementation pace may differ.
One possible outcome is that other legislation will move quickly enough to guard against breaches of these principles (e.g., the GDPR mandates a “right to explanation”). However, such provisions are only invoked once a problem has already occurred, and they are contextual to usage, so there will inevitably be people exploiting gaps in coverage. This is certainly one aspect that needs additional consideration, in my opinion.
‘Shifting Left’ with the EU’s Draft AI Act
Although the EU’s draft AIA has itself been criticised, both for its rigidity and for its limitations and loopholes, it does attempt to ‘Shift Left’ the legal and ethical concerns around the applications of AI.
In reality, technology legislation necessarily lags behind the technology itself. The interesting part for me is the notion of shifting left, a mantra in much of software development, meaning that assurance (not only of the software itself, but of the approach and its viability) should be performed as early as possible in the software life cycle.
Taking this approach to AI means unsafe and unethical approaches can be discarded before significant investment has taken place.
I wonder if, in our ‘Brexit seizing moment’ and attempts to preserve innovation through a risk-based approach, we will miss these important ‘Shift Left’ moments? Coordination and collaboration between the regulatory bodies will likely be the key to making this framework usable and clear to implementers. So, I hope that these regulators will build in elements of Shift Left into their guidance to preserve public trust and reduce uncertainty.
UK as AI Test-Bed?
What does look likely, though, is that with less onerous regulation, the UK could become a test-bed for AI services and techniques. For better or worse, if current estimates are correct and the UK’s AI sector swells to £745 billion by 2035 (up from £15.6 billion in 2020), we might see significant investment in AI across multiple sectors within our shores, and we might attract leaders in this field, too.
The opportunity to become a leader in one of the most exciting fields in tech is undoubtedly alluring, but I hope it doesn’t come at the cost of exposing people to the darker sides of AI implementation.
As both the EU AI Act and the UK AI rulebook are drafts under consultation, and some way from implementation, only time will tell which approach provides the most benefit. Doubtless both will be revised as the world of technology continues its rapid evolution.
By Jeff Watkins, Chief Product and Technology Officer at xDesign.