UK Government Targets Ethics with New AI Defence Strategy

With talk of being ‘Ambitious, Safe and Responsible’, we analyse the strategy, its ethics, the MOD’s approach and how AI technology will be used.

The UK’s Ministry of Defence (MOD) is aiming to pair modernisation with ethics in the launch of its new AI Defence Strategy.

Unveiled at London Tech Week, the strategy commits the MOD to better integrating AI-driven technology into the UK’s armed forces and enabling its use.

The strategy was accompanied by a policy statement titled ‘Ambitious, Safe and Responsible’ which outlines the MOD’s stated approach to how AI technology will be utilised.

This comes in the wake of the UK’s 2021 Integrated Review, in which the government committed to strengthening national security and modernising the armed forces. The strategy will be overseen by the 2nd Permanent Secretary and the Vice Chief of the Defence Staff.

The MOD’s document is broken down into four goals:

  • “Transform Defence into an AI ready organisation”
  • “Adopt and exploit AI at pace and scale for Defence advantage”
  • “Strengthen the UK’s defence and security ecosystem”
  • “Shape global AI developments to promote security, stability, and democratic values”

Alongside these goals, AI as a concept is defined and explained, and potential threats and use cases of the technology are laid out for readers. Plans for the establishment of the Defence AI Centre (DAIC) are also found in the strategy.

The UK first utilised AI in a military context in May 2021, during a NATO exercise called ‘Exercise Spring Storm’, in which a machine learning engine was used to process complex data relevant to military operations, such as terrain and environmental data.

The engine in question, according to the MOD, “[provided] commanders with better information during critical operations and [transferred] the cognitive burden of processing data from a human to a machine”.

AI has grown significantly over the past 10-15 years, both in business and consumer use and in popular relevance. It’s only natural that the MOD would want to modernise its various agencies and branches, both to better utilise AI and to counter its use by other nations.

It’s not merely fluff either. The strategy contains several examples of practical applications for AI in military capacities, such as SAPIENT (Sensing for Asset Protection with Integrated Electronic Networked Technology), a network of AI-integrated sensors used for base protection and security.


The issue, arguably, lies with ethics. The accompanying ‘Ambitious, Safe and Responsible’ policy document outlines ways for the ministry to use AI ethically.

While associating ethics with the military at all is a bit of an oxymoron, the MOD does note that it worked with the Centre for Data Ethics and Innovation (CDEI) while creating the policy. It has also convened an AI Ethics Advisory Panel to help guide 2nd Permanent Secretary for Defence Laurence Lee as he enacts the AI Defence Strategy across the department.

However, like so many ethics boards and advisory bodies, the panel explicitly has “no formal decision-making powers”. And while the policy document states that the MOD is bound by “existing obligations under UK law and international law, including as applicable international humanitarian law (IHL) and international human rights law”, are those laws enough?

Lawmakers in many countries in the 21st century have been slow to adapt to the technologies and advancements that have rapidly altered society, most notably the Internet.

While the EU’s GDPR and the UK’s post-Brexit equivalent provide robust data protections, GDPR did not come into effect until 2018. As of this writing, no AI-specific laws or regulations exist in the UK at all.

While it’s commendable of the MOD to make ethics such a focus in its policy document, it would be foolish to take any major world military at its word. The UK government, notably, was accused of covering up war crimes as recently as 2019.

The Overseas Operations Bill, essentially an attempt to shield British soldiers from prosecution for war crimes committed overseas, was brought to Parliament in 2020 and was still being discussed and amended as recently as July 2021.

While the government eventually did exclude war crimes from the Bill’s safeguards, it did so only after significant backlash from organisations like Human Rights Watch and politicians like former Defence Secretary Lord Robertson.

With actions and allegations like these so recent in the government’s history, the MOD will need to back up its talk of ethics and responsible AI use with deeds before trust can even be considered.

Zephin Livingston
Zephin Livingston is a content writer for eWeek, eWeek UK, IT Business Edge, and SoftwarePundit with years of experience in multiple fields including cybersecurity, tech, cultural criticism, and media literacy. They're currently based out of Seattle.