A Foundry of Artificial Intelligence? The Case of UK National Security

In Romaniuk, S. N., and Manjikian, M. (Eds.), Routledge Companion to Artificial Intelligence and National Security Policy. Routledge. Forthcoming.

21 Pages Posted: 17 May 2022

Date Written: May 4, 2022


The United Kingdom, like many other states across the world, is undergoing major structural changes to the way that it negotiates national security through the application of a broad range of computational techniques known as artificial intelligence (AI). This has led to a diverse, and often complex, arrangement of organisations, aims, and priorities across the UK Government and its interactions with industry and academia. Despite claims that the UK – and other allies – are falling behind in an ‘AI race’ (Warrell, 2021), only recently has there been a sustained approach to addressing the implications of AI for national security. In 2021, a range of official publications sought to address this relative lack of attention, from a National AI Strategy (Department for Digital, Culture, Media & Sport, 2021), to an Integrated Review of Security, Defence, Development and Foreign Policy (HM Government, 2021), as well as a forthcoming AI defence strategy. This reflects similar developments in the United States, where an independent National Security Commission on Artificial Intelligence (2021) released a report detailing similar concerns to those raised in the UK over autonomy, talent, innovation, standards, and more.

Within defence, an AI centre has been proposed to draw together expertise from across the Ministry of Defence on digital transformation under the umbrella of The Foundry (Nicholls, 2021). Although comprehensive public strategic thinking on national security has been limited, such recent developments reflect substantive work to address AI: ranging from operational military applications to domestic defence, through to greater attention to ‘sovereign’ capabilities and technologies, the development of centres of expertise, and broader economic incentives and levers. These point towards the UK formalising and organising a distributed array of actors and responsibilities for AI and national security. This chapter therefore sketches out a diverse and evolving ecology of institutions and practices that deal with often contested, and imprecise, definitions of ‘AI’ and ‘national security’.

The scope of national security is not stable, evolving and transforming over time as it is used to respond to threats. These may range from the practices of different institutions, such as the police and the military, to policies that enhance economic protection, promote technological ‘sovereignty’, and protect citizens. Within the UK, national security refers primarily to those acts reserved to the Government. However, there is explicit acknowledgement that security is today performed by a greater variety of actors: private endpoint detection products, industry providing algorithms and other computational technologies, big data analysis on private clouds, the setting of technical standards, as well as the broader development of policy, such as in specialist think tanks like the Royal United Services Institute (RUSI). Indeed, a ‘national’ security occludes much of the personal, social, and embodied relations to security. For example, AI can draw on personal attributes for facial recognition and profiling, and on what we consume on social media for disinformation campaigns.

AI is also difficult to define rigorously, a difficulty that the UK Government itself acknowledges. This chapter understands AI as a process to enable some form of ‘problem-solving’ through computational means. This incorporates a broad range of algorithms, including contemporary ‘deep’ machine learning applications. In doing so, it aims to capture the vibrancy of different interpretations of AI while avoiding the contentious distinction between automated and autonomous systems. There is great fluidity between these systems: the former can be summarised as broadly following written rules, while the latter have an ability to reason on indirect rules. However, this distinction and its outputs, especially with regard to notions of control through the ‘human in the loop’, are hotly debated (Schwarz, 2021). Thus, the chapter explores the full range of AI systems to capture a variety of interpretations, which may or may not align with the UK Government’s position.

In the UK, as elsewhere, national security has increasingly employed computational methods that have transitioned from deductive, rules-based approaches to inductive methods that facilitate defence through pre-emption, precaution, and probability (Amoore, 2013; Hammerstad & Boas, 2014). AI has been crucial in fulfilling a desire to expand the scope of national defence, as it processes data and delivers ‘insight’ into a growing number of objects and subjects of interest. AI does not sit outside of transformations in how national security has been performed but is integral to them. With strategic thinking by the UK Government to address national security through a ‘whole-of-society’ approach, it is imperative to address how this potentially expands what becomes included. This is particularly so as persistent engagement of adversaries is a central theme of the approach as the UK tackles what it sees as a changing international environment, particularly with regard to ‘systemic competitors’ like China and Russia (HM Government, 2021; Ministry of Defence, 2021a). Such persistence is thought to come, at least partially, through AI in defensive and offensive capabilities, ranging from cybersecurity to robotics and new forms of military technology, as investment, especially by China, is perceived to challenge the UK’s position in international military affairs.

This chapter then proceeds along two interlocking threads: first, what the UK is currently doing in the space of AI; and second, how this is integrated with national security, and with what implications. The analysis is conducted through a reading of recent policy documents released by the UK Government and the broader security policy ecosystem. It is structured in the following sections: 1) addressing a fracturing order of conventional UK national security interests; 2) how the UK articulates what the role of AI in national security will be; 3) how the UK is organising its national security through a ‘whole-of-society’ approach, along with three examples of the application of AI: a defence centre for AI (The Foundry), the Defence and Security Accelerator, and the Active Cyber Defence programme; before 4) concluding that the UK is in the midst of rapid adoption of AI technologies, with a fractured landscape of actors and priorities that has only recently been addressed for national security.

Suggested Citation

Dwyer, Andrew, A Foundry of Artificial Intelligence? The Case of UK National Security (May 4, 2022). In Romaniuk, S. N., and Manjikian, M. (Eds.), Routledge Companion to Artificial Intelligence and National Security Policy. Routledge. Forthcoming. Available at SSRN: https://ssrn.com/abstract=4100035 or http://dx.doi.org/10.2139/ssrn.4100035

Andrew Dwyer (Contact Author)

Durham University ( email )

Durham, DH1 3HY
United Kingdom

HOME PAGE: https://acdwyer.co.uk

