Artificial Intelligence and Crime

70 Pages. Posted: 28 Jun 2019. Last revised: 12 Jul 2019.


Roderic Broadhurst

School of Regulation & Global Governance (RegNet); Australian National University (ANU) - Cybercrime Observatory

Donald Maxim

Australian National University (ANU) - Cybercrime Observatory

Paige Brown

ANU Cybercrime Observatory

Harshit Trivedi

ANU Cybercrime Observatory; Australian National University

Joy Wang

Australian National University (ANU)

Date Written: June 21, 2019

Abstract

This review of the rapidly growing literature on the impact of artificial intelligence (AI) on crime and crime prevention outlines some of the likely limits and potential of AI applications as a crime enabler or crime fighter. In short, our focus is on the criminogenic potential as well as the preventative role of AI.

Investment and interest in developing the machine learning (ML) technologies that underpin AI capabilities across industry, civil and military applications have grown exponentially in the past decade. This investment in AI research and development has boosted innovation and productivity and intensified competition between states in the race to gain technological advantage. The scale of data acquisition and aggregation essential for 'training' effective AI via ML processes also poses broader risks to privacy and fairness as the technology shifts from a niche to a general-purpose technology. The potential weaponisation of AI applications such as computational marketing, 'deep fake' images or news, and enhanced surveillance, however, poses pressing challenges to democracies and human rights. Poorly implemented ethical, accountability and regulatory measures will also provide opportunities for criminal activity, as well as the risk of accidents, unreliability and other unintended consequences.

AI is generally described as weak, medium or strong, and it is widely understood that the AI currently in use can be classified as weak. This is because the underlying iterative ML processes are based on domain-specific or limited 'training' particular to the problem or activity addressed, and thus offer only narrow or highly focused AI applications. Five major functional areas have stimulated the civil development of AI (a minimal sketch of such narrow, domain-specific training follows the list below):

● Voice and Speech Recognition

● Computer Vision

● Extended Reality (XR)

● Game Playing

● Natural Language Processing (NLP)
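To make the notion of 'narrow' or weak AI concrete, the following is a minimal sketch in Python (assuming scikit-learn is installed) of a classifier whose entire 'intelligence' derives from a small, domain-specific training set. The example texts and labels are invented for illustration and are not drawn from the review.

# A minimal sketch of 'weak' (narrow) AI: a text classifier trained on a
# small, domain-specific dataset. All example data here is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Domain-specific 'training' data: the model learns only this narrow task.
texts = ["win a free prize now", "urgent: verify your account",
         "meeting moved to 3pm", "lunch on friday?"]
labels = [1, 1, 0, 0]  # 1 = phishing/spam, 0 = benign (hypothetical labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The classifier is useful only within its training domain; it has no
# general intelligence and degrades silently on out-of-domain inputs.
print(model.predict(["claim your free account prize"]))  # likely [1]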

We explore some of the concerns about AI as currently realised and argue that automated decision-making technology should not be relied upon as a sole decision-making tool. Scepticism is currently necessary when evaluating an AI's performance. The United States White House's Office of Science and Technology Policy has warned of the potential threat that AI poses to privacy, civil rights, and individual freedoms through the "potential of encoding discrimination in automated decisions" (Executive Office of the US President, 2016, p. 45).
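As one illustration of why such scepticism is warranted, the sketch below shows a simple audit for encoded discrimination: comparing an automated decision's selection rates across a protected group, a rough check sometimes called demographic parity. The decisions and group labels are hypothetical, not taken from the review.

# A minimal sketch of auditing an automated decision for encoded
# discrimination: compare selection rates across a protected attribute.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # 1 = approved (hypothetical)
group     = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

for g in np.unique(group):
    rate = decisions[group == g].mean()
    print(f"group {g}: selection rate = {rate:.2f}")
# A large gap between groups is a red flag that the model (or its
# training data) may encode discrimination and needs human review.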

We describe the various ways in which AI can be used as a defensive or an offensive tool with respect to cybercrime, and evaluate some of the opportunities and risks that arise when AI is employed either to defend computer systems and detect attacks against them, or to attack computer systems. We also summarise the legal and regulatory dimensions of AI and discuss the extent to which existing laws are sufficiently technologically neutral, or 'future proof', to address the use and potential 'misconduct' or misuse of AI in decision-making.
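On the defensive side, one common pattern is unsupervised anomaly detection over network activity. The sketch below (Python, assuming scikit-learn) flags connections that deviate from a learned baseline; the feature choices and values are hypothetical assumptions for illustration only.

# A minimal sketch of AI as a defensive tool: anomaly detection
# flagging unusual network activity against a learned baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per connection: [packets/sec, bytes/packet]
normal_traffic = rng.normal(loc=[50, 500], scale=[5, 50], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)  # learn what 'normal' looks like

suspect = np.array([[400, 60]])  # e.g. a flood of tiny packets
print(detector.predict(suspect))  # -1 = flagged as anomalous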

AI applications will also be used to enhance many existing surveillance systems, incorporating and aggregating data from different sources, including fusing individual biometric data with expenditure and activity transaction records to create 'whole of life' behavioural classifiers. PR China's newly piloted social credit system is one such system: it 'scores' an individual citizen's conduct (e.g. by rating the 'social worthiness' of consumption and activities) so as to prioritise access to state-controlled activities or services (e.g. state offices/jobs, travel, educational or medical services). The ubiquitous melding of data to classify conduct and rate individuals, driven by AI algorithms, may emerge as the key method of the new surveillance state or corporation. In this sense, AI may also evolve as a self-preserving autonomous system. These new forms of total surveillance will also be a key target for manipulation by criminal actors and others.
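The data-fusion pattern described above can be shown schematically. The sketch below is a deliberately simplified, hypothetical aggregation of per-source ratings into a single behavioural score; the sources, fields and weights are invented and do not represent any real system's data or method.

# A minimal sketch of the data-fusion pattern: records from separate
# sources are joined per individual and reduced to one 'score'.
purchases = {"alice": 0.8, "bob": 0.3}   # hypothetical spending ratings
activity  = {"alice": 0.6, "bob": 0.9}   # hypothetical activity ratings

WEIGHTS = {"purchases": 0.5, "activity": 0.5}

def fuse_score(person: str) -> float:
    # Weighted aggregation of per-source ratings into one behavioural score.
    return (WEIGHTS["purchases"] * purchases[person]
            + WEIGHTS["activity"] * activity[person])

for person in purchases:
    print(person, round(fuse_score(person), 2))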

Keywords: artificial intelligence, cybercrime, crime prevention, ethics

Suggested Citation

Broadhurst, Roderic and Maxim, Donald and Brown, Paige and Trivedi, Harshit and Wang, Joy, Artificial Intelligence and Crime (June 21, 2019). Available at SSRN: https://ssrn.com/abstract=3407779 or http://dx.doi.org/10.2139/ssrn.3407779

