Artificial Intelligence and Agents
NUS Centre for Technology, Robotics, Artificial Intelligence & the Law Working Paper 21/02
23 Pages · Posted: 4 Oct 2021 · Last revised: 26 Oct 2021
Date Written: October 1, 2021
With the increasing sophistication of AI and machine learning as implemented in electronic agents, arguments have been made to ascribe personality rights to such agents so that they may be treated as agents in the law. The recent decision of the Australian Federal Court in Thaler to characterize the artificial neural network system DABUS as an inventor represents a possible shift in judicial thinking towards treating electronic agents as not merely automatic but autonomous. This legal recognition has also been urged on the grounds that only by constituting electronic agents as legal agents may their human principals be bound by the agents’ actions and activities, and may a proper foundation for the human principal’s legal liability for the agent’s misfeasance be laid. This paper argues otherwise. It contends that no matter how sophisticated current electronic agents may be, they remain examples of Weak AI, exhibit no true autonomy, and cannot be constituted as legal personalities. Moreover, their characterization as legal agents is unnecessary because their actions (and misfeasance) can be legally addressed through the under-appreciated instrumentality principle. By treating electronic agents as instruments or extensions of the acts of the human principal, issues in contract and tort law can be readily resolved. This essay concludes that until a Strong AI application can be demonstrated, the question of the legal agency of electronic agents ought not to detain the development of technology or of the law in this space.
Keywords: AI, machine learning, electronic agents, Thaler, DABUS, personality, contract, tort