42 Pages · Posted: 19 Jul 2017
Date Written: May 23, 2017
Home robots will cause privacy harms. At the same time, they can provide beneficial services — as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms.
We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating what technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.
Keywords: AI, Robotics, Robots, Privacy, Technology, Law, Privacy Law
Suggested Citation:
Kaminski, Margot E., Rueben, Matthew, Grimm, Cindy, and Smart, William D., Averting Robot Eyes (May 23, 2017). Maryland Law Review, Vol. 76, p. 983, 2017. Available at SSRN: https://ssrn.com/abstract=3002576