Animal Expectations: Intelligible Classification of Self-Driving Cars
Posted: 26 Aug 2018
Date Written: August 14, 2018
Abstract
One of the main challenges with self-driving cars is the set of outlandish human expectations associated with them. Part of this challenge stems from the mental models evoked by machine “learning” and artificial “intelligence”, which are typically associated with human learning and intelligence. Another key challenge relates to the way in which these systems are marketed: sold as ‘self-driving systems’ or ‘autopilots’, labels that misrepresent the actual technical capacities of the systems and particularly their ability to function autonomously. To consumers, saying these systems are ‘Level 2’ self-driving systems is meaningless. We therefore propose a mandatory classification scheme based on the animal world to set appropriate expectations of self-driving vehicles. In this scheme, each level of automation would be associated with a comparable level of animal intelligence. This would help set consumer expectations more accurately and improve safety while using self-driving cars. It also provides a fantastic opportunity for community-led certification and the generation of appropriate imagery. For example, Level 2 autonomous vehicles could most appropriately be described as being driven by ‘worm intelligence.’ Vehicles driving at this level, such as those equipped with a Tesla AutoPilot or a Nissan ProPilot system, would require a large sticker stating ‘Tesla AutoPilot brought to you by worm intelligence’, together with a picture of a worm affixed to the outside of the vehicle. Successive levels of automated vehicles could then be assigned different, appropriately selected animals to combat false advertising claims and ensure that consumers have an accurate picture of the capabilities of their vehicles. We therefore also strongly believe that vehicle licensing authorities should provide meme generators as part of the licensing process to ensure that consumers fully understand their vehicle’s capabilities.
The ability of individuals to playfully reimagine the exact capabilities and failures of their existing vehicle would be a welcome change from existing assumptions of technological performativity. Only the type of animal would have to be restricted to certain levels: you can’t claim that your caterpillar AI is in fact a Labrador AI, but beyond that anything is possible. However, this raises a separate, associated problem around human understandings of animal intelligence. In particular, dog or cat owners may be likely to overestimate the ability of their specific pets to drive cars. Using domesticated animals to represent intelligence is thus likely to lead only to further anthropomorphising of artificial intelligence. As a result, the choice of animals for such certification mechanisms should be restricted to animals that are not typically domesticated. At the same time, the abilities of dogs should not be underestimated: some breeds, such as Borzois or Rhodesian Ridgebacks, have been trained to hunt wolves or lions in packs. However, nothing in this article should be taken to suggest that wolf-hunting dogs should necessarily be driving cars down motorways. Finally, regular testing of public reactions to specific animal associations is necessary, as the expectations attached to metaphorical labels may shift over time.
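The proposed labelling scheme can be illustrated with a minimal sketch. Note that the article only fixes Level 2 as ‘worm intelligence’; all other level-to-animal pairings below are hypothetical placeholders (chosen, per the restriction above, from animals that are not typically domesticated), as is the `sticker_text` helper.

```python
# Hypothetical sketch of the proposed animal-label scheme.
# Only the Level 2 -> worm pairing comes from the article; the
# other animals are illustrative assumptions, deliberately drawn
# from non-domesticated species.
ANIMAL_LABELS = {
    0: "amoeba",       # no automation
    1: "caterpillar",  # driver assistance
    2: "worm",         # partial automation (e.g. Tesla AutoPilot, Nissan ProPilot)
    3: "pigeon",       # conditional automation
    4: "octopus",      # high automation
    5: "raven",        # full automation
}

def sticker_text(system_name: str, sae_level: int) -> str:
    """Generate the mandatory sticker wording for a given system."""
    animal = ANIMAL_LABELS[sae_level]
    return f"{system_name} brought to you by {animal} intelligence"

print(sticker_text("Tesla AutoPilot", 2))
# -> Tesla AutoPilot brought to you by worm intelligence
```

A licensing authority’s meme generator could wrap this mapping, pairing the generated wording with approved imagery of the relevant animal.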