Making Artificial Intelligence Transparent: Fairness and the Problem of Proxy Variables
25 Pages Posted: 16 Feb 2021
Date Written: January 11, 2021
AI-driven decisions can draw data from virtually any area of your life to make a decision about virtually any other area of your life. That creates fairness issues. Effective regulation to ensure fairness requires that AI systems be transparent. That is, regulators must have sufficient access to the factors that explain and justify the decisions. One approach to transparency is to require that systems be explainable, as that concept is understood in computer science. A system is explainable if one can provide a human-understandable explanation of why it makes any particular prediction. Explainability should not be equated with transparency. Instead, we define transparency relative to a regulatory purpose. A system is transparent for a regulatory purpose (r-transparent) when and only when regulators have an explanation, adequate for that purpose, of why it yields the predictions it does. Explainability remains relevant to transparency but turns out to be neither necessary nor sufficient for it. The concepts of explainability and r-transparency combine to yield four possibilities: explainable and either r-transparent or not; and not explainable and either r-transparent or not. Combining r-transparency with ideas from the Harvard computer scientist Cynthia Dwork, we propose four requirements on AI systems.
Keywords: artificial intelligence, AI, machine learning, predictive analytics, fairness, transparency, explainability