Towards Accountability in Machine Learning Applications: A System-Testing Approach
66 Pages · Posted: 11 Feb 2022
Date Written: 2022
Abstract
A rapidly expanding universe of technology-focused startups is trying to change and improve the way real estate markets operate. The undisputed predictive power of machine learning (ML) models often plays a crucial role in the ‘disruption’ of traditional processes. However, an accountability gap prevails: How do the models arrive at their predictions? Do they do what we hope they do, or are corners cut? Training ML models is, at heart, a software development process. We therefore suggest following a dedicated software-testing framework to verify that the ML model performs as intended. As an illustration, we augment two ML image classifiers with a system-testing procedure based on local interpretable model-agnostic explanation (LIME) techniques. Analyzing the resulting classifications sheds light on some of the factors that determine the behavior of the systems.
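To make the approach concrete, the following minimal Python sketch shows how a LIME explanation can be attached to a black-box image classifier. It is not the paper's implementation: the classify_batch function and the test image are hypothetical stand-ins, while the calls to the open-source lime package (LimeImageExplainer, explain_instance, get_image_and_mask) are its actual API.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def classify_batch(images: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained model: maps a batch of
    RGB images (N, H, W, 3) to class probabilities (N, n_classes).
    In practice this would wrap e.g. model.predict(images)."""
    rng = np.random.default_rng(0)
    probs = rng.random((images.shape[0], 2))
    return probs / probs.sum(axis=1, keepdims=True)

# One test image; in the paper's setting this would be a real estate photograph.
image = np.random.rand(224, 224, 3)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classify_batch,   # black-box prediction function
    top_labels=2,     # explain the two highest-scoring classes
    hide_color=0,     # grey out superpixels when perturbing
    num_samples=1000, # perturbed samples drawn around the image
)

# Superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
highlighted = mark_boundaries(img, mask)

The system test then amounts to inspecting the highlighted regions: if superpixels that should be irrelevant to the label dominate the explanation, the classifier is not doing what we hope it does.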
Keywords: machine learning, accountability gap, computer vision, real estate, urban studies
JEL Classification: C52, R30