The Effect of Feedback and Learning on DSS Evaluations
Posted: 26 Aug 2006
Date Written: January 26, 2006
Abstract
Model-based decision support systems (DSSs), designed to help decision makers make better decisions, often do not help decision makers understand how or why they work. As a result, there is likely to be a large gap between a manager's mental model and the decision model embedded in the DSS. We suggest that this gap is an important reason for the poor subjective evaluations of DSSs, even when the DSSs have been shown to be of high objective quality, ultimately resulting in unexpectedly poor DSS adoption and usage. In this paper, we hypothesize that to increase its effectiveness, a DSS should not only be of high quality, but must also help reduce any mental model-DSS model gap. We evaluate two design characteristics that together lead users to update their mental models, resulting in better DSS evaluations: providing feedback on upside potential and providing suggestions for corrective actions. We hypothesize that, in tandem, these two types of feedback induce managers to update their mental models, a process we call deep learning, whereas individually, each type of feedback will have only a small or negligible effect on deep learning. We validate our framework in an experimental setting, using a realistic DSS in a direct marketing context. We conclude with a discussion of both the theoretical and practical implications of our findings.
Keywords: Marketing Decision Models, DSS, Decision Making, Learning, Feedback
JEL Classification: M, C44, M31, C53