Comparing the Validity of Alternative Belief Languages: An Experimental Approach

39 Pages, Posted: 31 Oct 2008

Shimon Schocken

affiliation not provided to SSRN

Date Written: August 1991

Abstract

The problem of modeling uncertainty and inexact reasoning in rule-based expert systems is challenging on normative as well as on cognitive grounds. First, the modular structure of the rule-based architecture does not lend itself to standard Bayesian inference techniques. Second, there is no consensus on how to model human (expert) judgement under uncertainty. These factors have led to a proliferation of quasi-probabilistic belief calculi which are widely used in practice. This paper investigates the descriptive and external validity of three well-known "belief languages": the Bayesian, ad-hoc Bayesian, and certainty factors languages. These models are implemented in many commercial expert system shells, and their validity is clearly an important issue for users and designers of expert systems. The methodology consists of a controlled, within-subject experiment designed to measure the relative performance of alternative belief languages. The experiment pits the judgement of human experts against the recommendations generated by their simulated expert systems, each using a different belief language. Special emphasis is given to the general issues of validating belief languages and expert systems at large.
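
A minimal sketch may help place the three belief languages named above. The Python fragment below implements the MYCIN-style certainty-factors combination rule, the calculus behind the "certainty factors" language; the function name and the sample rule strengths are illustrative assumptions and are not taken from the paper.

    def combine_cf(x: float, y: float) -> float:
        """Combine two certainty factors in [-1, 1] bearing on the same hypothesis."""
        if x >= 0 and y >= 0:
            return x + y * (1 - x)                      # both rules confirm
        if x <= 0 and y <= 0:
            return x + y * (1 + x)                      # both rules disconfirm
        return (x + y) / (1 - min(abs(x), abs(y)))      # conflicting evidence

    # Two fired rules supporting the same hypothesis with strengths 0.6 and 0.4
    print(combine_cf(0.6, 0.4))   # prints 0.76

Each fired rule contributes a certainty factor and the combination function aggregates them incrementally; the Bayesian and ad-hoc Bayesian languages studied in the paper use probabilistic update rules instead.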

Suggested Citation

Schocken, Shimon, Comparing the Validity of Alternative Belief Languages: An Experimental Approach (August 1991). NYU Working Paper No. IS-91-31, Available at SSRN: https://ssrn.com/abstract=1289061

Shimon Schocken (Contact Author)

affiliation not provided to SSRN

No Address Available

Paper statistics

Downloads: 8
Abstract Views: 263