An Algorithmic Approach To Evaluating Default Privacy Options
Posted: 1 Apr 2014; Last revised: 23 Aug 2014
Date Written: March 31, 2014
In many online services, users are asked to configure their privacy settings by choosing from a small set of initial privacy options. For example, in Facebook, the world’s largest online social network, users can choose among three default privacy settings: sharing their profile and posts with their Facebook friends, with their friends-of-friends, or with the general public (the option selected by default). Research in human-computer interaction and in economics has repeatedly shown that reducing options and prescribing defaults have a profound and lasting effect on users’ ongoing behavior, pushing users toward the prescribed set of options. It is not surprising that default choices have recently become the subject of legislative attempts aiming to regulate the options and default selections offered by online services. For example, the California bill S.B. 242 sought to require web sites to establish a default privacy setting that prohibits the display of most personally identifiable information. In light of this regulatory trend, it is becoming crucial to better understand the consequences of reducing the set of privacy options or prescribing default options.
This study takes an algorithmic approach to analyzing default privacy options in online services. We suggest and evaluate an assessment method that analyzes a service’s default options by comparing them to the actual preferences of a sample of people who already use the service. The assessment is based on a set of indicators that measure the extent to which the default options reflect users’ preferences and behaviors. The indicators provide designers and policy makers with tools for evaluating how well a set of options can fairly serve a given population. First, we examine how representative the default options are: we calculate the proportion of the population that would be satisfied by at least one of the options, and measure how different default options and selections affect that proportion. Second, we examine the social welfare that different default options can provide to the user population, with regard to how well the options match users’ preferences. We analyze measures of disparity between users’ preferences, and define formal notions of equality and fairness. Finally, we suggest methods for deriving an optimal set of default options, one that satisfies the largest proportion of users with the highest possible matching quality.
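The coverage indicator and the optimal-set derivation described above can be illustrated with a minimal sketch. This is not the paper’s actual method: it assumes, for simplicity, that each user’s preference is a single categorical setting, that an option "satisfies" a user only on an exact match, and that the optimal set is found by exhaustive search over a small candidate pool; the preference counts below are invented for illustration.

```python
from itertools import combinations

def coverage(options, preferences):
    """Fraction of users whose preferred setting matches one of the offered options."""
    offered = set(options)
    return sum(pref in offered for pref in preferences) / len(preferences)

def best_defaults(candidates, preferences, k):
    """Pick the k candidate options that satisfy the largest proportion of users
    (exhaustive search; feasible only for a small candidate pool)."""
    return max(combinations(candidates, k),
               key=lambda opts: coverage(opts, preferences))

# Hypothetical preference sample: each user's ideal audience for their posts.
prefs = (["friends"] * 55 + ["friends_of_friends"] * 10
         + ["public"] * 15 + ["only_me"] * 20)

# Coverage of a Facebook-like option set: 80 of 100 users are satisfied.
print(coverage(["friends", "friends_of_friends", "public"], prefs))  # 0.8

# A better set of three defaults for this (hypothetical) population.
best = best_defaults(["only_me", "friends", "friends_of_friends", "public"],
                     prefs, k=3)
print(sorted(best))
```

In this toy population, replacing the friends-of-friends option with an "only me" option raises coverage from 0.80 to 0.90, which is the kind of gap the representativeness indicator is meant to expose.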
We empirically evaluate our assessment method using data collected from Facebook, describing the detailed privacy behavior of about 400 Facebook users. We analyze the users’ privacy preferences and validate them using external resources and surveys regarding Facebook usage. We then evaluate Facebook’s existing default privacy settings, as well as other popular privacy configurations in Facebook, against the users’ actual behavior. The findings of this research formally and mathematically describe the inherent bias in the current default privacy settings offered by online service providers. We demonstrate how our methods can be used to evaluate and analyze default privacy options, and to synthesize representative options that maximize users’ social welfare.
Keywords: privacy, default options, formal evaluation, algorithms, online social networks, Facebook, human-computer interaction