Track Gap: Policy Implications of User Expectations for the 'Do Not Track' Internet Privacy Feature
Jon M. Peha
Carnegie Mellon University
September 25, 2011
Several new Do Not Track features provide ways for Internet users to opt out of some online data collection and use for certain purposes. Privacy advocates have discussed Do Not Track (DNT) for several years, and DNT gained momentum after an FTC staff report requested comments on the idea in December 2010. Three web browsers implemented Do Not Track solutions in March 2011. While DNT pertains to more than just advertising, it was motivated in part by privacy concerns around online behavioral advertising (OBA). Advertisers build behavioral profiles of individual users in order to serve targeted ads, and also collect what may be identical data for other purposes, including fraud prevention, analytics, and billing for ad impressions. Some users prefer privacy to personalization and take technical measures to block data collection, for example by deleting cookies. Advertisers have tried increasingly esoteric means of data collection, including Flash cookies (LSOs) and cache cookies, to avoid being blocked by users. Do Not Track enables users to express a preference not to be tracked regardless of the technical details of the tracking mechanisms. Ideally, Do Not Track will be an easy way for users to make privacy choices without having to master the technical details of half a dozen very different tracking technologies. Yet all three browser implementations differ on what Do Not Track means in practice.
One potential problem for Do Not Track is that none of the three implementations meets user expectations for a feature named Do Not Track. Users’ expectations are a key component of how the FTC evaluates privacy practices. If the differences are slight, there is an opportunity to close the gap between expectations and reality through communication. If the differences are large, there is a risk of loss of user trust and a backlash. For example, if users expect Do Not Track to stop all data collection, but then discover that it results in identical data collection and analysis and simply replaces the ads they see with less relevant ones, users may be surprised. The central question of our research is: what is the nature of the gap between expectations and reality, and how may that gap be addressed?
To answer this, we studied user expectations for Do Not Track through a large-scale user survey. We began with an open-ended question asking what a mocked-up button labeled “Do Not Track” would do in a web browser. We investigated the types of data users believe can be collected before and after clicking Do Not Track, and the uses of that data before and after clicking Do Not Track. We then posed hypothetical descriptions of what Do Not Track might mean while visiting a website, or with regard to advertising, and asked both how likely and how desirable users found each hypothetical. We then described each of the three browser implementations of Do Not Track and asked whether each implementation matched users’ expectations, whether they liked it, and whether they thought they would use it if offered. Finally, we asked about other data privacy practices and collected demographic data.
We also compared all three implementations of Do Not Track. In Google’s Chrome, opt-out cookies can result in at least three different outcomes: data collection stops, data collection continues but only on an aggregated basis, or data collection is unchanged. In Mozilla’s Firefox, the browser sends a Do Not Track preference through HTTP headers; advertisers choosing to honor DNT will each define what tracking means. Microsoft’s Internet Explorer offers Tracking Protection Lists (TPLs) to block tracking content from loading, but at least one TPL author has favored its own member companies, raising questions about the transparency and intent of TPL creators. In addition to TPLs, Internet Explorer also sends the same DNT header Firefox uses.
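As a concrete illustration of the header mechanism described above (not part of the paper itself), a server that chooses to honor DNT might inspect incoming request headers as sketched below; the function name and the policy it encodes are hypothetical, and the key point is that the header only expresses a preference, with interpretation left entirely to the recipient:

```python
def client_requests_dnt(headers):
    """Return True if the client sent the Do Not Track preference.

    Firefox (and Internet Explorer, alongside its TPLs) expresses the
    preference as the HTTP request header "DNT: 1". What a server does
    in response is undefined by the mechanism itself, which is exactly
    the ambiguity in defining "tracking" that the paper examines.
    """
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("dnt") == "1"

# A request carrying the header vs. one without it.
print(client_requests_dnt({"DNT": "1", "User-Agent": "Firefox/4.0"}))  # True
print(client_requests_dnt({"User-Agent": "Chrome/10.0"}))              # False
```

Note that even a server returning True here is free to keep collecting data in aggregated form, or to change nothing at all, which is why the paper contrasts this mechanism with what users expect the feature to do.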
We contrast the current DNT implementations with user expectations for Do Not Track. More than half of users believe Do Not Track stops all data collection, and they reject definitions of Do Not Track that rely on data aggregation. None of the current DNT implementations precludes data collection, and many websites implementing DNT will likely rely on aggregation. Fewer than a quarter of participants expected Do Not Track to allow data use for counting interactions with ads, ad demographics, or reporting data to law enforcement, even though DNT does not specifically bar any of these uses. Our results suggest a large gap between user expectations for Do Not Track and the way it is being implemented.
We conclude the paper with a discussion of policy implications. We consider changes to DNT that could bring implementations closer in line with expectations. We discuss ways to improve communication with users, which could change user expectations; common methods of communication, including privacy policies and online help files, are unlikely to reach most users. We discuss approaches that would let individual companies honoring DNT indicate how they implement it and what DNT support means within the context of their organization. Last, we discuss other ways to establish norms around defining what "tracking" means, including standards bodies, self-regulatory mechanisms, FTC regulation, and legislation.
Because Do Not Track is so new, as far as we know this is the first scholarship on this topic. This paper has been neither presented nor published. Results from a pilot study on expectations for data collection were part of a presentation at Yale Law School’s “From Mad Men to Mad Bots: Advertising in the Digital Age” conference, March 25-26, 2011, with an accompanying short paper.
Number of Pages in PDF File: 36
Date posted: January 31, 2012