Beyond the ‘Net Neutrality’ Debate: Price and Quality Discrimination in Next Generation Internet Access

47 Pages. Posted: 11 Jul 2012

Christopher Marsden

University of Sussex Law School

Jonathan Cave

University of Warwick

Date Written: August 15, 2007


This paper is a technical regulatory analysis of Next Generation Network (NGN) access for consumers to internet content, written for readers familiar with regulatory law and economics concepts and with the broad outline of the debate within telecoms regulation. The problem is defined and the analysis explained in Section 1. Section 2 develops an economic analysis of discrimination among content types. Section 3 analyzes the scope for command-and-control or co-operative regulatory intervention, taking into account issue linkage (including regulatory capture by both carriage and content provider incumbents, and regulatory delay) and the available policy instruments in the European and international context. Next Generation Networks present enhanced opportunities to offer internet content to consumers. They can be defined as networks with a packet-based architecture, facilitating provision of existing and new services through a loosely coupled, open and converged communications infrastructure. Access to NGNs offers the potential for abusive discrimination. Abuse is usually characterised in telecoms as a monopoly problem: one or two Internet Service Providers (ISPs) have dominance, typically in the “last mile” of end-user access. Such ISPs can impose discriminatory treatment on all content or, where they are vertically integrated, on particular content with which they compete (for example, Voice over IP [VoIP], as in the Madison River case). The suggestion is therefore that the problem can be resolved by introducing greater competition or by closely policing conditions for vertically integrated services such as VoIP.

Our approach proposes neither an absolute ban on price discrimination nor an absolute prohibition of regulatory oversight. Instead it begins by asking which potential abuses define the problem in its European manifestation. The issue, broadly put, is whether ISPs are motivated to give content providers incentives to pay for superior service via either:
• lower levels of service for the same price (e.g. blocking or “throttling” content); or
• a higher price for higher Quality of Service (QoS).

This can take place even where an ISP does not have dominance (expressed in European debate as Significant Market Power [SMP]). Certain types of Peer-to-Peer (P2P) traffic are highly valued by end-users and are potentially discriminated against, in whole or in part, by non-dominant ISPs. ISPs have socially beneficial competitive and traffic-management reasons to do so: it makes their networks safer and more efficient. It is thus hard to determine whether their discrimination has less benevolent motives or consequences, such as blocking competition. Many assertions are made about the impacts of certain types of traffic, but regulators have no basis for deciding whether they represent big or small problems. ISPs’ assertions that P2P traffic contains a high proportion of malware may be disingenuous: email spam and web surfing are also vectors for malware, yet ISPs do not block such traffic. A research base in monitoring internet traffic and usage would help the regulator to assess the importance of such stakeholder claims.

NGN access to content in Europe has been dismissed as an American debate caused by a lack of competition to the reintegrated AT&T and Verizon and to cable companies. We suggest that this debate does have serious implications for Europe. The extreme positions taken in the US debate cloud interesting questions about the benefits and costs of discrimination in faster and higher-quality internet content services. Europe is not different, for two reasons. First, competition among ISPs in some metropolitan and suburban networks is limited by both geographical scale and feature-price scope. Second, as identified above, there can be motives to “throttle” content such as P2P whatever the competitive position of an ISP. This could ultimately jeopardise the rise of Web2.0 content services – and thus user-generated content – in Europe. SMP operators primarily want to ensure that they have (and their rivals lack) the resources to innovate. To a lesser extent, they may want to encourage stakeholders on whom they depend to innovate (e.g. SMP content providers such as broadcasters may want to encourage innovation on the part of a network operator to enable a required new service to deliver content effectively). However, many European citizens will want to use P2P services to share user-generated content. A “universal service obligation” that is upgraded as broadband network speeds increase can ensure that a minimal open internet layer is maintained.

The argument about who should pay for the significant extra investment required for advanced “next generation” internet infrastructure continues; thus far, the suggestion is that network or content providers (rather than end-users or governments via tax incentives) should do so. The research behind this paper developed scenarios that offer alternative paths for a wide range of stakeholders to invest and to benefit from each other’s investment. It is vital to elaborate more closely the costs and benefits of different relationships between content and carriage. For example, it is still unclear whether “broadcast TV on the internet” or an unrestricted P2P model is more attractive to investors, users and public welfare. Regulators and policymakers could devote more attention to the evidence base for modelling those choices (Broadband Stakeholder Group, 2007).

The economic analysis first developed a process map of the content distribution system, taking into account both the layered structure of the telecommunications sector and the extended, non-linear “value mesh” linking key players. This approach considered vertical and horizontal integration, bundling of content and services, and changes from client-server to user-generated business and societal value models. It also highlighted the potential importance of alternative (e.g. mobile and other wireless) channels. Market structure, conduct and performance aspects of the current state of play were assessed on the basis of published sources and peer-reviewed literature in order to highlight the demonstrable extent of market failure, significant trends and key (present and future) uncertainties. This was followed by analysis of potential changes in market alignment in response to exogenous (e.g. technological) shocks and trends, and as a manifestation of endogenous (e.g. competitive and self-regulatory) forces. We explain this in detail in Section 2.

Section 3 analyzes the scope for intervention by command-and-control or co-regulatory means, taking account of the issue linkage and policy instruments outlined above. An open content model tends towards ‘Web2.0’-type “public good” value, with innovation concentrated in end-users rather than in network operators and associated clusters of developers. While this is by no means the only model for internet-based and ICT-oriented innovation, it is a promising approach. Current choices about regulation of these sectors can affect this business-model choice and therefore end-user benefits. This analysis supports a light-touch regulatory regime involving reporting requirements and co-regulation, with, as far as possible, market-based, low-cost solutions.

Because market and regulatory development are dynamically linked and far from linear, heavy regulation not only distorts market development but suppresses signals about key aspects of that development. For that reason, regulation to ensure any form of NGN access to internet content in Europe should be carried out with as light a touch as possible while maintaining effective oversight, based on two modalities:
1. Regulation of information, to require service providers to inform consumers about the choices they are making when they sign up for a service; and
2. Intervention to correct harmful and unjustified discrimination, which must be timely and evidence-based.

These two regulatory modalities imply a reporting burden on service providers to provide transparency in their traffic-management practices. Regulatory monitoring of potential abuses, including strengthened investigatory capacity and transparency for end-users, maintains maximum flexibility and policy choice while ensuring that abuses or unforeseen consequences can be quickly detected and dealt with appropriately. This reporting requirement could be implemented through a co-regulatory forum. The European Commission as well as Member States will need to monitor developments in this area closely, especially with a view to encouraging innovation and end-user access policies for “ContentOnline” and the wider Lisbon Agenda goals. Dangers of fragmentation and regulatory arbitrage may arise for two reasons: a form of “regulatory holiday” for ISPs in one country but not in another is quite likely; and enforcement of NGN access to content may be highly divergent even under the current 2002 framework. Solutions may be international as well as local, and international coordination of best practice and knowledge through fora such as the European Regulators Group and the OECD will enable national regulators to keep up with the technology “arms race”.

Suggested Citation

Marsden, Christopher and Cave, Jonathan, Beyond the ‘Net Neutrality’ Debate: Price and Quality Discrimination in Next Generation Internet Access (August 15, 2007). TPRC 2007. Available at SSRN.

