Reconciliation between Factions Focused on Near-Term and Long-Term Artificial Intelligence

Forthcoming, AI & Society, doi 10.1007/s00146-017-0734-3

11 Pages Posted: 31 May 2017  

Seth D. Baum

Global Catastrophic Risk Institute

Date Written: May 29, 2017


Artificial intelligence (AI) experts are currently divided into “presentist” and “futurist” factions that call for attention to near-term and long-term AI, respectively. This paper argues that the presentist-futurist dispute is not the best focus of attention. Instead, the paper proposes a reconciliation between the two factions based on a mutual interest in AI. The paper further proposes a realignment to two new factions: an “intellectualist” faction that seeks to develop AI for intellectual reasons (as found in the traditional norms of computer science) and a “societalist” faction that seeks to develop AI for the benefit of society. The paper argues in favor of societalism and offers three means of concurrently addressing societal impacts from near-term and long-term AI: (1) advancing societalist social norms, thereby increasing the portion of AI researchers who seek to benefit society; (2) technical research on how to make any AI more beneficial to society; and (3) policy to improve the societal benefits of all AI. In practice, it will often be advantageous to emphasize near-term AI due to the greater interest in near-term AI among AI and policy communities alike. However, presentist and futurist societalists alike can benefit from each other’s advocacy for attention to the societal impacts of AI. A reconciliation between the presentist and futurist factions can improve both near-term and long-term societal impacts of AI.

Keywords: Artificial Intelligence, Near-Term Artificial Intelligence, Long-Term Artificial Intelligence, Societal Impacts of Artificial Intelligence, Artificial General Intelligence, Artificial Superintelligence

Suggested Citation

Baum, Seth D., Reconciliation between Factions Focused on Near-Term and Long-Term Artificial Intelligence (May 29, 2017). Forthcoming, AI & Society, doi 10.1007/s00146-017-0734-3. Available at SSRN.

Seth D. Baum (Contact Author)

Global Catastrophic Risk Institute
