A Recent Survey Predicts AI May Surpass Human Performance in All Tasks Within Two Decades

AI is progressing at a remarkable pace. Over just the past year, it has fundamentally changed the way we talk about technology in both business and everyday life. This rapid advancement has caught even experts off guard: many are surprised and increasingly worried, according to a new survey from AI Impacts, the University of Bonn, and the University of Oxford. The researchers conducted the largest study of its kind, gathering predictions from 2,778 authors whose work had appeared in top-tier AI research venues.

Participants in the 2023 Expert Survey on Progress in AI estimated that, if scientific progress continues uninterrupted, there is a 10% chance that machines will outperform humans in every possible task by 2027, and a 50% chance by 2047.

Respondents also gave a 10% chance that all human occupations will become fully automatable by 2037. More alarmingly, many respondents put at least a 10% chance on advanced AI causing severe human disempowerment or even extinction. This echoes the concerns of those who emphasize “existential risk” or “x-risk” from AI, a view closely associated with the effective altruism movement. Critics counter that focusing on such scenarios distracts from real, near-term harms like job loss and inequality.

Optimistic scenarios highlight AI’s potential to transform work and life, but the more pessimistic predictions, especially those involving extinction-level risks, are a stark reminder of the high stakes in AI development.

This survey was the third in a series, following studies conducted in 2016 and 2022, and projections have shifted dramatically over that span. The 2023 survey took place after a particularly eventful year of progress, including the launch of ChatGPT, Anthropic’s Claude 2, and Google’s Bard and Gemini, as well as government actions in the U.S., UK, and EU.

Respondents were asked how soon each of 39 specific tasks would become “feasible” for AI, meaning that one of the best-resourced labs could implement it within a year. The tasks included translating text in a newly discovered language, recognizing an object after seeing it just once, writing simple Python code, autonomously building a payment processing site from scratch, and fine-tuning a large language model. Most tasks were given at least a 50% chance of being feasible within the next 10 years.

A few tasks were expected to take longer than 10 years: deriving the differential equations governing a virtual world (12 years), physically installing electrical wiring in a new home (17 years), and solving long-standing open problems in mathematics, such as a Millennium Prize problem (27 years).

Researchers also gathered forecasts for when machines might achieve human-level performance on tasks (High-Level Machine Intelligence, HLMI) and when entire occupations might become fully automatable (Full Automation of Labor, FAOL). Respondents gave HLMI a 50% chance of arriving by 2047, 13 years earlier than the 2060 estimate from the 2022 survey. FAOL received a 50% chance by 2116, decades earlier than the previous estimate.

While predictions span broad ranges, this year’s survey shows a general shift toward earlier timelines. Researchers also probed concerns about AI, particularly around alignment, trustworthiness, predictability, and security. Many participants expect that by 2043, AI will find unexpected ways to achieve goals, converse like a human expert on most topics, and frequently surprise humans with its behavior.

By 2028, AI may often produce outputs that puzzle humans, making it difficult to discern the true reasons behind its actions. Participants expressed significant concern about AI being used to spread false information, manipulate public opinion, create dangerous tools, and exacerbate economic inequality.

A strong consensus emerged that AI safety research should be prioritized as AI tools continue to develop. Participants were divided on whether AI’s net impact will be positive or negative: 68% believed good outcomes are more likely, yet 58% acknowledged that extremely bad outcomes remain a “nontrivial possibility.” About half of the respondents estimated a greater than 10% chance of human extinction or severe disempowerment.

One in ten participants placed at least a 25% chance on outcomes as bad as human extinction. The researchers caution, however, that while AI experts know the technology and its track record well, they have no special skill at forecasting future developments.

Given the variability in responses and the sensitivity of answers to how questions are framed, the researchers recommend treating AI forecasts as one input to a broader discussion that also weighs trends in computer hardware, AI capabilities, and economic analyses. Even so, AI researchers are uniquely positioned to sharpen our collective expectations for the future, though such “educated guesses” remain inherently unreliable.
