Moving Beyond the AI Excitement: The Practical Implications Post Davos 2024

AI was a major topic at Davos 2024. According to Fortune, over two dozen sessions focused directly on AI, addressing everything from AI in education to AI regulation.

Some prominent figures in AI attended the event, including OpenAI CEO Sam Altman, Inflection AI CEO Mustafa Suleyman, AI pioneer Andrew Ng, Meta’s chief AI scientist Yann LeCun, and Cohere CEO Aidan Gomez.

Shifting from wonder to pragmatism

Unlike the speculative discussions at Davos 2023, which followed the fresh release of ChatGPT, this year’s conversations were more measured. Chris Padilla, IBM’s VP of government and regulatory affairs, explained that the focus has shifted from excitement to concerns about AI’s risks and the need to make it trustworthy.

Among the concerns discussed were the dangers of misinformation, job displacement, and widening economic divides between wealthy and poor nations. The most frequently discussed risk was the threat of misinformation and disinformation, particularly through deepfake photos, videos, and voice clones that can distort reality and erode trust. For example, before the New Hampshire presidential primary election, robocalls used a voice clone impersonating President Joe Biden to suppress votes.

Deepfake technology can fabricate false information by making it seem like someone said something they did not. Carnegie Mellon University professor Kathleen Carley pointed out that this is just the beginning of potential voter suppression efforts or attacks on election workers. AI consultant Reuven Cohen noted that we should expect a surge in deepfake audio, images, and videos as the 2024 election approaches. Despite significant efforts, a foolproof method to detect deepfakes hasn’t been developed. As Jeremy Kahn noted in a Fortune article, a solution is urgently needed to maintain trust in democracy and society.

AI mood swing

The mood shift from 2023 to 2024 led Suleyman to suggest in Foreign Affairs that a “cold war strategy” is necessary to mitigate the threats posed by AI. He emphasized that foundational technologies like AI inevitably become cheaper and more accessible, reaching all levels of society for both positive and harmful uses.

Concerns about AI aren’t new; they date back at least to the 1968 film “2001: A Space Odyssey.” They resurfaced in the late 1990s with the Furby toy, which the National Security Agency (NSA) banned from its premises over fears it could act as a listening device.

Contemplating AI’s future trajectory

Recently, concerns about AI have intensified with claims that artificial general intelligence (AGI) could be achieved soon. While AGI has no settled definition, it is generally understood as AI that matches or surpasses the capabilities of a college-educated human across a broad range of activities. Altman believes AGI could be developed in the near future, and Gomez shares this view. Not everyone agrees, however: LeCun is skeptical, arguing that human-level AI is much further off and will require scientific breakthroughs that have yet to be made.

Public perception and the path forward

Uncertainty about AI’s future persists. The 2024 Edelman Trust Barometer, launched at Davos, found global respondents divided, with 35% rejecting AI and only 30% accepting it. People recognize AI’s potential but also its risks. According to the report, people are more likely to embrace AI if it is vetted by scientists and ethicists, if they feel in control of how it affects their lives, and if it promises a better future.

While rushing to “contain” the technology is tempting, it’s important to remember Amara’s Law, which states that we tend to overestimate a technology’s short-term impact and underestimate its long-term effects. We are in a phase of experimentation and early adoption, but success isn’t guaranteed. Rumman Chowdhury, CEO and co-founder of AI-testing nonprofit Humane Intelligence, predicted that 2024 might bring a realization that AI isn’t as groundbreaking as many believe.

2024 could be the year we discover AI’s true impact. Meanwhile, people and companies are learning how best to apply generative AI for personal and business benefit. Accenture CEO Julie Sweet observed that while excitement about the technology is high, it still needs to be connected to tangible value. The consulting firm is now running workshops for top executives to help them understand the technology’s potential and move from theoretical use cases to actual value.

The benefits and the most harmful impacts of AI (and AGI) may be imminent but not necessarily immediate. As we navigate AI’s complexities, we’re at a crossroads where thoughtful management and innovation can lead to a future where AI amplifies human potential without compromising our integrity and values. It’s up to us to courageously envision and design a future where AI serves humanity, not the other way around.