After the unprecedented drama at OpenAI over the past 10 days, where the board fired CEO Sam Altman, temporarily replaced him with CTO Mira Murati, and nearly all employees threatened to resign, only for Altman to be reinstated right before Thanksgiving, I thought the US holiday weekend would be a chance for Silicon Valley to step away from AI buzz and enjoy some turkey and stuffing.
However, that wasn’t the case. On Thanksgiving morning, Reuters reported that before Altman’s brief ousting, some researchers had warned the OpenAI board about a potentially dangerous AI discovery. This project, known as Q* (pronounced Q-star), was rumored to be a significant step toward building AGI, which OpenAI describes as systems that surpass humans in most economically valuable tasks. The model was reported to solve certain math problems, and although it performed only at the level of grade-school students, this success left researchers hopeful about Q*’s future potential.
So, what’s driving this relentless wave of AI excitement? The fact that the AI news and social media cycle didn’t pause even for a day — even amid all the OpenAI drama and holiday traditions — made me question what fuels this nonstop hype. After all, despite the buzz, Nvidia senior AI scientist Jim Fan dismissed Q* as a “fantasy,” noting there were no papers, stats, or products to substantiate the claims.
While there is genuine intellectual excitement, media competition for headlines, and some anticipatory greed and self-promotion, I believe this constant attention also stems from anxiety and uncertainty about AI’s future. According to a University of Wisconsin paper, uncertainty about potential threats disrupts our ability to handle them, leading to anxiety. The human brain tries to anticipate future events to improve outcomes, but uncertainty hampers this ability and increases anxiety.
This anxiety isn’t limited to everyday people; top AI researchers and leaders also feel it. Even AI pioneers like Geoffrey Hinton, Yann LeCun, and Andrew Ng don’t know what lies ahead for AI, making their social media debates more about speculation than providing certainty.
Given this, the ongoing discussions about OpenAI and Q* — including my own — can be seen as a response to the uncertainty surrounding AI’s future. Constantly seeking information and reassurance may be a way of avoiding the reality that AI’s future is uncertain and unknown.
Of course, it’s essential to debate, plan, and prepare for AI’s evolution, but surely we can take a break to enjoy our meals and have a rest? With Christmas just four weeks away, perhaps everyone in the AI community — from altruists and accelerationists to optimists and doomsayers, industry leaders and researchers — can take a brief pause for some holiday cheer. The AI discussions and the hype will still be there after New Year’s. I promise.