Today, Meta CEO Mark Zuckerberg surprised the AI community with an Instagram Reel announcing that the company is pursuing open-source artificial general intelligence (AGI). By merging its two AI research teams, FAIR and GenAI, Meta aims to build full general intelligence and make it openly available to the public.
In a caption accompanying the video, Zuckerberg mentioned that their long-term vision is to responsibly open-source general intelligence so that everyone can benefit from it. He explained that the next generation of services will require advanced general intelligence to build superior AI assistants, tools for creators, businesses, and more. This endeavor demands progress across various AI domains, such as reasoning, planning, coding, memory, and other cognitive abilities.
Zuckerberg highlighted that Meta is currently training Llama 3 and developing an extensive computing infrastructure, which will include 350,000 Nvidia H100s by the end of the year. He also discussed the importance of the metaverse and Meta’s Ray-Ban smart glasses. According to him, new AI-compatible devices will be essential, and the combination of AI and the metaverse is crucial. He believes that we will frequently interact with AI throughout the day, often through devices like smart glasses, which can see and hear what we do, providing constant assistance.
This announcement follows recent comments by OpenAI CEO Sam Altman about AGI at the World Economic Forum in Davos, Switzerland. Altman had recently softened his stance on the existential risks of AGI, shortly after being reinstated following his firing in November 2023. Meta's push toward open-source AGI is also notable given that its chief scientist, Yann LeCun, has been openly skeptical that AGI will arrive anytime soon.
The news also comes after VentureBeat reported that Llama and open-source AI dominated 2023. Meta's move will likely reignite the debate over the benefits and risks of open-source versus closed-source AI, especially in light of Anthropic's recent paper suggesting that open models could harbor dangerous 'sleeper agents'.