Last week, an OpenAI PR representative emailed me to announce that the company has formed a new “Collective Alignment” team, focused on “prototyping processes” for incorporating public input into guiding AI model behavior. The aim is democratic AI governance, building on the work of the ten recipients of OpenAI’s Democratic Inputs to AI grant program.
I couldn’t help but chuckle. The cynical part of me found it amusing that OpenAI, with its grand vision of “creating safe AGI for all humanity,” is also busy selling APIs and GPT services while dealing with copyright issues. Now it aims to tackle one of the toughest challenges in human history: crowdsourcing a democratic consensus.
Considering the current state of American democracy and fears about AI systems spreading disinformation, it’s hard to imagine public opinion being applied to AI rules effectively, especially by a company like OpenAI, a leader in commercial AI.
Despite my skepticism, I found the idea intriguing. People at OpenAI are working full-time on making AI more democratically guided by humans, which is a hopeful and important goal. But is this more than a PR move from a company under regulatory scrutiny?
OpenAI researcher admits collective alignment could be a ‘moonshot’
To learn more, I had a Zoom call with Tyna Eloundou, an OpenAI researcher focused on the societal impacts of technology, and Teddy Lee, a product manager at OpenAI. The team is actively looking to add more members and will work closely with OpenAI’s “Human Data” team, which collects human input on the company’s AI models.
I asked Eloundou about the challenges of developing democratic processes for AI rules. According to an OpenAI blog post from May 2023, “democratic processes” involve a broadly representative group exchanging opinions, engaging in discussions, and deciding on outcomes transparently.
Eloundou acknowledged that many see it as a “moonshot.” However, she emphasized that as a society, we have always dealt with the complexities of democracy. People decide the parameters and whether the rules make sense.
Lee highlighted the difficulty of integrating democracy into AI systems, noting the various directions this effort could take. He mentioned that the grant program was designed to explore what others are doing in this space and identify potential blind spots.
10 teams designed, built, and tested ideas using democratic methods
According to a recent OpenAI blog post, the Democratic Inputs to AI grant program awarded $100,000 each to 10 teams chosen from nearly 1,000 applicants. These teams designed, built, and tested ideas for using democratic methods to govern AI systems. The challenges they faced included recruiting diverse participants, producing coherent outputs that represent various viewpoints, and ensuring transparency.
Each team approached these challenges differently, using methods like video deliberation interfaces, crowdsourced audits, mathematical representations, and mapping beliefs to dimensions for fine-tuning AI behavior.
Not surprisingly, they encountered roadblocks. Public opinion can shift quickly, reaching diverse participants is challenging, and finding consensus among polarized groups is tough.
But OpenAI’s Collective Alignment team remains undeterred. It has advisors, including Hélène Landemore of Yale, and is consulting social scientists involved in citizens’ assemblies, groups of people selected by lottery to deliberate on public issues.
Giving democratic processes in AI ‘our best shot’
Lee explained that the program’s starting point was acknowledging their own uncertainties. The grantees, from fields like journalism, medicine, law, and social science, brought enthusiasm and expertise to the projects, which Lee found both exciting and humbling.
Is the Collective Alignment team’s goal achievable? Lee compared it to democracy itself — a continual effort that evolves as people’s views change and interactions with AI models develop.
Eloundou agreed, emphasizing their commitment to trying their best.
Whether a PR stunt or not, at a time when democratic processes are fragile, any effort to enhance them in AI decision-making deserves recognition. So, I say to OpenAI: Hit me with your best shot.