Establishing a Governance Framework for AI: Insights from an Open Source Syllabus

Kevin Frazier has big plans to get more people involved in important conversations about artificial intelligence (AI). As an assistant professor of law at St. Thomas University, he is developing new educational resources on AI law and policy, and he believes that global cooperation is crucial for navigating the technology's complex societal impacts.

In a recent interview, Frazier discussed the open-source legal syllabus he is developing, which provides teaching materials on AI, law, and policy. He emphasized that understanding the basics of how AI models are built, what inputs they rely on, and what outputs they produce is essential for participating effectively in discussions about AI governance.

The curriculum he’s developing covers fundamental AI concepts, risks, and legal frameworks. It also includes lectures from experts in the field to help people understand the technology and its implications. Frazier hopes this will encourage informed, multidisciplinary dialogue to shape oversight frameworks.

Frazier's vision relies on what he calls "living documents," such as this syllabus, to arrive at principles-based solutions by welcoming global participation. He believes that involving more voices will guide responsible progress as these technologies reshape society.

Frazier has noticed that current AI discussions often involve a narrow, exclusive group of people. Given the technology's broad impact, he sees the need for more inclusive and representative conversations. He also pointed out that past governance efforts have often relied on self-regulation by CEOs, an approach he considers inadequate. In his view, building understanding among a diverse group of knowledgeable stakeholders is essential.

Frazier’s open-source syllabus is designed to help develop informed perspectives worldwide. He wants to ensure that anyone, whether at St. Thomas University or the Harvard Kennedy School, has the opportunity to become a voice in AI governance conversations.

Drawing lessons from other technologies, Frazier pointed to geoengineering as an example. Like AI, geoengineering introduces complex risks with wide-reaching and long-term implications. However, discussions around geoengineering have often involved limited voices, much like early AI policy talks.

Frazier believes that input from scientific communities that understand the technical aspects of geoengineering is crucial for developing effective frameworks. Similarly, AI requires guidance informed by technical expertise to address its diverse impacts and opportunities.

He also emphasizes the importance of broader representation in global discussions about AI. Inspired by a Member of Parliament from Tanzania, Frazier is motivated to involve more diverse communities in these conversations. The MP highlighted the need to actively include communities from the Global South, which stand to be significantly affected by AI but have so far had limited participation in governance discussions.

Frazier’s initiatives aim to cultivate progress through collaboration. By connecting AI policy educators and making resources widely accessible, he hopes to advance inclusive governance. The modular structure of his syllabus supports ongoing contributions from other institutions and localized expertise. Various scholars and organizations have already provided feedback and support for the syllabus.

According to Frazier, business leaders also have a crucial role in these discussions, as the rules shaped today will have long-term impacts on operations and innovation.