Report Reveals Flaws in AI Governance Solutions

A recent report from the World Privacy Forum reviewed 18 AI governance tools used by governments and international organizations, revealing that over a third (38%) have “faulty fixes.” These tools and techniques, intended to evaluate AI systems for fairness and explainability, often lack proper quality assurance or use measurement methods unsuitable for their context.

Some of these tools were created or distributed by major companies such as Microsoft, IBM, and Google, which also build many of the AI systems being assessed. One example is IBM's AI Fairness 360 toolkit, which the US Government Accountability Office has praised for its guidance on ethical AI principles, but which has drawn criticism in academic circles over the research underpinning its Disparate Impact Remover algorithm.
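For readers unfamiliar with the tool, the sketch below shows roughly how the Disparate Impact Remover is invoked through IBM's open-source AIF360 Python package. The toy data, column names, and labels are illustrative assumptions, not drawn from the report, and running it may require installing the BlackBoxAuditing dependency alongside aif360.

```python
# Minimal sketch of AIF360's Disparate Impact Remover (illustrative).
# Assumes: pip install aif360 pandas; fit_transform may additionally
# require the BlackBoxAuditing package. The toy data is hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import DisparateImpactRemover

# Hypothetical hiring data with one protected attribute ("sex").
df = pd.DataFrame({
    "sex":   [0, 0, 1, 1, 0, 1],             # 0 = unprivileged, 1 = privileged
    "score": [0.2, 0.4, 0.6, 0.9, 0.3, 0.8],
    "hired": [0, 0, 1, 1, 0, 1],              # binary outcome label
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

# repair_level=1.0 fully transforms feature distributions so they are
# indistinguishable across groups; 0.0 leaves the data untouched.
remover = DisparateImpactRemover(repair_level=1.0, sensitive_attribute="sex")
repaired = remover.fit_transform(dataset)
```

Nothing in such an API tells a practitioner whether this kind of repair is statistically or legally appropriate for their setting, which is exactly the documentation gap the report describes.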

Pam Dixon, founder and executive director of the World Privacy Forum, highlighted that many AI governance tools today lack established quality assurance standards. An AI tool designed to remove bias might come without documentation or context-specific instructions, leading to inappropriate applications in various settings.

The report defines AI governance tools as methods to assess AI systems for inclusiveness, fairness, explainability, privacy, safety, and other trust-related issues. These tools include practical guides, self-assessment questionnaires, process frameworks, and software. While they might reassure regulators and the public, they can also create a false sense of security and potentially cause more problems.

Following the EU AI Act and President Biden’s AI Executive Order, it’s crucial to scrutinize how governments and organizations are implementing governance tools. Kate Kaye, deputy director of the World Privacy Forum, noted this as a chance to improve the AI governance ecosystem. While it’s early days, there’s potential to refine these tools and ensure they effectively implement AI policies and regulations like the EU AI Act.

Kaye provided an example of how AI governance efforts can misfire: AI tools used in countries such as Singapore and India have borrowed the four-fifths rule from US employment law and applied it in contexts well outside the hiring decisions it was designed for. This misuse strips away the rule's original nuance and underscores the need for context-appropriate application.
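For context, the four-fifths rule comes from the US EEOC's Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than four-fifths (80%) of the rate for the group with the highest rate is generally treated as evidence of adverse impact. A minimal sketch of that arithmetic, using hypothetical group names and rates:

```python
def four_fifths_flags(selection_rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 4/5 (80%) of the
    highest group's rate -- the EEOC four-fifths heuristic."""
    highest = max(selection_rates.values())
    return {group: rate / highest < 0.8
            for group, rate in selection_rates.items()}

# Hypothetical example: group B is selected at 30% vs. group A's 50%.
# 0.30 / 0.50 = 0.60 < 0.80, so group B is flagged for adverse impact.
print(four_fifths_flags({"A": 0.50, "B": 0.30}))
# -> {'A': False, 'B': True}
```

Applied to, say, a credit-scoring or content-moderation system in another jurisdiction, the 0.8 threshold carries none of the legal reasoning that justified it in US hiring, which is the nuance Kaye says gets stripped away.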

Dixon and Kaye stressed that governments and organizations face pressure to put AI legislation and governance in place quickly. However, embedding flawed methods into policy would only entrench and exacerbate existing problems.

Looking forward to 2024, Dixon and Kaye are optimistic about improving AI governance tools. Dixon mentioned the OECD’s willingness to collaborate on enhancing these tools as a positive sign. The National Institute of Standards and Technology (NIST) also aims to create a rigorous evaluation environment with thorough testing and evidence-based procedures. With dedicated effort, significant improvements in AI governance tools could emerge within six months.