The VB AI Impact Tour: What is the Future of Human Involvement in Auditing?

The big question keeping executives up at night is how organizations can audit their AI models for bias, performance, and ethical standards. At VentureBeat’s recent VB AI Impact Tour in New York City, experts discussed methodologies, best practices, and case studies on this topic. Key speakers included Michael Raj from Verizon Communications, Rebecca Qian from Patronus AI, and Matt Turck from FirstMark. The event concluded with a talk between VB CEO Matt Marshall and Justin Greenberger from UiPath about what successful AI audits look like and where to begin.

Greenberger highlighted the evolving nature of risk assessment. Previously, risks were evaluated annually, but now they need to be reviewed almost monthly. He stressed the importance of understanding and controlling risks through updated frameworks like the one from the Institute of Internal Auditors (IIA), which includes monitoring key performance indicators (KPIs), ensuring data source transparency, and putting accountability measures in place. He also drew parallels to GDPR, which was initially seen as over-regulation but has since become a foundational part of data security for many companies. What’s notable about generative AI, he said, is that markets around the world are evolving at the same pace, leveling the competitive field as organizations weigh their risk tolerance against potential impacts.

Even though enterprise-wide transformation is still in its early stages, many companies are already experimenting with AI through pilot projects. Persistent challenges include finding subject matter experts with the contextual understanding and critical thinking skills needed to define use cases and shape their implementation. Another hurdle is employee education and engagement: with technologies like deepfakes developing rapidly, it’s still unclear what employees need to know. Additionally, organizations are integrating generative AI into existing workflows rather than completely overhauling their processes, which complicates audits, especially in sectors like healthcare where the use of private data is a concern.

Greenberger also discussed the evolving role of humans in the AI landscape. For now, humans remain involved as risks and controls develop alongside the technology. For instance, a user might query a system, with generative AI then performing the calculations and surfacing the data needed for the job. However, this human role could eventually shrink. He emphasized that while humans still play a part in decision-making today, that involvement may decrease over time as audit controls and spot checks become more reliable. Instead, humans may shift toward the creative and emotional aspects of work, areas that AI is less likely to take over.