Verizon is leveraging generative AI to improve customer support for its 100 million+ phone customers and is bolstering its responsible AI team to manage risks.
Michael Raj, a vice president at Verizon, explained that the company is taking several measures to implement this initiative. These include requiring data scientists to register AI models with a central data team for security reviews and closely monitoring the language models used to reduce bias and prevent offensive language.
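The registration workflow described above can be pictured as a central registry where a model is unusable until a security review signs off. The sketch below is purely illustrative: the class and method names (`ModelRegistry`, `register`, `approve`, `is_approved`) are hypothetical stand-ins, not Verizon's actual tooling.

```python
# Illustrative sketch of a central model registry: data scientists
# register a model, and it only becomes usable after the security
# team approves it. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    owner: str
    status: str = "pending_review"  # pending_review -> approved

@dataclass
class ModelRegistry:
    _models: dict = field(default_factory=dict)

    def register(self, name: str, owner: str) -> ModelRecord:
        """Data scientists submit their model for review."""
        record = ModelRecord(name=name, owner=owner)
        self._models[name] = record
        return record

    def approve(self, name: str) -> None:
        """Called by the security team after the review passes."""
        self._models[name].status = "approved"

    def is_approved(self, name: str) -> bool:
        """Gate that production systems check before loading a model."""
        record = self._models.get(name)
        return record is not None and record.status == "approved"

registry = ModelRegistry()
registry.register("support-summarizer-v1", owner="cs-data-science")
print(registry.is_approved("support-summarizer-v1"))  # False until reviewed
registry.approve("support-summarizer-v1")
print(registry.is_approved("support-summarizer-v1"))  # True
```

The key design point is that approval is recorded centrally rather than left to individual teams, which is what makes an audit trail possible.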
Raj discussed these efforts during the VentureBeat AI Impact event in New York, highlighting the challenges of auditing generative AI, which is still a developing field. He and others noted that companies need to speed up their efforts, as regulators have not yet provided detailed guidelines. Well-publicized mistakes by AI customer support agents at large companies have underscored the need for more reliable systems.
The technology is advancing quickly, but current regulations are broad, leaving companies to fill in the details. Justin Greenberger from UiPath likened this situation to the “Wild West,” suggesting that companies need to establish clear rules and policies for using generative AI. While audits are crucial, many companies lack the resources to perform them effectively.
A report from Accenture showed that 96% of organizations support some level of AI regulation, but only 2% have fully integrated responsible AI practices.
Verizon aims to be a leader in applied AI, focusing on providing intelligent assistants to support its frontline employees. These AI assistants help manage customer interactions by providing instant, personalized information and handling routine queries, allowing human agents to address more complex issues.
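The triage pattern described above, where an assistant handles routine queries and escalates the rest to a human agent, can be sketched with a toy router. The keyword rule below is deliberately simplistic and hypothetical; a real system would use intent classification, not string matching.

```python
# Toy sketch of assistant/human triage: routine topics are answered
# by the AI assistant, everything else is escalated to a human agent.
# The topic list and routing rule are illustrative only.
ROUTINE_TOPICS = {"balance", "data usage", "bill due date", "plan details"}

def route_query(query: str) -> str:
    """Return 'assistant' for routine queries, 'human' otherwise."""
    text = query.lower()
    if any(topic in text for topic in ROUTINE_TOPICS):
        return "assistant"
    return "human"

print(route_query("What is my bill due date?"))              # assistant
print(route_query("I was double-charged and want a refund"))  # human
```

The point of the pattern is the default: anything the assistant cannot confidently classify as routine falls through to a person, which keeps complex or sensitive issues in human hands.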
Verizon is also using AI to enhance customer experience across its network and predict customer churn. The company has consolidated its AI governance into a single organization, which it is scaling up to set standards for privacy and respectful language. This unit collaborates closely with Verizon’s Chief Information Security Officer and procurement executives.
Verizon’s approach to managing AI involves making datasets available to developers to ensure they use approved models. Justin Greenberger from UiPath predicts that more companies will adopt this model registration approach, akin to how the pharmaceutical industry regulates drugs. He also suggested that companies should frequently evaluate their risk profiles due to rapid technological changes.
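The enforcement side of this approach can be pictured as a gate on dataset access: data is released only to models on an approved list. The sketch below is a hypothetical illustration of that pattern; the approved set, function name, and return value are invented for the example.

```python
# Hedged sketch of dataset gating: data access is refused unless the
# requesting model has passed central review. Names are illustrative.
APPROVED_MODELS = {"support-summarizer-v1", "churn-predictor-v2"}

def fetch_dataset(dataset_name: str, model_name: str) -> str:
    """Release a dataset handle only for centrally approved models."""
    if model_name not in APPROVED_MODELS:
        raise PermissionError(
            f"Model '{model_name}' is not approved; "
            "request a security review from the central data team."
        )
    return f"handle-to-{dataset_name}"  # stand-in for real data access
```

Gating the data rather than the model code is what makes the policy enforceable: a model that skips registration simply cannot obtain the datasets it needs.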
Centralized AI teams like Verizon’s are becoming common among sophisticated companies. These AI governance groups are essential for working with third-party language model suppliers, each of which offers models with different capabilities.
The inherent unpredictability of generative AI makes it challenging to legislate auditing processes. Rebecca Qian from Patronus AI emphasized the need for regulations addressing failures related to safety, bias, and other issues specific to different industries. For example, in transportation and healthcare, AI failures can have life-or-death consequences, whereas the risks are lower in e-commerce.
Transparency in AI models remains a significant challenge. Traditional rule-based AI could be understood by inspecting its code and decision logic, but generative models are probabilistic and far harder to trace. Most companies struggle with even the basics of AI auditing, with only about 5% having completed pilot projects on bias and responsible AI.
As AI evolves rapidly, Verizon’s commitment to responsible AI serves as a benchmark for the industry, highlighting the need for governance, transparency, and ethical standards.