Wells Fargo’s AI-Driven Assistant by Google Set to Reach 100 Million Interactions Each Year

Wells Fargo’s Chief Information Officer, Chintan Mehta, shared insights on the bank’s deployment of generative AI applications. The bank’s virtual assistant app, Fargo, has managed 20 million interactions since its launch in March. Mehta expects this number to grow to 100 million interactions annually as the bank enhances the app’s capabilities.

Wells Fargo’s progress with AI is noteworthy because it contrasts with many large companies that remain stuck in the proof-of-concept stage with generative AI. Despite operating under stringent financial-industry privacy regulations, the bank is moving quickly. It has put 4,000 employees through Stanford’s Human-Centered AI program, and Mehta said numerous generative AI projects are already in production, mainly aimed at optimizing back-office tasks.

Mehta spoke at VentureBeat’s AI Impact Tour event about what it takes to put an AI governance blueprint in place, particularly for generative AI applications that rely on large language models (LLMs) to produce intelligent responses. Wells Fargo is a top-three U.S. bank with $1.7 trillion in assets, and its multiple LLM deployments run on its “Tachyon” platform. Fargo assists customers with everyday banking through text and voice, averaging 2.7 interactions per session, and handles tasks such as bill payments, money transfers, and transaction lookups. Built on Google Dialogflow and initially powered by Google’s PaLM 2 LLM, Fargo now incorporates multiple LLMs, each handling different tasks in its flow.
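Wells Fargo has not published Fargo’s internals, but the pattern Mehta describes (a conversational front end that hands different tasks to different LLMs) can be sketched in outline. In the rough Python sketch below, the model names, the intent-to-model mapping, and the `handle_turn` helper are illustrative assumptions, not the bank’s actual implementation.

```python
# Illustrative sketch only: a conversational front end (Dialogflow-style intent
# detection) dispatching banking tasks to different LLM backends.
# Model names and the TASK_MODEL mapping are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class LLMBackend:
    name: str

    def complete(self, prompt: str) -> str:
        # Placeholder for a real model call (PaLM 2, Llama 2, etc.).
        return f"[{self.name}] response to: {prompt}"


# Hypothetical mapping from detected intent to the model best suited to it.
TASK_MODEL = {
    "bill_payment": LLMBackend("hosted-llm-small"),    # short, structured replies
    "transfer": LLMBackend("hosted-llm-small"),
    "transaction_qa": LLMBackend("hosted-llm-large"),  # open-ended explanations
}


def handle_turn(intent: str, user_text: str) -> str:
    """Route one conversational turn to the LLM registered for that intent."""
    backend = TASK_MODEL.get(intent, LLMBackend("fallback-llm"))
    return backend.complete(user_text)


if __name__ == "__main__":
    print(handle_turn("transaction_qa", "Why was I charged twice at the grocery store?"))
```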

Another app, LifeSync, launched recently and quickly gained a million monthly active users by helping customers set and plan financial goals. Additionally, Wells Fargo uses open-source LLMs, such as Meta’s Llama 2, for some internal applications. Because the weights are available, open-source models allow more tuning and tighter control over model behavior than hosted APIs, which is useful for specialized needs.
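The appeal of open-weight models is that they can be run, inspected, and fine-tuned in-house. As a rough illustration of what that looks like in practice (not a description of Wells Fargo’s setup), loading Llama 2 with the Hugging Face transformers library goes roughly as follows; the model ID is gated behind Meta’s license, and the prompt and generation settings are placeholder choices.

```python
# Rough illustration of running an open-weight model such as Llama 2 locally,
# which is what makes in-house fine-tuning and tighter control possible.
# Not Wells Fargo's configuration; model ID and settings are examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated model; requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads the weights across available GPUs (needs `accelerate`).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the key terms of this internal policy document: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```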

The Tachyon platform rests on two premises: that no single AI model or cloud provider will dominate, and that moving data between stores is costly and error-prone. It is built to be flexible, accommodating new and larger models efficiently, and uses techniques such as model and tensor sharding to make better use of computational resources. The platform has given Wells Fargo an edge in getting to production, although Mehta anticipates competitors will eventually replicate it.
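Tensor sharding splits a single large weight matrix across several devices so that each holds and computes only a slice of it. The short NumPy sketch below illustrates the idea for a column-sharded linear layer; it is a conceptual example, not Tachyon code.

```python
# Conceptual illustration of column-wise tensor sharding for a linear layer.
# Each "device" holds only a slice of the weight matrix and computes a partial
# output; concatenating the partial outputs reproduces the full result.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 512))        # batch of activations
W = rng.normal(size=(512, 1024))     # full weight matrix (too big for one device, in spirit)

num_shards = 4
shards = np.split(W, num_shards, axis=1)            # each device keeps 1/4 of the columns

partial_outputs = [x @ shard for shard in shards]   # computed independently per device
y_sharded = np.concatenate(partial_outputs, axis=1)

assert np.allclose(y_sharded, x @ W)  # sharded result matches the unsharded computation
```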

Looking ahead, multimodal LLMs that handle images, videos, and text are expected to be significant. Mehta provided an example where a commerce app could process a user’s photo of a cruise ship and guide them through the booking process. While current LLMs excel in text, there’s a focus on improving ‘input multimodality’ to understand user intent with minimal textual input.
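A common way to approximate input multimodality today is to turn the image into text first and then let a text-only LLM reason about the user’s intent. The sketch below uses an off-the-shelf image-captioning pipeline from Hugging Face for that first step; the photo filename, the cruise-booking framing, and the downstream prompt are illustrative assumptions, not details of Wells Fargo’s or Google’s systems.

```python
# Illustrative two-step approximation of input multimodality: caption the image,
# then let a text LLM infer the user's intent from the caption. Model choice,
# the image path, and the downstream prompt are assumptions for this example.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

caption = captioner("cruise_ship_photo.jpg")[0]["generated_text"]
# e.g. "a large white cruise ship docked in a harbor"

intent_prompt = (
    "A commerce assistant received a photo described as: "
    f"'{caption}'. The user typed nothing else. "
    "What task do they most likely want help with, and what is the first step?"
)
# intent_prompt would then be sent to a text LLM to start, say, a booking flow.
print(intent_prompt)
```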

On AI governance, Mehta emphasized the necessity of documenting each application thoroughly. While many governance challenges have been addressed, cybersecurity and fraud remain concerns. Regulatory changes are critical, though they lag behind technological advances. This regulatory gap impacts Wells Fargo’s operations and forces the bank to invest in additional engineering efforts to prepare for unexpected regulatory demands.

Finally, the bank is investing in explainable AI, researching ways to understand the reasoning behind its AI models’ decisions.
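Mehta did not specify which techniques the bank is researching, but a common post-hoc approach is to attribute a model’s prediction to its input features. The generic example below uses SHAP on a toy tabular model trained on synthetic data; it illustrates the idea only and is not Wells Fargo’s work.

```python
# Generic post-hoc explainability example (not Wells Fargo's internal research):
# SHAP values attribute a model's prediction to individual input features.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                   # synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # label driven mostly by feature 0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])      # per-feature contributions for 5 samples
print(shap_values)
```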