Exclusive: Databricks Introduces Advanced Tools for Crafting Premium RAG Applications

Today, Databricks announced new tools for its Data Intelligence Platform, aimed at helping customers build, deploy, and maintain high-quality large language model (LLM) apps for various business use cases.

Starting today, these tools are available in public preview. They tackle key challenges in developing production-grade retrieval-augmented generation (RAG) apps: serving relevant, real-time business data from various sources; combining that data with the right model for each application; and monitoring deployed apps for issues like toxicity.

Businesses are eager to develop and deploy RAG apps, but they often struggle to create solutions that consistently provide accurate and high-quality responses while preventing undesirable outcomes. The new tools are designed to address this problem.

RAG enhances the accuracy and reliability of LLMs by grounding their responses in specific data sources. The technique is crucial for businesses that need models to stay up to date and tailored to specific topics. However, RAG involves multiple layers of work, including data collection, preparation, model integration, prompt engineering, and monitoring. When those layers are stitched together in fragmented, ad hoc pipelines, the result is often an underperforming RAG app.
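To make those layers concrete, here is a minimal, illustrative retrieve-augment-generate loop. The vector_store and llm objects are hypothetical stand-ins for whatever embedding index and model endpoint a team has chosen; this is a sketch of the general pattern, not a specific Databricks API.

```python
# Minimal RAG sketch. `vector_store` and `llm` are hypothetical objects
# standing in for a real vector index and LLM endpoint.

def answer(question: str, vector_store, llm, k: int = 4) -> str:
    # 1. Retrieval: find the k documents most relevant to the question.
    docs = vector_store.similarity_search(question, k=k)

    # 2. Augmentation: splice the retrieved business data into the prompt.
    context = "\n\n".join(doc.text for doc in docs)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

    # 3. Generation: the model answers, grounded in the retrieved context.
    return llm.generate(prompt)
```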

Databricks’ new RAG tools aim to simplify this by allowing teams to quickly prototype and deploy high-quality RAG apps. The new vector search and feature-serving capabilities eliminate the need for complex pipelines to load data into a custom serving layer. Structured and unstructured data from Delta tables are automatically pulled and synced with the LLM app, ensuring access to the most recent and relevant business information.
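As a rough sketch of how that sync might look with the databricks-vectorsearch Python client (the endpoint, table, and column names below are illustrative assumptions):

```python
from databricks.vector_search.client import VectorSearchClient

client = VectorSearchClient()

# Sync a Delta table into a vector index so the freshest rows stay
# retrievable by the RAG app. All names here are illustrative.
index = client.create_delta_sync_index(
    endpoint_name="rag_vs_endpoint",
    index_name="main.docs.support_articles_index",
    source_table_name="main.docs.support_articles",  # Delta source table
    pipeline_type="TRIGGERED",  # sync on demand; continuous sync also exists
    primary_key="article_id",
    embedding_source_column="body",  # text column to embed
    embedding_model_endpoint_name="databricks-bge-large-en",
)

# Retrieve the passages most similar to a user question.
results = index.similarity_search(
    query_text="How do I rotate my API keys?",
    columns=["article_id", "body"],
    num_results=4,
)
```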

Unity Catalog automatically tracks the lineage between offline and online copies of datasets, making it easier to debug data quality issues. It also enforces consistent access control settings, enhancing data security.

With the unified AI Playground and MLflow evaluation, developers can access models from different providers, including Azure OpenAI Service, Amazon Bedrock, Anthropic, and open-source options like Llama 2 and MPT. They can evaluate these models on metrics such as toxicity, latency, and token count, and then deploy their project on whichever model delivers the best quality at the lowest cost.
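As a sketch, an evaluation like that can be run with mlflow.evaluate, which computes built-in text metrics such as toxicity and token count when its optional dependencies (the Hugging Face evaluate package and torch) are installed; the endpoint URI and prompts below are assumptions:

```python
import mlflow
import pandas as pd

# Illustrative evaluation set; real teams would use domain-specific prompts.
eval_df = pd.DataFrame(
    {
        "inputs": [
            "Summarize our Q3 churn drivers.",
            "Draft a renewal email for an at-risk account.",
        ]
    }
)

with mlflow.start_run():
    # "endpoints:/..." points mlflow.evaluate at a served model endpoint;
    # the endpoint name here is an illustrative assumption.
    results = mlflow.evaluate(
        model="endpoints:/databricks-llama-2-70b-chat",
        data=eval_df,
        model_type="text",  # enables built-in text metrics such as toxicity
    )

print(results.metrics)  # e.g. toxicity ratio and token-count statistics
```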

Databricks is also releasing Foundation Model APIs, a fully managed set of LLMs available on a pay-per-token basis. This offers cost and flexibility benefits with enhanced data security.
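Querying one of these pay-per-token endpoints might look like the sketch below, which uses MLflow's deployments client; the endpoint name is an assumption and will vary by workspace:

```python
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")

# Call a pay-per-token chat endpoint; there is no model serving
# infrastructure to manage. The endpoint name is illustrative.
response = client.predict(
    endpoint="databricks-llama-2-70b-chat",
    inputs={
        "messages": [
            {"role": "user", "content": "List three risks of stale RAG data."}
        ],
        "max_tokens": 256,
    },
)

print(response["choices"][0]["message"]["content"])
```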

Once a RAG app is deployed, monitoring its performance in production is crucial. Databricks' fully managed Lakehouse Monitoring capability can automatically scan application responses for toxicity, hallucinations, and other unsafe content. The results feed into dashboards, alerting systems, and related data pipelines, allowing teams to address problems quickly before they spread.
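Lakehouse Monitoring handles this as a managed service, so the check below is purely conceptual: a standalone sketch of a toxicity scan built on the open-source Hugging Face evaluate library, not the Databricks API.

```python
import evaluate

# Conceptual toxicity scan over logged app responses. Lakehouse Monitoring
# performs this kind of check automatically; this standalone version only
# illustrates the idea.
toxicity = evaluate.load("toxicity")  # downloads a small toxicity classifier

responses = [
    "Here is the renewal summary you asked for.",
    "You are an idiot for asking that.",
]

scores = toxicity.compute(predictions=responses)["toxicity"]
for text, score in zip(responses, scores):
    if score > 0.5:  # illustrative alert threshold
        print(f"ALERT toxicity={score:.2f}: {text!r}")
```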

Early adopters, such as RV supplier Lippert and EQT Corporation, are already testing and using these tools. Internally, Databricks teams have built several RAG apps with them, including a Slack bot for account executives and a browser plug-in for sales and business development reps.

Given the growing demand for LLM apps tailored to specific topics, Databricks plans to invest heavily in its suite of RAG tools, ensuring customers can deploy high-quality LLM apps based on their data at scale. The company is committed to ongoing research and plans to announce more innovations in the future.
