Boosting Generative AI Growth with a Collaborative and Open Ecosystem

The rapid rise of generative AI has been a whirlwind for companies and consumers around the world. Fortunately, the initial mix of excitement and concern has matured into more productive discussions about building a comprehensive AI ecosystem that works for everyone.

Today, a growing number of apps and frameworks from companies like NVIDIA, Hugging Face, and Anyscale are laying the foundation for more widespread use of AI and machine learning. The potential benefits are enormous, with McKinsey estimating that generative AI could contribute up to $4.4 trillion annually to the global economy.

While every company has the chance to participate, creating and effectively using new AI and ML platforms requires dedicated collaboration and active engagement from enterprise leaders to support their customers on their AI journeys.

Here’s a look at how stakeholders can collaborate to develop an open and interconnected ecosystem while enhancing their AI and ML adoption.

Building New AI and ML Systems for Sustainable Growth

Despite the significant advancements in generative AI over the past year, we’re still in the early stages. Using AI and ML responsibly and safely can lead to better customer outcomes and help organizations achieve sustainable growth in these fast-changing times.

There are several crucial steps for CIOs and other stakeholders aiming to nurture an open AI ecosystem through new collaborative efforts:

Embracing Private AI

One major question today is how organizations can speed up their use of AI and ML in a responsible manner. By using private AI, companies can balance the business advantages of AI with their privacy and compliance requirements. VMware, for instance, has shown how companies can work within an open ecosystem to support customers’ adoption of private AI. Partnering with NVIDIA, VMware is delivering a turnkey solution that includes integrated infrastructure and AI tools that customers can purchase and deploy in a consistent hybrid cloud environment. Additionally, by integrating the IBM watsonx AI and data platform with VMware Private AI, customers can enable generative AI use cases. VMware is also working with Intel to help customers use their existing infrastructure and open-source software to simplify the creation and deployment of AI models.

Setting Universal AI Standards

Industries need to establish standards, ethics, and fair regulations. This year, UNESCO published its first “Recommendation on the Ethics of Artificial Intelligence,” setting a positive tone for enterprises. To create a more open and democratic generative AI ecosystem, stakeholders need to develop clear ethical principles that ensure fairness, privacy, accountability, intellectual property protection, and transparency of training data.

Encouraging Open Collaboration

Companies are quickly experimenting with AI foundation models and generative AI tools. By sharing data and coding techniques, enterprises can collectively achieve greater success. For example, our team fine-tuned Hugging Face’s SafeCoder on our GitLab code, and the model adapted well to VMware’s coding style. Collaborative efforts like these help build a broader consensus on what works.
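To make that concrete, here is a minimal sketch of what adapting an open code model to an internal codebase can look like with parameter-efficient fine-tuning. The base model name, file paths, adapter settings, and hyperparameters are illustrative assumptions, not VMware’s actual SafeCoder configuration.

```python
# Hypothetical sketch: adapting an open code model to an internal codebase
# with LoRA. Model name, paths, and hyperparameters are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "bigcode/starcoderbase-1b"   # assumed open base model
REPO_GLOB = "internal_repos/**/*.py"      # hypothetical export of internal source files

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # code tokenizers often lack a pad token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA freezes the base weights and trains small adapter matrices,
# which is what keeps task-specific tuning affordable.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection in GPTBigCode-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

# Treat each source file as a plain-text training document.
dataset = load_dataset("text", data_files={"train": REPO_GLOB})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="codestyle-adapter",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("codestyle-adapter")  # saves only the small adapter weights
```

Because only the adapter weights are trained and saved, a team can keep its proprietary code in-house while still benefiting from a shared, openly developed base model.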

Overcoming Challenges and Building Trust in AI

Generative AI tools, from large language models to computer vision models, can drive innovation and improve products and services. However, there are valid concerns that need to be addressed. Our team has identified three primary challenges enterprises must confront directly:

Developing Affordable AI Models

Training today’s generative AI models is expensive and complex, giving enterprises a strong incentive to create and operate customized AI models at a lower cost. Training a model like GPT-3, for instance, can cost millions of dollars. With AI costs rising, many CIOs are turning to open-source software to build smaller AI models optimized for specific tasks. New solutions offering greater flexibility and choice make AI innovation more accessible to mainstream businesses and entrepreneurs.

Democratizing AI Expertise

Building successful AI models requires specialized talent, which is in short supply. Many CEOs and CIOs say they need to adapt easily to new innovations without being locked into a single vendor or platform. That adaptability is difficult when only a small percentage of tech professionals specialize in AI models. To close this skills gap, we need to simplify the creation and training of AI models. Reference architectures can give enterprises that lack in-house expertise a blueprint for building AI solutions without starting from scratch.

Transitioning from Risk to Trust

Today’s generative AI models still pose significant risks that could harm customers, damage reputations, and impact revenue. These risks include security breaches, IP violations, and litigation. More organizations are collaborating to address concerns about privacy, data integrity, and bias. For example, the open-source community is developing innovative methods to help businesses train and deploy AI models responsibly. These collective efforts can build greater trust in the use of generative AI for business growth.

New rules and regulations around generative AI will evolve over time. It’s in the best interest of industry stakeholders to establish a stable foundation now.

Collaborating to Build a Stronger AI Ecosystem

Enterprises can better manage AI-driven disruptions by working collectively across the public and private sectors, including major corporations, agencies, small businesses, consumers, and employees. VMware partners closely with CIOs and other decision-makers to ensure their digital infrastructure is optimized for AI and ML integration. Greater collaboration around generative AI can help build the open, interconnected ecosystem that benefits everyone.