Presented by Dell Technologies
Shadow AI is becoming a permanent fixture. The use of generative AI without IT’s approval is posing significant threats to businesses of all sizes. According to a Salesforce survey of 14,000 workers, 28% of employees currently use generative AI at work, and over half do so without their employer’s consent. These figures are likely to increase as more employees realize that generating content can significantly boost productivity.
The silver lining for IT leaders is that shadow AI offers a chance to update IT governance strategies. Balancing business objectives with risk mitigation is a challenging but necessary task for CIOs. IT governance needs to evolve to meet changing business requirements and risks. It is crucial to manage strategic alignment and risk while delivering value to the organization.
The Case for Governing Generative AI
Trying to control shadow AI is like attempting to put toothpaste back into the tube. It resembles shadow IT, but with a crucial difference. With shadow IT, employees signed up for SaaS services with corporate credit cards and invested time in learning how to use them, leaving at least a procurement trail and some friction. Shadow AI involves no credit card and no learning curve, just instant content creation, making it harder to detect and potentially riskier.
The primary concern is that employees might use generative AI services unsafely, leading to a governance nightmare. For instance, some staff may include sensitive corporate information such as product specifications or personal data in AI prompts. Because public AI services may retain prompts or use them to train models, this could inadvertently expose critical business secrets to competitors. Including details about patents or trade secrets might also expose the company to legal and copyright issues.
Although business leaders recognize the risks of generative AI, many organizations lack the mature policies and processes to govern its use. In fact, most companies are slow to establish safeguards: 69% of businesses surveyed by KPMG have either just begun evaluating AI risks or not begun at all.
Banning generative AI use is risky because it might lead to covert shadow AI activities, increasing the likelihood of data breaches, compliance violations, and reputational damage. With 44% of IT decision-makers indicating they are at early to mid-stages of their AI journeys, it’s essential for IT leaders to guide the rest of the business.
IT leaders should collaborate with legal, compliance, and risk departments to create a centralized generative AI strategy. They must determine acceptable AI usage, communicate these policies to employees, and develop training programs to encourage responsible use.
An AI Governance Playbook
To protect corporate data, organizations should align AI governance with their IT strategies. Here are some steps:
1. Implement AI governance policies: Establish guidelines for AI use, define approved systems, vet applications, and communicate the consequences of using unapproved AI.
2. Provide approved tools: Offer employees sanctioned AI applications to minimize the need for unauthorized tools.
3. Formalize training: Educate staff on responsible and ethical AI usage and the risks of inputting sensitive data into unsanctioned AI systems.
4. Audit and monitor use: Conduct regular audits and compliance checks to detect unauthorized AI usage.
5. Encourage transparency and reporting: Foster a culture where employees can report unauthorized AI use comfortably, facilitating quick responses and minimizing damage.
6. Communicate constantly: Keep AI policies and guidelines updated and ensure employees are informed about any changes.
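To make step 4 concrete, auditing can start with something as simple as scanning egress or proxy logs for traffic to known generative AI services. The sketch below is illustrative only: the domain list, log schema, and function name are assumptions, not an approved blocklist or a production monitoring tool.

```python
import csv
from collections import Counter

# Illustrative set of generative AI service domains. In practice, the
# governance team would maintain this list as part of the AI policy.
AI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def audit_proxy_log(path):
    """Count requests to known generative AI domains, per user.

    Assumes a CSV proxy log with 'user' and 'domain' columns
    (a hypothetical schema for this sketch).
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits
```

Output like this doesn't prove policy violations, but it tells IT where to follow up with training or approved alternatives rather than punishment, which supports the transparency goal in step 5.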
Your Generative AI Insurance Policy
Good governance acts as insurance: it’s better to have it and not need it than to need it and not have it. Generative AI, like other emerging technologies, needs regular oversight. Its ease of adoption makes it harder to control, so guiding employee use is crucial to avoid risky behaviors.
As you update your governance model for generative AI, start by organizing your data. Identify sensitive and proprietary information that should be managed within controlled AI systems, ideally on-premises.
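One way to operationalize that data-classification step is a pre-submission check that flags sensitive patterns before a prompt ever leaves the organization. This is a minimal sketch under stated assumptions: the regex patterns and function name are hypothetical examples, not a substitute for a real data loss prevention program.

```python
import re

# Illustrative patterns for data that should stay inside controlled
# systems; a real classification program would go well beyond regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(
        r"(?i)\b(confidential|trade secret|patent pending)\b"
    ),
}

def flag_sensitive(prompt):
    """Return the sensitive-data categories detected in a prompt."""
    return sorted(
        name for name, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(prompt)
    )
```

A check like this can sit in front of any sanctioned AI tool: prompts that trip a flag get routed to an on-premises model or blocked with an explanation, reinforcing the training message rather than relying on it alone.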
Generative AI is a new frontier in the AI ecosystem. Trusted partners can help navigate the learning curve. Dell Technologies, for example, is building business cases with virtual assistants to help customers start their AI journeys safely within their data centers. Leveraging open-source large language models (LLMs), you can deploy your AI systems securely, protecting corporate data.
Ultimately, integrating AI with your data governance practices may be the best approach.
Learn more at dell.com/ai.