Exclusive Insight: IBM’s Innovative Strategies for Securing Generative AI

Exclusive Insight: IBM's Innovative Strategies for Securing Generative AI

Stay updated with our daily and weekly newsletters for the latest AI news and exclusive content on industry-leading advancements. Sign up for more information.

As more organizations turn to generative AI, security becomes a major concern. IBM has introduced a new security framework to help users tackle the unique risks of generative AI. The IBM Framework for Securing Generative AI is designed to protect AI workflows throughout their lifecycle, from data collection to deployment. This framework offers guidance on common security threats and suggests effective defensive measures. Over the past year, IBM has expanded its generative AI offerings with the watsonX portfolio, which includes various models and governance capabilities.

IBM’s Emerging Security Technology Program Director, Ryan Dougherty, highlighted the need for organizations to focus on identifying likely attacks and implementing top defensive strategies to secure their generative AI projects effectively.

Generative AI security involves both familiar and novel challenges. While some risks are similar to those faced by other types of workloads, others are unique. IBM’s approach centers on three main principles: securing the data, the model, and the usage. Ensuring secure infrastructure and AI governance is crucial throughout the entire process.

Sridhar Muppidi, IBM Fellow and CTO at IBM Security, emphasized that core data security practices like access control and infrastructure security are essential in generative AI just as they are in other IT areas. Unique risks in generative AI include data poisoning, where false data corrupts a dataset, and issues related to bias, data diversity, data drift, and data privacy. Prompt injection, where a user tries to alter a model’s output maliciously, is another emerging risk requiring new control measures.

The IBM Framework for Securing Generative AI provides guidelines and tool recommendations for protecting AI workflows. This landscape includes new security categories like Machine Learning Detection and Response (MLDR), AI Security Posture Management (AISPM), and Machine Learning Security Operation (MLSecOps). MLDR focuses on scanning models for risks, while AISPM ensures proper configurations and best practices, similar to Cloud Security Posture Management (CSPM). MLSecOps covers the entire lifecycle, integrating security from design to deployment.

MLSecOps combines development and security, ensuring that generative AI initiatives are secure from end to end.