At the Microsoft Ignite conference today, Microsoft introduced enhanced data security and compliance features in Microsoft Purview. These new capabilities aim to safeguard information used in generative AI systems like Copilot.
With these updates, Copilot users on Microsoft 365 can now control what data the AI assistant can access. The system will automatically classify sensitive data in responses and apply compliance controls to LLM usage.
Herain Oberoi, general manager of Microsoft data security, compliance, and privacy, and Rudra Mitra, corporate vice president at Microsoft, spoke with VentureBeat ahead of the announcement and walked through the thinking behind the new capabilities.
Data is the foundation on which AI systems are built, so securing it is inseparable from securing the AI itself. Purview aims to protect both, reflecting Microsoft’s stated commitment to responsible technology.
A new AI hub in Purview will let administrators monitor Copilot usage within the organization, showing which employees are using the AI and what risks that usage carries. Sensitive data will be blocked from being entered into Copilot based on user risk profiles, and AI outputs will inherit protection labels from their source data. Mitra emphasized how important it is for customers to have complete visibility across Microsoft’s Copilots.
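Microsoft has not published implementation details, but the behavior Mitra describes can be thought of as a simple policy: higher-risk users cannot feed sensitive content into the assistant, and generated output inherits the most restrictive label of the material it draws on. The sketch below is a hypothetical illustration of that logic only; the names, labels, and thresholds are invented and do not represent Purview's actual API.

```python
# Hypothetical sketch only -- not Microsoft Purview code.
# Models two behaviors described above: blocking sensitive input for
# high-risk users, and letting AI output inherit the most restrictive
# sensitivity label of its source documents.
from dataclasses import dataclass
from enum import IntEnum


class Label(IntEnum):
    PUBLIC = 0
    GENERAL = 1
    CONFIDENTIAL = 2
    HIGHLY_CONFIDENTIAL = 3


@dataclass
class Document:
    name: str
    label: Label


def allow_prompt(user_risk: str, prompt_label: Label) -> bool:
    """Block sensitive content from entering the assistant for high-risk users."""
    if user_risk == "high":
        return prompt_label <= Label.GENERAL
    return True


def label_for_response(sources: list[Document]) -> Label:
    """The response carries the most restrictive label among its sources."""
    return max((doc.label for doc in sources), default=Label.PUBLIC)


if __name__ == "__main__":
    sources = [Document("q3-forecast.xlsx", Label.CONFIDENTIAL),
               Document("team-wiki.docx", Label.GENERAL)]
    print(allow_prompt("high", Label.CONFIDENTIAL))  # False -> blocked
    print(label_for_response(sources).name)          # CONFIDENTIAL
```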
On the compliance front, Purview’s auditing, retention, and communication-monitoring policies will now apply to Copilot interactions. Microsoft plans to extend Purview’s protections beyond Copilot to AI built in-house and to third-party consumer apps such as ChatGPT.
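Microsoft has not detailed how those controls work under the hood, but the underlying pattern is familiar: each AI interaction becomes an auditable record that retention and monitoring policies can act on. The following sketch is an invented illustration of that pattern, not Purview's actual interface, and the retention window shown is an arbitrary example.

```python
# Hypothetical sketch only -- not the Purview audit or retention API.
# Treats each AI interaction as an auditable record subject to a
# retention policy, so compliance teams can query what was retained.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class InteractionRecord:
    user: str
    app: str            # e.g. "Copilot" or a third-party app like "ChatGPT"
    prompt: str
    timestamp: datetime


RETENTION = timedelta(days=365)  # invented example retention window


def is_expired(record: InteractionRecord, now: datetime) -> bool:
    """A record falls out of retention once it exceeds the policy window."""
    return now - record.timestamp > RETENTION


def audit_trail(records: list[InteractionRecord], user: str) -> list[InteractionRecord]:
    """Return the retained interactions for one user, oldest first."""
    now = datetime.now(timezone.utc)
    kept = [r for r in records if r.user == user and not is_expired(r, now)]
    return sorted(kept, key=lambda r: r.timestamp)
```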
As AI adoption grows, Microsoft aims to lead in responsible and ethical data use in enterprise AI systems. Strong data governance will be essential to maintain privacy and prevent misuse in this next phase of technology. Achieving responsible AI will require the entire tech industry’s commitment. Competitors like Google, Amazon, and IBM need to prioritize data ethics for AI to gain user trust and reach its full potential.
Enterprises desire both cutting-edge innovation and strong data protection. Companies that prioritize trust will lead the way into the AI-powered future.