Anthropic Acknowledges Recent Data Breach

It’s been a busy week for Anthropic, an AI startup known for its Claude family of large language models (LLMs) and chatbots. On Monday, January 22nd, the company discovered that a contractor accidentally sent a file containing non-sensitive customer information to a third party. This file included customer names and their open credit balances as of the end of 2023.

An Anthropic spokesperson clarified, “Our investigation shows this was an isolated incident caused by human error — not a breach of Anthropic systems. We have notified affected customers and provided them with the necessary guidance.”

The incident came to light just before the Federal Trade Commission (FTC) announced it was investigating Anthropic’s partnerships with Amazon and Google, as well as OpenAI’s partnership with Microsoft. The spokesperson emphasized that the leak is unrelated to the FTC probe and declined to comment on the inquiry itself.

In a recent post, Windows Report shared an email Anthropic sent to customers acknowledging the leak. The email informed customers that a contractor had mistakenly sent their account names and accounts receivable information as of December 31, 2023, to a third party. It assured them that the leaked information did not include sensitive personal data such as banking or payment details, and reiterated that this was an isolated error with no evidence of malicious behavior.

Anthropic urged customers to be vigilant about suspicious communications claiming to be from Anthropic, such as payment requests, emailed links, or unusual requests for credentials or passwords. Customers were advised to ignore such contacts and to follow their internal controls for payments and invoices.

The company expressed regret for the incident and offered support to affected customers.

Regarding the leak, Anthropic said only a “subset” of customers was affected but did not specify how many. The incident underscores growing concern that human error can lead to data breaches, especially as more enterprises rely on third-party LLM providers.

Anthropic has seen rapid growth since its founding in 2021. Valued at $18.4 billion, the company raised $750 million last year and is set to receive up to $2 billion from Google and $4 billion from Amazon. It is also in talks for another $750 million from Menlo Ventures. However, its relationships with AWS and Google have caught the attention of the FTC, which is now scrutinizing these collaborations.

The FTC is seeking detailed information on Anthropic’s partnerships with Amazon and Google, and OpenAI’s partnership with Microsoft. It wants to understand the agreements, their competitive impact, and any other government inquiries into these relationships.

The agency is specifically concerned with whether these partnerships allow dominant firms to gain an unfair competitive edge.

Anthropic has strong ties with AWS and Google. Amazon is investing up to $4 billion in Anthropic and will hold a minority stake in the company. AWS serves as Anthropic’s primary cloud provider and supplies the chips Anthropic uses to train and deploy its models, and Anthropic has committed to making its future models available to AWS customers.

Anthropic’s partnership with Google involves using Google Cloud security services, databases, and data warehouses, as well as deploying Google’s Cloud TPU v5e chips to serve its Claude LLMs. Google Cloud CEO Thomas Kurian has praised the partnership as a way to bring AI to more people safely and securely, underlining the companies’ shared commitment to bold and responsible AI development.