This week, the Federal Trade Commission (FTC) expanded its investigative reach over products and services involving artificial intelligence (AI). While AI offers numerous possibilities, its growing presence across various industries has prompted regulators to pay closer attention.
This move subjects AI practices to closer scrutiny and equips the FTC with a streamlined process for gathering information through civil investigative demands (CIDs). The FTC has used CIDs before to tackle issues like illegal robocalls, and in 2022 it secured federal court orders against VoIP providers XCast Labs and Deltracon for non-compliance with CIDs.
Samuel Levine, the director of the Bureau of Consumer Protection at the FTC, emphasized that CIDs are legally binding and failure to comply can lead to contempt charges. The recent actions underscore the importance of companies promptly providing all necessary documentation and data.
Given the FTC’s broader authority over AI, tech firms should pay attention. Staying prepared by organizing internal records related to AI claims, product development, and third-party oversight will help businesses respond quickly if scrutinized.
The FTC’s announcement highlighted the need for expedited fact-gathering about AI uses that affect consumer protection and fair competition. One key area of interest will be marketing claims: if a company promotes an AI solution’s capabilities, it must be able to substantiate those claims with solid evidence.
Records of model training data, validation studies, case studies showing real-world impact, and ongoing monitoring reports are examples of the evidence that may reinforce AI claims. Peer reviews, oversight of third-party data sources, and documentation of risk mitigation efforts can also boost credibility.
Addressing algorithmic fairness and mitigating biases will remain critical as AI becomes more integrated into decision-making processes. The FTC will look for businesses to proactively manage these issues throughout product development. Documentation of design processes, impact assessments, risk logs, oversight programs, and response protocols can serve as evidence of due diligence.
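An impact assessment of the kind described above often starts with a simple disparity check on model outcomes. The sketch below is illustrative only: the function names, group labels, and the 0.8 threshold (borrowed from the EEOC's informal "four-fifths" rule of thumb for selection rates) are assumptions for this example, not anything the FTC prescribes.

```python
from collections import defaultdict

def selection_rates(outcomes):
    """Compute the favorable-outcome rate per group.

    `outcomes` is a list of (group, selected) pairs, where `selected`
    is True when the model produced a favorable result for that person.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the "four-fifths" heuristic)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}
```

For example, if group A is approved 2 times out of 3 and group B only 1 time out of 3, B's rate is half of A's and falls below the four-fifths line, so `four_fifths_check` flags it. Logging such checks at each release is one concrete way to turn a fairness commitment into the documented due diligence the paragraph above describes.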
For organizations already using AI, compliance programs must show ongoing monitoring and a commitment to addressing new problems. While technical issues can arise, quick and transparent responses generally earn goodwill with regulators. Being proactive rather than reactive is beneficial when under scrutiny.
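Ongoing monitoring of the kind mentioned above can be sketched as a periodic comparison of current model metrics against recorded baselines. This is a minimal illustration under assumed names and thresholds; the metric keys and the 0.05 tolerance are hypothetical, and a real compliance program would choose metrics and alert rules appropriate to the product.

```python
def monitoring_alerts(baseline, current, tolerance=0.05):
    """Compare current model metrics against recorded baselines and
    return the metrics that degraded beyond the allowed tolerance."""
    alerts = {}
    for metric, base_value in baseline.items():
        value = current.get(metric)
        if value is not None and base_value - value > tolerance:
            alerts[metric] = {"baseline": base_value, "current": value}
    return alerts
```

Keeping the output of such checks, along with records of how each alert was investigated and resolved, is the sort of documentation that demonstrates the quick, transparent responsiveness regulators look for.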
The FTC’s new resolution also indicates that oversight is now expected to extend to third-party relationships. If an organization relies on third parties for data, model training, or other aspects of its AI systems, it will be expected to have access to information about those systems and activities.
Strong contractual requirements for transparency, claim verification, and controls are essential. Regular audits and thorough documentation of third-party due diligence help protect both the organization and its users, and reflect the reality that regulatory accountability now extends to technology partnerships.
In addition to expanding its investigatory powers, the FTC announced plans to launch a Voice Cloning Challenge to encourage the development of solutions that protect users from fraud or privacy breaches involving synthetic voices. Recognizing that voice impersonation can be abused for scams, the agency is seeking collaboration through this initiative to curb such risks.
Furthermore, during the U.S. Copyright Office’s study of generative AI, the FTC emphasized its role in consumer protection and competition, suggesting AI-generated content could facilitate deception or unfair practices. While this view has faced criticism for potentially overstepping into legal doctrines like fair use, it reflects the FTC’s proactive stance in adapting to new technological challenges. Overall, the agency aims to balance innovation with responsible oversight through a mix of advisory initiatives and enforcement tools.