Policy monitor

Federal Trade Commission vs. AI: misleading marketing of AI and harm to consumers

Companies often make grand promises in their marketing campaigns, and the advertising for AI systems is no exception. Some companies have crossed the line by overstating the capabilities of the AI systems they offer. By making exaggerated promises or using AI to generate fake reviews, they mislead consumers. According to the Federal Trade Commission (FTC), that must come to an end. Over the past few months, the FTC has called several companies to account for making misleading or false statements.

What: Consumer protection – Enforcement authority

Impact score: 2

For whom: citizens, AI providers/importers/manufacturers

URL: https://www.ftc.gov/policy/adv...

Key takeaways for Flanders:

A similar trend could emerge in Europe. Unfortunately, the decisions of European consumer protection bodies are not published as consistently, so it remains unclear whether the same trends are visible here. What is clear, however, is that the use of AI must take place within existing legal frameworks, including those for consumer protection.

With a mission statement that reads "protecting the public from deceptive or unfair business practices and unfair methods of competition", the Federal Trade Commission (FTC) has now also turned its focus to providers and sellers of artificial intelligence (AI). The FTC is issuing press releases against companies that put AI systems on the market at a rapid pace. As 'AI' has become a buzzword for marketing technology, it is often accompanied by claims that sellers cannot substantiate. The FTC has deemed this unacceptable, urging companies to provide evidence for AI-related claims or risk being accused of unfair marketing practices.

For example, the smart-camera company IntelliVision claimed that its facial recognition technology was free from gender or racial bias due to its training on millions of images. However, the FTC revealed that the system had only been trained on 100,000 images, making it impossible to substantiate such claims. Similarly, Evolv, a company selling AI-powered security scanning products to stadiums, K-12 schools, and hospitals, faced FTC scrutiny after claiming its system provided superior protection compared to simple metal detectors. The company asserted that its AI could accurately screen for guns, knives, and other threats while ignoring harmless items. However, the FTC highlighted inaccuracies in these claims, including a failure to detect a seven-inch knife that was later used in a stabbing incident at a school. These are not the FTC's first actions against exaggerated AI marketing claims designed to attract buyers. It has previously challenged allegedly baseless claims about algorithmic solutions and the accuracy of genetic DNA testing reports (CRI Genetics, LLC), as well as claims that mobile apps could detect symptoms of melanoma, even in its early stages (MelApp and Mole Detective).

More recently, in January 2025, the FTC imposed a USD 1 million fine on accessiBe. The company had allegedly misrepresented the capabilities of its AI-driven web accessibility tool, falsely claiming that it made websites compliant with the Web Content Accessibility Guidelines (WCAG) for people with disabilities.

It is clear that the FTC aims to put an end to slogans and statements that consumers cannot trust. Samuel Levine, the current director of the FTC's Bureau of Consumer Protection, emphasised this point, stating, "Overstating a product's AI or other capabilities without adequate evidence is deceptive, and the FTC will act to stop it." Such claims, he noted, are false, misleading, or unsubstantiated and therefore violate the FTC Act. The FTC has also emphasised that it is increasingly taking note of AI's potential for harm, and of real-world instances of it – from incentivising commercial surveillance to enabling fraud and impersonation to perpetuating illegal discrimination.

The FTC does not just examine the advertising of AI; it also looks at the applications themselves. Examples include a tool that generates fake product reviews (see the press release here) and facial recognition technology that falsely tagged consumers in stores, particularly women and people of colour, as shoplifters (see the complaint here). The FTC has also targeted AI-related impersonation, fraud, child sexual abuse material, and non-consensual intimate imagery, finalising a rule to combat impersonation of government and businesses – conduct now made even easier by AI-generated deepfakes (see the press release here).

Interestingly, while similar practices are widespread on European soil, there does not appear to be an equivalent crackdown on false claims. This may be because European and Belgian consumer protection bodies do not publicise their activities the way the FTC promotes its enforcement efforts. If these institutions published their decisions more transparently, it could enhance consumer and company trust in their work. For now, however, the focus of consumer protection bodies does not yet seem to extend significantly to AI-related practices.