Meta AI app exposes user conversations in major privacy lapse

The new Meta AI app has turned out to be a privacy disaster. Users are unknowingly sharing their conversations with the AI publicly: audio, text, and images become visible to everyone. The culprit is the “Share” button, which publishes a conversation without clearly indicating its privacy level.
Hundreds of sensitive posts have already surfaced in the app, from confessions of attempted tax evasion to details of court cases and home addresses. Security expert Rachel Tobac reported finding personal data exposed in the public feed. Most troubling of all, Meta does not tell users where or how exactly their content is published.
The company’s response has been silence: Meta representatives did not respond to journalists’ inquiries. This is especially concerning given that many people access Meta AI through Instagram, where accounts can be public by default, and so, by extension, can their AI queries.
The feature for publishing AI conversations looks shortsighted. Precedents such as the 2006 AOL search data leak have already shown how such experiments end. Meta appears to have ignored that lesson, building a product that resembles a social network more than an AI assistant.
Despite billions of dollars in investment, the app has been installed only 6.5 million times since its launch at the end of April. For a company of Meta’s scale, this is a failed debut, one that jeopardizes not only user data but also the brand’s reputation.