
Google has reported receiving more than 250 complaints globally that its AI software was used to create deepfake terrorism content, according to Australia’s eSafety Commission. The Alphabet-owned company also received 86 user reports flagging suspected AI-generated child exploitation material, the commission said in its latest report.
The disclosures cover the period from April 2023 to February 2024 and were made under Australian regulations requiring tech companies to report on their harm-prevention efforts.
The commission described Google’s disclosure as an unprecedented insight into how AI tools are being misused, and said it underscored the need for strong safeguards. While Google used automated detection tools to find and remove AI-generated child abuse material, it did not apply the same system to terrorist or violent extremist content, according to the regulator.
Other platforms, including X (formerly Twitter) and Telegram, have been fined by Australian authorities for what the regulator deemed shortcomings in their reporting on online harm. Both companies plan to challenge the penalties.