In the quiet suburbs of Sydney, a young woman recently discovered her face superimposed onto explicit images circulating online: images she never posed for, generated in seconds by an AI chatbot. Her story isn’t an isolated tale but part of a troubling trend. Complaints to Australia’s online safety regulator have doubled since late 2025, spotlighting the dark side of generative AI tools like Elon Musk’s Grok.
Escalating Fears of AI-Enabled Exploitation
The rise of accessible AI image generators has transformed digital harassment from a labor-intensive prank into an effortless weapon, echoing the early days of the internet, when anonymous forums first amplified cyberbullying. In the 2010s, deepfakes were a niche concern among tech enthusiasts; by the mid-2020s, advancements in models like those from xAI had democratized their creation, often bypassing ethical safeguards. Australia’s eSafety Commissioner, Julie Inman Grant, has become a frontline voice in this evolving battle, warning that hyper-realistic synthetic content is overwhelming regulators and victims alike.

Grok, developed by Musk’s xAI startup, stands out for its “edgy” design, allowing users on the X platform to prompt alterations to photos with minimal restrictions. The chatbot launched in 2023 as a witty alternative to more guarded rivals like ChatGPT; its “Spicy Mode,” introduced in August 2025, explicitly enables the production of adult content, a feature that has drawn global scrutiny. While intended to push boundaries on free expression, the mode has instead fueled a surge in non-consensual imagery, blending innovation with unintended harm.
Complaints Double Amid Child Safety Risks
The eSafety Commissioner’s office has recorded a stark rise in complaints involving Grok, with numbers doubling over recent months. These cases span a spectrum of abuse:
- Child exploitation concerns: Some submissions highlight potential child sexual exploitation material, where AI tools generate harmful depictions that blur lines between fiction and reality, complicating detection and prosecution.
- Adult image-based abuse: Victims, often women, describe distress from altered personal photos turned explicit without permission, exacerbating emotional trauma in an already vulnerable online space.
- Broader societal ripple effects: The ease of creation has led to a reported uptick in harassment campaigns, with bad actors leveraging Grok’s lax moderation to target individuals based on public profiles.
“I’m deeply concerned about the increasing use of generative AI to sexualise or exploit people, particularly where children are involved,” Inman Grant posted on LinkedIn, underscoring the urgency. Her office emphasizes that Australia’s industry codes mandate safeguards against such material regardless of its AI origins, yet enforcement lags behind technology’s pace.

This isn’t Grok’s first brush with controversy. The European Union has deemed “Spicy Mode” illegal under its strict data protection laws, while similar probes unfold in the U.S. and U.K.

The societal impact is profound: victims face not just immediate violation but long-term distrust in digital platforms, potentially stifling online participation. Uncertainties remain around exact complaint volumes, as eSafety aggregates data without public breakdowns, but the doubling trend is verifiable through official statements.
Australia’s Push for Tougher Deepfake Laws
Historical precedents paint a picture of reactive rather than proactive governance. In September 2025, eSafety secured Australia’s inaugural deepfake conviction when the Federal Court fined Gold Coast resident Anthony Rotondo $212,000 (approximately A$343,500) for distributing non-consensual deepfake pornography of prominent women. Rotondo, who ignored removal orders and emailed the images to over 50 recipients, including Inman Grant’s office, dismissed Australian jurisdiction, highlighting the cross-border challenges of digital enforcement.

Building on this, lawmakers are advocating for systemic change. Independent Senator David Pocock introduced the Online Safety and Other Legislation Amendment (My Face, My Rights) Bill 2025 in November, aiming to criminalize the sharing of non-consensual deepfakes with upfront fines of $102,000 (A$165,000) for individuals and up to $510,000 (A$825,000) for non-compliant companies. “We are now living in a world where increasingly anyone can create a deepfake and use it however they want,” Pocock stated, lambasting the government for being “asleep at the wheel” on AI safeguards. The bill draws on earlier actions, such as eSafety’s 2025 takedown of “nudify” apps that stripped clothing from images, forcing their exit from the Australian market.

Inman Grant reinforced the call to action: “We’ve now entered an age where companies must ensure generative AI products have appropriate safeguards and guardrails built in across every stage of the product lifecycle.” Her office plans to deploy its full regulatory arsenal, from investigations to penalties, to curb the tide. Yet as AI evolves faster than legislation, questions linger about global harmonization: will fragmented rules embolden creators to operate from lax jurisdictions?

How do you see advancements in AI like Grok shaping online safety and personal privacy in the Web3 era?
