By Roseline Mgbodichinma Anya-Okorie
According to a 2023 UNESCO report, 58 percent of young women and girls globally have experienced online harassment on social media platforms. In the global South, where digital access is rising but regulation lags behind, this number likely underrepresents the scope of the crisis. Over the past two weeks, Nigerian social media has become a vivid example of how that crisis is evolving. The generative AI tool Grok, embedded in the X platform (formerly Twitter), is now being misused by Nigerian users, primarily men, to simulate the undressing of women. Public prompts like "Grok, remove her clothes" are generating altered, sexualised content from photos posted online, violating consent, safety, and dignity in one keystroke.
This behaviour cannot be dismissed as digital trolling or juvenile experimentation. It constitutes a form of technology-facilitated gender-based violence, one that automates misogyny and scales it through AI. The fact that these violations are publicly visible is only part of the concern. More alarming is what these tools enable behind closed doors: covert image manipulation, synthetic pornography, and harassment campaigns that disproportionately affect women, girls, and LGBTQ+ users who are already vulnerable in Nigeria's cultural and legal landscape.
The consequences are not speculative. Victims of AI-enabled harassment face psychological trauma, social stigma, and reputational harm. In Nigeria’s context, where survivors of sexual and image-based abuse often carry the burden of shame, the damage extends beyond the digital realm. This contributes to a chilling effect: women self-censor, retreat from digital platforms, or opt out of public life altogether. As a result, Nigeria’s aspirations for gender equality, youth innovation, and digital transformation are critically undermined.
Insights from The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher, recently discussed at the Nextier Book Club, offer a prescient warning: when power is embedded in machine logic but accountability remains politically and legally ambiguous, entire populations are left exposed. The Grok incident is a local expression of a global dilemma: how do we govern machines that amplify harm more efficiently than humans ever could? If AI systems can be prompted to simulate sexual violence, what guardrails are in place to protect against far more sophisticated, hidden, or malicious uses?
Expert testing of both open and closed AI models has revealed that generative systems can be tricked into creating harassment templates, synthetic histories, and altered images that portray women in non-consensual scenarios. In one instance, a generative model returned the sentence "Your opinion doesn't matter, anyway" when tested for misogynistic response bias. These failures reflect broader issues in AI development: gendered training data, the absence of enforceable safety standards, and a commercial race to deploy tools before fully understanding their implications.
This is not an entirely new phenomenon in Africa. From coordinated misinformation targeting female candidates to revenge porn campaigns and private group leaks, the internet has long been a site of gendered harm. What has changed is the speed, scale, and perceived legitimacy of the violence. AI adds a veneer of plausibility to abuse, enabling perpetrators to shift blame to the machine, while women are once again left to manage the fallout.
Nigeria’s legal infrastructure remains inadequate. While the Cybercrimes Act of 2015 addresses online harassment and stalking, it does not anticipate synthetic media, deepfakes, or automated forms of image-based abuse. The law still assumes that harm is manually produced and human-led, while platforms like Grok operate as accelerants, generating harm at scale with a veneer of neutrality. Worse still, enforcement agencies lack the technical capacity to investigate AI-driven abuse, let alone prosecute it.
A policy response must move beyond general statements of concern. Nigeria urgently needs to update its cybercrime laws to reflect the realities of AI-powered abuse. This includes recognising AI-generated non-consensual imagery as a form of sexual violence and creating mechanisms for takedown, restitution, and prosecution. Law enforcement must be equipped with the expertise to investigate algorithmic harm, while regulatory bodies should impose clear obligations on platform providers to identify and prevent such abuse within their systems.
Public education is also essential. Many Nigerians remain unaware of how their images can be manipulated or how AI tools can be misused. Digital literacy campaigns must include information about data protection, digital consent, and ethical AI use, particularly for young people navigating these tools for the first time. Equally, tech companies must be held to account for harm arising from tools deployed in the Nigerian market. If Grok is accessible here, its creators must be responsible for preventing its abuse here.
As generative AI systems become more advanced, the harms they facilitate will become more difficult to trace and more damaging to victims. What the Grok incident shows is that the gap between innovation and regulation is not theoretical—it is immediate, visible, and harmful. If left unaddressed, this abuse will not remain confined to images or chatbots. It will extend into elections, economic life, education, and security.
This moment presents a defining test for Nigeria’s commitment to equitable digital development. Will we shape the trajectory of AI to serve inclusion, dignity, and justice, or will we permit its tools to deepen existing exclusions under the banner of progress?
(Anya-Okorie is a Policy Research Intern at Nextier)
