
When the Image Isn’t Real: Addressing AI-Generated Explicit Photos

Published on: January 21, 2026

An ATIXA Tip of the Week by Mikiba Morehead, Ed.D., M.A. 

Sexually explicit photos are appearing more often in educational environments. Whether the images are real (authentic) or fake (synthetic, including AI-generated or “deepfake”), the harm they cause can be significant. When authentic or synthetic images are created and distributed without the consent of the individual depicted, they constitute Non-Consensual Intimate Imagery (NCII), a form of Image-Based Sexual Abuse (IBSA).  

The creation and distribution of synthetic NCII are especially insidious because of the realistic nature of the images in question. For individuals depicted in the images, it can be extremely difficult to convince others that the images are, in fact, fake. Therefore, institutional response should center on support, safety, and accountability, not authenticity.

School administrators and higher education practitioners should approach reports of synthetic NCII with a presumption of credibility, focusing on support and response rather than requiring the individual depicted to first prove that the images are fake or treating the report as an opportunity to discipline them. Asking the individual to provide comparison images or physical verification is invasive and unnecessary. Instead, administrators and practitioners can help the complainant preserve and document the digital evidence, including where the images appeared, how they were shared, and on which platforms. This information can support efforts to trace the origin of the content and requests to remove it from websites or social media platforms, preventing further harm. Because this conduct is often illegal, any forensic examination of an image’s authenticity can be left to law enforcement.

Institutions should take heightened precautions when sexually explicit images potentially depict minors, and they should ensure that all legal obligations, including mandatory reporting and evidence-handling requirements, are followed. Even when the images are likely AI-generated, synthetic NCII depicting minors may qualify as Child Sexual Abuse Material (CSAM). Therefore, school administrators and practitioners should work closely with their school resource officers (SROs), campus police departments, or local law enforcement agencies to develop appropriate protocols for handling CSAM. Administrators and practitioners should avoid viewing, requesting, or retaining these images; instead, they should rely on written descriptions of the images and coordinate with law enforcement when appropriate.

Finally, institutions should guide impacted individuals to takedown and advocacy resources that provide specialized, expert support.

A Framework for Evaluating NCII  

NCII is a form of sexual exploitation. When NCII occurs within a school’s educational program or activity, Title IX, as well as other institutional policies, may be implicated. In these situations, Title IX Coordinators will need to evaluate whether the distribution of NCII has resulted in a hostile environment for the individual depicted or potentially constitutes sexual exploitation under institutional policy. Title IX Coordinators will need to conduct an initial assessment that evaluates whether the conduct is severe, pervasive, and objectively offensive (SPOO), as well as indicators of consent and any elements of a separate sexual exploitation policy, if one is in place.

Severity is a measure of the egregiousness of an incident, either in isolation or in aggregate. Physical conduct is typically more likely to be severe; therefore, the totality of the circumstances should be considered when evaluating nonphysical conduct. Non-consensual or coerced conduct can indicate severity, as can images that are abusive, degrading, or humiliating. Severity may also depend on the level of realism, the audience reach, and the intent behind the conduct.

Passing off AI-generated images as real could increase the level of harm, especially when done with the intent to deceive others or purposefully humiliate or degrade the individual depicted. Therefore, Title IX Coordinators should assess impact, intent, and reach, not just the authenticity of the images. 

Pervasiveness is a measure of the widespread nature of conduct or its impact. In an age of instant sharing, a single message can go viral: one share can lead to widespread exposure. Whether an image was shown to one person or sent to them matters, because sending allows immediate re-sharing and causes the individual depicted to lose control of the image. Pervasiveness under Title IX can be assessed by the act itself or, potentially, by its impact.

When determining pervasiveness, it is important for Title IX Coordinators to avoid focusing solely on numbers or repetition and instead examine the following: 

  • How the image was shared (shown, sent, or posted) 
  • Who received it (trusted peer vs. public audience) 
  • Potential for redistribution (ability to forward or repost) 
  • Impact on the complainant (emotional, social, or reputational harm, or subsequent harassment or bullying) 

Even limited sharing may meet the threshold for a policy violation if the harm is substantial. Title IX Coordinators should ground pervasiveness analyses in risk and harm, as well as volume and reach. 

Several factors weigh into an evaluation of whether misconduct is objectively offensive. The relative ages and relationship of the complainant and respondent (if known), as well as the welcomeness, pervasiveness, and severity of the conduct, should be evaluated. Conduct that is sexual in nature, threatening, humiliating, intimidating, ridiculing, or abusive may be determined to be objectively offensive. Additionally, Title IX Coordinators will want to assess objective offensiveness from the perspective of a reasonable person who is similarly situated to (in the shoes of) the complainant. Conduct that is determined to be severe, pervasive, and objectively offensive may result in an effective denial of the individual’s ability to access the school or campus’s education program or activities.

Building a Safer Digital Future 

To develop a best-practice response to NCII incidents, institutions must evolve their policies and procedures. 

A. Policy Updates 

  • Revise definitions of Sexual Exploitation and nonconsensual image sharing to include synthetic or AI-generated imagery. 
  • Clarify that the nonconsensual distribution of such sexually explicit material violates policy regardless of authenticity, and that even consensual distribution can still be criminal conduct. 
  • Provide specific examples of prohibited AI-related conduct in policy language. 

B. Investigative Practices 

  • Prohibit invasive verification techniques such as live comparisons. 
  • Support complainants in documenting evidence safely. 
  • Use metadata, digital forensics, or reverse image searches to identify image sources. 

C. Supportive Measures 

  • Offer advocacy, counseling, and safety planning resources. 
  • Provide resources for content removal and online reporting. 
  • Maintain strict privacy protections throughout the process. 

D. Prevention and Education 

  • Integrate AI literacy and digital ethics into training. 
  • Educate students and employees on the consequences of creating or sharing synthetic sexual content. 
  • Encourage bystander intervention and community responsibility. 

Address Harm, Not Authenticity 

The creation of AI-generated sexually explicit images challenges traditional understandings of consent and exploitation. Although the technology lends these cases a sense of novelty, NCII is a form of gender-based violence. Therefore, the core principles of a best-practice response remain the same: assume complaints are made in good faith, preserve dignity, and pursue accountability. By addressing impact and harm rather than getting mired in debates about authenticity, institutions can effectively respond to and remedy both authentic and synthetic forms of NCII.

Through proactive policy revisions, updated investigation practices, and inclusive prevention and education efforts, schools and campuses can reinforce the message that NCII is prohibited and that technology should not be used to violate consent or as a tool for discrimination and harassment. 

ATIXA offers consulting to help you improve policies and procedures for responding to NCII incidents and to create a safer digital future for your community. Contact our team at inquiry@tngconsulting.com to learn more.