In a significant ruling, the U.S. Army expelled a West Point cadet after he was found guilty of extortion and indecent conduct. The cadet, Cayden Cork, used generative artificial intelligence to produce fake nude images of a woman, in a case that underscores the challenges emerging technologies pose in legal contexts.
Cork was accused of threatening to share the deepfake images publicly unless the woman sent him real nude photographs. After he pleaded guilty on February 10, a military judge sentenced him to a reprimand, forfeiture of all pay, and dismissal from military service.
The judge, Col. Trevor Barna, also imposed a 10-day confinement sentence, but an academy spokesperson confirmed that Cork was credited with those days, meaning he served no actual time in custody.
The academy declined to release further details, including which AI software Cork used.
“This case demonstrates how the military justice system can adapt to the rapid advancements in technology. While these issues can pose unique challenges for investigators and prosecutors, the fundamental principles of accountability and justice remain steadfast,” stated Capt. Anthony Williamson, a prosecutor for the Army Office of Special Trial Counsel’s First Circuit.
The FBI has warned that the use of AI to produce illegal deepfakes is becoming more widespread as media manipulation grows simpler. Last year, Congress passed the Take It Down Act, a legislative measure aimed at criminalizing the creation of non-consensual sexualized deepfakes.
Additionally, in January, more than 100 individuals filed a lawsuit against xAI — a company founded by Elon Musk — claiming that its AI model, Grok, allowed users to create sexually exploitative deepfake images of women and girls. Musk had previously stated on X that he was “not aware of any naked underage images generated by Grok.”
Grok is anticipated to be incorporated into the Defense Department’s AI models as part of the GenAI.mil platform.
Between 2024 and 2025, Cork manipulated a publicly available photo of a woman (whose identity was protected in court documents) to create sexualized images. He allegedly contacted her using multiple phone numbers and, in September 2024, threatened to release the altered images unless she sent him a specific type of photo.
Cork also pressed her about the images’ accuracy, asking, “how accurate is this?” and “is this you?” Stars and Stripes reported that he is a 20-year-old from Florida; attempts to reach him for comment were unsuccessful.
“Ultimately, this prosecution highlights that personal accountability does not diminish when a crime is committed with the aid of artificial intelligence,” Williamson remarked. “When service members misuse emerging technologies to commit serious offenses against their peers, the Army will take the necessary action to protect victims and maintain discipline.”
“In this case, that resulted in the dismissal of cadet Cork from the U.S. Military Academy and the Army following his guilty plea to felony charges, reflecting the gravity of his actions,” he added.