A California appellate court recently affirmed a $4 million jury verdict in favor of a police captain after a sexually explicit, AI-generated image resembling her was circulated among her colleagues. In Washington state, a trooper filed suit alleging a supervisor used AI to create and distribute a deepfake video depicting him kissing a co-worker. Both cases made headlines, and both were filed under workplace law.
But Bradford Kelley, a shareholder at Littler Mendelson who focuses on AI and employment law, describes these cases in a recent brief. He says HR leaders risk missing the bigger picture if they treat this as a cybersecurity issue and move on.
Deepfakes aren’t just a cybersecurity threat
“It’s not just deepfakes,” Kelley told HR Executive in an interview. “If someone uses a generative AI tool to generate a song that shows they’re romantically interested in a colleague, that’s not necessarily a deepfake issue, but it’s definitely an issue where AI could be weaponized.”
The wave of AI policies HR teams drafted over recent years was largely focused on a different problem set, including protecting confidential data, managing IP risk and ensuring accuracy in AI-assisted work. Many may not have included ways to address an employee using a readily available AI tool to harass, humiliate or intimidate a co-worker.
The distinction matters because the barrier to entry for creating AI materials is now essentially zero. Producing a harassing song, a romantic story involving a real colleague, a fabricated conversation or a mocking image no longer requires technical skill or significant effort. That could alter the risk calculus for HR leaders in ways that haven’t been widely discussed.
EEOC and regulatory risks
As deepfake technology becomes increasingly available, workplace risks also ramp up. “The potential consequences span a wide array of legal domains, including employment discrimination, privacy law violations, intentional infliction of emotional distress and even criminal liability,” according to Littler’s brief.

The U.S. Equal Employment Opportunity Commission has already moved to address AI-generated harassment explicitly. Its enforcement guidance on workplace harassment identifies the sharing of “AI-generated and deepfake images and videos” as an example of conduct that can constitute unlawful harassment based on protected characteristics.
And the legal exposure extends beyond sexual content. Attorneys at Littler note that AI tools can be used to generate manipulated images targeting an employee’s race, disability, religion or national origin. That’s a Title VII problem, a potential Americans with Disabilities Act problem, and a hostile work environment claim regardless of whether anyone called it a deepfake.
New legislation is also moving quickly. The federal TAKE IT DOWN Act and Florida’s Brooke’s Law both mandate the removal of nonconsensual intimate AI-generated content within 48 hours, signaling that the legislative environment around this issue is tightening fast.
Beyond policy gaps, HR leaders need to think about what happens when a complaint lands on their desk. The standard investigation playbook was built for a world where authorship was relatively straightforward. AI complicates that. When a harasser can blame AI, HR now faces attribution questions that existing frameworks weren’t designed to handle.
Advice for HR leaders
Kelley and his colleagues at Littler recommend that employers begin treating AI-generated content with the same rigor as physical evidence. That’s a major shift from how most HR investigations operate today.
Update the policy language
Existing anti-harassment policies should explicitly prohibit the creation or distribution of AI-generated content that demeans or harasses employees based on protected characteristics. The language should be specific enough that employees can’t reasonably claim ambiguity.
Retool training
Standard harassment training doesn’t address this scenario. HR leaders should consider adding concrete examples of AI-facilitated harassment (the romantic song, the fabricated conversation, the altered image) so employees understand that using an AI tool isn’t an excuse.
Prepare the investigation infrastructure
HR teams and their legal counsel should think through now, before a complaint arrives, how they will handle digital evidence, assess credibility and document findings in cases where AI is involved.