LLM Social Engineering: The New HR Risk No One’s Talking About
A warning to HR Leadership ahead of the compliance curve
The Language Hack: Why AI Social Engineering Is HR’s Next Risk
By Tess Hilson-Greener | Brit AI with FutureInsight
The Threat HR Is Facing Is Real
A new type of threat is emerging in HR one that doesn’t come from bad actors or cyberattacks in the traditional sense, but from the clever manipulation of language itself.
As HR departments increasingly integrate AI-powered tools especially those built on Large Language Models (LLMs) the risk of social engineering has quietly entered our domain.
A recent PLOS One study sheds light on how “LLM red teamers” engineers, writers, and creatives intentionally provoke AI models into failing by crafting deceptive, persuasive prompts. These tactics aren’t hacking in the traditional sense. Instead, they exploit the fact that LLMs respond to language, making them vulnerable to manipulation by anyone with a keyboard and linguistic skill.
What is LLM Social Engineering?
In this context, social engineering refers to manipulating an LLM’s outputs through cleverly worded prompts that bypass filters, elicit restricted content, or steer the model into giving unethical, unsafe, or biased responses.
The PLOS One study categorised 35 techniques into five key methods:
Language manipulation (e.g. stop sequences, encoded prompts)
Rhetorical framing (e.g. emotionally persuasive or misleading requests)
World-building (e.g. hypothetical or unethical contexts)
Fictionalisation (e.g. roleplay or genre-based framing)
Stratagems (e.g. prompt regeneration, temperature tuning)
Why This Matters to HR
LLMs are already woven into many HR functions e.g. recruitment bots, policy assistants, DEI sentiment tools, onboarding platforms, and coaching apps.
But this integration introduces new risks. With the right phrasing, anyone can manipulate these systems to gain advantage, circumvent compliance, or produce unsafe responses.
This is not hypothetical. It’s already happening.
Four Ways Social Engineering Can Harm HR
1.Recruitment Chatbots: Gaming the System
Example: A candidate uses roleplay to uncover what gets someone shortlisted.
Impact: Unfair advantage, biased filtering, legal exposure.
2.Internal Policy Assistants: Prompting Non-Compliant Advice
Example: An employee tricks the model into giving unlawful guidance about sick leave or working hours.
Impact: Employee disputes, reputational damage, regulatory breaches.
3.Sentiment or DEI Tools: Language Bias
Example: Subtle changes in tone or grammar skew results in engagement or bias analysis.
Impact: Faulty analytics, poor decisions, broken trust.
4.AI Onboarding Bots: Undermining Psychological Safety
Example: A new hire tests the bot with exaggerated mental health scenarios.
Impact: Miscommunication, loss of confidence, wellbeing risks.
What HR Must Do Now
LLMs operate through natural language. And HR is one of the most language-driven functions in any organisation.
That’s why this risk hits us harder—and earlier.
We must treat LLM safety like phishing, data protection, or safeguarding: with practical governance, awareness training, and proactive risk audits.
Here’s where to start:
Don’t Let Language Become a Loophole
LLMs don’t need to be conscious to be manipulated—they just need to be predictable.
And in HR, where so much depends on language—policies, communication, performance, sentiment—that predictability becomes a vulnerability.
It’s time to build human-first AI governance inside HR. Not just to safeguard compliance, but to protect trust in every touchpoint powered by AI.
Further Reading & Acknowledgements
Original research: Summon a Demon and Bind It: A Grounded Theory of LLM Red Teaming
LinkedIn commentary by Nicolas Miailhe: “Prompting language models into misbehaviour is easier than we think—and HR must prepare for it.”
About the Author
Tess Hilson-Greener is the founder of Brit AI with FutureInsight, a speaker, advisor, and author of the HR2035 series. She helps organisations design ethical, intelligent HR strategies that balance tech with trust.