It shouldn’t come as a surprise that LLMs are incredibly good at persuading people of pretty much anything, which makes them a natural fit for nefarious use cases such as spear phishing(*).
A recent study from Fred Heiding et al. shows that AI-powered spear phishing attacks yielded a >50% click-through rate (which, to be frank, is astronomical and scary as hell…).
TL;DR: We ran a human-subject study on whether language models can successfully spear-phish people. We built AI agents from GPT-4o and Claude 3.5 Sonnet that search the web for publicly available information on a target and use it to craft highly personalized phishing messages. Our AI-generated phishing emails achieved a click-through rate of above 50%.
(*) Spear phishing is a targeted attempt to steal sensitive information, such as account credentials or financial information, from a specific individual or organization. Attackers typically gather information about their targets to craft tailored emails or messages, increasing the likelihood of success.