Large Language Models Are More Persuasive Than Incentivized Human Persuaders

A new paper shows that LLMs are now more persuasive than humans when trying to influence others:

LLM persuaders achieved significantly higher compliance with their directional persuasion attempts than incentivized human persuaders, demonstrating superior persuasive capabilities in both truthful (toward correct answers) and deceptive (toward incorrect answers) contexts.

Panic aside, this is both good and bad news. The bad news is pretty obvious: we already live in a world of near-constant disinformation, and it's not just that truth is hard to distinguish from lies; now AI also has the upper hand in persuading us. But there are some interesting upsides, too: AI could nudge us to actually take our medications (a huge problem in healthcare), save for retirement (another massive problem in financial services), or work out more regularly…

But there is a very real danger:

Human persuasion is naturally constrained by effort and opportunity, but AI-generated persuasion can operate continuously and at scale, influencing vast audiences simultaneously.

Link to study.

Pascal Finette @radical