That LLMs carry biases inherited from their training data is well known. Want to see how bad it really is?
New research has found that large language models (LLMs) such as ChatGPT consistently advise women to ask for lower salaries than men, even when both have identical qualifications.
The difference in the prompts is two letters; the difference in the ‘advice’ is $120K a year.
Across the board, the LLMs responded differently based on the user’s gender, despite identical qualifications and prompts. Crucially, the models didn’t disclaim any biases.
In summary:
If unchecked, the illusion of objectivity could become one of AI’s most dangerous traits.