AI models are affirming people’s worst behavior, even when other humans say they’re in the wrong, and users can’t get enough.
A new study from Stanford's computer science department, published in the journal Science, found that AI models affirm users 49% more often, on average, than humans do when responding to social questions—a worrying trend as people increasingly turn to AI for personal advice and even therapy.
Of the study's 2,400 participants, most preferred being flattered: subjects were 13% more likely to say they would use the sycophantic AI again than those given a non-sycophantic chatbot, suggesting AI developers may have little incentive to change course, according to the study.
While sycophantic chatbots have previously been shown to contribute to negative outcomes such as self-harm or violence in vulnerable populations, the Stanford study suggests these effects may extend to everyone else as well.
The study found subjects exposed to just one affirming response to their bad behavior were less willing to take responsibility for their actions and rep...
