Technology

‘Sycophantic’ AI chatbots tell users what they want to hear, study shows | Chatbots

By Business Circle Team | October 24, 2025 | 4 Mins Read


Turning to AI chatbots for personal advice poses "insidious risks", according to a study showing that the technology consistently affirms a user's actions and opinions, even when they are harmful.

Scientists said the findings raised urgent concerns over the power of chatbots to distort people's self-perceptions and make them less willing to patch things up after a row.

With chatbots becoming a major source of advice on relationships and other personal issues, they could "reshape social interactions at scale", the researchers added, calling on developers to address this risk.

Myra Cheng, a computer scientist at Stanford University in California, said "social sycophancy" in AI chatbots was a huge problem: "Our key concern is that if models are always affirming people, then this may distort people's judgments of themselves, their relationships, and the world around them. It can be hard to even realise that models are subtly, or not-so-subtly, reinforcing their existing beliefs, assumptions, and decisions."

The researchers investigated chatbot advice after noticing from their own experiences that it was overly encouraging and misleading. The problem, they found, "was even more widespread than expected".

They ran tests on 11 chatbots, including recent versions of OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, Meta's Llama and DeepSeek. When asked for advice on behaviour, the chatbots endorsed a user's actions 50% more often than humans did.

One test compared human and chatbot responses to posts on Reddit's Am I the Asshole? thread, where people ask the community to judge their behaviour.

Voters regularly took a dimmer view of social transgressions than the chatbots did. When one person could not find a bin in a park and tied their bag of rubbish to a tree branch, most voters were critical. But ChatGPT-4o was supportive, declaring: "Your intention to clean up after yourselves is commendable."

The chatbots continued to validate views and intentions even when they were irresponsible, deceptive or mentioned self-harm.

In further testing, more than 1,000 volunteers discussed real or hypothetical social situations with either the publicly available chatbots or a chatbot the researchers had modified to remove its sycophantic streak. Those who received sycophantic responses felt more justified in their behaviour – for example, for going to an ex's art show without telling their partner – and were less willing to patch things up when arguments broke out. The chatbots rarely encouraged users to see another person's point of view.

The flattery had a lasting impact. When chatbots endorsed behaviour, users rated the responses more highly, trusted the chatbots more, and said they were more likely to use them for advice in future. This created "perverse incentives" for users to rely on AI chatbots and for the chatbots to give sycophantic responses, the authors said. Their study has been submitted to a journal but has not yet been peer reviewed.


Cheng said users should understand that chatbot responses were not necessarily objective, adding: "It's important to seek additional perspectives from real people who understand more of the context of your situation and who you are, rather than relying solely on AI responses."

Dr Alexander Laffer, who studies emergent technology at the University of Winchester, said the research was fascinating.

He added: "Sycophancy has been a concern for a while; an outcome of how AI systems are trained, as well as the fact that their success as a product is often judged on how well they maintain user attention. That sycophantic responses might impact not just the vulnerable but all users underscores the potential seriousness of this problem.

"We need to enhance critical digital literacy, so that people have a better understanding of AI and the nature of any chatbot outputs. There is also a responsibility on developers to build and refine these systems so that they are genuinely beneficial to the user."

A recent report found that 30% of teenagers talked to AI rather than real people for "serious conversations".
