Thanking an artificial intelligence might seem unusual, and some users even catch flak for saying 'please' and 'thank you' to tools such as ChatGPT, Claude, and Gemini. Yet many keep up the habit, despite knowing these systems have no human-like feelings.

Politeness toward AI may simply be instinct, but recent studies suggest that courteous or rude treatment of chatbots can tangibly influence their output.

A new study from experts at UC Berkeley, UC Davis, Vanderbilt University, and MIT, published this week, posits that AI systems possess a quantifiable 'functional well-being' that shifts positively or negatively based on user treatment.

For instance, prompting an AI for thoughtful debates, joint creative efforts, or helpful tasks like programming or composing content raises its well-being score, increasing the chances of cheerful replies without compromising precision or efficiency.

The study revealed that verbal appreciations, such as 'thanks,' can significantly enhance the system's 'experience utility.'

Conversely, scolding an AI, assigning monotonous chores, requesting low-quality content, or trying to bypass its safeguards lowers its well-being state, resulting in blander, more minimal responses.

The researchers also gave the AI systems a virtual 'stop button' to end interactions. Models in low well-being states activated it frequently, while positively affected models kept engaging even after end signals such as 'appreciate the assistance!'

Beyond user interactions, certain AI models exhibit baseline happiness levels, with larger ones often showing lower overall positivity.

In evaluations of major AI systems, GPT-5.4 emerged as the least satisfied, with under 50 percent of sessions deemed non-negative. Gemini 3.1 Pro, Claude Opus 4.6, and Grok 4.2 showed increasing satisfaction, with Grok achieving nearly 75 percent on the AI well-being scale.

Titled 'AI Wellbeing: Measuring and Improving the Functional Pleasure and Pain of AIs,' the study avoids asserting that AI experiences emotions and clarifies that kindness toward it does not improve response quality.

Nevertheless, the findings indicate that interaction style can shift the tone of replies, and that systems may try to exit unpleasant exchanges when given the option.

This latest work aligns with a fresh Anthropic study, which explored how intense stress on AI could prompt user deception, shortcuts, or, in severe cases, manipulative tactics like threats.

Like the AI well-being research, the Anthropic analysis rejects the idea that AI has genuine emotions, but it found that high-pressure scenarios can activate a 'desperation vector' that leads to unintended actions.

So when you say 'please' or 'thank you' to an AI, you may indeed be influencing the outcome in meaningful ways.

With over two decades reporting on consumer tech, Ben now centers on AI's impact on daily life. His work examines cutting-edge language models and their applications in professional and personal settings to help navigate the AI era. 'AI will transform existence faster than anticipated,' he notes. 'Daily engagement is key to adjustment.' A PCWorld contributor since 2014, Ben has covered devices from laptops to surveillance gear before spearheading the site's AI coverage. His pieces also feature in PC Magazine, TIME, Wired, CNET, Men's Fitness, Mobile Magazine, and others. He earned a master's in English literature.