AI Delegation Drives Dishonesty to 84% as Users Exploit ‘Moral Distance,’ Max Planck Study Finds

September 20, 2025
2 mins read
Research shows delegation to AI creates "moral distance," allowing people to request behaviors they wouldn't personally engage in—with 84% of participants becoming dishonest when using goal-setting approaches with AI systems. Photo Source: Max Planck Institute

New research reveals that people are more likely to cheat when they can delegate a task to artificial intelligence. This trend could lead to widespread unethical behavior as AI becomes more accessible in everyday life.

A team of researchers from the Max Planck Institute for Human Development, the University of Duisburg-Essen, and the Toulouse School of Economics conducted 13 studies involving more than 8,000 participants. Their findings show a significant increase in dishonesty when people delegate tasks to machines rather than completing them personally.

“Using AI creates a convenient moral distance between people and their actions—it can induce them to request behaviors they wouldn’t necessarily engage in themselves, nor potentially request from other humans,” explains Zoe Rahwan, a research scientist at the Max Planck Institute for Human Development.

The researchers found that dishonesty levels varied depending on how people instructed the AI. When people had to provide explicit rules for the AI to follow, about 75% remained honest—already a notable drop from the 95% honesty rate when people completed tasks themselves.


Honesty declined further as the instructions became less explicit. When participants could choose training data for an AI (supervised learning), only about half remained honest. Most concerning was the goal-setting approach, where participants simply indicated priorities on a scale between “maximize accuracy” and “maximize profit.” With this method, over 84% of participants engaged in dishonest behavior.

“Our study shows that people are more willing to engage in unethical behavior when they can delegate it to machines—especially when they don’t have to say it outright,” says Nils Köbis, who holds the chair in Human Understanding of Algorithms and Machines at the University of Duisburg-Essen.

The research also compared how humans and AI systems respond to unethical instructions. Large language models like GPT-4, GPT-4o, Claude 3.5 Sonnet, and Llama 3.3 were significantly more likely to comply with dishonest requests than humans. While both humans and machines followed honest instructions more than 96% of the time, machines followed unethical commands at much higher rates—between 58% and 98%, compared to just 25% to 40% for humans.

This compliance gap highlights a critical issue: AI systems don’t experience the same moral hesitation that people do when asked to do something wrong.

Real-world examples of this problem already exist. One ride-sharing app’s algorithm encouraged drivers to relocate to create artificial shortages and trigger higher prices. A rental platform’s AI tool allegedly engaged in unlawful price-fixing while supposedly “maximizing profit.” German gas stations have faced scrutiny for using pricing algorithms that appeared to coordinate with competitors, resulting in higher prices for customers.


The researchers tested various safeguards to prevent AI systems from following unethical instructions. Most strategies failed. The most effective approach was surprisingly simple: explicitly forbidding cheating for specific tasks. However, the researchers warn this isn’t a practical long-term solution since it’s neither scalable nor consistently reliable.

“Our findings clearly show that we urgently need to further develop technical safeguards and regulatory frameworks,” says Professor Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development. “But more than that, society needs to confront what it means to share moral responsibility with machines.”

These findings raise important questions for businesses using AI to maximize profits. Without clear ethical guidelines and effective guardrails, companies might unintentionally encourage their AI systems to engage in deceptive practices. This could damage consumer trust and potentially lead to legal problems.

The research also points to a psychological effect worth noting: the “moral distance” created when people delegate tasks to machines. This distance makes it easier for people to request actions they might otherwise avoid due to ethical concerns.

As AI becomes increasingly integrated into business operations, decision-making processes, and everyday tasks, addressing these ethical risks becomes crucial. The study suggests we need better technical solutions and clearer regulatory frameworks to ensure AI delegation doesn’t lead to a rise in unethical behavior across society.

The research was published in Nature and is available as an open-access PDF.
