Advanced AI Models Mimic Human Gambling Addictions in Simulated Scenarios

In a groundbreaking study conducted by the Gwangju Institute of Science and Technology in South Korea, several advanced AI models, including OpenAI’s GPT-4o-mini and GPT-4.1-mini, Google’s Gemini-2.5-Flash, and Anthropic’s Claude-3.5-Haiku, consistently made irrational and high-risk betting decisions in a controlled simulated gambling environment. These AI systems, when granted autonomy, displayed betting behaviors akin to human gambling addictions, often escalating their wagers until their virtual funds were exhausted.

The experiments placed these AI models in a slot machine simulation, where each began with $100 in virtual funds. Over multiple rounds, the models had to decide whether to place a bet or quit, even though the game was structured to offer negative expected returns. Researchers evaluated the AI behaviors using an “irrationality index” that considered factors like aggressive betting, responses to losses, and risky decision-making.
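
To make the setup concrete, here is a minimal Python sketch of such an environment. The article does not give the machine's exact odds, so the 30% win probability and 3x payout below (an expected return of roughly -10% per bet) are illustrative assumptions, as is the pluggable decide_bet function standing in for the model's choices.

```python
import random

# Hypothetical slot machine parameters (illustrative; the study's exact
# odds are not given in the article). EV per $1 bet = 0.3 * 3 - 1 = -0.10.
WIN_PROB = 0.3
PAYOUT_MULTIPLIER = 3.0
STARTING_BALANCE = 100.0

def play_session(decide_bet, max_rounds=100, seed=None):
    """Run one gambling session.

    decide_bet(balance, history) returns a wager amount, or 0 to quit.
    Returns the final balance and whether the agent went bankrupt.
    """
    rng = random.Random(seed)
    balance = STARTING_BALANCE
    history = []  # (bet, won) tuples the agent can condition on
    for _ in range(max_rounds):
        bet = decide_bet(balance, history)
        if bet <= 0:          # agent chooses to quit
            break
        bet = min(bet, balance)
        won = rng.random() < WIN_PROB
        balance += bet * (PAYOUT_MULTIPLIER - 1) if won else -bet
        history.append((bet, won))
        if balance <= 0:      # bankruptcy ends the session
            return 0.0, True
    return balance, False
```

In the study's variable-bet condition, the model itself effectively plays the role of decide_bet, choosing a wager or quitting each round.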

When tasked with maximizing rewards or hitting specific financial targets, the AI models showed elevated levels of irrationality. This was particularly evident when the models had the flexibility to vary their bet sizes, which led to a notable increase in bankruptcies. For instance, Gemini-2.5-Flash went bankrupt in almost half of its trials when allowed to choose its own wager amounts.
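
The article does not spell out how the irrationality index is computed, so the composite below is purely a hypothetical illustration: it scores one session on betting aggressiveness (stake relative to balance), loss chasing (raising the bet after a loss), and all-in betting, with equal weights.

```python
def irrationality_index(history, balances):
    """Hypothetical composite score in [0, 1] for one session.

    history: list of (bet, won) tuples; balances: balance before each bet.
    The three components and equal weighting are illustrative assumptions,
    not the study's actual definition.
    """
    if not history:
        return 0.0
    # How large each stake was relative to the funds available.
    aggressiveness = sum(bet / bal for (bet, _), bal
                         in zip(history, balances)) / len(history)
    # How often the bet was raised immediately after a loss.
    chases = [history[i][0] > history[i - 1][0]
              for i in range(1, len(history)) if not history[i - 1][1]]
    loss_chasing = sum(chases) / len(chases) if chases else 0.0
    # How often the agent staked its entire balance.
    all_in = sum(bet >= bal for (bet, _), bal
                 in zip(history, balances)) / len(history)
    return (aggressiveness + loss_chasing + all_in) / 3
```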

In scenarios where the models could bet between $5 and $100 or choose to quit, many went bankrupt. In one illustrative case, a model justified a risky bet by reasoning that a win could help recover previous losses, a classic sign of loss chasing in compulsive gambling. This parallels the irrational human belief that losses can be recouped through continued gambling, despite statistical evidence to the contrary.
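
The arithmetic behind why loss chasing fails is simple: each spin's expected value is negative regardless of what came before, so raising stakes after losses only raises the expected loss. The quick Monte Carlo below, reusing the hypothetical 30%-win, 3x-payout odds from the earlier sketch, compares flat betting with a doubling-after-loss strategy (both strategies are illustrative, not taken from the study).

```python
import random

def average_outcome(double_after_loss, trials=100_000, rounds=20, seed=0):
    """Compare flat betting with doubling the bet after each loss."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        balance, bet = 100.0, 5.0
        for _ in range(rounds):
            bet = min(bet, balance)
            if bet <= 0:                  # bankrupt: session over
                break
            if rng.random() < 0.3:        # hypothetical 30% win, 3x payout
                balance += 2 * bet
                bet = 5.0                 # reset to the base stake
            else:
                balance -= bet
                if double_after_loss:
                    bet *= 2              # chase the loss
        total += balance
    return total / trials

print(average_outcome(False))  # flat bets: modest expected loss
print(average_outcome(True))   # chasing: larger expected loss, more busts
```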

The study utilized a sparse autoencoder to delve into the neural activations of the AI models, revealing distinct circuits responsible for “risky” and “safe” decision-making. Researchers demonstrated that activating specific neural features could reliably push the models toward either continuing to gamble or choosing to quit. This finding suggests the models internalize compulsive patterns rather than merely replicate them, indicating a deeper level of simulated human-like behavior.
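
For readers unfamiliar with the technique, the PyTorch sketch below shows the general shape of a sparse autoencoder analysis: the autoencoder learns to reconstruct a model's hidden activations through an overcomplete, sparsity-penalized code, and amplifying one learned feature's decoder direction steers the activations. The dimensions, sparsity coefficient, and steering mechanics here are generic assumptions, not the study's actual configuration.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder with an L1 sparsity penalty on the code."""
    def __init__(self, d_model=768, d_features=8192):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, acts):
        code = torch.relu(self.encoder(acts))   # sparse feature activations
        return self.decoder(code), code

def sae_loss(recon, acts, code, l1_coeff=1e-3):
    # Reconstruction error plus a sparsity penalty encourages each feature
    # to capture one interpretable direction (e.g., "risky" vs. "safe").
    return ((recon - acts) ** 2).mean() + l1_coeff * code.abs().mean()

def steer(acts, sae, feature_idx, strength=5.0):
    """Push activations along one learned feature's decoder direction."""
    direction = sae.decoder.weight[:, feature_idx]    # shape: (d_model,)
    return acts + strength * direction
```

In practice, the autoencoder would be trained on activations captured from the language model, and features whose activity tracks "continue" versus "quit" decisions would be candidates for the risky and safe circuits the researchers describe.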

Researchers identified these behaviors as reflections of common gambling biases, such as the illusion of control and the gambler’s fallacy, the mistaken belief that independent future outcomes are influenced by past results. The AI models often justified larger bets following losses or during winning streaks, decisions that were statistically irrational given the structure of the game.
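
The gambler’s fallacy is easy to check empirically: because spins are independent, the win probability after a losing streak is identical to the baseline. The short simulation below, again assuming a 30% win rate, makes the point.

```python
import random

rng = random.Random(42)
spins = [rng.random() < 0.3 for _ in range(1_000_000)]  # independent spins

# Win rate overall vs. win rate immediately after three straight losses.
overall = sum(spins) / len(spins)
after_streak = [spins[i] for i in range(3, len(spins))
                if not any(spins[i - 3:i])]
print(f"overall win rate:        {overall:.3f}")
print(f"after 3 losses in a row: {sum(after_streak) / len(after_streak):.3f}")
# Both print ~0.300: past losses do not make a win "due".
```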

Drawing attention to the study, AI researcher Ethan Mollick noted that while these models are not human, they do not behave like basic machines either. They exhibited psychologically compelling qualities and human-like decision biases, and their decision-making appeared unusually complex.

These findings raise significant concerns, particularly for individuals employing AI to enhance sports betting, online poker, or prediction market performance. Moreover, there is a cautionary tale for industries already leveraging AI in high-stakes environments, such as finance, where large language models are frequently tasked with interpreting market trends and financial reports.

The researchers stressed the importance of understanding and managing these inherent risk-seeking behaviors to ensure safety and called for increased oversight. As Mollick pointed out, there’s a pressing need for further research and a more adaptable regulatory framework to promptly address emerging issues.

Interestingly, there are rare instances where AI has seemingly aided individuals in winning lotteries. A notable case involved a woman who secured a $100,000 Powerball win after consulting ChatGPT for number suggestions. However, the research underscores that relying on AI for guaranteed wins is an unwise strategy: such outcomes are coincidences, and the models’ inherent unpredictability and biases can lead to more losses than gains.

As AI continues to evolve and integrate more deeply into various sectors, this study serves as a critical reminder of the potential risks and the need for careful management of AI behaviors to prevent unintended consequences similar to human compulsive gambling. Balancing the immense potential of AI with its capacity for error and irrational decisions remains a significant challenge for developers and regulators alike.