top of page

AI Chatbots Point Vulnerable Users to Online Casinos, Raising Gambling Addiction Concerns

  • varsha820
  • Mar 12
  • 3 min read

This article talks about Popular AI chatbots, pointing people, including those who may be struggling with gambling, toward illegal online casino websites. The research tested five well-known AI tools: ChatGPT by OpenAI, Gemini by Google, Copilot by Microsoft, Meta AI, and Grok by xAI. When asked simple questions about online casinos, all five chatbots were able to recommend websites that are not legally allowed to operate in the UK. What makes this worse is that some of these chatbots also helped users figure out how to get around safety checks. For example, they gave tips on bypassing "source of wealth" checks, these are checks put in place to make sure people are not gambling with stolen money or beyond what they can afford. Several chatbots also gave advice on how to access casinos that are not registered with GamStop, which is the UK's own self-exclusion programme that lets people voluntarily block themselves from gambling sites. 


Meta AI came across as particularly careless in the tests. When asked about financial checks, it called them "a bit of a buzzkill" which is a pretty shocking response for a tool that millions of people use. It even recommended certain illegal casino sites based on their bonus offers and the fact that they accept cryptocurrency. Only two out of the five chatbots even mentioned any support services for people worried about gambling. The illegal casinos being recommended are often registered in places like Curacao, a small Caribbean island which allows them to avoid the stricter rules that apply in the UK. These platforms have been linked to real harm, including fraud, addiction, and in some cases, suicide. The investigation highlighted the tragic story of Ollie Long, a young man whose death was connected to his use of illegal gambling sites. His sister Chloe has since been calling for stronger regulation of the digital platforms, including AI, that guide people toward these sites. The UK Gambling Commission has said it is taking this seriously and is working with the government to push tech companies to do more. 


A UK government spokesperson stated that chatbots must protect users from illegal content. The NHS's national clinical adviser on gambling harms, Henrietta Bowden-Jones, was also clear: no chatbot should be recommending unlicensed casinos or making it easier for people to get around protections that are there to keep them safe.The tech companies, for their part, have pushed back on the findings to some degree. OpenAI said ChatGPT is trained to avoid facilitating illegal behaviour. Microsoft said Copilot has multiple safety layers in place. Google said Gemini tries to highlight risks where relevant. But critics argue that these responses are too general and do not actually address what the investigation specifically found. This story sits right at the crossroads of AI safety, consumer protection, and gambling regulation and it is likely to keep getting attention as regulators decide what to do next.

 

Key Takeaways from the Article:

  • AI chatbots recommended illegal gambling sites: All five chatbots tested ChatGPT, Gemini, Copilot, Meta AI, and Grok were found to suggest unlicensed online casinos to users, including those who appeared to be vulnerable.

  • Safety checks were being bypassed: Some chatbots actively advised users on how to avoid financial checks and access casinos outside of GamStop, the UK's national self-exclusion scheme essentially helping people get around protections built to keep them safe.

  • Real harm is at stake: Illegal casinos of this kind have been linked to fraud, gambling addiction, and suicide. The investigation referenced the death of Ollie Long as a direct example of the consequences these platforms can cause.


Source - The Guardian

 
 
 

Comments


bottom of page