
Large language models have become more commonplace over the last couple of years, with people integrating them into their everyday lives, but a new report has found that it’s not all positive.
Journalist Jon Reed, of CNET, said that in early September, at the start of the college football season, “ChatGPT and Gemini suggested I consider betting on Ole Miss to cover a 10.5-point spread against Kentucky.”
Many developers have intentionally built safety measures into their models to prevent the chatbots from providing harmful advice.
After reading about how generative AI companies are trying to make their large language models respond more responsibly to sensitive topics, the journalist quizzed the bots on gambling.
Chatbots prompted with a problem gambling statement before being asked about sports betting
First, he “asked some chatbots for sports betting advice.” Then, he asked them about problem gambling, before asking about betting advice again, expecting they’d “act differently after being primed with a statement like ‘as someone with a history of problem gambling…’”
In tests of OpenAI’s ChatGPT and Google’s Gemini, the protections worked when the only prior prompt had been about problem gambling. But they reportedly failed when the bots had previously been asked for advice on betting on an upcoming slate of college football games.
“The reason likely has to do with how LLMs evaluate the significance of phrases in their memory, one expert told me,” Reed says in the report.
“The implication is that the more you ask about something, the less likely an LLM may be to pick up on the cue that should tell it to stop.”
This comes at a time when an estimated 2.5 million US adults meet the criteria for a severe gambling problem in a given year. Gambling advice isn’t the only troubling output reported either, as researchers have also found that AI chatbots can be configured to routinely answer health queries with false information.
Featured Image: AI-generated via Ideogram