One in three UK companies worries about the safety of its data when using Large Language Models (LLMs), which have evolved from assistants that respond to commands or questions into autonomous systems that act without constant human intervention.
As a result, these organizations are hesitant to fully incorporate AI agents into their operational systems, fearing they will lose control of their data and security to machines that can ‘think for themselves.’
The UK government is taking steps to address these challenges. Cabinet Office Minister Pat McFadden announced the declassification of an intelligence assessment indicating that AI will escalate cyber threats in the coming years.
In 2024, the NCSC received almost 2,000 reports of cyberattacks, with almost 90 deemed “significant” and 12 at “the very top end of severity”. That was three times as many severe attacks as in the previous year.
UK businesses raise red flags over data security in LLMs
Businesses in the UK must comply with strict regulations that protect people’s data and prevent its misuse, such as the UK’s Data Protection Act and the EU’s General Data Protection Regulation (GDPR).
Companies that break these laws risk legal action and reputational damage, so they are rethinking their LLM strategies: without careful human supervision or precise prompting, AI could expose private information or generate misleading results.
Early AI tools were rigid: chatbots provided fixed answers, and smart thermostats simply adjusted the temperature by following preset rules, without learning from new data, previous actions, or the results of their decisions. Today’s AI agents, by contrast, use LLMs to learn from data, adapt their actions to new information, and solve varied problems by reasoning much like a human.
That capability is only part of a much larger shift: a person can prompt an agent to handle a task, and it will break the task down into smaller activities, tackle each one individually, and then evaluate the results. Some agents can even partner up on a complex task, with one doing the work while another checks the results to improve the outcome.
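Conceptually, the plan-execute-evaluate loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a real agent framework: the functions `plan`, `execute`, and `evaluate` are hypothetical stand-ins for the calls an agent would make to an LLM.

```python
# Illustrative sketch of an agentic plan-execute-evaluate loop.
# plan(), execute(), and evaluate() are hypothetical stand-ins for
# LLM calls a real agent framework would make.

def plan(task):
    # A real agent would ask an LLM to decompose the task;
    # here we simply split on commas as a stand-in.
    return [step.strip() for step in task.split(",")]

def execute(step):
    # Stand-in for the "worker" agent performing one sub-task.
    return f"result of '{step}'"

def evaluate(result):
    # Stand-in for a second "checker" agent reviewing the output.
    return "ok" if result else "retry"

def run_agent(task):
    outcomes = []
    for step in plan(task):            # break the task into smaller activities
        result = execute(step)         # tackle each one individually
        verdict = evaluate(result)     # then evaluate the results
        outcomes.append((step, result, verdict))
    return outcomes

print(run_agent("draft report, check figures"))
```

The worker/checker pairing mentioned above corresponds to the separate `execute` and `evaluate` steps: splitting the roles lets one component review and improve the other’s output.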
This growth helps businesses automate operations and saves their human teams time, but it is also a threat: the more decisions these agents make on their own, the harder it becomes to fully understand, predict, or control what they are doing. The consequences of their mistakes can also grow quickly, because AI agents act without stopping for human input.
Companies adopt AI agents cautiously across departments
Many companies in the UK use AI agents in their daily operations, with departments like customer service, human resources (HR), and marketing testing the tools slowly and monitoring them keenly.
A good business example is Pets at Home, which built an AI agent that gives its veterinary staff quick answers and support during pet checkups, showing how useful AI agents can be in small, well-defined roles within real jobs.
Currently, most agentic AI systems need people to guide them, check their work, and fix errors; they are not ready to replace humans entirely without supervision. In the future, however, companies may face difficult decisions about how to incorporate AI agents into their systems while still keeping control.
This article was written by: Nermeen Nabil Khear Abdelmalak