The AI Privacy Nightmare Keeping UK Professionals Awake at Night


Lawyers, doctors, and consultants are leaking client data into AI tools daily. Most don’t even know it. One UK startup thinks it has the answer.

Image: AI privacy risk for professionals, with sensitive client data flowing from a laptop to cloud servers

A partner at a London law firm, who asked not to be named, admits he uses ChatGPT daily for client work.

“I know I shouldn’t. Firm policy forbids it. But it saves me two hours a day. Everyone does it.”

He’s not alone. According to a 2025 KPMG-University of Melbourne survey of 48,340 professionals, 57% of employees worldwide hide their AI use from supervisors, and 48% have uploaded company information into public AI tools. A separate Anagram Security survey found 45% of employees admit to using banned AI tools at work, with 58% pasting sensitive data, including client records and internal documents, directly into large language models.

Across UK law firms, consultancies, NHS trusts, and financial institutions, professionals are feeding confidential client data into AI tools that send everything to servers in San Francisco, Dublin, and Singapore.

The productivity gain is irresistible. The compliance risk is enormous. And almost no one is talking about it.

The Scale of the Problem

The 2025 Integris Report surveyed 750 US law firm clients and found that 81% were “very” or “somewhat” concerned that their firms might not protect confidential information when using generative AI. More than 70% said they would be concerned if their law firm relied heavily on AI tools like ChatGPT.

Meanwhile, adoption is accelerating regardless. According to the 2024 ABA Legal Technology Survey, AI adoption in law firms nearly tripled from 11% in 2023 to 30% in 2024. Yet only 10% of firms have policies guiding its use.

The paradox is stark: professionals know AI poses risks, but they use it anyway because the productivity gains are too significant to ignore. A 2024 Nielsen Norman Group study found AI users wrote 59% more business documents per hour than non-users, and programmers using AI completed more than twice as many coding projects per week.

The pressure is simply too high. The bans aren’t working.

What Providers Actually Promise

Every major AI provider has a privacy policy. OpenAI states it doesn’t train on business data. Anthropic says it doesn’t train on conversations. Google promises enterprise data stays private.

But these are policies, not architecture.

OpenAI CEO Sam Altman said publicly in 2025: “If you go talk to ChatGPT about your most sensitive stuff, and then there’s a lawsuit or whatever, we could be required to produce that.”

Policies change. OpenAI has updated its terms of service multiple times since launch. Companies get acquired. Servers get breached. The only data that’s truly private is data that never leaves your device in the first place.

Under OpenAI’s Terms of Use, information provided in ChatGPT conversations is explicitly excluded from confidentiality protections. The ABA’s Formal Opinion 512, issued July 2024, warns that using consumer AI tools can expose data to third parties or human reviewers, risking breaches of ethics or waiver of privilege.

The Regulatory Guillotine

For UK and EU professionals, the stakes are not hypothetical.

According to the DLA Piper GDPR Fines and Data Breach Survey 2025, European regulators issued €1.2 billion in fines in 2024 alone. Total GDPR fines since 2018 now stand at €5.88 billion. The largest single fine remains the €1.2 billion penalty against Meta in 2023.

The survey recorded an average of 363 data breach notifications per day across Europe in 2024, up from 335 the previous year.

Crucially, DLA Piper noted that regulators are now “closely scrutinising the operation of AI technologies and their alignment with privacy and data protection laws.” The Dutch Data Protection Authority is even investigating whether executives at Clearview AI can be held personally liable for GDPR breaches.

For professionals handling client data, every ChatGPT prompt containing confidential information is a potential GDPR violation. The ICO hasn’t caught up yet. But regulators are watching, and the first major AI-related enforcement action feels inevitable.

What’s Missing in Current Solutions

Professionals facing this dilemma have limited options.

Enterprise AI platforms such as Microsoft's Azure OpenAI Service and Amazon Bedrock offer enhanced controls, but they require six-to-twelve-week implementations, minimum annual spends of £50,000 or more, and dedicated IT teams. For most small and mid-sized firms, this isn't realistic.

Open-weight models such as Llama or Mistral can be run on-premise, keeping data local, but they lack the reasoning power of frontier models like GPT-4 or Claude Opus. Professionals using them face a competitive disadvantage.

Telling professionals to simply not use AI is futile. Competitors will. Clients expect faster turnaround. The productivity gap is too large.

What’s actually needed is a way to use frontier AI models without sending sensitive data to external servers. Architectural privacy built into the tool itself.

One Startup’s Answer

A handful of companies are racing to solve this architectural problem. Among them is London-based XEROTECH LTD, which launched CallGPT 6X in January 2026 with a radically different approach: client-side filtering.

The concept is straightforward. Rather than trusting a server thousands of miles away to handle data responsibly, CallGPT 6X filters sensitive information in the browser before any query reaches the AI providers. Names, case numbers, and financial figures are redacted locally. The data never leaves the device.
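To illustrate what client-side filtering can look like in practice, here is a minimal browser-side sketch in TypeScript. The regex patterns and the redactPrompt and restoreResponse functions are illustrative assumptions about the general technique, not CallGPT 6X's actual code.

```typescript
// Illustrative sketch of client-side redaction, run in the browser before any
// prompt leaves the device. Patterns and function names are assumptions for
// illustration only.

type RedactionMap = Map<string, string>; // placeholder token -> original text

function redactPrompt(prompt: string): { redacted: string; map: RedactionMap } {
  const map: RedactionMap = new Map();
  let counter = 0;

  // Example patterns only; real entity detection needs to be far more robust.
  const patterns: Array<[RegExp, string]> = [
    [/\b[A-Z]{2}\d{2}-\d{4,}\b/g, "CASE"],      // e.g. a case reference "HC24-01234"
    [/£\s?\d[\d,]*(\.\d{2})?/g, "AMOUNT"],      // e.g. "£1,250.00"
    [/\b[A-Z][a-z]+ [A-Z][a-z]+\b/g, "NAME"],   // naive two-word proper names
  ];

  let redacted = prompt;
  for (const [pattern, label] of patterns) {
    redacted = redacted.replace(pattern, (match) => {
      counter += 1;
      const token = `[${label}_${counter}]`;
      map.set(token, match); // the mapping never leaves browser memory
      return token;
    });
  }
  return { redacted, map };
}

// Re-insert the original values into the model's reply, locally.
function restoreResponse(response: string, map: RedactionMap): string {
  let restored = response;
  for (const [token, original] of map) {
    restored = restored.split(token).join(original);
  }
  return restored;
}
```

Because the mapping between placeholders and real values stays in browser memory, the provider only ever sees redacted text; the original names and figures are reinserted locally when the response comes back.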

“Every AI provider promises privacy. But that’s a policy, not architecture. Policies change. We built it so sensitive data never leaves the browser in the first place,” says founder Noman Shah, who spent 22 years in data protection, including a $200M World Bank project safeguarding 44 million citizen records.

CallGPT 6X combines six AI providers (OpenAI, Anthropic, Google, xAI, Mistral, Perplexity) and 20+ models in one interface, with smart routing that selects the optimal model for each task. The company has filed a patent on its approach.
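For readers curious what "smart routing" means in practice, a simplified sketch follows. The task heuristic, provider choices, and model identifiers are hypothetical illustrations of the general pattern, not CallGPT 6X's routing logic.

```typescript
// Hypothetical sketch of multi-provider routing. Provider names, model
// identifiers and the keyword heuristic are illustrative assumptions only.

type Provider = "openai" | "anthropic" | "google" | "xai" | "mistral" | "perplexity";

interface Route {
  provider: Provider;
  model: string; // illustrative model identifier
}

type Task = "research" | "drafting" | "code" | "general";

// Naive keyword-based task classifier; a real router would use richer signals.
function classifyTask(prompt: string): Task {
  if (/\b(cite|sources?|latest|news)\b/i.test(prompt)) return "research";
  if (/\b(draft|letter|contract|clause)\b/i.test(prompt)) return "drafting";
  if (/\bfunction\b|\bclass\b|\bimport\b|=>/.test(prompt)) return "code";
  return "general";
}

// Pick a provider and model for the (already redacted) prompt.
function routePrompt(redactedPrompt: string): Route {
  switch (classifyTask(redactedPrompt)) {
    case "research": return { provider: "perplexity", model: "sonar-pro" };
    case "drafting": return { provider: "anthropic", model: "claude-sonnet" };
    case "code":     return { provider: "openai", model: "gpt-4o" };
    default:         return { provider: "mistral", model: "mistral-large" };
  }
}
```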

Will It Work?

Client-side privacy isn’t a new concept, but applying it to multi-model AI workspaces is. CallGPT 6X is among the first to market with this architecture.

The challenge is adoption. The solution requires professionals to work within CallGPT’s browser interface rather than native ChatGPT or Claude apps, a potential friction point for teams wedded to existing workflows.

Shah acknowledges this: “Behaviour change is the hardest part. But so is explaining to the ICO why client data ended up on a San Francisco server.”

Independent observers suggest the approach is technically sound. The question is whether enterprises, notoriously slow to change, will prioritise architectural privacy over familiar interfaces.

What Happens Next

The clock is ticking.

European regulators received an average of 363 breach notifications per day in 2024. The EU AI Act is adding new obligations for "high-risk" AI systems in legal, healthcare, and financial services. The Dutch DPA is pursuing personal liability for executives. The ICO, while currently issuing fewer fines, has signalled that AI compliance is a priority.

The first major AI-related data breach, or the first GDPR enforcement action against a firm using ChatGPT for client work, will change everything.

Professionals who wait for that moment may find themselves on the wrong side of it.

“The breach is coming,” Shah says. “It’s just a question of who and when. We built CallGPT 6X so it doesn’t have to be our users.”