5 Surprising Truths About AI in Healthcare

When we think of AI in healthcare, our minds often jump to the futuristic scenes of science fiction: robot surgeons performing flawless operations, machines delivering instant, perfect diagnoses. The hype promises a technological revolution that will solve medicine's most intractable problems overnight. But what is actually happening on the frontline, in the hospitals, clinics and digital health services where this technology meets reality?

Last week Magnetic hosted a private roundtable bringing together senior leaders from across the globe and the healthcare spectrum – pharma, NHS, global organisations and health-tech startups – for a frank, off-the-record conversation about the state of AI. The discussion moved beyond speculation, revealing a much more nuanced, practical and human-centred reality. 

This article distils five surprising and impactful truths from the conversation, giving a grounded perspective on where AI is (and isn’t) making a difference.

1. The real wins are in the ‘boring bits’

Contrary to the headlines, the most significant immediate impact of AI in healthcare might not be in high-stakes clinical diagnoses, but in the mundane, administrative tasks that consume clinicians' time. 

Two senior NHS leaders shared the view that AI technologies exist on a spectrum, from complex clinical decision making at one end down to admin-based AI, including the patient interface, at the other. The consensus was clear: the practical value, for now, is in the ‘boring bits’.

Starting with simple use cases has a much lower barrier to entry: tasks such as auto-populating discharge summaries from patient records or automating standard letters. This approach solves immediate, tangible problems for overworked staff, freeing them up to focus on patient care.

More importantly, it helps to build clinical teams’ knowledge and their comfort with the technology, creating a solid foundation for adopting more complex AI tools in the future. It’s a strategy of evolution, not revolution.

This pragmatic, step-by-step approach is partly about technical feasibility, but it’s also a crucial strategy for overcoming the single biggest obstacle: resistance to change.

2. Resistance to change is real, and reasonable

A recurring theme was that the primary barriers to successful AI adoption are not technological limitations, but human and process challenges. 

In a healthcare context, many actions carry a potential risk to life. Doctors and nurses are often highly risk averse and naturally reluctant to change established workflows, a reality that tech evangelists frequently underestimate.

We heard from a VP in a global healthcare organisation that this is a known, expensive problem. It's precisely why "pharma companies spend billions every year" paying specialist teams just to persuade doctors to adopt new drugs. Clinicians are also acutely aware of the complexities of the systems they're operating in, and may be fatigued by multiple transformation initiatives.

One of the most powerful illustrations came from a fem-tech founder and midwife, who recalled the rollout of electronic patient records. The new system was imposed on her team without any consultation about their day-to-day workflow. The technology, designed by people who had never experienced the clinical reality, became a burden rather than a help.

Nobody had gone to the midwifery teams and asked: How do you work? What makes this better? What's the most efficient process for you? They said, “Here's a system built by someone who's never caught a baby in their lives. Use it.”

This anecdote is a stark reminder that technology implemented without deep, early collaboration with the frontline users who will depend on it is destined to fail, no matter how advanced the algorithm.

3. We're comparing AI to a gold standard that doesn't exist

The resistance to change is often compounded by another deeply human tendency: judging new technology against a standard of perfection that rarely exists in practice.

There is a powerful tendency to measure AI against a theoretical ideal of perfect human care, said a global head of digital healthcare products. We compare an AI’s performance to a "non-existent gold standard": the top consultant who, in reality, is often inaccessible to the average patient.

This creates a critical tension at the heart of using AI in healthcare. While we judge AI against this impossibly high bar, we must also admit that many current tools offer only "marginal benefit" and "aren't worth changing the system for". The solution, then, has to be either so good that it justifies a complete process overhaul or so simple that the small benefit is worth it.

This reframes the entire debate. 

For many people, the practical alternative to an AI-powered tool isn't a world-renowned expert; it's nothing at all. Even a conversation with a tool like ChatGPT is, for some people, "way better than no conversation at all". The goal shouldn't be perfection, but improvement. 

This same logic applies to data bias. While we must address it, we must also acknowledge that our "existing system is full of biases". The objective for AI should be to make things better, not to solve every pre-existing societal inequity at once.

4. AI is often a solution in search of a problem

The roundtable experts were unanimous: successful AI implementation must start with a clear business or patient problem, not with the technology itself.

The wrong approach, as one leader said, is to start by thinking about the tech. The first question must always be: What kind of business problems do we want to solve? Is AI the right solution?

A deeper insight from the conversation frames this as a failure to understand the spectrum of risk. At one end, using AI for big data analysis is "no problem". In the middle, supporting clinicians is manageable. But at the far end, where AI becomes the primary communication with patients, the vulnerability is immense. 

The danger of tech for tech's sake is seeing AI as a solution to every problem. That mindset leads to seemingly impressive but ultimately useless tools, which fail because they don't solve a real-world need at an appropriate level of risk.

5. The data is biased, broken or simply not there

Any AI model is only as good as the data it’s trained on. This fundamental truth presents one of the biggest challenges in healthcare. The experts highlighted vast and troubling gaps in the data needed to build fair and effective models.

Specific examples were raised of entire fields where historical data collection has been poor, such as women's health and male reproductive health. When the foundational data doesn't exist, you can't build a reliable model. A director of digital development working in the NHS offered a counterpoint, however: people often assume far more data is required than is actually the case.

Beyond missing data, there is the problem of inherent bias, which is a profound ethical and clinical problem with serious consequences for patient equity and safety. As one participant passionately argued, we must question who is creating the models, and strive to root out bias. 

Magnetic POV: the future is human-centred

If there’s a single thread that ties these expert insights together, it’s this: the successful integration of AI in healthcare will not be a story of technological inevitability. It will be a story of thoughtful, pragmatic and human-centred design and implementation.

The path forward, according to the people working on healthcare’s frontline, requires a clear understanding of the spectrum of risk. It means starting with live, often mundane, back-office problems where AI can add immediate value with low vulnerability. 

It means designing with clinicians and patients, not for them, as we move toward more sensitive applications.

And it’s about having the wisdom to aim for ‘better’ rather than holding out for an imaginary, inaccessible ‘perfect’.

This reality leaves the healthcare industry with a critical question, echoing one expert's concern about a coming 'wild west' of unvalidated information: As AI becomes a primary channel for patient communication, how do we ensure it becomes a safe, validated source of truth?

Written by Magnetic.