AI will not fix broken health systems

AI can improve health care at the margins. It cannot repair weak institutions, missing workers, unreliable infrastructure, or broken implementation.
Artificial intelligence can improve parts of health care and public health. It can support documentation, pattern recognition, forecasting, triage, and selected clinical decisions. But it cannot compensate for weak institutions, unreliable infrastructure, workforce shortages, poor data systems, or failed implementation.

Innovation is not the same as system reform.

Artificial intelligence now sits at the center of health-sector ambition. Governments want to show they are modernizing, donors want scalable tools, and technology firms want to prove that better models can overcome long-standing failures in care delivery and public health. Some of that optimism is warranted. WHO has said large multimodal AI systems are likely to have wide applications across health care, research, public health, and drug development, and it has issued detailed guidance on how governments, providers, and technology companies should govern their use. That matters because even the organizations most invested in health innovation are framing AI first as a governance challenge, not a self-executing solution. 

The strongest case for AI in health is practical, not ideological: it supports specific tasks rather than promising transformation. But a health system is not a loose collection of tasks waiting to be optimized. It is an institutional arrangement of workers, infrastructure, referral pathways, financing rules, supply chains, information systems, and public trust. WHO’s digital health strategy is explicit on this point: digital transformation requires the integration of financial, organizational, human, and technological resources. In other words, technology is meant to strengthen systems, not substitute for their absence.

The infrastructure gap comes first

The limits of the current AI conversation become obvious the moment one moves from pilots to real operating conditions.

A health system cannot function well without reliable electricity, connectivity, equipment, and basic maintenance. Software does not solve the absence of power. A diagnostic tool cannot perform consistently in a facility with unstable electricity. A digital workflow does not work in a setting with weak connectivity or fragmented records. A chatbot cannot compensate for the failure of basic service delivery.

Technology can strengthen systems. It cannot substitute for their absence.

This is why the first question should not be whether an AI tool works in theory. It should be whether the system receiving it can use it safely, consistently, and at scale.

A tool cannot replace a workforce

The same overstatement appears in workforce discussions. AI is often presented as a response to staff shortages, especially in overstretched systems. But responsible adoption usually demands more capacity, not less. 

A tool cannot replace a workforce.

Someone still has to supervise the system, interpret outputs, manage exceptions, redesign workflows, maintain the tool, and catch errors before they cause harm. In weak systems, this added layer can increase pressure rather than reduce it.

AI can support health workers. It does not remove the need for trained, available, accountable people across the system.

Data problems are governance problems

AI in health is only as strong as the data environment around it. If records are disconnected, coding is inconsistent, identifiers are weak, and systems do not communicate, intelligence is operating on fragmentation. 

That is why data quality, standards, interoperability, and governance matter more than hype. A model may perform well in a pilot and still fail in routine care if the surrounding information structure is weak. A system trained in one setting may not transfer safely to another without validation, oversight, and adaptation.

AI is often sold as a shortcut where governance is the real missing piece.

The bottleneck is often not the algorithm. It is the absence of a coherent system around it.

The real risk is political

The deepest problem with the AI discussion in health is not technical. It is political. 

AI is attractive because it creates the appearance of action. It is easier to launch a pilot than to fix procurement. Easier to announce a partnership than to strengthen referral systems. Easier to fund a dashboard than to build workforce capacity or stabilize electricity access.

That is where the risk lies. When leaders begin to treat innovation as a substitute for governance, they confuse visibility with capacity.

What a serious AI agenda would look like

A serious AI agenda in health would be far less glamorous. It would start with infrastructure, connectivity, and maintenance. It would invest in data quality, interoperability, regulation, and implementation capacity before claiming intelligence at scale. It would train workers not only to use these tools, but also to question them. And it would deploy AI where the use case is specific, the workflow is clear, and the system can act on the result.

The bottom line

AI belongs in health. But it should be used to strengthen systems, not distract from why too many systems remain weak.

Health systems do not fail for lack of algorithms alone.

They fail when electricity is unreliable, workers are missing, records do not connect, and implementation breaks down between policy and practice. AI may improve parts of performance within those systems. It will not, by itself, repair what is structurally broken.

Get the latest from The Banda Review
Analysis, commentary, interviews, and editorial projects delivered to your inbox.