Over the past year, healthcare has moved from experimenting with AI to actively deploying it across clinical, operational, and product workflows. And yet, a familiar pattern is emerging: organizations can get AI to work in a proof of concept. They demonstrate value. But when it comes time to scale, progress stalls.
The issue isn’t model performance. It’s everything around it.
In digital health, the biggest AI barriers aren't technical; they're organizational. The challenge isn't building models; it's building an organization that knows how to use them successfully.
AI Is a Cultural Transformation, Not a Technology Upgrade
Most organizations start in the wrong place. They treat AI as a tool: something to evaluate, deploy, and optimize. But AI changes how work gets done, how decisions are made, and how value is created across the organization.
If it’s introduced as a cost-cutting mechanism, teams will resist it. If it’s positioned as a capability multiplier, teams will adopt it.
That distinction matters. AI doesn’t just make existing work more efficient. It expands what organizations can take on. Projects that were previously deprioritized due to resource constraints suddenly become viable.
In practice, that means adoption has to start at the top: it begins with board and executive alignment, continues with embedding AI into existing operating models, and uses OKRs, training programs, and performance reviews to reinforce adoption.
Without that level of alignment, AI remains fragmented and fails to scale.
Data Quality Is the Limiting Factor, Not Model Capability
There’s a tendency to focus on models: which LLM to use, how to fine-tune it, how to optimize prompts. But the reality is simpler.
If the data is wrong, the output will be wrong, often in ways that are harder to detect than traditional software failures. In earlier generations of software, bad input produced visibly bad output. In AI systems, bad input can produce errors that are unpredictable, amplified, and plausible enough to go unnoticed.
That shifts the problem. AI success depends on high-quality data, clear feedback loops, and human oversight in edge cases. Automation can handle most data matching scenarios, but edge cases still require human judgment. Capturing and learning from those decisions is what allows systems to improve over time.
In other words, data and AI are not separate concerns; they are two sides of the same system.
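The pattern described above — automation for high-confidence matches, human review for edge cases, and logging those decisions so the system improves — can be sketched in a few lines. This is an illustrative example, not the article's implementation; the thresholds, field names, and the toy similarity score are all assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds, chosen for illustration only.
AUTO_ACCEPT = 0.90  # above this, trust the automated match
AUTO_REJECT = 0.30  # below this, discard without review

@dataclass
class MatchPipeline:
    # Captured human decisions, later usable as labeled training data.
    reviewed: list = field(default_factory=list)

    def score(self, a: dict, b: dict) -> float:
        # Toy similarity: fraction of shared fields with equal values.
        keys = set(a) & set(b)
        if not keys:
            return 0.0
        return sum(a[k] == b[k] for k in keys) / len(keys)

    def route(self, a: dict, b: dict) -> str:
        # Automation handles the clear cases; ambiguous ones go to a person.
        s = self.score(a, b)
        if s >= AUTO_ACCEPT:
            return "auto-match"
        if s <= AUTO_REJECT:
            return "auto-reject"
        return "human-review"  # edge case: queue for human judgment

    def record_decision(self, a: dict, b: dict, verdict: str) -> None:
        # Capturing edge-case rulings is the feedback loop that lets
        # the matcher improve over time.
        self.reviewed.append({"a": a, "b": b, "verdict": verdict})
```

The design point is the middle band: rather than forcing every case into match or no-match, ambiguous records are routed to a reviewer, and each ruling is recorded so future versions of the matcher can learn from it.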
Speed Is Now a Competitive Requirement
AI is not evolving on a typical enterprise timeline. It is moving at platform speed.
In past technology shifts, organizations had time to observe, evaluate, and adopt. With AI, that window is shrinking. The pace of change is such that waiting introduces risk. One way to think about it: It is now better to move too quickly and adjust than to move too slowly and fall behind. That has direct implications for how AI systems are built.
Organizations that successfully transition from prototype to production typically adopt a disciplined, iterative approach:
- Leveraging external expertise to accelerate early development
- Validating performance in real-world conditions
- Updating underlying architecture as tooling and standards evolve
- Scaling teams only after workflows and use cases are proven
That means success depends less on initial precision and more on the ability to iterate quickly and continuously in response to real-world conditions.
AI Requires a Different Talent Model
One of the more practical challenges in scaling AI is talent. True AI expertise is scarce. Demand significantly exceeds supply, and organizations face a growing challenge in distinguishing experienced practitioners from those with limited exposure.
At the same time, domain expertise remains critical. The most effective AI teams are not composed solely of data scientists or engineers; they combine AI specialists, domain experts, and product and engineering teams.
Each group brings something the others lack. AI experts understand models and optimization. Domain experts understand workflows, data context, and real-world constraints. Together, they can build systems that are both technically sound and operationally relevant.
Without that balance, organizations risk building solutions that are either technically impressive but unusable, or operationally relevant but underpowered.
The Organizations That Win Will Move First and Learn Fastest
AI adoption is no longer optional. The question is not whether organizations will adopt AI, but how (and how quickly) they will do it. Some will move early, invest in infrastructure, and build internal capability. Others will wait, attempting to adopt once the technology stabilizes.
History suggests that the first group will have a significant advantage. AI is not just another tool. It is a platform shift, one that has the potential to reshape how software is built, how workflows operate, and how value is delivered across healthcare.
And like previous platform shifts, it will create both winners and losers. The difference will not be who had access to the best models. It will be who built the right foundation to use them.