The healthcare buzzwords of 2024 are “AI” and “transparency.” Futurists are excited about the potential for AI to improve healthcare, while data privacy advocates and technology skeptics are worried. This difference in philosophy is driving new regulation from the Office of the National Coordinator for Health Information Technology (ONC). A piece of the new Health Data, Technology, and Interoperability: Certification Program (HTI-1) rule aims to strike a balance between technology advancements and privacy concerns.
While the HTI-1 rule deals with multiple aspects of health data, technology, and interoperability, Rhapsody is most concerned with the portion affecting the implementation of AI within the healthcare ecosystem. The ONC is concerned with how the innovation coming from AI applications will impact the push for transparency in healthcare. The technology powering AI can be opaque, and, to some, its outputs appear pulled from thin air. The new rule creates early guidelines to aid healthcare technology development: as healthcare providers and health tech builders develop new applications of AI, the rule dictates how companies report usage and provides guardrails for the industry. At Rhapsody, we’re committed to helping our customers navigate these regulations by providing best-in-class integration and record matching, maximizing the accuracy and effectiveness of AI algorithms.
AI classified as a decision support intervention is being regulated
Before diving too deep into the details, it helps to understand what the new HTI-1 final rule actually covers: tools that qualify as a “predictive decision support intervention (DSI).” In the rule’s language, ONC defines predictive DSI as “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produce an output that results in prediction, classification, recommendation, evaluation or analysis.” This clearly includes technology that uses medical records to make treatment recommendations. It also avoids lumping all uses of AI into the predictive DSI category. AI can be used for administrative purposes, like the identity resolution and record matching of Rhapsody Autopilot within Rhapsody EMPI, without falling under the purview of the rule.
ONC wrote this regulation to respect privacy rights and advance equity
While the rule is specific to predictive DSI usage, the regulatory definition provides context for larger concerns about the use of AI in healthcare. The transparency and auditability of an AI algorithm are crucial to rooting out both implicit and structural bias. Models are only as good as their training data: inaccurate or incomplete records could lead to misguided, even harmful, treatment recommendations. The first step toward a reliable implementation of AI is making sure the inputs to the system are valid. A model trained on a non-representative sample of patients would similarly produce errors. AI isn’t magic, and just like a geometry proof, the algorithm must show its work.
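To make that concrete, here is a minimal sketch of the kind of input validation a health tech team might run before training a model. The record fields, value ranges, and 10% threshold are hypothetical, not drawn from the HTI-1 rule.

```python
# Input-validation sketch: field names, plausibility ranges, and the 10%
# threshold are hypothetical, not requirements from the HTI-1 rule.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    age: Optional[int]
    sex: Optional[str]
    diagnosis_code: Optional[str]

def is_valid(record: PatientRecord) -> bool:
    """Reject incomplete or implausible records before they reach a model."""
    if record.age is None or not 0 <= record.age <= 120:
        return False
    if record.sex not in {"F", "M", "X"}:
        return False
    return bool(record.diagnosis_code)

def validate_training_set(records: list[PatientRecord]) -> list[PatientRecord]:
    clean = [r for r in records if is_valid(r)]
    # A large drop-off hints at systemic gaps that could leave the remaining
    # training sample non-representative of the patient population.
    if len(clean) < 0.9 * len(records):
        raise ValueError("Over 10% of records failed validation; audit the source feed.")
    return clean
```

Checking how many records are filtered out is as important as the filtering itself: a data set that loses a large share of its records to validation may no longer represent the population it is meant to serve.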
ONC is also worried about ethical, legal, and social concerns related to data collection and use. Collecting unnecessary information or including personally identifiable information (PII) could cause big headaches for technology providers. Developing platforms that respect privacy rights is an important policy goal, and shortcuts won’t be tolerated under the regulatory framework.
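One common safeguard here is data minimization. The sketch below uses an allow-list so that only fields a model demonstrably needs ever reach it; the field names are hypothetical, and the rule itself does not prescribe any particular technique.

```python
# Data-minimization sketch using an allow-list; field names are hypothetical.
ALLOWED_FIELDS = {"age_band", "diagnosis_code", "lab_results"}

def minimize(record: dict) -> dict:
    """Return a copy of the record limited to allow-listed fields,
    so stray identifiers are dropped by default rather than by exception."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

# Direct identifiers like name and MRN never reach the model.
raw = {"name": "J. Doe", "mrn": "12345", "age_band": "40-49", "diagnosis_code": "E11.9"}
assert minimize(raw) == {"age_band": "40-49", "diagnosis_code": "E11.9"}
```

The design choice worth noting is allow-listing over deny-listing: a deny-list fails open when a new identifier field appears upstream, while an allow-list fails closed.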
The new standards will also ensure decision support interventions aren’t reinforcing common, non-evidence-based practices. Any historical data set will include the bad habits of a health system or an entire practice area. Well-informed AI solutions will need to incorporate the latest evidence-based research. On a related note, no one wants algorithms to bake in inexplicable differences in health outcomes caused by anomalies or novel cases. This is the hard part, and it’s where demonstrating how the AI arrived at a decision is most crucial. What factors did the platform consider? What research was relied on? What were the calculations? Practitioners and health IT professionals expect a computer to be able to spit out an answer. ONC is making sure that when it does, the underlying logic is sound, so users can trust the answers they receive.
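As a toy illustration of “showing its work,” consider a linear risk score that reports each feature’s contribution alongside the prediction. The features, weights, and bias below are invented for illustration; a real predictive DSI would need far richer attribution and sourcing.

```python
# "Show your work" sketch for a linear risk model; the features, weights,
# and bias are invented for illustration only.
WEIGHTS = {"age": 0.03, "systolic_bp": 0.02, "hba1c": 0.4}
BIAS = -6.0

def score_with_explanation(features: dict) -> tuple:
    """Return the risk score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    return BIAS + sum(contributions.values()), contributions

score, why = score_with_explanation({"age": 67, "systolic_bp": 142, "hba1c": 8.1})
print(f"risk score: {score:.2f}")
for factor, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {factor}: {contribution:+.2f}")  # largest drivers first
```

Even this trivial example answers the regulator’s questions in miniature: which factors were considered, how each was weighted, and what the calculation was.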
Ultimately, AI is only successful if it leads to recommendations that are both effective and safe for the patient population. An algorithm can only reach the right decision if it has the training and information it needs to produce a valid, evidence-based outcome.
Vendors using AI in DSI must regularly submit reports
So, who should worry about providing proof to the ONC?
- Healthcare providers and health plans using AI to support their decision-making in covered programs and activities
- Companies providing medical devices with AI-enabled software
- Developers of certified health IT supplying a predictive DSI as part of a Health IT module
Health tech vendors should be prepared to support their healthcare provider and health plan customers with information about how each AI application works. Health and safety take precedence over the proprietary nature of the platforms. As such, the responsible parties will need to provide reporting to ONC every July starting in 2026.
These reports will provide transparent information about predictive decision support interventions to clinical customers. ONC will also require companies to engage in risk-management practices to prevent the misuse of information and AI in healthcare.
Rhapsody plans to integrate more AI capabilities into future products and features, and we’ll support our partners by keeping transparency and compliance at the forefront of our AI offerings.
Interested in learning more? Contact us today.