AI in Healthcare: Innovation, Risk, & Regulation

April 2, 2026

By: Nick Bonds, Esq.

Artificial intelligence is rapidly transforming the healthcare landscape—touching everything from how drugs are developed to how care is delivered and how claims are processed. For self-funded plans and their partners, AI presents meaningful opportunities to improve efficiency and outcomes. At the same time, it introduces a new layer of regulatory scrutiny and fiduciary risk that cannot be ignored.

On the innovation side, AI is already reshaping drug discovery and development. Advanced models can analyze massive datasets to identify viable compounds, predict treatment outcomes, and accelerate clinical research. Similarly, AI is increasingly embedded in care delivery, including digital therapeutics and behavioral health tools that guide treatment plans and expand access to care. Consumer-facing tools—such as AI-powered health assistants and chatbots—are also becoming more prevalent, helping individuals navigate symptoms, treatment options, and benefits in real time.

But while the technology is advancing quickly, the legal framework governing it is not.

At the federal level, comprehensive AI regulation in healthcare remains nascent. In the absence of clear federal guardrails, states are stepping in to fill the gap—particularly in areas like claims adjudication, clinical decision support, and chatbot transparency. Many of these laws share a common theme: AI can support decision-making, but it cannot replace human oversight. For example, state laws increasingly restrict the use of AI as the sole basis for adverse benefit determinations and require transparency when AI is used in claims processing.

At the same time, enforcement risk is growing—and evolving. The use of AI introduces new potential liability under existing frameworks like the False Claims Act (FCA), particularly where AI is used in care delivery, documentation, or claims adjudication. Regulators have already begun pursuing cases involving AI-enabled fraud schemes, signaling that scrutiny in this area will only increase.

Notably, regulators are not just policing AI—they are using it themselves. Federal agencies, including CMS and the Department of Justice, are leveraging AI to detect fraud, waste, and abuse. For example, CMS’s WISeR Model uses AI and machine learning to flag potentially unnecessary or inappropriate services in the Medicare program, pairing automated review with human clinical oversight. This dual use of AI—both by industry and by regulators—suggests a future where enforcement becomes more sophisticated, data-driven, and proactive.

For plan sponsors and administrators, these developments have direct implications—particularly in the context of claims administration, denials, and appeals.

ERISA requires fiduciaries to act prudently and in the best interest of participants and beneficiaries. Over-reliance on automated systems to make benefit determinations—especially where those systems lack transparency or produce inconsistent outcomes—could raise serious fiduciary concerns. The emerging regulatory trend is clear: AI may assist, but meaningful human involvement remains essential. Plans that delegate decision-making too heavily to automated tools risk not only participant dissatisfaction, but also regulatory exposure.

Accordingly, the path forward is not to avoid AI, but to govern it thoughtfully when incorporating it into a plan's policies and procedures. This includes implementing internal oversight structures (keeping a human "in the loop"), monitoring AI outputs for accuracy and bias, ensuring transparency in how decisions are made, and aligning plan operations with evolving state and federal expectations. As regulators increasingly deploy AI to audit and investigate the industry, plans may also benefit from adopting similar tools internally to identify risk before it becomes liability.

Bottom line: AI is here—and its role in healthcare will only expand. For self-funded plans, success will depend on striking the right balance: embracing innovation while maintaining strong governance, clear accountability, and human-centered decision-making.