CMS Has Its Own AI Now
CMS isn’t reviewing charts with paper files and highlighters anymore. The agency disclosed in its January 2026 HPMS memo that it has scaled from roughly 40 certified coders to approximately 2,000, and it’s deploying AI as a “medical coder support tool” to flag suspicious coding patterns across submitted data. The AI identifies anomalies. The coders investigate. The combination processes audit volumes that would have been impossible five years ago.
This changes the math for health plans. When CMS could audit only a handful of contracts each year, the probability of scrutiny was low enough that many plans accepted marginal documentation quality as a cost of doing business. Now CMS audits all 550+ MA contracts every year, on a quarterly cadence. The AI scans population-level data for the exact patterns that one-directional coding programs produce: risk scores that climb without corresponding changes in clinical outcomes, concentration in high-value HCC categories, and documentation that mentions conditions without proving active management.
If CMS is using AI to find problems, health plans need AI that finds them first.
The Pre-Submission Detection Advantage
The simplest way to avoid an unfavorable audit finding is to catch the problem before the code is submitted. That sounds obvious. In practice, most coding programs don’t do it. Coders identify diagnoses, assign codes, and move to the next chart. Quality assurance checks sample a fraction of the volume. The bulk of codes reach CMS without systematic defensibility validation.
AI-assisted audit simulation changes this. Before any chart package is finalized, the system scores each diagnosis on defensibility. It evaluates the MEAT evidence (Monitoring, Evaluation, Assessment, Treatment) in the clinical note. It identifies codes where the documentation is weak, ambiguous, or absent. It flags the specific diagnoses that CMS’s own AI would likely target based on known audit focus areas: acute stroke, myocardial infarction, cancer diagnoses, and other high-risk categories where OIG audits have found 100% error rates.
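Here is a rough sketch of what that scoring layer might look like in code. The MEAT-coverage heuristic, the threshold, and the free-text diagnosis labels are illustrative assumptions, not any particular vendor's logic:

```python
from dataclasses import dataclass, field

# The four MEAT evidence elements a note must demonstrate.
MEAT_ELEMENTS = ("monitoring", "evaluation", "assessment", "treatment")

# Illustrative stand-ins for the audit focus areas named above; a real
# system would key on ICD-10/HCC categories rather than free-text labels.
HIGH_RISK_LABELS = {"acute stroke", "myocardial infarction", "cancer"}

@dataclass
class DiagnosisReview:
    icd10_code: str
    label: str
    # MEAT element -> supporting excerpt pulled from the clinical note
    meat_evidence: dict[str, str] = field(default_factory=dict)

def defensibility_score(dx: DiagnosisReview) -> float:
    """Fraction of MEAT elements backed by note text, 0.0 to 1.0."""
    supported = sum(1 for e in MEAT_ELEMENTS if dx.meat_evidence.get(e))
    return supported / len(MEAT_ELEMENTS)

def hold_for_review(dx: DiagnosisReview, threshold: float = 0.75) -> bool:
    """Hold a code back when evidence is thin, and always when the
    diagnosis sits in a known audit focus area."""
    return (defensibility_score(dx) < threshold
            or dx.label.lower() in HIGH_RISK_LABELS)

# Example: one MEAT element out of four, in a high-risk category -> held.
stroke = DiagnosisReview("I63.9", "acute stroke",
                         {"assessment": "CVA noted on neuro exam"})
assert hold_for_review(stroke)
```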
Plans that run this detection layer before submission are fixing problems at cents on the dollar. Plans that discover the same problems during a RADV audit are fixing them at the cost of recoupment demands, legal fees, appeals processes, and reputational damage. The OIG’s BCBS Alabama audit estimated $7.06 million in overpayments from just 271 sampled enrollee-years across two payment years. Pre-submission detection would have caught most of those unsupported codes.
What the Technology Needs to Do
Coding software designed for the current environment needs to operate on the assumption that CMS will examine every submitted code with AI-assisted scrutiny. That means every code the system recommends must come with documented evidence: the specific clinical language supporting the diagnosis, the MEAT elements satisfied, and the reasoning connecting evidence to recommendation.
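One way to make that concrete is to build the evidence trail into the recommendation's data model, so a code cannot reach the submission queue without its support. A minimal sketch, with field names invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceLink:
    note_id: str       # which clinical note the language came from
    excerpt: str       # the specific clinical language supporting the code
    meat_element: str  # the MEAT element this excerpt satisfies

@dataclass(frozen=True)
class CodeRecommendation:
    icd10_code: str
    hcc_category: str
    rationale: str                      # reasoning connecting evidence to code
    evidence: tuple[EvidenceLink, ...]  # the proof travels with the code

    def is_submittable(self) -> bool:
        # No evidence links, no submission: the trail is structural, not optional.
        return bool(self.evidence)
```

Making the record immutable (`frozen=True`) is a deliberate choice: the evidence attached at recommendation time is exactly the evidence an auditor sees later.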
Two-way capability is equally critical. CMS’s AI doesn’t just look for unsupported adds. It analyzes coding patterns for asymmetry. Plans that submit hundreds of additions and zero deletions across review cycles display the exact pattern the agency’s tools are designed to detect. Software that identifies both codes to add and codes to remove produces a balanced submission profile that looks like what it is: a compliance program, not a revenue program.
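A toy version of the asymmetry signal makes the point. This is a simplified heuristic, not CMS's actual methodology:

```python
def addition_share(additions: int, deletions: int) -> float:
    """Share of coding changes that are additive. A value of 1.0 is the
    adds-only profile pattern detection is built to catch; values closer
    to 0.5 look like a genuinely two-way review program."""
    total = additions + deletions
    return additions / total if total else 0.0

assert addition_share(300, 0) == 1.0                # maximally one-directional
assert round(addition_share(300, 140), 2) == 0.68   # balanced review profile
```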
The explainability requirement ties everything together. When CMS’s AI flags a code and a human auditor follows up, the plan needs to produce evidence that a human can interpret. “Our AI recommended it” doesn’t answer the auditor’s question. “Here’s the clinical note, here’s the sentence documenting active monitoring, here’s the MEAT element it satisfies” does.
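Building on the `CodeRecommendation` sketch above, that auditor-facing answer can be generated directly from the stored evidence links. The output format here is hypothetical, but the principle holds: every line traces back to a specific note:

```python
def auditor_summary(rec: CodeRecommendation) -> str:
    """Render the evidence trail in plain terms a human auditor can verify."""
    lines = [f"{rec.icd10_code} ({rec.hcc_category}): {rec.rationale}"]
    for link in rec.evidence:
        lines.append(f'  Note {link.note_id} [{link.meat_element}]: "{link.excerpt}"')
    return "\n".join(lines)

rec = CodeRecommendation(
    icd10_code="E11.22",
    hcc_category="diabetes with chronic complications",
    rationale="Diabetic CKD actively managed at this encounter",
    evidence=(EvidenceLink(
        note_id="2026-01-14/PCP",
        excerpt="Metformin continued; repeat eGFR ordered",
        meat_element="treatment",
    ),),
)
print(auditor_summary(rec))
# E11.22 (diabetes with chronic complications): Diabetic CKD actively managed at this encounter
#   Note 2026-01-14/PCP [treatment]: "Metformin continued; repeat eGFR ordered"
```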
Matching the Regulator’s Capability
The enforcement gap has closed. CMS has AI. CMS has 2,000 coders. CMS is auditing every contract annually. Plans still running HCC coding software built before this capability existed are bringing a manual process to a technology fight. The minimum standard for coding technology in 2026 is a system that validates documentation as rigorously as CMS audits it, catches problems before submission, and produces evidence trails that satisfy both AI-assisted pattern detection and human auditor review.
