Diagnosing Danger: AI-Generated Diagnosis in Medicare Advantage Coding


Date Posted: Wednesday, March 25, 2026


 

One enforcement focus for the United States government is the addition of diagnoses that allegedly fraudulently increase Medicare Advantage (MA) plan payments. For example, in July 2023, Martin’s Point Health Care, Inc. resolved False Claims Act (FCA) allegations and agreed to pay $22,485,000 (U.S. Department of Justice [July 30, 2023], Martin’s Point Health Care Inc. to Pay $22,485,000 to Resolve False Claims Act Allegations).

 

Although the Martin’s Point case did not involve generative artificial intelligence (AI), the practice of skewing codes to increase a risk adjustment score may be growing as AI programs algorithmically process data documented in the medical record and suggest diagnosis codes that might have been missed. 

 

The potential liability could be serious and widespread, especially if such conduct results in higher MA risk adjustment payments.

 

False Claims Act Liability

 

AI-assisted coding that results in the submission of false or unsupported diagnoses may violate the FCA. Although FCA case law in this area remains very limited, under longstanding U.S. Department of Justice (DOJ) policy, organizations and individuals (including coders, auditors, and potentially AI vendors) may be held responsible under the FCA for submitting, or causing the submission of, false or fraudulent claims to the government for reimbursement. 

 

For example, AI might suggest a diagnosis that lacks clear evidence in the clinical record. Traditional Medicare and Medicare Advantage have different requirements for the use of AI. Regardless, a human being should check the outputs against the content of the medical record for accuracy before a claim is submitted. Additionally, ensuring an organization’s compliance program is effective and adequate is essential to avoiding potential FCA liability.

 

In one high-profile example, the DOJ intervened in a case against UCHealth over an automated coding rule that allegedly “upcoded” emergency department encounters to the highest emergency department CPT® code (CPT 99285) based on the frequency with which hospital personnel checked a patient’s vitals. Data analysis by the Centers for Medicare and Medicaid Services (CMS) identified UCHealth as a “high outlier” for its use of CPT 99285, and the government traced the issue back to the hospital’s automated rule. The DOJ then took the position that UCHealth’s coding rule did not meet the requirements of the code description, leading to a $23 million settlement (Michael J. Ruttinger, 2025).

 

Corporate Responsibility and Oversight

 

Organizations using AI must ensure that the output of generative AI tools is: 1) clinically valid, 2) auditable, and 3) compliant with CMS guidelines (Guidance for Responsible Use of Artificial Intelligence at CMS, 2025). Failing to properly govern AI tools may be considered negligence or willful ignorance, at least in class action lawsuits such as the case against UnitedHealth Group, Inc. pending in the U.S. District Court for the District of Minnesota.

 

Human Accountability

 

Even with AI tools, human reviewers are responsible for final code selections. If staff members rely on AI’s recommendations as their only source of factual review and do not validate using MEAT (monitor, evaluate, assess, address, treat) indicators, they could be vulnerable to claims of negligent or fraudulent coding (Kevin B. O'Reilly, 2023). AI is helpful when it is used as an assistive tool (e.g., to highlight potential diagnosis codes or identify missing MEAT elements), applied within a process involving human review, and trained with clinical context and current regulatory standards. 
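As a concrete illustration of MEAT-based validation, the sketch below shows how a pre-submission script might triage AI-suggested diagnoses by checking whether any MEAT indicator is documented in the encounter note. The record structure and keyword lists are hypothetical illustrations, not CMS requirements; keyword matching is only a rough first pass, and final validation always belongs to a human coder.

```python
# Hypothetical sketch: triage AI-suggested diagnoses that lack any
# documented MEAT (monitor, evaluate, assess/address, treat) support.
# The keyword lists below are illustrative, not an official standard.

MEAT_KEYWORDS = {
    "monitor": ["monitored", "follow-up", "stable on"],
    "evaluate": ["reviewed", "examined", "labs show"],
    "assess": ["assessed", "assessment", "addressed"],
    "treat": ["prescribed", "continued", "adjusted dose"],
}

def meat_indicators(note_text: str) -> set[str]:
    """Return the MEAT categories with at least one keyword hit in the note."""
    text = note_text.lower()
    return {
        category
        for category, keywords in MEAT_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    }

def needs_coder_review(suggested_codes: list[str], note_text: str) -> list[str]:
    """If the note shows no MEAT support at all, every AI-suggested code
    must be routed back to a certified coder rather than submitted."""
    return [] if meat_indicators(note_text) else list(suggested_codes)
```

A non-empty result from `needs_coder_review` would mean the encounter cannot proceed without human review; even an empty result is only a signal that some MEAT evidence exists, not that each individual code is supported.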

 

However, the application of AI can worsen issues when it assigns codes automatically without review, and organizations prioritize productivity over accuracy. There may be a tendency for overreliance on AI output, rather than clinician documentation. 

 

Proactive Steps to Compliance

 

Implement measures to ensure the ethical use of AI in coding.

 

For example:

 

  • Human-in-the-loop validation: All code recommendations from AI must be reviewed and validated by a certified coder. Regular audits are conducted on AI-generated output to ensure compliance with CMS review standards.

  • Transparency in documentation: Ensure that decisions regarding coding (whether made by AI or a human) are supported by clearly documented clinical evidence. Establish internal policies for AI use in risk adjustment, including accountability, version control, and updates.
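The safeguards above can also be enforced mechanically. The sketch below, using hypothetical record types rather than any real coding-platform API, gates claim submission on a certified coder's sign-off for every AI-sourced code, so nothing AI-generated is submitted unreviewed, and the sign-off field itself doubles as a simple audit trail.

```python
# Hypothetical sketch: block submission of any claim containing
# AI-suggested codes that a certified coder has not signed off on.
from dataclasses import dataclass, field

@dataclass
class CodeSuggestion:
    code: str              # e.g., an ICD-10-CM code
    source: str            # "ai" or "coder"
    reviewed_by: str = ""  # certified coder ID; empty until sign-off

@dataclass
class Claim:
    claim_id: str
    suggestions: list[CodeSuggestion] = field(default_factory=list)

def ready_to_submit(claim: Claim) -> bool:
    """A claim may be submitted only when every AI-sourced code
    has been reviewed and signed off by a human coder."""
    return all(s.reviewed_by for s in claim.suggestions if s.source == "ai")
```

Because `reviewed_by` records who approved each AI suggestion, the same structure supports the transparency and audit requirements described above.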

 

AI has the potential to either resolve or worsen coding issues, depending entirely on how it is implemented. The benefits of AI are realized when it is used ethically as a supportive tool rather than a decision maker, coders are trained to verify AI-assigned codes against MEAT criteria, and organizations allocate resources toward transparency, audit trails, and compliance reviews of AI outputs. AI can also be used to identify missed or poorly documented diagnoses, enhancing coding completeness and accuracy without the risk of inappropriate adjustments. 

 

For example, AI may identify missing documentation across multiple records that a human might have overlooked, helping capture clinically relevant data, prevent undercoding, and ensure reasonable reimbursement. AI becomes problematic when it is used to manipulate the system by maximizing risk codes regardless of medical necessity, there is no human oversight of the coding process, or when coders are instructed to accept AI suggestions without thorough review. Vendors and providers sometimes use AI to auto-code unsupported diagnoses to achieve higher payments—an area that the DOJ Health and Human Services Working Group flagged.

 

AI is also employed in so-called “black box” coding, where coders need explanations for the diagnoses assigned but the process offers little auditability (Mindy Duffourc and Sara Gerke, Decoding U.S. Tort Liability in Healthcare's Black-Box AI Era, 2024). For example, AI could identify a chronic condition from a previous visit and carry it into current-year coding without supporting documentation from the current visit, leading to unsupported Hierarchical Condition Category (HCC) coding. The AI system might not distinguish between actively treated diseases and resolved conditions unless specifically programmed to do so, which poses a significant issue for accurate coding.

 

Finally, coding practices can proactively promote the ethical use of AI by establishing governance frameworks that include compliance, transparency, and auditing, as well as defining clear policies for AI use.

 

Defining AI's role strictly as assistive rather than autonomous, and ensuring all AI-generated coding suggestions are reviewed and verified by humans, are two important steps toward managing downstream risk. 

 

Prohibit automatic submission of coding work without the coder’s oversight and require AI-specific compliance audits.

 

Recommendations:

 

  • Regularly review coded charts for MEAT and clinical validity and analyze patterns of overcoding of findings or unsupported diagnosis coding.

  • Compare AI outputs with coder reviews to verify coding accuracy and identify biases, with the goal of achieving transparency, auditability, and a clearly documented process.

  • Use AI systems that offer explanations and traceability for their responses.
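Comparing AI outputs with coder reviews, as recommended above, can be automated as a periodic report. The sketch below (the per-chart code-set inputs are a hypothetical representation, not a standard export format) computes an overall agreement rate and separates AI suggestions coders rejected, a possible overcoding signal, from codes coders added, a possible undercoding signal for the AI.

```python
# Hypothetical sketch: compare AI-suggested codes with the coder-validated
# final codes for each chart, and summarize agreement and discrepancies.
def compare_outputs(ai_codes: dict[str, set[str]],
                    final_codes: dict[str, set[str]]) -> dict:
    """ai_codes / final_codes map chart IDs to sets of diagnosis codes."""
    rejected, added, agree, total = {}, {}, 0, 0
    for chart_id, suggested in ai_codes.items():
        final = final_codes.get(chart_id, set())
        agree += len(suggested & final)          # codes both sides kept
        total += len(suggested | final)          # all codes either proposed
        if suggested - final:
            rejected[chart_id] = sorted(suggested - final)  # overcoding signal
        if final - suggested:
            added[chart_id] = sorted(final - suggested)     # AI missed these
    return {
        "agreement_rate": agree / total if total else 1.0,
        "ai_rejected": rejected,
        "coder_added": added,
    }
```

Trending the rejection patterns over time, rather than reviewing single charts, is what surfaces the systematic biases the recommendation describes.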

 

Require vendors to disclose the AI systems used, algorithmic change history, data sources, training data sets, and related details. Additionally, specify who reviewed and approved the coding documents to create an audit trail and ensure transparency. Invest in train-the-coder programs with ongoing oversight of coder compliance.

 

Train staff on evaluating critical standards, assessing the functional output of AI tools, and related skills.

 

Educate coders about the pitfalls of unethical AI systems. Some issues carry legal implications as well as ethical concerns that must be communicated during train-the-coder sessions. Provide incentives for coders to report discrepancies or AI hallucinations. Consider compliance and other ethical issues when choosing vendors for AI or coding projects. Evaluate vendors on both the technical performance of their AI systems and the legal aspects of the overall coding process. Require all vendors to prepare risk assessments, audit reports, and similar documentation for review. Include AI compliance clauses in contracts to hold vendors accountable for errors or misuse of AI in coding. Foster a culture of ethical understanding.

 

The focus should be on coding accuracy, not profitability. Encourage coders to flag issues they identify, such as unsupported AI suggestions. Coding education that integrates ethics remains scarce, with many programs deferring it until later. Ethics training also involves legal considerations, including knowledge of, and an understanding of the consequences of, submitting invalid codes.

 

Without human oversight of the AI systems and recommendations that generate patient coding results, there is little confidence in their accuracy, and AI use becomes a clear liability for the organization. It is important to understand that risk is not eliminated by technology; it is worsened when ethical and governance controls over its use are missing. 

 

Source: Dannilla Morgan, CPC, CBCS, is a seasoned medical coder specializing in risk adjustment and healthcare compliance, with over 5.5 years of industry experience. As a certified professional coder (CPC), she has a deep understanding of ICD-10-CM guidelines, risk adjustment methodologies, and regulatory compliance. Her expertise spans multiple risk adjustment models, including CMS-HCC, HHS-HCC, and RxHCC, allowing her to interpret clinical documentation and ensure accurate code abstraction.

Beyond her professional expertise, she is a dedicated leader, serving as president of the AAPC Carmel, NY chapter and previously as member development officer. In these roles, she fosters professional growth, networking, and education within the medical coding community, and also manages a risk adjustment Facebook group, "The Risk Adjustment Coder's Lounge," of over 7,500 coding professionals.

Passionate about continuous learning, she is currently pursuing a BSc in Health Information Management to further expand her impact in the healthcare field. With her disciplined work ethic, a keen eye for detail, and a commitment to excellence, she continues to contribute to the integrity and accuracy of medical coding practices and advocate for medical coders.

 

 
