The government is targeting companies and employees for adding diagnoses that allegedly inflate Medicare Advantage (MA) plan payments fraudulently (“Medicare Advantage Provider Independent Health to Pay Up to $98 Million to Settle False Claims Act Suit,” 2024). Such cases are becoming more common as AI programs scan documentation and suggest diagnosis codes that might have been missed. The potential liability is serious and widespread, especially where such conduct results in higher MA risk adjustment payments.
False Claims Act Liability
AI-assisted coding that leads to the submission of false or unsupported diagnoses can create liability under the False Claims Act (FCA). Both organizations and individuals (including coders, auditors, and potentially AI vendors) can be held responsible under the FCA for submitting, or causing the submission of, false claims to the government for reimbursement.
For example, AI might suggest a diagnosis that lacks clear support in the clinical record; if that diagnosis is submitted for payment without human validation, the result is a false claim.
Corporate Responsibility and Oversight
Organizations using AI must ensure that the output of generative AI tools is: 1) clinically valid, 2) auditable, and 3) compliant with Centers for Medicare and Medicaid Services (CMS) guidelines. Failing to govern AI tools properly may be treated as negligence or deliberate ignorance (“Class-Action Suit Accuses Another Medicare Insurer of Using AI to Deny Care,” 2023).
Human Accountability
Even with AI tools, human reviewers are responsible for final code selections. Staff members who rely on AI recommendations as their only form of review, without validating against MEAT (monitor, evaluate, assess, address, treat) indicators, expose themselves to charges of negligent or fraudulent coding. AI is helpful when it is used as an assistive tool (e.g., to highlight potential diagnosis codes or identify missing MEAT elements), applied within a process involving human review, and trained with clinical context and current regulatory standards. It worsens matters when it assigns codes automatically without review or when organizations prioritize productivity over accuracy.
Overreliance on AI output, rather than on clinician documentation, compounds the risk (“AI in Healthcare: Opportunities, Enforcement Risks and False Claims, and the Need for AI-Specific Compliance,” 2025).
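To make the MEAT check concrete, the sketch below shows, in Python, one way a review queue might surface unsupported AI suggestions. It is illustrative only; the names (AISuggestion, evidence, triage) are hypothetical and not drawn from any real coding platform.

```python
from dataclasses import dataclass, field

# MEAT: a reported diagnosis should show the condition was monitored,
# evaluated, assessed, addressed, or treated at the current encounter.
MEAT_ELEMENTS = ("monitor", "evaluate", "assess", "address", "treat")

@dataclass
class AISuggestion:
    icd10_code: str  # e.g., "E11.9"
    evidence: dict[str, str] = field(default_factory=dict)
    # MEAT element -> text the tool cites, e.g. {"treat": "metformin continued"}

def missing_meat(suggestion: AISuggestion) -> list[str]:
    """List the MEAT elements the tool cited no supporting text for."""
    return [e for e in MEAT_ELEMENTS if not suggestion.evidence.get(e)]

def triage(suggestion: AISuggestion) -> str:
    """Route every suggestion to a coder; nothing is accepted automatically."""
    gaps = missing_meat(suggestion)
    if len(gaps) == len(MEAT_ELEMENTS):
        return "reject: no MEAT support cited for the current encounter"
    return f"coder review required; elements still to verify: {', '.join(gaps) or 'none'}"
```

The point of the sketch is the workflow, not the data model: every suggestion ends at a human reviewer, and the tool's job is to show its evidence.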
Take proactive steps toward compliance. Implement measures to ensure the ethical use of AI in coding.
For example:
- Human-in-the-loop validation: Require that every code recommendation from the AI is reviewed and validated by a certified coder before submission (a minimal sketch of this sign-off follows the list).
- Audits: Conduct regular audits of AI-generated output to ensure compliance with CMS review standards.
- Transparency in documentation: Ensure that coding decisions (whether made by AI or a human) are supported by clearly documented clinical evidence.
- Policy: Establish internal policies for AI use in risk adjustment, including accountability, version control, and updates.
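As one illustration of the first two measures, the hypothetical sign-off function below refuses to record a code without a named certified coder and documented evidence, producing the audit trail an AI-specific compliance review would rely on. Names such as AuditEntry and sign_off are assumptions for this sketch, not features of any particular system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    icd10_code: str
    ai_suggested: bool  # did an AI tool propose this code?
    reviewer_id: str    # certified coder who validated it
    decision: str       # "accepted", "modified", or "rejected"
    rationale: str      # documented clinical evidence relied on
    timestamp: str

def sign_off(icd10_code: str, ai_suggested: bool,
             reviewer_id: str, decision: str, rationale: str) -> AuditEntry:
    """Record a human validation decision; submission requires this entry."""
    if not reviewer_id:
        raise ValueError("no code may be submitted without a named coder")
    if decision == "accepted" and not rationale:
        raise ValueError("accepted codes need documented clinical evidence")
    return AuditEntry(icd10_code, ai_suggested, reviewer_id, decision,
                      rationale, datetime.now(timezone.utc).isoformat())
```

Every submitted code then has an answerable record of who validated it, on what evidence, and when.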
AI has the potential to either resolve or worsen coding issues, depending entirely on how it is implemented. The benefits are realized when AI is used ethically as a supportive tool rather than a decision maker, when coders are trained to verify AI-assigned codes using MEAT criteria, and when organizations allocate resources toward transparency, audit trails, and compliance reviews of AI outputs. Used this way, AI can identify missed or poorly documented diagnoses, enhancing coding completeness and accuracy without the risk of inappropriate adjustments.
For example, AI can identify missing documentation across multiple records that a human might have overlooked, helping capture clinically relevant data, prevent undercoding, and ensure reasonable reimbursement. AI becomes problematic when it is used to maximize risk codes regardless of medical necessity, when there is no human oversight of the coding process, or when coders are instructed to accept AI suggestions without thorough review. Vendors and providers sometimes use AI to auto-code unsupported diagnoses to achieve higher payments (“Artificial Intelligence and False Claims Act Enforcement,” 2025). It is also employed in so-called “black box” coding, where the system assigns diagnoses without explaining its reasoning, leaving coders unable to verify the results and the process difficult to audit.
For example, AI could identify a chronic condition from a previous visit and apply it to current-year coding without supporting documentation from the current visit, resulting in unsupported Hierarchical Condition Category (HCC) coding. The AI system may not distinguish between actively treated conditions and resolved ones unless specifically programmed to do so, which poses a significant issue for accurate coding.
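A simple automated guardrail can catch exactly this failure mode. The hypothetical check below flags any code whose only supporting documentation comes from a prior encounter; the function and parameter names are assumptions for illustration.

```python
from typing import Optional

def flag_carried_forward(icd10_code: str,
                         current_encounter_evidence: list[str],
                         prior_encounter_evidence: list[str]) -> Optional[str]:
    """Warn when an HCC-relevant code rests only on prior-visit notes."""
    if not current_encounter_evidence and prior_encounter_evidence:
        return (f"{icd10_code}: supported only by prior-encounter "
                "documentation; do not report for the current date of "
                "service without current MEAT evidence")
    return None  # either currently supported or not suggested at all

# Example: a diabetes code carried forward from last year's visit
warning = flag_carried_forward("E11.9", [], ["2024-03-02: A1c 7.2, metformin"])
```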
Finally, coding organizations can proactively promote the ethical use of AI by establishing governance frameworks that include compliance, transparency, and auditing, along with clear policies for AI use.
Define AI's role strictly as assistive, not autonomous. Ensure all AI-generated coding suggestions are reviewed and verified by humans.
To do this:
- Prohibit automatic submission of coded work without coder oversight, and require AI-specific compliance audits.
- Regularly review coded charts for MEAT and clinical validity, and analyze for patterns of overcoding or unsupported diagnosis coding.
- Compare AI outputs with coder reviews to verify coding accuracy and identify biases (a sketch of this comparison follows the list); this supports transparency, auditability, and a clear record of the process.
- Use AI systems that offer explanations and traceability for their outputs.
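The comparison of AI outputs with coder reviews can be as simple as an override report. The sketch below, again with hypothetical names, summarizes per-code override rates from the sign-off log and flags codes that coders reject or modify unusually often, a common signature of systematic overcoding.

```python
from collections import Counter

def audit_overrides(reviews: list[tuple[str, str]],
                    override_threshold: float = 0.10) -> dict:
    """Summarize how often coders overrode AI suggestions, per code.

    `reviews` holds (icd10_code, decision) pairs, where decision is
    "accepted", "modified", or "rejected" from the coder sign-off log.
    """
    total = Counter(code for code, _ in reviews)
    overridden = Counter(code for code, d in reviews if d != "accepted")
    rates = {code: overridden[code] / n for code, n in total.items()}
    # Codes overridden unusually often may signal systematic overcoding
    # by the tool and deserve a targeted compliance review.
    flagged = {c: r for c, r in rates.items() if r > override_threshold}
    return {"override_rates": rates, "needs_review": flagged}
```

Codes that surface repeatedly in the flagged set become the starting point for a targeted chart audit.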
Require vendors to disclose the AI systems used, algorithm change histories, data sources, training datasets, and related details. Additionally, record who reviewed and approved the coded documents to create an audit trail and ensure transparency. Invest in train-the-coder programs with ongoing oversight of coder compliance.
Train staff to evaluate AI output critically against current coding standards.
Educate your coders about the pitfalls of unethical AI systems. Some issues carry legal implications as well as ethical concerns, and both must be communicated during train-the-coder sessions. Provide incentives for coders to report discrepancies or AI hallucinations. Consider compliance and other ethical issues when choosing vendors for AI or coding projects. Evaluate vendors both on the technical performance of their AI systems and on the legal soundness of the overall coding process. Require all vendors to prepare risk assessments, audit reports, and similar documentation for review. Include AI compliance clauses in contracts to hold vendors accountable for errors or misuse of AI in coding. Foster a culture of ethical understanding.
The focus should be on coding accuracy, not profitability. Encourage coders to flag ethical problems they identify, such as unsupported AI suggestions. Too little coding education integrates ethics into the curriculum, and it is often deferred as something to address later. Ethics training also involves legal considerations, including understanding the consequences of submitting invalid codes.
FCA violations, CMS sanctions, and reputational harm are just a few of the risks. Without human oversight of the AI systems whose recommendations generate patient coding results, there is no transparency into the AI's analysis and no real-time monitoring of the coding process (“Human Oversight in AI Medical Coding,” 2025). AI use then becomes a clear liability for the organization. It is important to understand that technology does not eliminate the risk; the risk actually worsens when ethical and governance controls over its use are missing (“DevLicOps: A Framework for Mitigating Licensing Risks in AI-Generated Code,” 2025).
Dannilla Morgan, CPC, CBCS, is a seasoned medical coder specializing in risk adjustment and healthcare compliance, with over 5.5 years of industry experience. As a certified professional coder (CPC), she has a deep understanding of ICD-10-CM guidelines, risk adjustment methodologies, and regulatory compliance. Her expertise spans multiple risk adjustment models, including CMS-HCC, HHS-HCC, and RxHCC, allowing her to interpret clinical documentation and ensure accurate code abstraction.
Beyond her professional expertise, she is a dedicated leader, serving as president of the AAPC Carmel, NY chapter and previously as member development officer. In these roles, she fosters professional growth, networking, and education within the medical coding community, and also manages a risk adjustment Facebook group, "The Risk Adjustment Coder's Lounge," of over 7,500 coding professionals.
Passionate about continuous learning, she is currently pursuing a BSc in Health Information Management to further expand her impact in the healthcare field. With a disciplined work ethic, a keen eye for detail, and a commitment to excellence, she continues to contribute to the integrity and accuracy of medical coding practices and to advocate for medical coders.