Dangers of Using AI as a Prescriber
AI-based prescribing systems pose substantial risks to patient safety through algorithmic bias, lack of accountability, inadequate explainability, and the potential to worsen health inequities; mandatory human oversight and continuous monitoring are required to prevent harm. 1
Critical Safety Risks
Algorithmic Bias and Health Inequities
- AI algorithms trained on non-representative datasets can produce worse outcomes for underrepresented populations, directly threatening patient safety and exacerbating existing health disparities 1
- The American Heart Association emphasizes that AI/ML technologies relying on inaccurate or non-representative data may result in inappropriate resource allocation and increased inequities for specific patient groups 1
- Hidden stratification can occur where AI systems perform poorly for certain demographic groups without detection, leading to systematic harm that goes unrecognized without long-term monitoring 1
- Algorithm "auditing" processes must identify groups or individuals for whom AI decisions may be unreliable to prevent biased clinical recommendations 1
Lack of Explainability and Transparency
- Current AI prescribing tools often lack adequate explanations of their reasoning, causing clinicians to doubt their accuracy and safety, potentially leading to either inappropriate trust or complete rejection of valid recommendations 1
- Insufficient explainability prevents clinicians from understanding why specific medications are recommended, making it impossible to identify when AI recommendations are inappropriate for individual patients 1
- Poor quality explanations are perceived by users as "invalid, meaningless, not legit, or a bunch of crap," prompting them to seek secondary confirmation and undermining the utility of the AI system 1
- Conversely, excessive explanations can cause information overload, leading clinicians to ignore critical system warnings or safety alerts 1
High-Risk Decision Errors
- A recent evaluation of generative AI systems revealed that 75% omitted critical contraindications (such as ethambutol in optic neuritis), representing potentially life-threatening prescribing errors 2
- AI systems demonstrate poor localization, with 90% erroneously recommending macrolides for drug-resistant Mycoplasma pneumoniae in high-resistance settings, ignoring regional antimicrobial resistance patterns 2
- Complex reasoning deficits are common: most AI models fail to detect basic logical contradictions (such as prescribing prostate medications for female patients) or to flag regulatory prescription limits (a minimal guardrail layer is sketched after this list) 2
- Studies of ChatGPT for cancer treatment showed it provided incorrect treatment recommendations alongside correct ones for breast, prostate, and lung cancer, with overall low accuracy and precision 1
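Error patterns like these are one reason a deterministic guardrail layer is often proposed between a generative model and the clinician. The sketch below is a minimal illustration, not a validated rule set: the drug/condition pair, the sex restriction, and the dose limit are placeholder values loosely echoing the errors described above.

```python
# Minimal sketch of a rule-based guardrail applied to AI prescribing output
# before it reaches a clinician. The rules below are illustrative placeholders,
# not a complete or clinically validated knowledge base.
from dataclasses import dataclass, field

CONTRAINDICATIONS = {("ethambutol", "optic neuritis")}   # hypothetical hard stop
SEX_RESTRICTED = {"tamsulosin": "male"}                   # e.g. a prostate medication
MAX_DAILY_DOSE_MG = {"ethambutol": 1600}                  # placeholder regulatory limit

@dataclass
class Recommendation:
    drug: str
    daily_dose_mg: float
    patient_sex: str
    conditions: set = field(default_factory=set)

def check_recommendation(rec: Recommendation) -> list[str]:
    issues = []
    for condition in rec.conditions:
        if (rec.drug, condition) in CONTRAINDICATIONS:
            issues.append(f"{rec.drug} is contraindicated in {condition}")
    expected_sex = SEX_RESTRICTED.get(rec.drug)
    if expected_sex and rec.patient_sex != expected_sex:
        issues.append(f"{rec.drug} is inconsistent with recorded patient sex")
    limit = MAX_DAILY_DOSE_MG.get(rec.drug)
    if limit and rec.daily_dose_mg > limit:
        issues.append(f"dose exceeds the configured limit of {limit} mg/day")
    # Any issue blocks automatic acceptance and routes the order to clinician review
    return issues
```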
Accountability and Liability Gaps
- Unclear liability frameworks exist for AI prescribing errors, with controversial and insufficient guidance on responsibility distribution between developers, clinicians, and institutions 1
- Some manufacturers design AI applications that issue overly general recommendations (such as "recommend emergency care") for nearly every diagnosis, effectively transferring all responsibility to users and clinicians 1
- Without human supervision during AI design, development, and deployment, anticipated benefits cannot be ensured and patients are exposed to a direct risk of injury 1
- Liability assessment for AI/ML algorithms is crucial but currently inadequate; it requires engagement of all stakeholders (developers, clinicians, researchers) in continuously evaluating safety and effectiveness 1
System-Level Vulnerabilities
Inadequate Monitoring and Updating
- AI prescribing systems require regular updates as data quality, population characteristics, and clinical practice evolve, but these updates are often delayed or absent 1
- The speed at which AI/ML recommendation systems are updated to reflect medicine label changes (new warnings, drug-drug interactions, or indications) may lag significantly, directly influencing prescribing safety (a simple staleness check is sketched after this list) 3
- Patient safety monitoring mechanisms designed for traditional prescribing may be inadequate for AI-based applications, requiring modified institutional metrics and adverse event reporting systems 1
- Continuously learning algorithms that refine their internal models require frequent real-world performance monitoring, but applying regulatory frameworks to these evolving systems remains challenging 1
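One way to operationalize the update-lag concern is to compare the AI system's knowledge snapshot against a feed of label revision dates. The sketch below assumes both are available as simple drug-to-date mappings; the data sources and the 30-day tolerance are illustrative assumptions, not requirements from the cited sources.

```python
# Illustrative check for knowledge-base staleness relative to label changes.
# The inputs and the 30-day tolerance are assumptions; a real deployment would
# tie this to formal label-update and pharmacovigilance feeds.
from datetime import date, timedelta

def stale_drugs(kb_snapshot: dict[str, date],
                label_revisions: dict[str, date],
                max_lag: timedelta = timedelta(days=30)) -> list[str]:
    """Return drugs whose latest label revision postdates the AI system's
    knowledge snapshot by more than the allowed lag."""
    flagged = []
    for drug, revised_on in label_revisions.items():
        known_as_of = kb_snapshot.get(drug)
        if known_as_of is None or revised_on - known_as_of > max_lag:
            flagged.append(drug)
    return flagged

# Example: a label revision dated well after the model's snapshot flags the
# drug for manual review of related AI recommendations.
print(stale_drugs({"warfarin": date(2024, 1, 1)},
                  {"warfarin": date(2024, 6, 1)}))
```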
Data Manipulation and Privacy Concerns
- AI systems can manipulate real-world understanding of medication benefit-risk profiles by limiting or blocking prescriptions to high-risk patients or preventing off-label use, creating artifacts in surveillance data that are difficult to account for 4
- The uneven uptake and temporal availability of AI prescribing tools across healthcare systems and geographies creates systematic biases in pharmacovigilance data 4
- Users express concerns that personal health information will be collected without knowledge, that anonymous data could be re-identified through AI processes, and that health data could be hacked or sold for secondary exploitation 1
- Inappropriate transparency in AI explanations can lead to disclosure of sensitive details and to system intrusions, harming AI service providers and violating patient privacy 1
Lack of Empathy and Clinical Context
- AI prescribing systems are perceived as lacking empathy and being impersonal, particularly problematic for mental health prescribing where understanding emotion-related issues is critical 1
- The way AI conveys information (for example, delivering complex disease information without a human present, or framing explanations around "how bad it is") can trigger patient frustration, disappointment, and anxiety 1
- This lack of empathy impedes patient acceptance of AI recommendations and can negatively affect subsequent treatment adherence 1
- AI systems often fail to provide actionable information, such as where to seek medical assistance or what specific next steps patients should take 1
Regulatory and Quality Concerns
Insufficient Oversight Framework
- Companies should file FDA applications for AI algorithm marketing with postmarket safety monitoring similar to phase IV drug surveillance, but current oversight is inadequate 1
- The FDA regulatory framework struggles to keep pace with AI systems that continuously learn and update, making it difficult to ensure ongoing safety 1
- International borders are highly porous to web-based AI prescribing tools, creating jurisdictional challenges for regulation and accountability 4
- It remains difficult to estimate the "true impact" that an AI tool had on any individual prescribing decision, complicating benefit-risk assessments 4
High Risk of Bias in Development
- A systematic review of AI models for detecting inappropriate hospital prescriptions found that 12 of 13 studies had a high risk of bias, indicating fundamental methodological flaws in the development of AI prescribing tools 5
- Training datasets are extremely heterogeneous, ranging from 31 to more than 5.8 million prescription orders, with study durations from 2 weeks to 7 years, making generalizability questionable 5
- Even with representative training populations, data collection invariably involves human elements (such as reporter bias) that can compromise AI system reliability 1
Critical Implementation Requirements
Mandatory Human Oversight
- All AI prescribing systems require human-AI co-review with mandatory clinician oversight; autonomous clinical decision-making by AI is currently unsafe and inappropriate (a minimal review-gating workflow is sketched after this list) 2
- Clinical pharmacists and physicians must verify all AI recommendations against patient-specific factors, local resistance patterns, and current medication labels 2, 3
- Prespecified strategies for monitoring, managing, and reducing risks must be documented before deployment, including post-market surveillance and clinical follow-up when risk of harm differs between patient groups 1
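A minimal way to enforce this requirement in software is to make a named clinician's approval a hard precondition for releasing any AI-generated order. The sketch below only illustrates the idea; the class, field, and status names are hypothetical and not drawn from any cited system.

```python
# Minimal sketch of an order workflow in which an AI-generated recommendation
# can never be dispensed without an explicit clinician decision.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIGeneratedOrder:
    drug: str
    rationale: str                      # AI explanation shown to the reviewer
    status: str = "pending_review"      # orders are never created as "approved"
    reviewer: Optional[str] = None

    def approve(self, clinician_id: str) -> None:
        self.reviewer = clinician_id
        self.status = "approved"

    def reject(self, clinician_id: str, reason: str) -> None:
        self.reviewer = clinician_id
        self.status = f"rejected: {reason}"

def release_to_pharmacy(order: AIGeneratedOrder) -> bool:
    # Dispensing is only possible after a named clinician has approved the order
    return order.status == "approved" and order.reviewer is not None
```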
Continuous Evaluation and Mitigation
- Where AI systems show uncertainty or variable performance across patient groups, clinical implications must be clearly stated as risks with documented mitigation strategies 1
- Formal assessments including equality impact assessments, algorithmic impact assessments, and medical algorithmic audits should be conducted and reported for all AI prescribing tools 1
- Long-term monitoring is essential to detect hidden stratification and unintended consequences that emerge only after widespread clinical deployment 1