Premortem Report
Meridian Health Systems
AI-powered patient triage and routing system
Executive Summary
Meridian is building a patient triage AI that routes incoming cases to the right department. The clinical need is real and quantified ($2.1M/year in mis-routes), but the project has critical gaps in data readiness and adoption planning. The team is building a custom NLP model when a fine-tuned commercial solution would ship in one-third the time. Without course correction, this project will produce a working prototype that nobody trusts enough to use in production.
Pillar Scores
Problem Definition
Well-defined problem with measurable cost. The $2.1M/year in mis-routed patients is verifiable from billing data. C-suite sponsor with specific mandate.
Fix: Document the current manual routing accuracy as a baseline — you'll need it to prove ROI.
Data Readiness
Critical gap. Patient records span three systems (Epic, legacy SQL, scanned PDFs) with no unified pipeline. The team estimates 'thousands' of labeled examples but hasn't validated quality. PDF extraction alone will take 3 months.
Fix: Stop model development. Spend the next 6 weeks building the data pipeline and validating label quality on a 500-record sample before writing another line of model code.
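One way to validate label quality on the 500-record sample is inter-annotator agreement: have two nurses independently label the same records and compute Cohen's kappa, which corrects raw agreement for chance. A minimal sketch in plain Python, using hypothetical department labels:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two annotators, corrected for chance agreement."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of records where the annotators match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected (chance) agreement from each annotator's label frequencies.
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical double-labeled routing departments (real run: 500 records)
a = ["cardio", "ortho", "cardio", "er", "er", "ortho"]
b = ["cardio", "ortho", "er",     "er", "er", "cardio"]
kappa = cohens_kappa(a, b)
```

A kappa well below ~0.8 on the sample would mean the 'thousands' of existing labels are too noisy to train on as-is, which is exactly what this fix is meant to catch before model code is written.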
Workflow Integration
Output will surface in the existing triage dashboard, which is good. But the error handling plan is 'human reviews every output' — at 400 cases/day, this creates more work than it saves unless accuracy exceeds 92%.
Fix: Define the confidence threshold for auto-routing vs. human review. Model only adds value if >60% of cases can be auto-routed reliably.
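The gate itself is simple; the hard part is choosing the threshold. A minimal sketch of the auto-route vs. review split, assuming a hypothetical `route` helper and an illustrative 0.90 threshold (not a validated clinical setting):

```python
def route(case_id, predicted_dept, confidence, threshold=0.90):
    """Auto-route only when model confidence clears the threshold;
    everything else falls back to nurse review."""
    if confidence >= threshold:
        return ("auto", predicted_dept)
    return ("review", predicted_dept)  # nurse confirms or overrides

# Replaying a day's cases against a candidate threshold shows what
# fraction would auto-route — the >60% bar from the fix above.
cases = [("c1", "er", 0.97), ("c2", "ortho", 0.62), ("c3", "cardio", 0.91)]
auto_fraction = sum(route(*c)[0] == "auto" for c in cases) / len(cases)
```

Sweeping `threshold` over held-out cases gives the auto-route fraction at each accuracy level, which is the evidence needed to set the cutoff rather than guess it.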
Build vs. Buy
Team is building a custom transformer from scratch with 2 ML engineers. Three commercial medical NLP platforms (Abridge, Regard, Suki) offer triage classification that could be fine-tuned in weeks. The team hasn't evaluated any of them.
Fix: Run a 2-week proof of concept with a commercial medical NLP API before committing to the custom build. If it reaches 85% accuracy, you've saved 8 months.
Success Metrics
ROI target exists ('reduce mis-routes by 50%') but no intermediate milestones. No model performance metrics defined beyond 'accuracy.' Clinical AI needs precision/recall tradeoffs — a false negative (missed urgent case) has asymmetric cost.
Fix: Define separate precision/recall targets for urgent vs. routine cases. Track time-to-correct-department, not just routing accuracy.
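Per-class precision and recall can be tracked without any ML tooling. A minimal sketch using hypothetical urgency labels; the asymmetric-cost point above shows up as recall on the 'urgent' class, since a missed urgent case (false negative) costs far more than an unnecessary escalation:

```python
def precision_recall(y_true, y_pred, positive):
    """Precision and recall for one class, treated as the positive label."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical evaluation set; in practice, report these per department
# and per urgency tier, with a stricter recall target on 'urgent'.
y_true = ["urgent", "routine", "urgent", "routine", "urgent"]
y_pred = ["urgent", "urgent",  "routine", "routine", "urgent"]
p_urgent, r_urgent = precision_recall(y_true, y_pred, positive="urgent")
```

Setting a hard floor on urgent-case recall, separate from overall accuracy, is what turns 'accuracy' into a clinically defensible target.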
Adoption & Change Management
No change management plan. Triage nurses — the end users — were not consulted during design. The project is perceived as 'IT replacing clinical judgment.' Two department heads have already expressed skepticism in steering committee notes.
Fix: Bring 3-4 triage nurses into the design process immediately. Reframe from 'AI makes routing decisions' to 'AI pre-fills the routing recommendation, nurse confirms.'
Top 3 Fixes
Stop model development and build the data pipeline first. Validate label quality on 500 records before training. This prevents the most common failure mode: a model that works on clean test data and fails on real clinical records.
Evaluate commercial medical NLP platforms before committing to custom build. A 2-week POC with Regard or Abridge will either validate the custom approach or save you 8 months and $400K in engineering time.
Bring triage nurses into the design process now, not after launch. Their buy-in determines whether this ships to production or dies as a demo. Reframe the tool as clinical decision support, not replacement.
Cost of Inaction
Without these changes, Meridian will spend 12-18 months building a custom model on unreliable data that triage nurses refuse to trust. Estimated wasted spend: $600K-$900K in engineering and opportunity cost. The $2.1M/year routing problem remains unsolved, and the next AI initiative faces an uphill credibility battle internally.
Ready for yours?
Your report will be this specific — tailored to your project, your data, your team.
Get your Premortem report → $499 · Report in 90 seconds · Full refund if no new insights