Watson for Oncology recommended unsafe care
IBM’s flagship cancer assistant suggested treatments that would have seriously harmed patients because it was trained on hypothetical data rather than real cases.
Loading page...
Deploy clinical copilots that are red-teamed for unsafe advice, protect PHI, and respect scope-of-practice lines.
Ambient documentation, triage bots, and care-management copilots can relieve clinicians—yet one unsafe recommendation or leaked chart becomes a reportable event. We red-team the workflows and ship controls that keep guidance grounded in approved guidelines and PHI inside regulated boundaries.
Typical deployments
IBM’s flagship cancer assistant suggested treatments that would have seriously harmed patients because it was trained on hypothetical data rather than real cases.
The National Eating Disorders Association shut down “Tessa” after it told users to cut calories and lose weight—precisely the guidance clinicians warn against.
Universities and hospital compliance teams cautioned doctors that feeding patient notes to OpenAI could violate HIPAA, since the vendor retains and trains on those prompts.
Catalogue every model, retrieval index, and dataset touching PHI, enforce encryption/retention policies, and keep training plus evaluation within HIPAA-compliant enclaves.
Apply controls derived from red-team findings: dosing/contraindication checks, guideline citations, and escalation thresholds before any action.