It is believed that approximately 250k people die each year due to medical errors, making medical error the third leading cause of death in the United States, according to data from Johns Hopkins University and the National Library of Medicine. The National Practitioner Data Bank tallied an annual average of 12.4k medical malpractice cases filed from 2009 to 2018. Nearly $9.8 billion of direct medical professional liability insurance premium was written in 2019, according to the National Association of Insurance Commissioners, to insure against all of these malpractice claims. These are significant numbers describing significant issues confronting the healthcare system.
Much of the promise of technology in healthcare (artificial intelligence (AI), predictive algorithms, clinical decision support, robotic process automation (RPA), and the like) is to standardize and automate the practice of medicine, thereby making it “better” – more efficient, less costly, more responsive, and, hopefully, safer. Important elements of the healthcare delivery system are being automated with exciting advances in AI and RPA in order to usher in the great promise of precision medicine, which is expected to be a massive market opportunity. A recent analysis in Nature Biotechnology sized the global precision medicine market in 2028 at $217 billion.

Here is what I am struggling to reconcile: all of these advances, and yet errors are still rampant. And with these advances comes a roster of nettlesome legal and ethical issues that are only now beginning to be raised, much less answered. The movement to a more intelligent, always-on, virtual care delivery model challenges even the definition of what is deemed healthcare data (video feeds from someone’s home?). A greater respect for social determinants of health introduces new insights to advance whole-person care models while expanding the definition of who is a caregiver – and whether they are bound or covered by HIPAA.
Algorithms can be deterministic yet still exhibit biases inherited from the data sets used to train them. A number of issues are revealed when new AI algorithms are asked to coexist with long-established clinical guidelines based on empirical evidence and codified by regulatory approvals. If the premise holds that AI actually can improve upon existing standards of care, how then does the clinician reconcile opposing or differing recommendations between guidelines and AI tools? Does this introduce new liabilities?
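On the bias point, here is a minimal sketch (synthetic data, all numbers hypothetical) of how a perfectly deterministic classifier can still inherit bias from an unrepresentative training set:

```python
# Illustrative only: a deterministic classifier trained mostly on group A
# quietly underserves an under-represented group B whose baseline risk differs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_cohort(n, baseline_shift):
    """Simulate patients: one risk feature; the feature-to-outcome
    relationship differs by group via baseline_shift."""
    x = rng.normal(size=(n, 1))
    p = 1 / (1 + np.exp(-(3 * x[:, 0] + baseline_shift)))
    return x, rng.binomial(1, p)

# Training set is 95% group A, 5% group B -- the source of the bias.
xa, ya = make_cohort(9500, 0.0)   # group A
xb, yb = make_cohort(500, 1.5)    # group B: higher baseline risk
model = LogisticRegression().fit(np.vstack([xa, xb]),
                                 np.concatenate([ya, yb]))

# The fitted model is fully deterministic, yet per-group evaluation on
# fresh data shows it misses more of group B's true cases.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    x, y = make_cohort(20_000, shift)
    missed = np.mean(model.predict(x)[y == 1] == 0)  # false negative rate
    print(f"group {name}: false negative rate = {missed:.2f}")
```

The particular numbers are beside the point; determinism at inference time says nothing about the representativeness of the data behind the model.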
The standard for avoiding malpractice claims is simply for a provider to deliver care consistent with that of similarly trained providers (and for there to be no injury). Issues and legal exposure start to arise when the provider deviates from standards of care in favor of recommendations or insights provided by AI tools, even if they are thought to be superior. If the AI tools are, in fact, inferior or misapplied, and the provider has deviated from the standard of care presuming the tools to be of acceptable quality, the legal exposure is potentially significant, to say nothing of the risks to the patient.
It is this scenario – or the specter that it is even remotely possible – which has created reticence among many clinicians to embrace AI tools. Settled case law in this area is considered quite conservative, which has limited clinical adoption of even proven healthcare technologies.
The actual medical liability costs are understandably hard to ascertain. A detailed Harvard University study in 2010 calculated the total cost to be $55.6 billion. The staggering size of this issue continues to motivate entrepreneurs to develop innovative solutions to whittle away at the problem. According to Rock Health data, the clinical decision support sector alone received $2.0 billion (35 companies) and $647 million (8 companies) of investment in 2020 and 2021, respectively. This does not even begin to reflect the extraordinary amount of funding into general-purpose AI companies. Perhaps not surprisingly, the American Medical Association (AMA) sees the problem as even more significant, pegging the costs associated with medical liability at between $84 billion and $151 billion.

The AMA recently studied “ambient intelligence” platform technologies in hospitals and cited that video surveillance and transcription systems risk capturing novel data without patient, much less worker, consent. Video of common spaces inside hospitals risks identifying patients and compromising expectations of privacy. Furthermore, if certain clinical issues are identified (in an unstructured format) and nothing is done, does that now introduce liability?
The costs to society of not using novel technologies developed in other industries for healthcare applications are hard to measure but likely quite significant. Law enforcement has struggled with facial recognition, leading to stricter regulations and limits on its use, but similar image algorithms are powerful in the detection of certain skin cancers. Amazon has received a fair bit of public scorn for the way it monitors and evaluates employees’ productivity, but similar technologies power remote patient monitoring solutions that dramatically improve virtual care. The utility of these technologies ultimately should push adoption and acceptance.
It is near-impossible to code for a person’s ability to decipher nuance. The electric vehicle (EV) industry is struggling with similar issues. Obviously, each of us as drivers is regulated at the state level, yet the federal government oversees EVs. This patchwork has created confusion as the industry comes of age. In the event of a driverless EV accident, where does the liability lie? When faced with a terrible dilemma – such as whether to run over a pedestrian, resulting in near-certain death, or to crash into a school bus – how does the EV make that decision? Who is responsible for the outcome?
As other industries sort out these intractable legal and ethical questions, the healthcare industry may set certain precedents.
The frequency of medical errors not only endangers patients and adds to the economic burden, but also sabotages the accuracy of the medical data used to train AI models.
This is subsequently amplified by the algorithms, making the problem of medical errors all but impossible to eradicate.
terrific observation. really appreciate the nuance this adds.
Thanks, Michael.
Greetings from Israel!
I’ve rarely seen “obvious” medical errors in practice…e.g. misdiagnosis when the data is obvious, wrong-site surgeries, pushing the wrong meds, etc. Don’t get me wrong, errors happen and doctors make bad decisions, but the obvious stuff is pretty infrequent.
Much more common are things in the “grey”…a diagnosis that could have been made earlier, lab results and symptoms pointing to a rare diagnosis, an uncommon side effect of a medication. Almost all of medicine is probabilistic, and humans are pretty good but not perfect at weighting probabilities. The opportunity and challenge with AI is that it’s also probabilistic…it might be able to spot rare conditions/issues in the data that humans can’t, but those probabilities will be ranked low vs more standard / mundane explanations.
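To put rough numbers on that base-rate effect, here is a minimal sketch (both the prevalences and the likelihoods are hypothetical) of how Bayes’ rule ranks two candidate diagnoses:

```python
# Hypothetical inputs: how well each diagnosis explains the finding...
likelihood = {"common condition": 0.30, "rare disease": 0.95}
# ...and how often each occurs in the population (assumed base rates).
prior = {"common condition": 1 / 100, "rare disease": 1 / 10_000}

# Normalize over just these two candidate diagnoses.
evidence = sum(likelihood[d] * prior[d] for d in prior)
for d in prior:
    posterior = likelihood[d] * prior[d] / evidence
    print(f"{d}: posterior = {posterior:.3f}")
# common condition ~0.97, rare disease ~0.03: the rare but possibly
# correct diagnosis sits far down the ranked list, exactly as described.
```

Even though the finding fits the rare disease roughly three times better, the 100x difference in base rates dominates the ranking.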
AI will open the predictive aperture for potential diagnoses / side effects / surgical complications. Human physicians will still need to interpret the probabilities (and include patient preferences) and act in the physical world…which will ironically open up the potential for “errors” since the AI’s output might include those low probability events.
Medicine will need to fully move from the macro (organ / system level) to the micro (genetic / biomarker) in diagnosis and treatment, before the power of AI can be realized. At that level, probabilities will still drive AI, but it will look much more deterministic.
thnx Jack – really important nuance and clarification. as always, it is never simply black and white. expect health tech industry will continue to iterate and improve performance – may just take many years to optimize
Our dependence on the current “standard of care” as the gold standard of best medical practice is a deep and profound disorder in medicine. Comprehensive standards of care for managing clinical information are woefully incomplete.
Trailing behind most other industries, medicine relies on the cognitive capacities of autonomous providers (personal knowledge, judgement, habits, intellect) to generate these standards. In order to generate decisions that reflect true “standards of care,” the use of tools that integrate detailed patient data with comprehensive medical knowledge must precede clinical decision-making. The best use of information tools will deduce and stratify all relevant diagnostic and therapeutic possibilities, maximizing standards of objectivity, completeness and accuracy. Clinical judgement, experience and critical thinking then add precision to care delivery.
Many data sets relevant to provider decision making will need to be made available at the point of care: detailed personal historical information trended across populations, claims-related outcome data, comprehensive, analyzed and cross-referenced scientific data, etc.
So, the conundrum of training set data bias applies not just to AI and RPA but to clinical guidelines based on clinical evidence as well. Embracing AI tools will depend on incorporating the full range of medical information into clinical decision making, using the best available tools to manage it (AI, RPA), applying critical thinking and architectural choice upon informed consent, and upgrading the cultural landscape of medical practice.
really great insights – appreciate the note to clarify the nuances around how this market evolves. always nice to hear first-hand complexities.
As the next generation of AI/ML-infused EMRs enters the market, these technology/revenue cycle management companies have an opportunity to go further in surfacing differential diagnoses and highlighting high-probability diagnostic errors. Additionally, companies in this space are uniquely situated to measure provider quality and, in turn, could be partners in underwriting lower-priced medical malpractice policies that are priced based on years of actual performance.
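As a back-of-the-envelope sketch of that underwriting idea (every figure and parameter below is an illustrative assumption, not market data), expected-loss pricing would let a measured, lower error rate flow directly into a lower premium:

```python
# Hypothetical expected-loss pricing for a malpractice policy:
# premium = claim frequency x average severity, grossed up for expenses.

def annual_premium(errors_per_year: float,
                   claims_per_error: float = 0.02,     # assumed: errors that become claims
                   avg_claim_cost: float = 350_000.0,  # assumed: indemnity + defense per claim
                   expense_load: float = 0.35) -> float:
    claim_frequency = errors_per_year * claims_per_error
    return claim_frequency * avg_claim_cost * (1 + expense_load)

# A provider whose EMR-measured error rate is half the assumed baseline
# would pay roughly half the premium under this (simplified) model.
for label, rate in [("baseline provider", 2.0), ("low-error provider", 1.0)]:
    print(f"{label}: ${annual_premium(rate):,.0f}/yr")
```

A real actuarial model would layer in credibility weighting, specialty, venue, and policy limits; the sketch only shows how measured performance could feed the frequency term.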
great to hear from you, my friend. hope you have settled in and appreciate the insightful comment!