It is believed that approximately 250k people die each year due to medical errors, making this the third leading cause of death in the United States, according to data from Johns Hopkins University and the National Library of Medicine. The National Practitioner Data Bank tallied an annual average of 12.4k medical malpractice cases filed between 2009 and 2018. Nearly $9.8 billion of direct medical professional liability insurance premium was written in 2019 to insure against these malpractice claims, according to the National Association of Insurance Commissioners. These are significant numbers describing significant issues confronting the healthcare system.
Much of the promise of technology in healthcare (artificial intelligence (AI), predictive algorithms, clinical decision support, robotic process automation (RPA), etc.) is to standardize and automate the practice of medicine, thereby making it “better” – more efficient, less costly, more responsive, and hopefully, safer. Important elements of the healthcare delivery system are being automated with exciting advances in AI and RPA to usher in the great promise of precision medicine, which is expected to be a massive market opportunity. A recent analysis in Nature Biotechnology sized the global precision medicine market in 2028 at $217 billion.
Here is what I am struggling to reconcile: all of these advances, and yet errors are still rampant. And with these advances comes a roster of nettlesome legal and ethical issues that are only now beginning to be raised, much less answered. The movement to a more intelligent, always-on, virtual care delivery model challenges even the definition of what is deemed healthcare data (video feeds from someone’s home?). A greater respect for social determinants of health introduces new insights to advance whole-person care models while expanding the definition of who is a caregiver – and whether they are bound or covered by HIPAA.
Algorithms can be deterministic yet have been shown to have certain biases based on the training sets of data used to create those algorithms. A number of issues are revealed when new AI algorithms are asked to coexist with long-established clinical guidelines based on empirical evidence and codified by regulatory approvals. If the premise is that AI actually can improve upon existing standards of care, how then does the clinician reconcile opposing or differing recommendations between guidelines and AI tools? Does this introduce new liabilities?
The standard for avoiding malpractice claims is for a provider to simply deliver care consistent with that of similarly trained providers (and for no injury to result). Issues and legal exposure start to arise when the provider deviates from standards of care in favor of recommendations or insights provided by AI tools, even if they are thought to be superior. If the AI tools, in fact, are inferior or misapplied, and the provider has deviated from the standard of care presuming the tools to be of acceptable quality, the legal exposure is potentially significant, to say nothing of the risks to the patient.
It is this scenario – or the specter that it is even remotely possible – which has created reticence among many clinicians to embrace AI tools. Settled case law in this area is considered quite conservative, which has limited clinical adoption of even proven healthcare technologies.
The actual medical liability costs are understandably hard to ascertain. A detailed Harvard University study in 2010 calculated the total cost to be $55.6 billion. The staggering size of this issue continues to motivate entrepreneurs to develop innovative solutions to whittle away at the problem. According to Rock Health data, the clinical decision support sector alone received $2.0 billion (35 companies) and $647 million (8 companies) of investment in 2020 and 2021, respectively. This does not even begin to reflect the extraordinary amount of funding into general purpose AI companies. Perhaps not surprisingly, the American Medical Association (AMA) sees the problem as even more significant, pegging the costs associated with medical liability at between $84 billion and $151 billion.
The AMA recently studied “ambient intelligence” platform technologies in hospitals and cited that video surveillance and transcription systems risk capturing novel data without patient, much less worker, consent. Video of common spaces inside hospitals risks identifying patients and compromising expectations of privacy. Furthermore, if certain clinical issues are identified (in an unstructured format) and nothing is done, does that now introduce liability?
The costs to society of not using novel technologies developed in other industries for healthcare applications are hard to measure but likely quite significant. Law enforcement has struggled with facial recognition, leading to stricter regulations and limits on its use, but similar image algorithms are powerful in the detection of certain skin cancers. Amazon has received a fair bit of public scorn for the way it monitors and evaluates employees’ productivity, but similar technologies power remote patient monitoring solutions that dramatically improve virtual care. The utility of these technologies ultimately should push adoption and acceptance.
It is near-impossible to code for a person’s ability to decipher nuance. The electric vehicle (EV) industry is struggling with similar issues. Obviously, drivers are regulated at the state level, yet the federal government oversees EVs. This patchwork has created confusion as the industry comes of age. In the event of a driverless EV accident, where does the liability lie? When faced with a terrible dilemma, such as whether to run over a pedestrian resulting in near-certain death or crash into a school bus, how does the EV make that decision? Who is responsible for the outcome?
As other industries sort out these intractable legal and ethical questions, the healthcare industry may set certain precedents.