Physician vs technology: Who takes the blame?

>>> 9-1-1, we have an emergency…

AI is not just appearing in our homes through Amazon’s Alexa, or helping us get around in self-driving Tesla cars; it is also entering healthcare. As we shift from entirely manual medicine to AI-human interactive healthcare, the question in this title is one of the many that need to be addressed (See Blog Post).

I would like to discuss the view of a fellow blogger, Shailin Thomas, in his Harvard Law Bill of Health blog. He argues that since the algorithm has a higher accuracy rate than the average doctor, and following its suggestion is statistically the best option, it “seems wrong to place blame on the physician”.

He then focuses on the fact that shifting this blame away from the doctor would lead to a decrease in the exercise of medical malpractice laws, which protect patients by providing a way to police their doctors’ final diagnoses, and gives two reasons why this could be good news. The first is that strict malpractice liability laws do not necessarily mean better results for the patient, as suggested by research done at Northwestern University and reported by Fox News. The second is that taking malpractice liability away from the physician would decrease the overspending in healthcare that results from the practice of defensive medicine (ordering extra diagnostic tests to avoid potential lawsuits).

Although he certainly makes a good case for the potential positive outcomes of lessening medical malpractice liability for physicians, I strongly disagree that tampering with the serious responsibility for an AI-based diagnostic mistake is the way to effect this change.

The doctor is the healthcare professional, not the AI. Regardless of how the AI algorithm is used, at the end of the day it remains a tool for the physician to improve the accuracy and delivery of her diagnosis. As pediatrician Rahul Parikh notes in his MIT Technology Review piece, “AI can’t replace doctors. But it can make them better”. After all, AI is not replacing their jobs, but changing them. Imagine, for example, that you start using a complex program to do a part of your job you previously did manually. At first you are skeptical of it, but as time passes you become familiar with the program. So much so that there comes a point where you decide to trust it fully, bypassing any double-checking. But then a mistake is made. You realize the program made it, and it never crossed your mind to second-guess it. Will your employer fire you, or the program? In healthcare there is seldom a firing; usually there is a death.

“There is currently no safety body that can oversee or accredit any of these medical AI technologies, and we generally don’t teach people to be safe users of the technology. We keep on unleashing new genies and we need to keep ahead of it.”

-Enrico Coiera, Director of the Centre for Health Informatics at Macquarie University

The healthcare industry is already failing to prevent mistakes. In 2016, medical errors were the third leading cause of death in the U.S., exceeded only by heart disease and cancer. Many commentators have compared AI to a black box, as it is nearly impossible to know how deep learning algorithms arrive at their conclusions. But the healthcare industry can also be seen as a black box, as Matthew Syed argues in his book Black Box Thinking: everyone trusts doctors and hospitals, yet preventable errors let more people die each year than traffic accidents, and the response is largely sorrowful acceptance, with limited post hoc oversight.

Industry | Deaths (2016)
Aviation | 325
Traffic accidents | 40,000
Healthcare industry (preventable) | 250,000

If the physician will not be responsible, who will? The first option would be to hold the AI itself responsible. However, the AI is inanimate, so the affected patient would receive no compensation and would be left deeply unsatisfied. A second option would be to shift the blame to the developers. This might be difficult, as the software can be accurate for its initial design and implementation and later be repurposed; IBM Watson, for example, was designed to compete on Jeopardy! before being applied to clinical medicine. Holding developers liable would also decrease interest in AI development. A third option would be to hold the organization behind the AI responsible. However, if the AI has no design failures it would be hard for this to work, much like holding Tesla responsible for an accident caused by the driver of a Tesla car.

To further develop AI implementation in healthcare, the question of responsibility needs to be addressed. In your view, who should be held responsible?


Machine Learning joins our fight against cancer

>>> Let’s classify

Histology is the microscopic analysis of cancer tissue. It remains the core technique for classifying many rare tumors, which lack molecular identifiers; for common tumor types, the abundance of identifiers allows technological developments to assess them without needing visual appraisal of cellular alterations.

The problem with histology

Because histology depends on visual observation, assessments can vary between individuals, leading to different classifications and thus introducing bias. Along with this human variation, it faces further challenges: despite having similar histology, many tumors can still progress in different ways, and, conversely, tumors with different microscopic characteristics can progress in the same way.

In previous research studies (1, 2), for example, this inter-observer variability in histopathological diagnosis has been reported in Central Nervous System (CNS) tumors such as diffuse gliomas (brain tumors initiating in the glial cells), ependymomas (brain tumors arising from the ependymal cells), and supratentorial primitive neuroectodermal tumors (which occur mostly in children and start in the cerebrum). To try to address this problem, some molecular groupings have been incorporated into the World Health Organization (WHO) classification, but only for selected tumors such as medulloblastoma.

This diagnostic variation and uncertainty pose a challenge to decision-making in clinical practice that can have a major effect on the survival of a cancer patient. Therefore, Capper and colleagues decided to train their machine learning algorithm not on complex visual assessments, but on the most studied epigenetic event in cancer: DNA methylation.

Histology vs DNA methylation

Epigenetic modifications do not change the DNA sequence that encodes how our cells function, but they alter the expression of genes and the fate of the cell. In DNA methylation, a chemical group called a methyl group is attached to the DNA, and its patterns are distinctive in specific cancers, which allows innovative diagnostics to classify them. Compared with histology, epigenome-wide analysis of DNA methylation in cancer allows for an unbiased diagnostic approach, and thus Capper et al. (2018) fed their innovative cancer-diagnostic computer genome-wide methylation data from samples of almost all CNS tumour types under the WHO classification.
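
To give a concrete picture of what such data looks like, here is a minimal, purely illustrative sketch (not Capper et al.’s actual data or pipeline; the probe IDs, sample names, and class labels are invented) of a methylation feature matrix: one row per tumour sample, one column per CpG site, with beta values between 0 (unmethylated) and 1 (fully methylated), plus the histology-based label later used for supervised training.

```python
import numpy as np
import pandas as pd

# Illustrative only: random beta values standing in for genome-wide
# methylation measurements. Real arrays cover hundreds of thousands of CpGs.
rng = np.random.default_rng(0)
probes = [f"cg{i:08d}" for i in range(8)]      # invented CpG probe IDs
samples = [f"sample_{i}" for i in range(6)]    # invented sample names

beta_values = pd.DataFrame(rng.uniform(0.0, 1.0, size=(6, 8)),
                           index=samples, columns=probes)

# Histology-based WHO class of each sample, used as the training label.
beta_values["who_class"] = ["medulloblastoma", "ependymoma", "diffuse glioma",
                            "medulloblastoma", "ependymoma", "diffuse glioma"]
print(beta_values)
```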

Machine Learning + DNA methylation

Capper et al. (2018) used the random forest (RF) machine learning algorithm, which combines several weak classifiers to improve prediction accuracy. Through supervised machine learning, they trained it to recognize methylation patterns in samples that had already been classified histologically, and to find naturally occurring tumor groupings by itself so that new samples could be assigned to these pattern-based categories. Capper and his colleagues then used the computer to classify 1,104 test cases that had been diagnosed by pathologists using standard histological and molecular methods. An overview of their findings is shown below, followed by a small code sketch of the supervised step:

[Figure: overview of the random forest classification results from Capper et al. (2018)]
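
As a rough illustration of that supervised step (a minimal sketch with synthetic data, not the authors’ actual code, probes, or tumour classes), a random forest can be trained on labelled methylation profiles and then asked to classify held-out samples:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the training data: 300 samples x 50 CpG probes of
# beta values, each labelled with one of three invented tumour classes.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(300, 50))
y = rng.choice(["class_A", "class_B", "class_C"], size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A random forest aggregates many weak decision-tree classifiers into a
# stronger ensemble, which is the property highlighted above.
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

# Each held-out sample gets a predicted class and per-class probabilities,
# loosely analogous to the classification scores reported for the test cases.
print(clf.predict(X_test[:5]))
print(clf.predict_proba(X_test[:5]))
```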

In 12.6% of the cases, the computer’s and the pathologist’s diagnoses did not match. However, after further laboratory testing involving gene sequencing, a technique that reveals changes at the level of the DNA sequence, 92.8% of these mismatched tumors were found to match the computer’s assessment rather than the pathologist’s. Furthermore, 71% of these were computationally assigned a different tumor grade, which affects treatment delivery.
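
For a sense of scale, plugging the reported percentages into the 1,104 test cases gives roughly the following counts (back-of-the-envelope arithmetic based on the figures above, not numbers quoted directly from the paper):

```python
test_cases = 1104

mismatches = test_cases * 0.126        # computer and pathologist disagreed
favour_computer = mismatches * 0.928   # sequencing supported the computer
regraded = mismatches * 0.71           # assigned a different tumour grade

print(round(mismatches))       # ~139 discordant cases
print(round(favour_computer))  # ~129 resolved in the algorithm's favour
print(round(regraded))         # ~99 with a treatment-relevant grade change
```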

The Future

Despite this machine learning innovation, histology today remains the indispensable method for accessible and universal tumor classification. However, the approach developed by Capper et al. (2018) complements, and in some cases such as rare tumor classification outperforms, histological microscopic examination. As this platform develops further in working laboratories, the future of cancer classification may prove highly accurate and unbiased through the combination of visual inspection and molecular analysis.