Physician vs technology: Who takes the blame?

>>> 9-1-1, we have an emergency…

AI is not just appearing in our homes through Amazon’s Alexa, or helping us get around in self-driving Tesla cars; it is also entering healthcare. As we shift from fully manual medicine to AI-human interactive healthcare, the question in this title is one of many that need to be addressed (See Blog Post).

I would like to discuss the view of a fellow blogger, Shailin Thomas, in his Harvard Law Bill of Health blog. He argues that because the algorithm has a higher accuracy rate than the average doctor, and following its suggestion is statistically the best option, it “seems wrong to place blame on the physician”.

He then focuses on the fact that shifting this blame away from the doctor would reduce the exercise of medical malpractice laws, which protect patients by providing a way to police their doctors’ final diagnoses, and he gives two reasons why this could be good news. First, strict malpractice liability laws do not necessarily mean better outcomes for patients, as suggested by research done at Northwestern University and reported by Fox News. Second, taking malpractice liability away from physicians would reduce the overspending in healthcare that results from the practice of defensive medicine (ordering extra diagnostic tests to avoid potential lawsuits).

Although he makes good use of the potential positive outcomes of easing medical malpractice liability on physicians, I strongly disagree that tampering with the serious question of responsibility for an AI-based diagnostic mistake is the way to effect this change.

The doctor is the healthcare professional, not the AI. Regardless of the use of the AI algorithm, at the end of the day the algorithm remains a tool for the physician to improve the accuracy and delivery of her diagnosis. As pediatrician Rahul Parikh writes in MIT Technology Review, “AI can’t replace doctors. But it can make them better”. After all, AI is not replacing their jobs, but changing them. Suppose you need a complex program to do an aspect of your job that you used to do manually. At first you are skeptical of it, but as time passes you become familiar with the program, so much so that there comes a point where you fully trust it and skip any double-checking. Then a mistake happens. You realize the program made it, and it never crossed your mind to second-guess it. Will your employer fire you, or the program? In healthcare there is seldom a firing; usually there is a death.

“There is currently no safety body that can oversee or accredit any of these medical AI technologies, and we generally don’t teach people to be safe users of the technology. We keep on unleashing new genies and we need to keep ahead of it.”

-Enrico Coiera, Director of the Centre for Health Informatics at Macquarie University

The healthcare industry already struggles to prevent mistakes. In 2016, medical errors were the third leading cause of death in the U.S., exceeded only by heart disease and cancer. Many commentators have compared AI to a black box, because it is nearly impossible to know how deep learning algorithms arrive at their conclusions. But healthcare itself can also be classified as a black box, as Matthew Syed argues in his book Black Box Thinking: everyone trusts doctors and hospitals, and yet they let more people die each year than traffic accidents do, while the response remains sorrowful but accepting, with limited post hoc oversight.

Industry                              Deaths (2016)
Aviation                              325
Traffic accidents                     40,000
Healthcare industry (preventable)     250,000


If the physician will not be responsible, who will? The first option would be to hold the AI itself responsible. However, the AI is inanimate; the affected patient would receive no compensation and would end up extremely unsatisfied. A second option would be to shift the blame to the developers. This might be difficult, as the software can be accurate for its initial design and implementation and only fall short when repurposed; IBM Watson, for example, was designed to compete on Jeopardy before being applied to clinical medicine. It would also decrease interest in AI development. A third option would be to hold the organization behind the AI responsible. However, if the AI has no design failures this would be hard to make work, much like holding Tesla responsible for an accident caused by the driver of a Tesla car.

To further develop AI implementation in healthcare, the question of responsibility needs to be addressed. What about you: who do you think should be held responsible?


Can AI discriminate against minorities?

>> Hello, World!

Much has happened since I made my first post, most notably the use of the new genome-editing technique CRISPR-Cas9 by He Jiankui, a scientist at the Southern University of Science and Technology in China, to alter the DNA of embryos from seven couples, leading to the birth of genetically modified twin girls. This research has been called “monstrous”, “crazy”, and “a grave abuse of human rights” by the scientific community worldwide, and the universities involved have declared that they had no awareness this research was being performed under their institutions.

This research, however, has one positive consequence: it highlights the inadequate regulation of some novel technological innovations, which urgently needs to be addressed for the benefit of society and the advancement of science and technology.

Currently, there is also the rise of Artificial Intelligence (AI), which is being implemented in many fields, especially healthcare. As with CRISPR-Cas9, the developers and users of this technology need to take a moment to step back from the technological upheaval and look at their innovation through an ethical lens, to see, address, and prepare for the potential negatives and ethical conflicts.

This post is the first in a two-post series covering major ethical issues around the use of AI in healthcare that need to be taken into consideration.


AI algorithms can discriminate against minorities

The fuel that enables AI algorithms to function, especially machine learning and deep learning, is large data sets, which are taken as input, processed, and used to deliver conclusions based solely on those data. For example, a company could use AI to recommend the best candidate to hire by feeding the algorithm data about past successful candidates and letting it draw its own conclusions, as in the sketch below.
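To make the hiring example concrete, here is a minimal sketch. It is my own illustration, not taken from any real system: the features, the tiny data set, and the use of scikit-learn’s LogisticRegression are all assumptions chosen for brevity.

```python
# A minimal sketch of a hiring-recommendation model: the algorithm only
# learns patterns that are present in the historical data it is fed.
# Hypothetical features and labels; assumes scikit-learn is installed.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past candidate: (years_experience, interview_score).
# Label 1 = "successful hire" according to the company's own history.
X = np.array([[5, 8], [2, 6], [7, 9], [1, 4], [3, 7], [6, 5]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# The model now "recommends" new candidates purely from those patterns,
# including whatever bias is baked into who was labelled successful.
new_candidate = np.array([[4, 7]])
print(model.predict_proba(new_candidate)[0, 1])  # estimated probability of "success"
```

The point is that the model’s recommendations can only ever reflect the historical data it was fed, which is exactly where the next issue arises.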

Applying these algorithms to decisions about human matters is tricky, however, because the data need to reflect our diversity. Data that fail to do so are one way an algorithm’s recommendation can become biased; others are human biases already inherent in the data and the intentional embedding of bias into the algorithm by a prejudiced developer.
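As a rough illustration of that first mechanism, the sketch below (entirely synthetic data, my own construction rather than anything from a real healthcare or hiring system) trains a classifier on data where one group is heavily underrepresented and then measures accuracy separately for each group.

```python
# A rough sketch of how unrepresentative training data can bias a model:
# group "B" is barely present in training, so the learned decision rule
# fits group "A" and misclassifies group "B" more often.
# All data are synthetic; numbers are for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, shift):
    """Generate a synthetic group whose feature/label relationship is shifted."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # group-specific boundary
    return X, y

# Training set: 950 samples from group A, only 50 from group B.
XA, yA = make_group(950, shift=0.0)
XB, yB = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([XA, XB]), np.concatenate([yA, yB]))

# Evaluate on fresh samples from each group: accuracy drops for group B.
for name, shift in [("A", 0.0), ("B", 1.5)]:
    Xt, yt = make_group(500, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```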

AI has already been shown to reflect the biases in its training data in non-medical fields. For example, AI algorithms designed to help American judges with sentencing by predicting an offender’s likelihood of re-offending have shown an alarming amount of bias against African-Americans.

Bernard Parker, left, was rated high risk; Dylan Fugett was rated low risk. (source)

Healthcare delivery itself varies by race, and an algorithm designed to make healthcare decisions will be biased if few (or no) genetic studies have been done in certain populations. One example of this is the attempt to use data from the Framingham Heart Study to predict cardiovascular disease risk in non-white populations, which has led to biased results, with both overestimation and underestimation of risk; a simple per-group audit, as sketched below, is one way to surface that kind of miscalibration.
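Here is a small, hedged sketch of such an audit. It is not how the Framingham-derived scores are actually validated; it is just an illustration, on synthetic data, of comparing mean predicted risk against the observed event rate within each group.

```python
# A minimal sketch (my own illustration) of auditing a risk model by
# subgroup: compare mean predicted risk with the observed event rate per
# group to spot over- or underestimation. All data below are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=1000, p=[0.9, 0.1]),  # group B underrepresented
    "predicted_risk": rng.uniform(0, 1, size=1000),
})

# Synthetic outcomes: the model systematically underestimates risk for group B.
bias = np.where(df["group"] == "B", 0.2, 0.0)
df["event"] = rng.uniform(0, 1, size=1000) < np.clip(df["predicted_risk"] + bias, 0, 1)

# A large gap between the two columns in one group signals miscalibration there.
audit = df.groupby("group").agg(
    mean_predicted=("predicted_risk", "mean"),
    observed_rate=("event", "mean"),
    n=("event", "size"),
)
print(audit)
```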

Not everyone may benefit equally from AI in healthcare, because AI tends to perform poorly where data are scarce. As a result, this may affect people with rare medical conditions, and others underrepresented in clinical trials and research data, such as Black, Asian, and minority ethnic populations.

As the House of Lords Select Committee on AI cautions, the datasets used to train AI systems often do a poor job of representing the wider population, which means the resulting systems can potentially make unjust decisions that reflect societal prejudice.

AI algorithms can be malicious   

In addition, there is the ethical issue that developers of AI may have negative or malicious intentions when making the software. After all, if everybody had good intentions, the world would certainly be a better place.

Take, for example, the recent high-profile cases of Uber and Volkswagen. Uber’s machine learning software tool Greyball predicted which ride hailers might be undercover law-enforcement officers, allowing the company to bypass local regulations. Volkswagen, in turn, developed an algorithm that made its vehicles reduce their nitrogen oxide emissions only while undergoing emissions tests, so that they would pass.


Private AI companies working with healthcare institutions might create an algorithm better suited to the financial interests of the institution than to the financial and care interests of the patient, particularly in the USA, where there is a continuous tension between improving health and generating profit, and where the makers of the algorithms are unlikely to be the ones delivering bedside care. In addition, AI could be used for cyber-attacks, theft, and revealing information about a person’s health without their knowledge.

These potential negatives need to be acknowledged and addressed in the implementation of AI in any field, especially healthcare. In the upcoming post, I will discuss the effects of AI on patients and healthcare professionals, breaches of patient data privacy, and AI reliability and safety.