Physician vs technology: Who takes the blame?

>>> 9-1-1, we have an emergency…

AI is not just appearing in our homes through Amazon’s Alexa, nor only helping us get around in self-driving Tesla cars; it is also entering healthcare. As we shift from fully manual medicine to AI-human interactive healthcare, the question in this title is one of the many that need to be addressed (See Blog Post).

I would like to discuss the view of a fellow blogger, Shailin Thomas, in his Harvard Law Bill of Health blog. He argues that since the algorithm has a higher accuracy rate than the average doctor, and following the algorithm’s suggestion is statistically the best option, it “seems wrong to place blame on the physician”.

He then focuses on the fact that shifting this blame away from the doctor would lead to a decrease in the exercise of medical malpractice laws, which protect patients by providing a way to police their doctors’ final diagnoses, and gives two reasons why this could be good news. First, strict malpractice liability laws do not necessarily mean a better result for the patient, as suggested by research done at Northwestern University and reported by FoxNews. Second, taking malpractice liability away from the physician would decrease the overspending in healthcare that results from the practice of defensive medicine (ordering more diagnostic tests to avoid potential lawsuits).

Although he certainly makes good points about the potential positive outcomes of easing medical malpractice laws on physicians, I strongly disagree that tampering with the serious responsibility attached to an AI-based diagnostic mistake is the way to effect this change.

The doctor is the healthcare professional, not the AI. Regardless of the use of the AI algorithm, at the end of the day the algorithm remains a tool for the physician to improve the accuracy and delivery of her diagnosis. As pediatrician Rahul Parikh notes in his MIT Technology Review piece, “AI can’t replace doctors. But it can make them better”. After all, AI is not replacing their jobs, but changing them. Suppose a complex new program takes over an aspect of your job you previously did manually. At first you are skeptical of it, but as time passes you become familiar with the program, so much so that there comes a point where you decide to fully trust it, bypassing any double-checking. But then a mistake happens. You realize the program made it, and it never crossed your mind to second-guess it. Will your employer fire you, or the program? In healthcare there is seldom firing; usually there is death.

“There is currently no safety body that can oversee or accredit any of these medical AI technologies, and we generally don’t teach people to be safe users of the technology. We keep on unleashing new genies and we need to keep ahead of it.”

-Enrico Coiera, Director of the Centre for Health Informatics at Macquarie University

The healthcare industry is already good at failing to prevent mistakes. In 2016, medical errors were the 3rd leading cause of death in the U.S. alone, exceeded only by heart disease and cancer. Many commentators have compared AI to a black box, as it is nearly impossible to know how deep learning algorithms arrive at their conclusions. But the healthcare industry can also be classified as a black box, as Matthew Syed argues in his book Black Box Thinking: everyone trusts doctors and hospitals, and yet they let more people die each year than traffic accidents do, while the response remains sorrowful but accepting, with limited post hoc oversight.

Deaths by industry (2016):
  • Aviation: 325
  • Traffic accidents: 40,000
  • Healthcare (preventable medical error): 250,000

 

If the physician will not be responsible, who will? The first option would be to hold the AI responsible. However, the AI is inanimate, so the affected patient would receive no compensation and would be left deeply unsatisfied. A second option would be to shift the blame to the developers. This is difficult, as software can be accurate for its initial design and implementation and then be repurposed; IBM Watson, for example, was designed to compete on Jeopardy before being applied to clinical medicine. It would also discourage AI development. A third option would be to hold the organization behind the AI responsible. However, if the AI has no design failures, this would be hard to make work, rather like holding Tesla responsible for an accident caused by a Tesla driver.

To further develop AI implementation in healthcare, the question of responsibility needs to be addressed. In your view, who should be held responsible?

 

 

Are YOU living in an infectious zoonotic location?

>>> Are you infected?

Humans cannot see when They play hide and seek

Invisible to the eye, but not to Health

And how does she know how to treat the ones she meets? 

They will play with someone else after her slow, incurable death.

As you may know from reading my About Me section, I previously worked on drug discovery research for tuberculosis (TB), a leading infectious disease ranked 10th among the causes of death worldwide. In humans, the main TB bacterial type, or strain, causing the disease is Mycobacterium tuberculosis, but there are other TB bacterial types arising from animals that get transmitted to humans, such as Mycobacterium bovis, commonly transmitted by infected dairy products, seals, rhinoceroses, and elk. This makes TB a zoonotic disease; a zoonosis is any infectious disease that can be naturally transmitted between vertebrate animals and humans.

Of the 1,415 pathogens (causative agents of disease, i.e. viruses, bacteria) known by 2001 to infect humans, 61% were zoonotic. Deadly pandemics throughout history, such as the 1918 Spanish flu and the 2009 swine flu, as well as modern deadly diseases like malaria, the Ebola and Zika viruses, anthrax, and rabies, have been and are zoonoses, or have strains that are zoonotic. In terms of treatments, many human cases caused by these zoonoses, especially those of TB and malaria (arising from a bacterium and a parasite, respectively), have no cure, as their pathogenic strains continue to become increasingly resistant to current treatments.

Since these diseases start in animals and then get transmitted to humans, imagine if we could predict in advance which animals are going to carry a disease (these carriers are called reservoirs), and in which geographical locations they would arise. This was the focus of a 2015 rodent study by Barbara A. Han and colleagues from Princeton University.

“By combining ecological and biomedical data into a common database, Barbara was able to use machine learning to find patterns that can inform an early warning system for rodent-borne disease outbreaks.”

-John Drake, co-author of Han et al. (2015)

disease_map
Figure 1. Map portraying hotspots for (A) rodent reservoir diversity and (B) predicted geographical location of future hotspots. (source)

Han et al. (2015) used rodent data from PanTHERIA and applied generalized boosted regression, which builds a prediction model from an ensemble of weak prediction models (decision trees) and ranks the variables most important for the prediction, in this case zoonotic reservoir status and geographical location. They examined intrinsic traits (postnatal growth rate, relative age at sexual maturity, relative age at first birth, and production) along with the ecological and geographical traits of the locations where current zoonotic host carriers have come from.
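To make the approach concrete, here is a minimal, hypothetical sketch of this kind of boosted-tree model in Python (scikit-learn). The trait names and the synthetic data are placeholders of my own invention, not Han et al.’s actual dataset or code.

```python
# Minimal sketch (not Han et al.'s pipeline): gradient-boosted decision trees
# predicting zoonotic reservoir status from made-up life-history traits.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 600  # pretend each row is a rodent species, as in a PanTHERIA-style table
traits = pd.DataFrame({
    "postnatal_growth_rate": rng.gamma(2.0, 1.0, n),
    "age_at_sexual_maturity": rng.gamma(3.0, 30.0, n),
    "age_at_first_birth": rng.gamma(3.0, 40.0, n),
    "litters_per_year": rng.poisson(2, n),
    "latitude_midpoint": rng.uniform(-40, 70, n),
})

# Synthetic label loosely mimicking the paper's finding: "fast" life histories
# at higher latitudes are more likely to be reservoirs. Purely illustrative.
risk = (traits["postnatal_growth_rate"] + 0.02 * traits["latitude_midpoint"]
        - 0.01 * traits["age_at_sexual_maturity"])
is_reservoir = (risk + rng.normal(0, 0.5, n) > risk.median()).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(traits, is_reservoir,
                                          test_size=0.25, random_state=0)

# Boosting builds an ensemble of shallow trees, each one correcting the last.
model = GradientBoostingClassifier(n_estimators=300, max_depth=3,
                                   learning_rate=0.05).fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
for name, imp in zip(traits.columns, model.feature_importances_):
    print(f"{name}: {imp:.2f}")   # which traits drive the prediction
```

The feature importances reported at the end are the boosted-tree analogue of the trait ranking described above.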

Among the highlights of the results were that:

  • Not every rodent has an equal probability of transmitting disease; the likeliest reservoirs are species that mature quickly, reproduce faster, and live in northern temperate areas with low biodiversity
  • North America, the Atlantic coast of South America, Europe, Russia, and parts of Central and East Asia host the majority of zoonotic reservoirs, concentrated at upper latitudes (Figure 1)
  • Predicted future hotspots include China, Kazakhstan, and the Midwestern USA
  • Many rodent reservoir hotspots lie within the geographical locations where infectious diseases (both zoonotic and non-zoonotic) occur most often
  • The majority of etiologic agents infecting rodents are viruses (Figure 2)
reservoir_pathogens
Figure 2. Type of pathogens and parasites infecting the rodent species in the wild, where viruses are the major infecting agents. (source)

Overall, these machine learning predictions show that it is possible to predict with accuracy which wild species carry zoonotic infections, and that machine learning can play a key role in improving how we tackle currently incurable infectious diseases and future emerging ones.

Machine Learning joins our fight against cancer

>>> Let’s classify

Histology is the microscopic analysis of cancer tissue. It remains the core technique for classifying many rare tumors, which lack molecular identifiers; for common tumor types, the abundance of such identifiers allows technological developments to assess them without needing visual appraisal of cellular alterations.

The problem with histology

Because it depends on visual observation, assessments can vary between individuals, leading to different classifications and thus introducing bias. Beyond this human variation, histology faces further challenges: despite having similar histology, many tumors can still progress in different ways, and conversely, tumors with different microscopic characteristics can progress the same way.

In previous research studies (1, 2), for example, this inter-observer variability in histopathological diagnosis has been reported in Central Nervous System (CNS) tumors such as diffuse gliomas (brain tumors initiating in the glial cells), ependymomas (brain tumors initiating in the ependyma), and supratentorial primitive neuroectodermal tumors (occurring mostly in children and starting in the cerebrum). To address this problem, some molecular groupings have been incorporated into the World Health Organization (WHO) classification, but only for selected tumors such as medulloblastoma.

This diagnostic variation and uncertainty pose a challenge to decision-making in clinical practice that can have a major effect on the survival of a cancer patient. Therefore, Capper and colleagues decided to train their machine learning algorithm not on complex visual assessments, but on the most studied epigenetic event in cancer: DNA methylation.

Histology vs DNA methylation

Epigenetic modifications do not alter the DNA sequence that encodes how our cells function, but they change the expression of genes and the fate of the cell. In DNA methylation, a chemical group called a methyl group is bound to the DNA, and because this feature differs between specific cancers, it enables innovative diagnostics to classify them. Compared with histology, epigenome-wide analysis of DNA methylation in cancer allows for an unbiased diagnostic approach, so Capper et al. (2018) fed their cancer-classifying computer genome-wide methylation data from samples of almost all CNS tumour types under the WHO classification.

Machine Learning + DNA methylation

Capper et al. (2018) used the Random Forest (RF) machine learning algorithm, which combines many weak classifiers to improve prediction accuracy. Through supervised machine learning, they trained it to recognize methylation patterns in samples that had already been classified histologically, and to find naturally occurring tumor patterns on its own so that samples could be assigned to these pattern categories. Capper and his colleagues then used the computer to classify 1,104 test cases that had been diagnosed by pathologists using standard histological and molecular methods. An overview of their findings showcases their interesting results:

rf

 

In 12.6% of the cases, the computer and pathologist diagnoses did not match, but after further laboratory testing using gene sequencing, a technique that reveals DNA changes at the genetic level, 92.8% of these mismatched tumors were found to match the computer’s assessment and not the pathologist’s. Furthermore, 71% of these were computationally assigned a different tumor grade, which affects treatment delivery.
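For readers curious what such a classifier looks like in code, below is a hedged, illustrative sketch of a random forest trained on methylation profiles. The data is randomly generated and the settings are arbitrary; this is not Capper et al.’s implementation.

```python
# Illustrative sketch only (not Capper et al.'s code): a random forest
# classifying tumors from genome-wide methylation profiles.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: rows = tumor samples, columns = methylation beta values
# (0 = unmethylated, 1 = fully methylated) at CpG sites across the genome.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(300, 10_000))   # 300 samples x 10,000 CpG sites
y = rng.integers(0, 5, size=300)            # 5 placeholder tumor classes

# Many decorrelated decision trees vote together to form a strong classifier.
clf = RandomForestClassifier(n_estimators=500, max_features="sqrt", n_jobs=-1)
scores = cross_val_score(clf, X, y, cv=5)
# With random data this will sit near chance; real methylation profiles carry
# class-specific signal that the forest can exploit.
print("cross-validated accuracy:", scores.mean())

# Class probabilities act as a per-case classifier score, analogous to the
# confidence threshold used to accept or defer a computational diagnosis.
clf.fit(X, y)
print(clf.predict_proba(X[:1]))
```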

The Future

Despite this machine learning innovation, histology today remains the indispensable method for accessible, universal tumor classification. However, the approach developed by Capper et al. (2018) complements, and in some cases such as rare tumor classification outperforms, histological microscopic examination. As this platform matures in working laboratories, the future of cancer classification may become one of high accuracy and reduced bias through the combination of visual inspection and molecular analysis.

 

AI + QSAR: helping drug discovery efforts

>>> Welcome to Club QSAR

Many believe that bringing pharmaceuticals to market is a quick and easy process requiring little regulation or experimentation. This is not the case. The drug development process is a long, arduous, and costly one, as the figure below explains.

the-drug-discovery-process

In this post, I will focus on the drug discovery (research & development) stage, which aims to identify the ideal drug candidate from among the many molecules able to produce the desired therapeutic effect on a biological target of interest (e.g., a protein).

This drug candidate identification is done by performing many in vitro (“in glass”) experiments that, although necessary, consume plenty of scientists’ costly resources and time, which could potentially be saved by using computational rather than experimental means.

Quantitative structure-activity relationship (QSAR) modelling is the main chemoinformatics approach used to discover small chemical compounds (drug candidates) with the desired activity against a therapeutic target (usually a protein playing a vital role in the disease) while minimizing the likelihood of off-target effects, which can cause toxicity. Such predictions help prioritize drug discovery experiments, reducing work and resource costs. QSAR usually relies on ligand-based models, in which the protein is ignored due to its complex structure (See Blog Post) and only the small molecule is modeled.

drug_2

In its simplest form, the measured activities of many small molecules against a single protein are obtained experimentally, and the model then learns from the small molecules’ specific features (fingerprints), i.e., the counts and arrangements of atoms and functional groups within each molecule. Alternatively, the model can learn these fingerprints by deriving them from chemical structures using an auto-encoder.
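As an illustration of this ligand-based setup, here is a minimal sketch assuming the RDKit and scikit-learn libraries are available: molecules are encoded as Morgan (circular) fingerprints and a random forest learns to predict activity. The SMILES strings and pIC50 values are toy placeholders, not real measurements.

```python
# Hedged sketch of a ligand-based QSAR model: circular (Morgan) fingerprints
# computed from SMILES strings, then a random-forest regressor predicting
# activity. Molecules and activities below are placeholders, not real data.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]   # toy molecules
pic50  = np.array([4.2, 5.1, 6.3])                        # toy activities

def fingerprint(smi, n_bits=2048, radius=2):
    """Encode a molecule as a fixed-length bit vector of substructures."""
    mol = Chem.MolFromSmiles(smi)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(list(fp), dtype=np.int8)

X = np.array([fingerprint(s) for s in smiles])

model = RandomForestRegressor(n_estimators=200)
model.fit(X, pic50)

# Predict the activity of a new, untested molecule before synthesizing it.
print(model.predict([fingerprint("CCN(CC)CC")]))
```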

The most promising QSAR methods prior to deep learning were variations of Random Forest (RF) and support vector machine algorithms. That was before Merck, one of the leading pharmaceutical companies, sponsored a Kaggle competition to examine which machine learning combinations could provide the most effective solutions to QSAR problems. The winning entry outperformed RF using an ensemble that included Gaussian process (GP) regression, with Deep Neural Networks (DNN) as the primary factor (see Insight into DNN below).

The use of multi-task DNNs in QSAR, for example, has improved on the single-protein approach mentioned above by allowing compounds to be analysed across multiple proteins. Conceptually, it allows the model to learn from less data by exploiting the fact that molecules with similar features behave similarly across multiple proteins.
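Below is a hedged sketch of what such a multi-task network can look like, written with Keras: a shared trunk reads the fingerprint, and one output head per protein predicts activity. The layer sizes and the number of targets are illustrative assumptions, not those of any published model.

```python
# Hedged multi-task QSAR sketch: a shared trunk learns from fingerprints, and
# each output "head" predicts activity against one protein target.
import tensorflow as tf

n_bits, n_proteins = 2048, 5          # assumed fingerprint size and target count

inputs = tf.keras.Input(shape=(n_bits,))              # molecular fingerprint
shared = tf.keras.layers.Dense(1024, activation="relu")(inputs)
shared = tf.keras.layers.Dropout(0.25)(shared)
shared = tf.keras.layers.Dense(256, activation="relu")(shared)

# One regression head per protein; all heads share the layers above, which is
# how knowledge is transferred across related tasks.
outputs = [tf.keras.layers.Dense(1, name=f"protein_{i}")(shared)
           for i in range(n_proteins)]

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.summary()

# model.fit(fingerprints, [activity_p0, activity_p1, ...]) would train all
# heads at once; in practice, missing labels for some proteins are masked.
```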

Deep learning can also address a key limitation of both single-task and multi-task models: the activities of molecules against the proteins most in need of prediction are the hardest to predict because the data sets are scarce.

A promising approach is AtomNet, a Deep Convolutional Neural Network (DCNN) that directly models both the molecule and the structure of the protein to predict bioactivity against novel (new) proteins with no experimental biological activity data, for drug discovery applications. AtomNet is the first deep neural network designed specifically for structure-based binding affinity prediction.

 

Insight into DNN—————————————————————————————————

dnn

DNNs are a class of deep learning algorithms made of a network of “neurons”. A neuron (a) has many inputs, shown as the input arrows, and one output (the output arrow). Each input arrow is associated with a weight wi. To understand weights with an example: if we trained a model to identify pedestrians in images where the pedestrians always appeared in the centre, the model would not recognize pedestrians in other positions, because each part of the image carries a different weight. The neuron also has an activation function, f(z), and a default bias term b. A row of neurons forms a layer of the network, and a DNN has several such layers (b), where each output neuron produces a prediction for a separate end point (e.g. an assay result).
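To make the description above concrete, here is a toy NumPy sketch of these ideas: each neuron computes a weighted sum of its inputs plus a bias and passes it through an activation function, and stacking layers of such neurons gives a small feed-forward network. All numbers are random placeholders.

```python
# Toy illustration of the neuron described above: weighted inputs, a bias
# term, and an activation function, stacked into layers.
import numpy as np

def relu(z):
    return np.maximum(0, z)

def layer(x, W, b):
    # Each row of W holds one neuron's weights w_i; z = W.x + b, output = f(z).
    return relu(W @ x + b)

rng = np.random.default_rng(0)
x = rng.random(8)                              # 8 input features
W1, b1 = rng.random((16, 8)), rng.random(16)   # hidden layer of 16 neurons
W2, b2 = rng.random((3, 16)), rng.random(3)    # 3 output neurons = 3 end points

hidden = layer(x, W1, b1)
predictions = W2 @ hidden + b2                 # one prediction per end point
print(predictions)
```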

——————————————————————————————————————————

 

 

 

 

Need a Sherlock Holmes to solve a protein’s 3D structure? Ask AlphaFold

>>> Proteins, proteins everywhere…

Proteins are the employees of the cell, working to maintain its survival; their specific function is determined by their structural shape, which follows from the amino acid (AA) sequence encoded in our genes. For example, antibody proteins are Y-shaped, and this hook-like shape allows them to latch onto pathogens (e.g. viruses, bacteria), detecting and tagging them for extermination.

To understand how these employees go from an AA sequence to their energy-efficient 3D structure, the following video will be helpful. In summary, biochemists describe protein structure at four distinct levels: the primary structure, which consists of the AA sequence; the secondary structure, consisting of repeating local structures formed from this AA sequence and held together by chemical bonds called hydrogen bonds, namely α-helices, β-sheets, and turns; the tertiary structure, the overall shape of a single polypeptide chain (a long AA chain) produced by non-local chemical interactions; and possibly a quaternary structure, if the protein is made of more than one polypeptide chain.

Elucidating the shape of a protein is an important scientific challenge because diseases like diabetes, Alzheimer’s, and cystic fibrosis arise from the misfolding of specific proteins. The protein folding problem is to determine the right protein structure amid a vast number of structural possibilities. Knowledge of protein structure would help us combat deadly human diseases and could be used in biotechnology to produce new proteins with functions such as plastic degradation.

Currently, the accurate experimental methods for determining protein shape rely on laborious, lengthy, and costly processes (Figure 1). Biologists are therefore turning to AI to help reduce these burdens and speed up scientific discoveries, with the potential to save lives and better the environment.

protein_experimentals
Figure 1. Experimental techniques to determine protein 3D structure. (A) X-ray crystallography: an X-ray beam is shot through a protein crystal obtained under specific chemical conditions, and the resulting diffraction pattern is used to locate electrons and decipher the protein model (image); (B) Cryo-electron microscopy (cryo-EM): when biomolecules (e.g. proteins) will not crystallize, cryo-EM allows visualization of small to large biomolecules and their specific functions, though at considerable cost (image); (C) Nuclear magnetic resonance (NMR): NMR can analyse structure and conformational changes but is limited to small, soluble proteins (image).

 

“The success of our first foray into protein folding is indicative of how machine learning systems can integrate diverse sources of information to help scientists come up with creative solutions to complex problems at speed”

These were the words of the developers of Google’s AI company DeepMind after their project AlphaFold, which aims to use machine learning to predict 3D protein structure solely from the amino acid sequence (from scratch), won the biennial global Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction (CASP) competition in 2018. CASP serves as a gold standard for assessing new methods of protein structure prediction, and AlphaFold showed “unprecedented progress” by accurately predicting 25 of the 43 proteins in the set (proteins whose 3D structures had been obtained by conventional experimental means but not yet made public), compared with the second-place team, which accurately predicted only 3 of the 43.

Earlier deep learning efforts attempting what AlphaFold does focused on secondary structure prediction using recurrent neural networks, which do not predict the tertiary and/or quaternary structure needed for the full 3D protein shape, owing to the complexity of predicting tertiary structure from scratch.

AlphaFold is composed of deep neural networks trained to (1) predict protein properties, namely the distances between pairs of AA residues and the angles of the chemical bonds connecting them, and (2) combine these distance probabilities into a score and use gradient descent, a mathematical method widely used in machine learning to make small incremental improvements, to find the structure that best matches the predictions (Figure 2).

deepmind
Figure 2. DeepMind AlphaFold Methodology (source).
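To give a feel for step (2), here is a deliberately tiny toy in Python, far simpler than AlphaFold: given a table of “predicted” pairwise distances, gradient descent repeatedly nudges the coordinates of a chain so that its distances better match those predictions. The distances, dimensions, and step size are all assumptions made for illustration.

```python
# Toy gradient descent on a distance-based score (not AlphaFold's method):
# nudge coordinates until pairwise distances match "predicted" distances.
import numpy as np

n_res = 10
rng = np.random.default_rng(1)

# Toy "predicted" distances: residues i and j expected 0.5 units apart per
# position along the chain (a stand-in for the network's predictions).
target = 0.5 * np.abs(np.subtract.outer(np.arange(n_res), np.arange(n_res)))

coords = rng.random((n_res, 2))          # random starting "structure" (2-D toy)
lr = 1e-3                                 # gradient descent step size

for step in range(5000):
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)  # current pairwise distances
    err = dist - target                   # mismatch with the predictions
    np.fill_diagonal(dist, 1.0)           # avoid dividing by zero on diagonal
    unit = diff / dist[..., None]
    grad = 4 * (err[..., None] * unit).sum(axis=1)   # d(score)/d(coords)
    coords -= lr * grad                   # small incremental improvement

print("mean distance error after optimization:", np.abs(err).mean())
```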

Even though much more work is needed before AI can precisely and reliably solve the protein folding problem and speed up solutions to some of the world’s gravest problems, AlphaFold is undoubtedly a step in the right direction.

You with Alzheimer’s 6 years from now?

>>> tic, toc, time’s up… /n

Alzheimer’s is the most common type of dementia, a set of brain disorders that result in the loss of brain function. To throw out some statistics highlighting the problem we face: 1 in 3 UK citizens will develop dementia during their lifetime, with a 62% chance it will be Alzheimer’s, and Alzheimer’s is the 6th leading cause of death in the USA.

The problem is that Alzheimer’s is a multi-factorial disease: many factors influence its development, e.g. reactive oxygen species, plaque aggregation, and protein malfunction. But these are just the tip of the iceberg, as at the heart of the activities leading to Alzheimer’s lies a dysregulation (dyshomeostasis) of key biological transition metals such as Cu2+ and Zn2+, which are vital to maintaining regular brain function and preventing dementia. These factors contribute to the fact that there is no cure, so we are racing against the clock to diagnose the disease as early as possible and slow its progress.

ad_brains
Alzheimer’s (left) versus normal brain (right). Source.

Radiologists use Positron Emission Tomography (PET) scans to try to detect Alzheimer’s. PET allows the monitoring of molecular events as the disease evolves through the detection of positron emission from radioactive isotopes such as 18F. This isotope is attached to a glucose analogue (18F-FDG); because glucose is the primary source of energy for brain cells, this allows their activity to be visualized. As brain cells become diseased, their glucose uptake decreases compared with normal brain cells. To aid in the war against time, Dr. Jae Ho Sohn combined machine learning with neuroimaging in the following article.

“One of the difficulties of Alzheimer’s disease is that by the time all the clinical symptoms manifest and we can make a definitive diagnosis, too many neurons have died, making it essentially irreversible. “

-Jae Ho Sohn, MD, MS

 

Debriefing the Article “A Deep Learning Model to Predict a Diagnosis of Alzheimer’s Disease by Using 18F-FDG PET of the Brain” by Sohn et al.

Objective. To develop a deep learning algorithm to forecast a diagnosis of Alzheimer’s disease (AD), mild cognitive impairment (MCI), or neither (non-AD/MCI) in patients undergoing 18F-FDG PET brain imaging, and to compare the results with those of conventional radiologic readers.

Reasoning. Humans are poor at detecting slow, global changes across images, whereas deep learning may help address the complexity of imaging data: it has already been applied to detect breast cancer in mammography, pulmonary nodules in CT, and hip osteoarthritis in radiography.

Methodology. Sohn et al. trained a convolutional neural network of the Inception V3 architecture using 90% (1,921 imaging studies, 899 patients) of the total imaging studies from patients with AD, MCI, or neither enrolled in the Alzheimer’s Disease Neuroimaging Initiative (ADNI). The trained algorithm was then tested on the remaining 10% (188 imaging studies, 103 patients) of the ADNI images (labelled the ADNI test set) and on an independent set from 40 patients not in ADNI. To further assess the proficiency of this method, the results from the trained algorithm were compared with radiological readers.
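For readers who want a sense of what such a setup looks like in code, below is a hedged Keras sketch of transfer learning with an Inception V3 trunk and a three-class head (AD, MCI, non-AD/MCI). Image preprocessing and data loading are omitted, and the details certainly differ from Sohn et al.’s actual training code.

```python
# Hedged sketch: Inception V3 trunk with a new 3-class head for PET images.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet",
                                         include_top=False,
                                         input_shape=(299, 299, 3))

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dropout(0.5)(x)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)  # AD / MCI / neither

model = tf.keras.Model(base.input, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_images: preprocessed 18F-FDG PET inputs; train_labels: 0/1/2 classes
# model.fit(train_images, train_labels, validation_split=0.1, epochs=30)
```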

Results. The algorithm was able to predict with high accuracy which patients were diagnosed with AD (92% in the ADNI test set and 98% in the independent test set), with MCI (63% in the ADNI test set and 52% in the independent test set), and with non-AD/MCI (73% in the ADNI test set and 84% in the independent test set). It outperformed three radiology readers in ROC space in forecasting the final AD diagnosis.
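As a side note on how a model and human readers can be compared “in ROC space”, here is a small illustrative sketch with made-up numbers (not the study’s data): the model traces a full ROC curve, while each reader contributes a single sensitivity/false-positive-rate point.

```python
# Illustrative comparison in ROC space; all numbers are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

y_true  = np.array([0, 0, 1, 1, 0, 1, 0, 1])                   # 1 = final AD diagnosis
y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.4, 0.9, 0.2, 0.6])   # model probabilities

fpr, tpr, _ = roc_curve(y_true, y_score)
plt.plot(fpr, tpr, label=f"model (AUC = {auc(fpr, tpr):.2f})")

# Each human reader is a single operating point (illustrative values only).
readers = {"reader 1": (0.25, 0.70), "reader 2": (0.30, 0.75)}
for name, (r_fpr, r_tpr) in readers.items():
    plt.scatter(r_fpr, r_tpr, label=name)

plt.xlabel("false positive rate (1 - specificity)")
plt.ylabel("sensitivity")
plt.legend()
plt.show()
```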

Limitations. The independent test set was small (n=40), not from a clinical trial, and excluded data from patients with non-AD neurodegenerative conditions and disorders such as stroke that can affect memory function. The algorithm was trained solely on ADNI data and is thus limited by the ADNI patient population, which did not include patients with non-AD neurodegenerative diseases. The algorithm also performed its predictions in a way distinct from human expert approaches, and the MCI and non-AD/MCI predictions were unstable compared with the AD diagnosis, with their accuracy depending on the follow-up time.

Conclusion. The trained deep learning algorithm using 18F-FDG PET images achieved 82% specificity at 100% sensitivity in predicting AD, an average of 75.8 months (~6 years) before the final diagnosis. It has the potential to diagnose Alzheimer’s 6 years in advance in the clinic, but further validation and analysis are needed given the limitations mentioned above.

 

Can AI discriminate against minorities?

>>> Hello, World!

Much has been going on since I made my first post, notably the use of the new genome editing technique CRISPR-Cas9 by He Jiankui, a scientist at the Southern University of Science and Technology of China, to alter the DNA of embryos from seven couples, leading to the birth of genetically modified twin girls. This research has been called “monstrous”, “crazy”, and “a grave abuse of human rights” by the scientific community worldwide, and the universities involved have declared they had no awareness that this research was performed under their institutions.

This research, however, has one positive side. It highlights the inadequate regulation of some novel technological innovations, which urgently needs to be addressed for the benefit of society and the advancement of science and technology.

Currently, there is also the rise of Artificial Intelligence (AI) being implemented in many fields, especially healthcare. As with CRISPR-Cas9, the developers and users of this technology need to take a moment to step back from the technological upheaval and look at their innovation through an ethical lens, to see, address, and prepare for the potential negatives and ethical conflicts.

This post is the first in a two-post series covering major ethical issues around the use of AI in healthcare that need to be taken into consideration.


 

AI algorithms can discriminate against minorities

The food that enables AI algorithms, especially machine learning and deep learning, to function is large data sets, which are taken as input, processed, and used to deliver conclusions based solely on those data. For example, a company could use AI to recommend the best candidate to hire by feeding the algorithm data about previously successful candidates and letting it draw conclusions.

Applying these algorithms to decisions about people is tricky, however, because the data needs to reflect our diversity. If it does not, the algorithm’s recommendations can be biased; other sources of bias are human prejudices inherent in the data and the intentional embedding of bias into the algorithm by a prejudiced developer.
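A small simulated example makes the first failure mode tangible: when one group is underrepresented in the training data and the feature-outcome relationship differs between groups, the model’s accuracy drops for the underrepresented group. The data below is deliberately synthetic, not drawn from any real population.

```python
# Synthetic demonstration: underrepresentation in training data leads to
# worse performance for the minority group.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_group(n, weight):
    # The feature that actually drives the outcome differs by group.
    X = rng.normal(0, 1, size=(n, 2))
    logits = weight[0] * X[:, 0] + weight[1] * X[:, 1]
    y = (logits + rng.normal(0, 0.3, n) > 0).astype(int)
    return X, y

X_maj, y_maj = make_group(5000, weight=(1.0, 0.0))   # well-represented majority
X_min, y_min = make_group(100,  weight=(0.0, 1.0))   # underrepresented minority

model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

X_maj_t, y_maj_t = make_group(1000, weight=(1.0, 0.0))
X_min_t, y_min_t = make_group(1000, weight=(0.0, 1.0))
print("majority accuracy:", accuracy_score(y_maj_t, model.predict(X_maj_t)))
print("minority accuracy:", accuracy_score(y_min_t, model.predict(X_min_t)))
```

The model ends up learning mostly the majority group’s pattern, so its predictions for the minority group are close to guessing.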

AI in non-medical fields has already been shown to reflect the biases in its training data. For example, AI algorithms designed to help American judges with sentencing by predicting an offender’s likelihood of re-offending have shown an alarming amount of bias against African-Americans.

reoffend
Bernard Parker, left, was rated high risk; Dylan Fugett was rated low risk. (source)

Healthcare delivery varies by race, and an algorithm designed to make healthcare decisions will be biased if few (or no) genetic studies have been done in certain populations. This is demonstrated by attempts to use data from the Framingham Heart Study to predict cardiovascular disease risk in non-white populations, which have produced biased results, both overestimating and underestimating risk.

Not everyone might benefit equally from AI in healthcare, as AI can be ineffective where data is scarce. As a result, this might affect people with rare medical conditions or others underrepresented in clinical trials and research data, such as Black, Asian, and minority ethnic populations.

As the House of Lords Select Committee on AI cautions, the datasets used to train AI systems often represent the wider population poorly, so the resulting systems can make unjust decisions that reflect societal prejudice.

AI algorithms can be malicious   

In addition, there is the ethical issue that developers of AI may have negative or malicious  intentions when making the software. After all, if everybody had good intentions the world would certainly be a better place.

Take, for example, the recent high-profile cases of Uber and Volkswagen. Uber’s machine learning software tool Greyball allowed the company to predict which ride hailers might be undercover law-enforcement officers, allowing it to bypass local regulations. In the case of Volkswagen, the company deployed an algorithm that made its vehicles reduce their nitrogen oxide emissions only when undergoing emissions tests.


Private AI companies working with healthcare institutions might create algorithms better suited to the financial interests of the institution than to the financial and care interests of the patient, particularly in the USA, where there is a continuous tension between improving health and generating profit, and where the makers of the algorithms are unlikely to be the ones delivering bedside care. In addition, AI could be used for cyber-attacks, theft, and revealing information about a person’s health without their knowledge.

These potential negatives need to be acknowledged and addressed when implementing AI in any field, especially healthcare. In the upcoming post, I will discuss the effects of AI on patients and healthcare professionals, breaches of patient data privacy, and AI reliability and safety.

 

AI in healthcare: for better or for worse?

>>> Hello, World!

In this century of technological advancement, there has been much hype over the recent emerging field of artificial intelligence (AI), defined as intelligence exhibited by computational means rather than by the natural world, i.e. humans.

AI has gained popularity following innovative applications in fields such as the automotive, finance, military, and healthcare industries.

However, as with any emerging technology, ethical and controversial issues arise. Questions over whether artificial intelligence will “take over the world”, for example by replacing industry sectors with robotics or through the uncontrolled use of AI for military purposes, are current hot topics of debate.

The media, literature, and particularly the film industry, with movies such as “I, Robot” and “The Terminator”, have certainly expanded our imaginations as to the potential negatives in the field.

Adding fuel to the fire, recent comments from Tesla and SpaceX CEO Elon Musk stating that “A.I. is far more dangerous than nukes” and thus needs to be proactively regulated have ignited reasonable worries over the use of AI applications.

In healthcare and medical research, however, far from robots replacing human physicians in the foreseeable future, AI devices have been helping physicians and scientists save lives and develop new medical treatments.

AI is going to lead to the full understanding of human biology and give us the means to fully address human disease.

–Thomas Chittenden, VP of Statistical Sciences at WuXi NextCODE

A shift in the use of AI in medical research occurred on 12 June 2007 with Adam, a scientific robot developed by researchers at the UK universities of Aberystwyth and Cambridge that was able to produce hypotheses about which genes encode key enzymes that speed up (catalyse) reactions in the brewer’s yeast Saccharomyces cerevisiae, and to test those hypotheses experimentally using robotics. Researchers then individually tested Adam’s hypotheses about the role of 19 genes and discovered that 9 were new and accurate, while only 1 was incorrect.

Adam set the precedent for the team to develop a more advanced scientific robot called Eve, which helped identify triclosan, an ingredient found in toothpaste, as a potential anti-malarial drug against drug-resistant malaria parasites, which contribute to an estimated malaria mortality of 1.2 million annually.

Eve screened thousands of compounds against specific yeast strains whose essential growth genes had been replaced with equivalent genes from either malaria parasites or humans, in order to find compounds that decreased or stopped the growth of strains dependent on malaria genes but not human genes (to avoid human toxicity). As a result, triclosan was found to halt the activity of the DHFR enzyme necessary for malaria survival, even in pyrimethamine-resistant malaria strains.

Eve_robot
The scientific robot “Eve” (source)

Without Eve, it is likely that the research would still be in progress at this stage, taking years to arrive at the published result, as usually happens in the drug discovery field.

On average, it takes at least 10 years of arduous research and an estimated US $2.6 billion to make a drug, with a high percentage of this money spent on drug therapies that fail. AI has the potential to reduce these time, money, and research inefficiencies.

In the clinic, AI tools can use algorithms to assist physicians with the high volume of patient data, provide up-to-date medical information, reduce therapeutic error, and use this information to provide clinical assistance and diagnosis with over 90% accuracy. The diagram below provides some insight into how AI is structured, along with examples of applications in medicine, based on the detailed information published in Jiang et al.

AI_paint
Insight into AI structure and examples of medical applications

Alongside these advantages, applying AI in healthcare raises ethical issues and analytical concerns, which will be discussed in future posts.

However, far from being a robotic disaster, AI has proved valuable for the development of human medicine and health.

As Suchi Saria, a professor of computer science and director of the Machine Learning and Health Lab at Johns Hopkins University, explains in her TEDx talk, AI is already saving lives by detecting symptoms 12-24 hours before a doctor could.

AI in healthcare undoubtedly sets the precedent for a new future in medicine.

 

Level 0: About

Blog Description

This blog is an attempt to engage people in the field of artificial intelligence (AI) in healthcare and medical research. Posts will discuss issues such as AI in healthcare, ethical concerns, applications in specific research, and scientific views. Discussion in the comments on specific topics, supported by recent news and publications, is thoroughly encouraged, as it is my desire to hear people’s perspectives on the issues at hand.

About Me

Academically…

earned a B.S. in Chemistry with a biological track and am currently pursuing a Master’s in Biotechnology and Bio-engineering. I have 3 years of experience in tuberculosis drug discovery research using chemical, biochemical, and biotechnological skills. I also have experience in synthetic chemistry for the synthesis of PET probes for pancreatic cancer research.

Socially…

have been the President and Professional Liaison in a student-led pharmaceutical and biotechnology organisation and have participated in many volunteering events for the dissemination of science.

Passionately…

strive to become a leader in the pharmaceutical and biotechnology field, and am motivated by drug development research and biotechnology innovation. It is my desire to use this blog to communicate science and engage with you, the public.

I have no special talent. I am only passionately curious.

-Albert Einstein