
    Artificial Intelligence in Healthcare Systems

    Artificial intelligence (AI) has become well established in many sectors, and there has been increasing interest in integrating AI into healthcare systems. AI is commonly defined as the ability of a computer to perform tasks generally associated with human intelligence. AI is used in space exploration, aerospace, business, and social media. Examples of how AI is prevalent in our daily lives include web search, email filtering, maps, fraud prevention, music and product recommendations, Google predictive searches, chatbots, email communications, and text-to-speech. AI also enables advances in precision medicine, a widely accepted medical approach that aims to improve the accuracy of diagnosis. This article investigates the types of AI used in healthcare systems, their potential uses in diagnostics, and their limitations and ethical implications.

     

    Short History 

    AI gained considerable interest from scientists, mathematicians, and philosophers in the 1950s. Alan Turing, a British mathematician and computer scientist, explored whether machines, particularly digital computers, could carry out tasks associated with human thought. In his research paper titled “Computing Machinery and Intelligence,” he proposed the Imitation Game to test the ability of a computer to exhibit human intelligence.1 The game, now widely known as the “Turing Test,” involves a man (A), a woman (B), and an interrogator (C), with the interrogator placed in a separate room. The question Turing asked was, “What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?” The interrogator (C) asks a series of questions that are answered in text, and if C cannot tell which respondent is the human, the computer is said to have passed the Turing Test. Whether the Turing Test is a fully valid or reliable measure of computer intelligence is still debated.2

     

    Five years later, in 1955, the first artificial intelligence program, called the “Logic Theorist,” was engineered to mimic the problem-solving skills of a human being. Herbert A. Simon, an economist, psychologist, and political scientist, was consulting for the RAND Corporation when he met Allen Newell, a researcher in cognitive and computer science. Simon and Newell created a program in which a machine could prove mathematical theorems from Principia Mathematica. In 1956, the program was presented at the Dartmouth Summer Research Project on Artificial Intelligence. Hosted by John McCarthy and Marvin Minsky, the workshop brought together researchers in a collaborative effort to accelerate AI research, and it was there that McCarthy first coined the term “artificial intelligence.” John Clifford Shaw, a computer programmer at RAND, helped develop the program, coding the Logic Theorist in the Information Processing Language (IPL) at a RAND research facility. A detailed report on the Logic Theorist is available as a RAND memorandum.3

     

    From the 1950s through the 1970s, the biggest challenge in AI was limited storage capacity. In 1986, the term “deep learning” was coined by Rina Dechter and was later popularized by Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. In the 1980s, John Hopfield and David Rumelhart explored neural networks in which a computer could “learn” from experience. In 1986, David Rumelhart, Geoffrey Hinton, and Ronald Williams published a paper in Nature on backpropagation, an algorithmic method for automatic error correction in neural networks.4 Hinton is a British-Canadian computer scientist and cognitive psychologist who is now a professor emeritus at the University of Toronto, a vice president and engineering fellow at Google, and chief scientific advisor at the Vector Institute. Yann LeCun, a former colleague of Hinton, is also credited for his research on convolutional neural networks. Edward Feigenbaum introduced expert systems, which became popular in many industries. In 1997, world chess champion Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer. Around the same time, speech recognition software from Dragon Systems was implemented on Windows.5 In 2007, Fei-Fei Li started ImageNet, a large database of roughly 15 million labeled images used as a tool for computer vision. Natural language processing algorithms based on deep neural networks (DNNs) were becoming more prominent for speech recognition at Microsoft and Google. In 2012, researchers at Google trained a deep learning system on 10 million images that learned to recognize cats in YouTube videos, and Hinton and his colleagues at the University of Toronto accelerated research in image recognition with deep convolutional networks. By 2014, DeepFace by Facebook was said to have over 97 percent facial recognition accuracy. Chinese Go champion Ke Jie was defeated by Google’s AlphaGo in 2017. By 2017, a research paper published in Nature showed that a deep neural network matched the accuracy of dermatologists in detecting and diagnosing skin cancer.6

     

    Here’s an overview of the AI timeline by Eric Topol: 

    Source: Topol, Eric. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. 1st ed. New York: Basic Books, 2019.

     

    Types of AI in Healthcare

    Machine Learning 

    Machine learning is a common subset of artificial intelligence in which computer programs improve through exposure to data rather than explicit programming. These programs are relevant in healthcare because they can be used to predict diagnoses, suggest treatments, and analyze radiology reports. In supervised learning (SL), algorithms learn to map inputs to outputs by training on labeled data. Unsupervised learning (UL) lets algorithms discover structure in unlabeled data on their own. Alongside SL and UL, reinforcement learning (RL) learns through trial-and-error interaction with an environment, aiming to maximize a reward signal. Representation learning allows a machine to derive, from raw data, the representations needed for detection or classification. Transfer learning is the ability of a model to apply what it has learned on one task to a different task.
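
    As a concrete illustration of supervised learning, the sketch below trains a simple classifier on labeled examples and checks its accuracy on held-out data. It is a minimal example only: the scikit-learn breast-cancer dataset and the logistic-regression model are illustrative assumptions, not taken from any study cited in this article.

        # Minimal supervised-learning sketch: labeled inputs and outputs in, predictions out.
        # Dataset and model choices are illustrative only.
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score

        X, y = load_breast_cancer(return_X_y=True)      # features (inputs) and labels (outputs)
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

        model = LogisticRegression(max_iter=5000)        # learn a mapping from inputs to labels
        model.fit(X_train, y_train)                      # the "supervision" comes from the labels

        print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))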

     

    Neural Networks

    Artificial neural networks (ANN) are a subset of machine learning that has become prominent in health research. ANNs loosely mimic the way biological neurons process signals, passing data through layers of simple units (input, hidden, and output layers) to make sense of large amounts of data.
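
    The sketch below shows that input-hidden-output flow for a single, tiny network in plain NumPy. The layer sizes, random weights, and the interpretation of the output as a disease probability are assumptions made purely for illustration.

        # Forward pass of a tiny feed-forward ANN: input layer -> hidden layer -> output layer.
        # Layer sizes and random weights are arbitrary; this only illustrates the data flow.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.random(4)                          # 4 input features, e.g. values from a patient record

        W1, b1 = rng.random((8, 4)), np.zeros(8)   # input -> hidden (8 units)
        W2, b2 = rng.random((1, 8)), np.zeros(1)   # hidden -> output (1 unit)

        hidden = np.tanh(W1 @ x + b1)                      # non-linear activation in the hidden layer
        output = 1 / (1 + np.exp(-(W2 @ hidden + b2)))     # sigmoid output, e.g. probability of disease

        print("predicted probability:", output.item())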

     

    Deep Learning 

    Another tool used in machine learning is the deep learning algorithm, or deep neural network (DNN), which is more complex than a shallow ANN. Deep learning algorithms stack multiple hidden layers, allowing the computer to train itself to learn increasingly abstract features from data. Convolutional neural networks (CNN) consist of convolutional layers and are commonly used to capture the spatial structure of an image: the image is scanned in small overlapping tiles by shared filters, and max pooling then downsamples the resulting feature maps.

    Source: Shreyak. “Building a Convolutional Neural Network (CNN) Model for Image Classification.” Medium, 4 June 2020.
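
    A minimal Keras sketch of that convolution-plus-pooling pattern is shown below. The input size, number of filters, and the two output classes are placeholders chosen for illustration, not parameters from any model discussed in this article.

        # Minimal CNN sketch: small filters slide over the image in overlapping patches,
        # and max pooling keeps the strongest response in each tile. Shapes are placeholders.
        import tensorflow as tf
        from tensorflow.keras import layers, models

        model = models.Sequential([
            layers.Input(shape=(128, 128, 1)),                     # e.g. a single-channel scan
            layers.Conv2D(16, kernel_size=3, activation="relu"),   # convolutional feature detectors
            layers.MaxPooling2D(pool_size=2),                      # downsample the feature maps
            layers.Conv2D(32, kernel_size=3, activation="relu"),
            layers.MaxPooling2D(pool_size=2),
            layers.Flatten(),
            layers.Dense(2, activation="softmax"),                 # e.g. benign vs. malignant
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        model.summary()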

     

    Generative adversarial networks (GAN) are another type of DNN that uses two neural sub-models: a generator, which produces new examples, and a discriminator, which tries to classify examples as real or fake. Recurrent neural networks (RNN) process inputs sequentially and carry information forward in an internal state, so each output also depends on what the network has seen before. Backpropagation is the supervised training procedure behind most ANNs: the network’s prediction error is propagated backwards from the output layer, and gradient descent uses these error signals to update the connection weights.7 Natural language processing (NLP) involves training machines to recognize speech and written language and to translate between languages. Modern NLP systems are based on DNNs and can be used to analyze clinical notes and examine radiology reports.
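
    The sketch below makes the backpropagation-plus-gradient-descent idea concrete for a tiny two-layer network in NumPy. The synthetic data, layer sizes, and learning rate are all made-up illustrative choices.

        # Backpropagation sketch: compute how the loss changes with each weight by passing the
        # error backwards through the layers, then nudge the weights by gradient descent.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.random((200, 2))                                      # 200 samples, 2 features (synthetic)
        y = (X[:, 0] + X[:, 1] > 1.0).astype(float).reshape(-1, 1)    # synthetic binary labels

        W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)                 # input -> hidden
        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)                 # hidden -> output
        lr = 0.5

        for _ in range(2000):
            # forward pass
            h = np.tanh(X @ W1 + b1)
            p = 1 / (1 + np.exp(-(h @ W2 + b2)))
            # backward pass: propagate the error from the output towards the input
            d_out = (p - y) / len(y)                   # gradient of cross-entropy loss w.r.t. the output logit
            dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
            d_hidden = (d_out @ W2.T) * (1 - h ** 2)   # chain rule through the tanh hidden layer
            dW1, db1 = X.T @ d_hidden, d_hidden.sum(axis=0)
            # gradient-descent update of the weights
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2

        print("training accuracy:", float(((p > 0.5) == y).mean()))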

     

    Rule-Based Expert Systems 

    Rule-based expert systems, which became prominent in the 1980s, encode knowledge as collections of ‘if-then’ statements. They are still widely used in healthcare for clinical decision support, and many electronic health record (EHR) providers integrate rule-based algorithms into their systems.8 However, they are gradually being replaced by more flexible machine learning approaches, because hand-written rule bases are time-consuming to build and limited in how many rules they can accommodate.

     

    Healthcare Systems 

    The focus on improving diagnosis and treatment with AI started in the 1970s, when MYCIN was developed at Stanford University for diagnosing blood infections. The name was derived from antibiotics, many of which carry the suffix “-mycin.” MYCIN was a backward-chaining expert system designed to identify the bacteria causing severe infections such as meningitis and bacteremia and to recommend antibiotics. It was never used in clinical practice, owing to ethical and legal concerns about using AI in medicine.9

     

    One example of the rule-based reasoning used in MYCIN:

    IF the infection is primary-bacteremia (a)

    AND the site of the culture is one of the sterile sites (b)

    AND the suspected portal of entry is the gastrointestinal tract (c)

    THEN there is suggestive evidence (0.7) that the infection is bacteroid. (d)

     

    The 0.7 (70 percent) is a certainty factor: a rough estimate of how strongly the given evidence supports the conclusion.10

     

    Bruce G. Buchanan and Edward Shortliffe wrote a detailed book on MYCIN that examines the expert system in depth,11 including this summary of how backward chaining works:

    Find out about C (Goal) 

    If B, then C (Rule 1) 

    If A, then B (Rule 2) 

    If A, then C (Implicit Rule) 

    Question: Is A true? (Goal) 

    They also showed how such a chain of rules can be collapsed into a single ‘if-then’ statement:

    IF: There is evidence that A and B are true, 

    THEN: Conclude there is evidence that C is true. 
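
    The short Python sketch below illustrates backward chaining over ‘if-then’ rules with a certainty factor, loosely modelled on the MYCIN-style rule quoted earlier. The rules, facts, and the way certainties are combined are simplified assumptions for illustration, not MYCIN’s actual knowledge base or scoring scheme.

        # Backward chaining over if-then rules with certainty factors (simplified illustration).
        # Each rule: (list of premises, conclusion, certainty factor of the conclusion).
        RULES = [
            (["infection is primary-bacteremia",
              "culture site is sterile",
              "portal of entry is gastrointestinal tract"],
             "organism is bacteroides", 0.7),
            (["culture taken from blood"], "culture site is sterile", 1.0),
        ]

        FACTS = {  # findings the clinician has asserted, with their own certainty
            "infection is primary-bacteremia": 1.0,
            "culture taken from blood": 1.0,
            "portal of entry is gastrointestinal tract": 1.0,
        }

        def prove(goal):
            """Return the certainty of `goal`, chaining backwards through the rules."""
            if goal in FACTS:
                return FACTS[goal]
            best = 0.0
            for premises, conclusion, cf in RULES:
                if conclusion == goal:
                    support = min(prove(p) for p in premises)  # AND of premises = weakest premise
                    best = max(best, support * cf)
            return best

        print("certainty that organism is bacteroides:", prove("organism is bacteroides"))
        # prints 0.7: every premise is established directly or via the second rule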

     

    In 2010, IBM Watson was introduced as a data-analytics engine that uses natural language processing, with a focus on precision medicine. Precision medicine is a growing approach in clinical practice that considers individual variability in genes, environment, and lifestyle to improve diagnosis and treatment.

     

    Diagnostics

    Cancer

    Radiology has improved considerably in recent years with technologies like CT, nuclear imaging, PET, and MRI, and the interpretation of these scans can be assisted by AI. A study using machine learning models found that 30.6 percent of surgeries for benign breast lesions could have been avoided.12 By applying machine learning to precision medicine, cancer patients could receive the individualized care that they need. For example, digital pathology models can classify tumor composition and spot mutations from radiological and histopathological samples. An algorithm developed at the European Molecular Biology Laboratory’s European Bioinformatics Institute using data from The Cancer Genome Atlas (TCGA) was shown to learn computational histopathological features across cancer types and to distinguish genomic deletions, duplications, chromosomal aneuploidies, and driver gene mutations.13 TCGA applies methods such as RNA sequencing, microRNA sequencing, DNA sequencing, SNP-based platforms, array-based DNA methylation profiling, and reverse-phase protein arrays (RPPA).14 The Pan-Cancer Atlas project used TCGA samples to classify and interpret human tumors based on their molecular composition at the DNA, RNA, protein, and epigenetic levels across many cancer types.15 Machine learning algorithms can also improve the interpretation of radiological scans; for example, a machine learning model applied to CT texture features was able to predict the status of KRAS, a tumor gene mutation associated with colorectal cancer.16 Researchers developed a 3D convolutional neural network that was able to characterize lung nodules in chest CT scans of patients with confirmed malignant lung tumors.17 The Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT developed a highly accurate machine learning algorithm, a deep network of 27 layers, that detected cancer metastatic to lymph nodes.18 Deep learning can also improve diagnostic accuracy for skin cancer: Andre Esteva and his colleagues at Stanford University used a Google CNN that performed on par with 21 board-certified dermatologists at classifying lesions as benign or malignant.19 The model was trained on 129,450 clinical images spanning 2,032 different diseases.
     

    Skin cancers classified by a deep neural network and dermatologists. Source: Esteva, Andre, et al. "Dermatologist-level classification of skin cancer with deep neural networks." Nature 542.7639 (2017): 115-118. 
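
    In the same spirit as the transfer-learning idea introduced earlier, a common way to build such an image classifier is to fine-tune a network pretrained on everyday photographs. The sketch below is a hedged illustration only: it is not the Esteva et al. pipeline, and the ResNet50 backbone, image size, and the “lesions/” folder of labeled images are assumptions.

        # Hedged transfer-learning sketch for benign-vs-malignant lesion classification.
        # Not the pipeline from the study above; backbone, sizes, and data paths are assumed.
        import tensorflow as tf
        from tensorflow.keras import layers, models

        base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                              input_shape=(224, 224, 3), pooling="avg")
        base.trainable = False                      # reuse features learned on natural images

        model = models.Sequential([
            base,
            layers.Dense(2, activation="softmax"),  # benign vs. malignant
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

        # Hypothetical data layout: one sub-folder of images per class under "lesions/".
        # train_ds = tf.keras.utils.image_dataset_from_directory("lesions/", image_size=(224, 224))
        # model.fit(train_ds, epochs=5)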

     

    Microsoft Research developed an algorithmic approach called Elevation, which was shown to predict the optimal place to edit a strand of DNA and the off-target effects on the human genome when editing DNA with CRISPR.20 Cancer genomics and machine learning could thus pave the way to better interpretation of cancer cases, and with it better clinical decision-making and care.

     

    Neurology 

    A study funded by Geisinger Health in Pennsylvania, using 37,000 head CT studies, achieved 95 percent accuracy for the diagnosis of brain hemorrhage.21 Viz.ai, an applied-AI healthcare company focused on stroke diagnosis, received FDA clearance in 2018. AI could also use biomarkers such as speech recordings and cognitive test scores to detect neurodegenerative diseases such as Alzheimer’s disease (AD).22 Similarly, work reviewed by Belić et al. used artificial neural networks (ANN) to detect early Parkinson’s disease with roughly 95 percent accuracy.23 AI techniques could also help manage strokes through image-based diagnosis.24 An observational study showed that machine learning can perform well in traumatic brain injury (TBI): a series of models predicting mortality in early TBI, built with data from patients at the main trauma hospital in Brazil, reached 90 percent accuracy.25 In the future, such models could assist in treatment decisions. There is also the possibility that AI could interpret the body’s movement, direction, and speed to model spatial navigation.

     

    Fractures 

    Deep learning algorithms can also be applied to x-ray and CT scans to support accurate diagnoses. An automated deep learning system, a convolutional neural network with 172 layers trained on over 6,000 x-rays, was found to be over 99 percent accurate,26 detecting hip fractures from frontal pelvic x-rays with an accuracy similar to radiologists. Imagen Technologies, a medical AI company, received FDA approval for its algorithm for processing bone films. Zebra Medical Vision, which aims to provide accessible AI for radiology, achieved 93 percent accuracy in detecting compression fractures, which can be indicative of osteoporosis.27

     

    Cardiology 

    A study using deep convolutional neural networks demonstrated 94.6 percent accuracy in classifying chest radiographs as normal or abnormal.28 Arterys, a cloud-based medical imaging AI company specializing in cardiac MRI, received the first FDA clearance for a deep learning clinical application in 2017. Zebra Medical Vision developed a convolutional deep learning system for coronary calcium score prediction from chest CT scans.29 AI-based systems can also be applied to electrocardiograms (ECG), which measure the electrical activity of the heartbeat. In 2017, the FDA approved a system developed by AliveCor that uses a DNN algorithm for diagnosing atrial fibrillation; it monitors heart rhythm with a single-lead ECG sensor that connects to an Apple Watch. Similarly, an algorithm developed at Stanford University for the iRhythm Zio patch uses a 34-layer CNN to analyze stored ECG recordings and diagnose various heart arrhythmias,30 and it was found to exceed board-certified cardiologists in sensitivity and precision.
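
    For ECG data, the convolution runs along the time axis rather than over image pixels. The sketch below is a deliberately small, hedged 1-D CNN in that spirit; the signal length, sampling rate, and rhythm classes are placeholders, and it is far shallower than the 34-layer network described above.

        # Hedged sketch of a 1-D CNN over a raw single-lead ECG segment.
        # Signal length, sampling rate, and the four rhythm classes are placeholders.
        import tensorflow as tf
        from tensorflow.keras import layers, models

        model = models.Sequential([
            layers.Input(shape=(3000, 1)),                      # e.g. ~10 s of ECG at 300 Hz
            layers.Conv1D(32, kernel_size=16, activation="relu"),
            layers.MaxPooling1D(pool_size=4),
            layers.Conv1D(64, kernel_size=16, activation="relu"),
            layers.GlobalAveragePooling1D(),
            layers.Dense(4, activation="softmax"),              # e.g. normal, AF, other rhythm, noise
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        model.summary()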

     

    Ophthalmic Imaging

    Clinical ophthalmic imaging has created new ways to detect various eye diseases, and AI-based neural network systems have shown real progress. Diabetic retinopathy is one cause of vision loss, and AI-based systems for detecting retinal disease have been highly accurate. Researchers at Google trained a deep CNN on 128,175 images to detect diabetic retinopathy and diabetic macular edema, achieving a sensitivity of 87-90 percent and a specificity of 98 percent.31 Kavya Kopparapu, a 16-year-old, developed an algorithm using Microsoft’s ResNet-50 trained on 34,000 images from the National Eye Institute. Kopparapu, her brother, and a classmate formed the company Eyeagnosis, whose smartphone app and 3D-printed lens are designed to recognize signs of diabetic retinopathy.32 DeepMind and Moorfields Eye Hospital trained an algorithm on only 14,884 scans and demonstrated performance that reached or surpassed experts at detecting sight-threatening retinal diseases.33

     

    Mental Health

    There are a variety of metrics that could be used in AI-based systems for the assessment, prediction, and treatment of mental health. There have also been studies showing significant differences in the fMRI scans of depressed and healthy individuals.34 Digital phenotyping is a moment-by-moment quantification of the individual-level human phenotype in situ using data from personal digital devices.35 This method could potentially detect early warning signs of mental illness or relapse. A machine learning model using features of speech such as word choice, tone, and coherence outperformed clinical ratings in predicting whether at-risk patients would transition to psychosis.36 IBM’s computational psychiatry group developed an automated linguistic analysis method to discriminate the speech of people with psychosis from normal speech,37 reaching 83 percent accuracy at predicting the onset of psychosis. Mindstrong even uses a smartphone app that analyzes the way people use their keyboard, treating typing patterns as biomarkers that correlate with cognitive function, clinical symptoms, and measures of brain activity.38 DeepMood, which also uses keyboard data, achieved 90 percent prediction accuracy on depression scores in its pilot study.39

    Depression biomarker correlation with cognitive function, clinical symptoms, and brain connectivity. Source: Mindstrong Health

     

    Alex (Sandy) Pentland, an MIT professor, studied “honest signals,” subtle patterns in how we interact with other people, such as tone or fluidity, that can reveal truths about ourselves. He developed the Cogito Companion app, which uses deep learning algorithms and these honest signals, and it is being used by psychologists, nurses, and social workers. The Cogito Companion app has also been used by the US Department of Veterans Affairs to detect PTSD and mitigate negative psychiatric consequences such as suicide.40 The increasing use of social media also provides a gateway for tracking mental health. Instagram photos have been shown to carry predictive markers of depression: Andrew G. Reece and Christopher M. Danforth built a model with data from 166 individuals, 71 of whom had depression. The model assessed 43,950 photos using colour analysis, metadata components, and algorithmic face detection, detecting depression with roughly 70 percent accuracy.41

    Instagram filter usage among depressed and healthy participants. Bars indicate difference between observed and expected usage frequencies, based on a Chi-squared analysis of independence. Blue bars indicate disproportionate use of a filter by depressed compared to healthy participants, orange bars indicate the reverse. All-data results are displayed, see Additional file 1 for Pre-diagnosis plot. Source: Reece, Andrew G., and Christopher M. Danforth. "Instagram photos reveal predictive markers of depression." EPJ Data Science 6.1 (2017): 1-12.

     

    Sonde Health detects vocal biomarkers for the diagnosis of depression and heart disease.42 A study of 253 individuals by Scherer and his colleagues showed that individuals with depression and PTSD had significantly reduced vowel space.43 Machine learning algorithms could also detect people at risk for suicide and help prevent it. For instance, speech analysis and natural language processing (NLP) can evaluate acoustic and linguistic features associated with suicidal behaviours.44 AI could do more than assess biomarkers for mental health; it could also assist in treatment. Cognitive behavioural therapy (CBT), a common therapy that helps people change negative thought patterns, can be delivered through digital platforms. A meta-analysis of 18 randomized controlled trials covering 22 smartphone apps, with outcome data from 3,414 participants, found that CBT apps produced significant reductions in depressive symptoms.45 There has long been stigma surrounding mental health, but AI could well change the way mental health is assessed and treated within healthcare systems.

     

    Health Assistants and Medical Literature

    Electronic health records (EHR) are a common clinical data source for physicians. AI could improve the clinical use of EHRs and reduce physician burnout, but the quality of EHR data would need to improve. EHRs are often regarded as inaccurate and incomplete, and one study pointed out that improving the quality of EHR data would be necessary for assessing ophthalmic care with machine learning algorithms.46 Furthermore, there has been increasing interest in developing virtual medical assistants that draw on all of a person’s data.

    A schematic of a deep neural network with all of a person’s data inputted, along with the medical literature, to provide the output of health coaching. Source: Topol, Eric. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. 1st ed. New York: Basic Books, 2019.

     

    This has been the focal point of iCarbonX, a Chinese AI company founded by Jun Wang. Its data collection plan combines genomic and physiological data to provide health advice and prevent disease. AI could also be used to review the large volume of relevant medical literature that physicians would otherwise have to read. Iris.ai, an AI engine, was developed using natural language processing (NLP) for scientific text understanding. Such tools could be applied to data extraction that would normally require hundreds of hours of physician reading time.

     

    Ethical and Legal Implications

    AI Barriers: What Makes Us Human

    One of the biggest barriers that AI faces is its lack of understanding and empathy. While machine learning algorithms can perform certain tasks accurately, they still lack basic human qualities; after all, empathy is one thing that differentiates humans from machines. Yann LeCun, a computer scientist and AI director at Facebook, says, “None of the AI techniques we have can build representations of the world, whether through structure or through learning, that are anywhere near what we observe in animals and humans.”47 Furthermore, machines need large datasets and extensive training to learn, while children can learn from very little input. François Chollet, a deep-learning expert at Google, says, “There is no evidence that the brain implements anything like the learning mechanisms in use in modern deep learning models.” While AI systems can analyze datasets and perform tasks, they do not actually “read” or understand them the way humans do. Andrew Beam and Isaac Kohane, referring to Zebra Medical Vision, note that computers can scan 260 million images a day at a much lower cost than radiologists. At the moment, however, considerable effort is still required before these algorithms achieve diagnostic accuracy that surpasses radiologists.

    Data Invasion and Privacy 

    Even if AI-based healthcare algorithms prove to be highly accurate, there is fear of data invasion and loss of privacy. Developing a highly accurate AI-based healthcare algorithm requires large and diverse amounts of data from each individual. Algorithms can be susceptible to hacking, which could reveal all of a person’s data, such as their medical history, biological information, and family history. A privacy violation could have negative consequences for the individual, such as higher insurance costs. There could also be breaches of privacy when healthcare companies control and sell patient data. As Topol puts it, owning your medical data should be a civil right.

    Technical Mistakes

    Achieving high accuracy in diagnostics with machine learning algorithms is quite an achievement. We have seen these algorithms reach 98 percent accuracy, but who will be held accountable for the errors in the remaining 2 percent? An example provided by Eric Topol in his book is worth mentioning: “If we used a machine learning algorithm with multi-layered data of glucose levels, a glitch or hack could recommend the wrong dose of insulin. If a human made that mistake, it could lead to hypoglycemic coma or death in one patient.” If the mistake were made by an AI system deployed at scale, it could lead to thousands of injuries and deaths. Canada’s Personal Genome Project conducted a detailed analysis of the whole genome sequences of 56 Canadians, involving 53 researchers at the University of Toronto and the Hospital for Sick Children.47 The project leaders found that it produced significantly inaccurate medical information. For example, the program analyzing the genes of a healthy 67-year-old said that they had aortic stenosis, a lethal heart defect present before birth. A healthy 54-year-old woman was reported to have mosaic Turner syndrome, a rare disorder that only affects females, with a missing X sex chromosome in 70 percent of her blood cells.48 A convolutional neural network developed to identify invasive tumors in 200 cases from The Cancer Genome Atlas produced a significant number of false positives.49 Mistakes are bound to be made, and there are not yet clear answers on how we will hold these systems accountable.

    Biases

    AI is also susceptible to biases and inequities; after all, humans are the ones who create these algorithms. Algorithms have the potential to amplify biases and inequities related to socioeconomic status, race, and gender. They could assign greater risk of a disease based solely on gender or race when those factors should not carry such weight. For example, research on detecting skin cancer has included few skin-of-colour patients.50 Furthermore, a study by the MIT Media Lab in 2018 found that facial-recognition systems from IBM and Microsoft were 11-19 percent more accurate on lighter-skinned individuals and 10-20 percent more accurate on male participants.51 If these algorithms are not trained on diverse data, they will be inequitable for clinical use.

    Winterlight Labs, a Toronto-based startup, developed a program to analyze linguistic features indicative of Alzheimer’s disease.52 While the research is promising, the program was found to work only for English speakers with a Canadian dialect. Winterlight co-founder Frank Rudzicz says, “When you actually talk to real doctors and patients, suddenly the things that weren’t apparent to computer scientists working in a basement with data become more evident.”53 Moreover, a study done in 2014, tracking cancer mortality over the course of 20 years, found that African Americans had greater estimates of cancer mortality and underwent definitive therapy less often than white patients.54

    There is also considerable racial disparity in studies of human genomics. A 2016 analysis published in Nature, looking at 2,511 studies involving 35 million samples, found that 81 percent of the data came from people of European ancestry.55 That is a reduction from the 91 percent found in a 2009 study, but it still points to significant biases in scientific research. In Canada’s Personal Genome Project, 51 of the 56 participants identified as white. Because the research lacked funding, 25 participants had to pay around $4,000 each, which resulted in a select group of volunteers. The researchers are aware of the lack of diversity and aim to expand their analysis. The AI Now Institute, founded by Kate Crawford and Meredith Whittaker, emphasizes the social and ethical implications of AI systems. Crawford says, “Biases and blind spots exist in big data as much as they do in individual perceptions and experiences. Yet there is a problematic belief that bigger data is always better data and that correlation is as good as causation.” As machine learning algorithms “learn” from data, they can absorb the deep-rooted social, historical, and political conditions of the world we live in.

     

    Conclusion 

    AI plays an increasingly important role in healthcare, and the potential uses of machine learning algorithms in healthcare promise encouraging outcomes. AI enables advances in precision medicine, an important field for diagnostics. Radiology and pathology scans will likely be analyzed by machines to assist physicians in detection and diagnosis. One problem that AI faces is integration into healthcare systems: before these tools are integrated, they must be approved by regulators, and they must be continuously updated and checked for malfunctions. It seems clear that AI will not replace human physicians anytime soon but will instead assist clinical practice. After all, the one thing that differentiates us from machines is empathy.

     

    References

    [1] A. M. Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433-460. [Article] [Google Scholar]

    [2] Aron, Jacob. “Forget the Turing Test – There Are Better Ways of Judging AI.” New Scientist, 21 Sept. 2015. [Article]

    [3] Stefferud, Einar. The Logic Theory Machine: A Model Heuristic Program. RM-3731-CC. Santa Monica, CA: RAND Corporation, 1963. [Article] [Google Scholar]

    [4] Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. "Learning representations by back-propagating errors." Nature 323.6088 (1986): 533-536. [Article] [Google Scholar]

    [5] Anyoha, Rockwell. “The History of Artificial Intelligence.” SITN. Harvard University, 2017. [Article]

    [6] Esteva, Andre, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau, and Sebastian Thrun. "Dermatologist-level classification of skin cancer with deep neural networks." Nature 542, no. 7639 (2017): 115-118. [Article] [Google Scholar]

    [7] Topol, Eric. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. 1st Ed. New York: Basic Books: 2019. [Google Scholar]

    [8] Davenport, Thomas, and Ravi Kalakota. "The potential for artificial intelligence in healthcare." Future healthcare journal 6.2 (2019): 94. [Article] [Google Scholar]

    [9] Copeland, B.J. “MYCIN.” Encyclopædia Britannica, 21 Nov. 2018. [Article]

    [10] Gelbukh, Alexander, and Ángel Fernando Kuri Morales, eds. MICAI 2007: Advances in Artificial Intelligence: 6th Mexican International Conference on Artificial Intelligence, Aguascalientes, Mexico, November 4-10, 2007, Proceedings. Vol. 4827. Springer, 2007. [Article] [Google Scholar]

    [11] Buchanan, Bruce G., and Edward H. Shortliffe. "Rule-based expert systems: the MYCIN experiments of the Stanford Heuristic Programming Project." (1984). [Article] [Google Scholar]

    [12] Bahl, Manisha, et al. "High-risk breast lesions: a machine learning model to predict pathologic upgrade and reduce unnecessary surgical excision." Radiology 286.3 (2018): 810-818. [Article] [Google Scholar]

    [13] Fu, Yu, et al. "Pan-cancer computational histopathology reveals mutations, tumor composition and prognosis." Nature Cancer 1.8 (2020): 800-810. [Article] [Google Scholar]

    [14] Tomczak, Katarzyna, Patrycja Czerwińska, and Maciej Wiznerowicz. "The Cancer Genome Atlas (TCGA): an immeasurable source of knowledge." Contemporary oncology 19.1A (2015): A68. [Article] [Google Scholar]

    [15] Weinstein, John N., et al. "The cancer genome atlas pan-cancer analysis project." Nature genetics 45.10 (2013): 1113. [Article] [Google Scholar]

    [16] Taguchi, Narumi, et al. "CT texture analysis for the prediction of KRAS mutation status in colorectal cancer via a machine learning approach." European journal of radiology 118 (2019): 38-43. [Article] [Google Scholar]

    [17] Hussein, Sarfaraz, et al. "Risk stratification of lung nodules using 3D CNN-based multi-task learning." International conference on information processing in medical imaging. Springer, Cham, 2017. [Article] [Google Scholar]

    [18] Wang, Dayong, et al. "Deep learning for identifying metastatic breast cancer." arXiv preprint arXiv:1606.05718 (2016). [Article] [Google Scholar]

    [19] Esteva, Andre, et al. "Dermatologist-level classification of skin cancer with deep neural networks." Nature 542.7639 (2017): 115-118. [Article] [Google Scholar]

    [20] Roach, John. “Researchers Use AI to Improve Accuracy of Gene Editing with CRISPR.” The AI Blog, 10 Jan. 2018. [Article]

    [21] Arbabshirani, Mohammad R., et al. "Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration." NPJ digital medicine 1.1 (2018): 1-7. [Article] [Google Scholar]

    [22] Blamire, Andrew M. "MR approaches in neurodegenerative disorders." Progress in nuclear magnetic resonance spectroscopy 108 (2018): 1-16. [Article] [Google Scholar]

    [23] Belić, Minja, et al. "Artificial intelligence for assisting diagnostics and assessment of Parkinson’s disease—A review." Clinical neurology and neurosurgery 184 (2019): 105442. [Article] [Google Scholar]

    [24] Yedavalli, Vivek, et al. "Artificial intelligence in stroke imaging: Current and future perspectives." Clinical Imaging (2020). [Article] [Google Scholar]

    [25] Amorim, Robson Luis, et al. "Prediction of early TBI mortality using a machine learning approach in a LMIC population." Frontiers in neurology 10 (2020): 1366. [Article] [Google Scholar]

    [26] Gale, William, et al. "Detecting hip fractures with radiologist-level performance using deep neural networks." arXiv preprint arXiv:1711.06504 (2017). [Article] [Google Scholar]

    [27] Bar, Amir, et al. "Compression fractures detection on CT." Medical Imaging 2017: Computer-Aided Diagnosis. Vol. 10134. International Society for Optics and Photonics, 2017. [Article] [Google Scholar]

    [28] Yates, E. J., L. C. Yates, and H. Harvey. "Machine learning “red dot”: open-source, cloud, deep convolutional neural networks in chest radiograph binary normality classification." Clinical radiology 73.9 (2018): 827-831. [Article] [Google Scholar]

    [29] Shadmi, Ran, et al. "Fully-convolutional deep-learning based system for coronary calcium score prediction from non-contrast chest CT." 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE, 2018. [Article] [Google Scholar]

    [30] Rajpurkar, Pranav, et al. "Cardiologist-level arrhythmia detection with convolutional neural networks." arXiv preprint arXiv:1707.01836 (2017). [Article] [Google Scholar]

    [31] Gulshan, Varun, et al. "Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs." Jama 316.22 (2016): 2402-2410. [Article] [Google Scholar]

    [32] Bleicher, A. "Teenage whiz kid invents an ai system to diagnose her grandfather’s eye disease." IEEE Spectrum (2019) [Article] [Google Scholar]

    [33] De Fauw, Jeffrey, et al. "Clinically applicable deep learning for diagnosis and referral in retinal disease." Nature medicine 24.9 (2018): 1342-1350. [Article] [Google Scholar]

    [34] Drysdale, Andrew T., et al. "Resting-state connectivity biomarkers define neurophysiological subtypes of depression." Nature medicine 23.1 (2017): 28-38. [Article] [Google Scholar]

    [35] Huckvale, Kit, Svetha Venkatesh, and Helen Christensen. "Toward clinical digital phenotyping: a timely opportunity to consider purpose, quality, and safety." NPJ Digital Medicine 2.1 (2019): 1-11. [Article] [Google Scholar]

    [36] Bedi, Gillinder, et al. "Automated analysis of free speech predicts psychosis onset in high-risk youths." NPJ Schizophrenia 1.1 (2015): 1-7. [Article] [Google Scholar]

    [37] Corcoran, Cheryl M., et al. "Prediction of psychosis across protocols and risk cohorts using automated language analysis." World Psychiatry 17.1 (2018): 67-75. [Article] [Google Scholar]

    [38] Dagum, Paul. "Digital biomarkers of cognitive function." NPJ digital medicine 1.1 (2018): 1-3. [Article] [Google Scholar]

    [39] Cao, Bokai, et al. "Deepmood: modeling mobile phone typing dynamics for mood detection." Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017. [Article] [Google Scholar]

    [40] Betthauser, Lisa M., et al. "Mobile App for Mental Health Monitoring and Clinical Outreach in Veterans: Mixed Methods Feasibility and Acceptability Study." Journal of Medical Internet Research 22.8 (2020): e15506. [Article] [Google Scholar]

    [41] Reece, Andrew G., and Christopher M. Danforth. "Instagram photos reveal predictive markers of depression." EPJ Data Science 6.1 (2017): 1-12. [Article] [Google Scholar]

    [42] “Voice Biomarker Device Could Diagnose Depression, Heart Disease.” PMLive, PMGroup Worldwide Limited, 11 Apr. 2019. [Article]

    [43] Scherer, Stefan, et al. "Self-reported symptoms of depression and PTSD are associated with reduced vowel space in screening interviews." IEEE Transactions on Affective Computing 7.1 (2015): 59-73. [Article] [Google Scholar]

    [44] Bernert, Rebecca A., et al. "Artificial intelligence and suicide prevention: a systematic review of machine learning investigations." International journal of environmental research and public health 17.16 (2020): 5929. [Article] [Google Scholar]

    [45] Firth, Joseph, et al. "The efficacy of smartphone‐based mental health interventions for depressive symptoms: a meta‐analysis of randomized controlled trials." World Psychiatry 16.3 (2017): 287-298. [Article] [Google Scholar]

    [46] Lin, Wei-Chun, et al. "Applications of artificial intelligence to electronic health record data in ophthalmology." Translational Vision Science & Technology 9.2 (2020): 13-13. [Article] [Google Scholar]

    [47] Reuter, Miriam S., et al. "The Personal Genome Project Canada: findings from whole genome sequences of the inaugural 56 participants." Cmaj 190.5 (2018): E126-E136. [Article] [Google Scholar]

    [48] “Cracks in the Code: Why Mapping Your DNA May Be Less Reliable than You Think.” The Globe and Mail, 3 Feb. 2018. [Article]

    [49] Cruz-Roa, Angel, et al. "Accurate and reproducible invasive breast cancer detection in whole-slide images: A Deep Learning approach for quantifying tumor extent." Scientific Reports 7 (2017): 46450. [Article] [Google Scholar]

    [50] Adamson, Adewole S., and Avery Smith. "Machine learning and health care disparities in dermatology." JAMA dermatology 154.11 (2018): 1247-1248. [Article] [Google Scholar]

    [51] Buolamwini, Joy, and Timnit Gebru. "Gender shades: Intersectional accuracy disparities in commercial gender classification." Conference on fairness, accountability and transparency. 2018. [Article] [Google Scholar] [Results]

    [52] Fraser, Kathleen C., Jed A. Meltzer, and Frank Rudzicz. "Linguistic features identify Alzheimer’s disease in narrative speech." Journal of Alzheimer's Disease 49.2 (2016): 407-422. [Article] [Google Scholar]

    [53] Gershgorn, Dave. "If AI is going to be the world’s doctor, it needs better textbooks." September 6 (2018). [Article] [Google Scholar]

    [54] Aizer, Ayal A., et al. "Lack of reduction in racial disparities in cancer‐specific mortality over a 20‐year period." Cancer 120.10 (2014): 1532-1539. [Article] [Google Scholar]

    [55] Popejoy, Alice B., and Stephanie M. Fullerton. "Genomics is failing on diversity." Nature News 538.7624 (2016): 161. [Article] [Google Scholar]

     
