29 Nov Interview with Daniela Ushizima of Lawrence Berkeley National Lab
Dani Ushizima aims to add value to scientific data by constructing models, algorithms, and software that leverage massive unlabeled datasets and curation by scientists, embedding prior knowledge of specific science areas. This knowledge is gathered in three ways: (a) fully immersive collaboration with domain experts; (b) mining of massive datasets, including images and text; (c) exploration of advanced machine learning algorithms, e.g., convolutional neural networks. Ushizima is the Image Processing Team Leader for the Center for Advanced Mathematics for Energy Research Applications (CAMERA) at Lawrence Berkeley National Laboratory, and a data scientist at the Berkeley Institute for Data Science at UC Berkeley. Read her full bio.
Q: Artificial intelligence (AI) techniques have sent vast waves across healthcare, even fueling an active discussion of whether AI doctors will eventually replace human physicians in the future. Do you believe that human physicians will be replaced by machines in the foreseeable future? What are your thoughts?
A: I really hope that human physicians will not be replaced by machines in the foreseeable future, but I do believe that the instruments human physicians are able to use will dramatically change the modus operandi. It is clear that we need to scale healthcare to ever-growing populations, and AI-enhanced instruments may help professionals bridge this gap.
Admittedly, I’m rather optimistic about job retention and even job creation, and here is why: First, people won’t be replaced; instead, we will need to equip the growing number of people involved in healthcare with new tools, especially in light of growing regulations and verification requirements. Second, new jobs will be created for people who regulate how these mathematical models perform, and who verify whether they are improving via positive feedback loops involving humans and data – only by keeping people in the loop can we avoid algorithms that perpetuate inequalities, and hold those who create or use models accountable. We need independent institutions to audit algorithms: a “Good Housekeeping” or “Consumer Reports” seal of approval. Third, digital democracy can help educate more people about how to extract information from big data, how to add value to data collected by others, and how to generate more companies and more jobs.
As an example, a friend of mine believes that not everybody is born to analyze data, while I insist that data science can become second nature; what we need is education in the area. If you think about us as humans, only a few hundred years ago (the Renaissance), that’s exactly how people felt about reading – and look at us today, reading while we walk and while we eat! Another example is the “human computers”, a role largely assigned to women and nicely depicted in the movie Hidden Figures – today huge numbers of people seek jobs in computing. Besides disruptive technology, we need to be prepared for disruptive behavior.
Q: Can you provide some use cases that have already successfully demonstrated the value of AI/Machine Learning in healthcare?
A: There are several use cases that demonstrate the value of AI/Machine Learning in healthcare, from drug discovery to robotic surgery. Although AI is a bit hyped nowadays, I remember my work as a software engineer in 1998 at Dixtal Biomedica in São Paulo, when my team and I designed a knowledge-based system for the intensive care unit: that AI system provided tools to support decision-making for healthcare providers, helping prevent adverse interactions among multiple drug prescriptions. Lately, I’ve been watching and participating in the revolution of computer-aided screening of cells and/or tissues in oncology. One example of the successful use of deep learning is in cell detection and recognition, in collaboration with Dr. Grinberg (Memory and Aging Center, UCSF). In addition, two key resources have become widely available to support recent advancements: (a) data collections with enough variability that models can generalize to more cases; (b) hardware advances, and their availability, to compute billions of comparisons and calculations using sophisticated numerical schemes within reasonable feedback time.
Q: What areas in healthcare will benefit the most from AI/Machine Learning applications and when will that be?
A: The areas in healthcare that will benefit the most are those characterized by large amounts of data containing repeating patterns, e.g. blobs and fibers, in conjunction with human documentation and metadata. Areas in healthcare that depend upon image analysis, such as cytology, histology and optometry, have already demonstrated potential for automation (e.g. pre-screening) using AI/Machine Learning. As an example, the recent talk by members of the Google Brain Team at the Annual Meeting of the American Association for Cancer Research (AACR) showed evidence that real-time automated detection of cancer during cytology scrutiny is possible [link].
Q: What are some of the challenges to realize AI/Machine learning in healthcare?
A: The major challenges to realizing AI/Machine Learning in healthcare are data privacy, precision medicine and accountability. The current rate of personal medical data acquisition is unprecedented, but more data does not mean more information, and there are plenty of controversial use cases in which individual data is used to make inferences about entire populations – cookie-cutter medicine alarms me, which is one of the reasons I’m engaged in Precision Medicine. Also, while computer-aided tools that help medical doctors perform their work might be a means to scale healthcare, fully automated software that substitutes for professionals might raise serious public concern in terms of patient rights as well as accountability for medical errors.
Q: How close are we with successfully using AI for the purpose of mining big data?
A: Many of us are using AI to mine big data, from images to text to signals, and assigning value to large data collections has been a human task for centuries. While more and more data mining tools are being adopted, the ability to quantify uncertainty with respect to conclusions has lagged behind. Many statistical methods will lead humans to draw conclusions based on correlations, possibly with little or no understanding of causation.
Q: What is your outlook or vision for use of AI/Machine Learning in healthcare?
A: So what are the next technological developments we should expect? Selfies – yes, pictures and other data that you collect from yourself for all sorts of purposes, to be interpreted by virtual healthcare assistants. Does it sound funny? How about smart watches collecting your vitals 24/7? There are several cases in which personal monitors save lives. What about using your phone to take pictures for melanoma pre-screening? These are real applications now, and I envision a few healthcare tasks moving to individuals in the near future. Again, these new systems may bring great pre-screening tools to large populations, but I still believe that human medical doctors should ultimately make the decisions, even when those decisions are AI-informed.
Q: If AI is not quite there yet, what is needed to get us there?
A: It is to be expected that some form of AI/Machine Learning will permeate every branch of healthcare, from laboratory exams to medical advising and diagnostic support. My hope is that the influx of computer programs supporting healthcare will be accompanied by protocols to regulate the use of such algorithms. The medical community has a major role to play in establishing these safeguards, and more medical professionals trained in data science could ensure that standards are met.
Q: Is there anything you would like to share with the PMWC audience?
A: Ask, argue, analyze and compare healthcare solutions that encapsulate algorithms, whether they use AI/Machine Learning or not. We need better regulations and mechanisms to hold people accountable for the models they create. Paraphrasing the American mathematician Cathy O’Neil, author of Weapons of Math Destruction: “Data scientists need to understand the weight of their influence and the limitations of their wisdom.” In her book, she describes how a silent bureaucracy governed by algorithms and big data is emerging in our society. One of the most terrifying thoughts I had while reading her book was that now it’s no one’s fault – blame the algorithm… Simply put: algorithms don’t go to jail, and they seldom include considerations about fairness, equity and other social mandates. Should we ask data scientists working in healthcare to take some sort of Hippocratic oath? That seems fair to me.