Author — Lokesh Dahiya (LinkedIn)
AI has taken center stage in many organizations as a driver of business growth, but researchers have failed to bring anything to the table that could significantly help the world fight COVID-19. Has AI ultimately failed us all during the COVID-19 crisis?
When COVID-19 spread worldwide around March 2020, doctors didn't know how to manage patients. But data was coming out of China, which had a four-month head start on the pandemic. Medical imaging studies showed that chest CT scans could reveal COVID-19 lesions, so one widely pursued approach was detecting the disease from lung scans: ML techniques can shorten the time required to produce automated analyses and allow AI practitioners to support clinicians. If ML algorithms could be trained on that data to help doctors understand what they were encountering and make decisions, they might save more lives. It never happened. The AI research community rushed to develop software that many believed would allow hospitals to diagnose or triage patients faster. In the end, many hundreds of predictive tools were developed. As multiple studies concluded, none of them made a real difference, and some were potentially harmful.
The Turing Institute (UK) published a report summarizing a clear consensus that AI tools had made little, if any, impact in the fight against COVID-19. A review in the British Medical Journal examined 232 algorithms for diagnosing patients or predicting how sick those with the disease might get; it found that none of them was fit for clinical use. Another study zoomed in on deep-learning models for diagnosing COVID-19 and predicting patient risk from medical images. It looked at 415 published tools and likewise concluded that none were fit for clinical use. If AI can produce incorrect and biased results even when trained on colossal amounts of cancer data, it is unlikely that solutions claiming to detect the disease from chest scan images, in a setting where COVID-19-related data was already scarce, can be trusted.
The developers repeated the same basic errors in the way they trained or tested their tools. Many of the problems that were uncovered are linked to the poor quality of the data. Information about COVID-19 patients, including medical scans, was collected and shared in the middle of a global pandemic, often by the very doctors struggling to treat those patients. Researchers wanted to help quickly, and these were the only public data sets available. This meant that many tools were built using mislabeled data or data from unknown sources. Many tools were developed either by AI researchers who lacked medical expertise or by medical researchers who lacked mathematical skills. A more subtle problem is the bias introduced at the point when a data set is labelled.
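One of the basic errors described above is splitting data at the scan level rather than the patient level: when several scans of the same patient land in both the training and the test set, the model is effectively tested on data it has already seen, and reported accuracy is inflated. The sketch below illustrates this with hypothetical patient IDs (the data and split sizes are invented for illustration, not taken from any real study):

```python
import random

random.seed(0)

# Hypothetical dataset: 100 patients, 5 scans each
# (purely illustrative, not real clinical data)
scans = [(pid, f"scan_{pid}_{i}") for pid in range(100) for i in range(5)]

# Naive split: shuffle individual scans, so one patient's scans
# can straddle the train/test boundary
random.shuffle(scans)
train, test = scans[:400], scans[400:]
leaked = {pid for pid, _ in train} & {pid for pid, _ in test}
print(f"scan-level split: {len(leaked)} patients appear in both sets")

# Safer split: partition by patient ID, so no patient is in both sets
patients = list(range(100))
random.shuffle(patients)
train_ids, test_ids = set(patients[:80]), set(patients[80:])
leaked_safe = train_ids & test_ids
print(f"patient-level split: {len(leaked_safe)} patients appear in both sets")
```

With the scan-level split, nearly every test patient also appears in training; the patient-level split guarantees zero overlap, which is why grouping by patient before splitting is the standard remedy.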
AI has been a game-changer in sectors like finance, e-commerce, and manufacturing, where it was actively embraced to revamp workflows. In healthcare, however, we did not leverage AI in the same way, and the pandemic has exposed our lack of competence in adopting the latest technologies in patient care.
Algorithms rest on stationarity assumptions: that the rules haven't changed and won't change because of some future event. These static assumptions have meant that the data sets used to train ML models included nothing beyond elementary "worst-case" information; they did not anticipate a pandemic. This isn't the first time ML technology has failed: in 2016, sophisticated ML algorithms failed to predict the outcomes of both the Brexit vote and the US presidential election.
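The stationarity problem can be made concrete with a toy classifier. The sketch below (synthetic data, invented distributions, purely for illustration) trains a fixed decision threshold under one distribution, then evaluates it after an unforeseen shift, the kind of change a pandemic can cause:

```python
import random
import statistics

random.seed(0)

# "Pre-event" training data: one feature per case, two classes
# (synthetic, illustrative values only)
neg = [random.gauss(0.0, 1.0) for _ in range(1000)]  # e.g. healthy
pos = [random.gauss(3.0, 1.0) for _ in range(1000)]  # e.g. sick

# Stationary model: a fixed threshold at the midpoint of the class means,
# learned once and never updated
threshold = (statistics.mean(neg) + statistics.mean(pos)) / 2

def accuracy(neg_cases, pos_cases, thr):
    correct = sum(x < thr for x in neg_cases) + sum(x >= thr for x in pos_cases)
    return correct / (len(neg_cases) + len(pos_cases))

acc_before = accuracy(neg, pos, threshold)

# An unforeseen event shifts the input distribution; the learned
# threshold stays where it was
neg_shift = [random.gauss(2.0, 1.0) for _ in range(1000)]
pos_shift = [random.gauss(5.0, 1.0) for _ in range(1000)]
acc_after = accuracy(neg_shift, pos_shift, threshold)

print(f"accuracy before shift: {acc_before:.2f}")  # high
print(f"accuracy after shift:  {acc_after:.2f}")   # noticeably lower
```

The model itself is unchanged; only the world moved. This is why models trained exclusively on pre-pandemic data degraded when the underlying distribution shifted.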
What’s the fix? Better data would help, but in times of crisis, that’s a big ask. The World Health Organization is considering an emergency data-sharing contract that would kick in during international health crises to address this issue. It would let researchers move data across borders more easily. The leading scientific groups from G7 nations also called for “data readiness” in preparation for future health emergencies.