Deep learning, the so-called black box of AI, is poised to enter clinics around the world. A subset of machine learning, it is expected to take over routine medical tasks such as medical image analysis by learning to predict outcomes from large data records through self-teaching techniques.
Researchers, however, are worried about how these systems reach their decisions and are calling for greater transparency in their algorithms. They have also called for a debate on where AI is genuinely needed and how it should be used. In their view, a system that processes input data opaquely and simply delivers an output cannot be trusted, and the risk only grows when such a system is deployed in medical care.
Developers and computer scientists are working to make these models less opaque so that researchers can study them thoroughly. One such model is currently used in a mammography program by Mass Journal, where it detects dense breast tissue, a risk factor for breast cancer. With a more transparent system, radiologists could pinpoint the sensitive regions of a mammography image by observing where the model's decision-making is activated.
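The idea of observing where a model's decision-making "activates" can be illustrated with a gradient-style saliency map, which highlights the input pixels that most influence a model's score. The sketch below is a minimal, hypothetical toy, not any real clinical model: it assumes a plain linear scorer, for which the gradient of the score with respect to each pixel is simply that pixel's weight, so the saliency map is just the normalized absolute weight map. Real systems apply the same gradient idea to deep networks via backpropagation.

```python
import numpy as np

def saliency_map(weights: np.ndarray) -> np.ndarray:
    """Per-pixel influence on a linear model's score, normalized to [0, 1].

    For score = (weights * image).sum(), d(score)/d(pixel) = weight,
    so |weights| is the gradient saliency of every pixel.
    """
    s = np.abs(weights)
    return s / s.max()

# Toy 8x8 "model": random weights, with one region the model keys on.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
w[2:4, 2:4] = 5.0          # pretend the model focuses on this patch

heat = saliency_map(w)      # the patch lights up in the heat map
```

In a mammography setting, overlaying such a heat map on the image would show a radiologist which tissue regions drove the model's risk estimate.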
Anna Goldenberg, Senior Scientist in Genetics and Genome Biology at the SickKids Research Institute in Toronto, said the system must first be well built and then analyzed for its behavior. Clinicians want to be able to compare the system's decisions against their own across a variety of medical cases. Once its decision-making is clarified, she expects the system will be adopted without concern.
Meanwhile, a recent study has raised concerns that the use of AI by family physicians could disrupt the current medical system. Though potentially beneficial, such systems can reproduce the biases they were programmed or trained with, and that remains a cause for worry.