A team of artificial intelligence researchers has developed a new deep-learning method to identify and segment tumours in medical images.
This software can automatically analyze several medical imaging modalities. Trained through supervised learning on labeled data, with an architecture inspired by the neurons of the brain, it identifies liver tumours, delineates the contours of the prostate for radiation therapy, and precisely counts cells at the microscopic level.
“We have developed software that could be added to visualization tools to help doctors perform advanced analyses of different medical imaging modalities,” explains Samuel Kadoury, a researcher at the CRCHUM, professor at Polytechnique Montréal and the study’s senior author. “The algorithm makes it possible to automate pre-processing detection and segmentation (delineation) tasks of images, which are currently not done because they are too time-consuming for human beings. Our model is very versatile – it works for CT liver scan images, magnetic resonance images (MRI) of the prostate and electronic microscopic images of cells.”
When a patient has a CT scan, the image has to be standardized and normalized before being read by the radiologist. It takes an expert eye to quickly and confidently determine what the images represent. And perhaps a little bit of magic.
“You have to adjust the grey shades because the image is often too dark or too pale to distinguish the tumours,” says Dr. An Tang, a radiologist and researcher at the CRCHUM, professor at Université de Montréal and the study’s co-author. “Even with computer-aided diagnosis (CAD) techniques, this adjustment is not perfect, and lesions can sometimes be missed or incorrectly detected. This is what gave us the idea of improving machine vision. The new deep-learning technique eliminates this pre-processing step by modelling the variability observed in a training database.”
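To make the “grey shade adjustment” concrete, here is a minimal sketch of standard CT window/level normalization, the kind of manual pre-processing step the model learns to replace. The function name and window values are illustrative, not from the study; the soft-tissue window of centre 40 HU, width 400 HU is a common radiology convention.

```python
import numpy as np

def window_ct(image_hu, center=40.0, width=400.0):
    """Clip a CT image (in Hounsfield units) to a viewing window and
    rescale it to [0, 1] grey levels. This is the manual contrast
    adjustment that the deep-learning model learns to do by itself."""
    lo = center - width / 2.0
    hi = center + width / 2.0
    clipped = np.clip(image_hu, lo, hi)
    return (clipped - lo) / (hi - lo)

# Toy 2x2 "scan": air (-1000 HU), water (0), soft tissue (40), bone (1000)
scan = np.array([[-1000.0, 0.0], [40.0, 1000.0]])
normalized = window_ct(scan)  # air maps to 0.0, bone saturates at 1.0
```

Outside the chosen window, intensities saturate, which is exactly why a badly chosen window can hide a lesion.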
The question is, how did the AI engineers design this model to detect these abnormalities?
“We came up with the idea of combining two types of convolutional neural networks that complement each other very nicely for image segmentation,” says Michal Drozdzal, the study’s first author, formerly a postdoctoral fellow at Polytechnique and now a research scientist at Facebook AI Research in Montréal. “The first network takes raw biomedical data as input and learns the optimal data normalization. The second takes the output of the first model and produces the segmentation maps.”
A neural network is a complex series of computer operations that lets a computer learn by itself from a massive number of examples. Convolutional neural networks (CNNs) work a little like the human visual cortex, stacking several layers that successively process an input image and produce an output. There are several types of neural networks, each structured slightly differently. The researchers combined two of them, a fully convolutional network (FCN) and a fully convolutional residual network (FC-ResNet), to create an algorithm that discovers lesions by itself.
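The two-stage pipeline described above can be sketched with toy stand-ins: a “normalization network” that learns how to rescale raw intensities, feeding a “segmentation network” whose residual step refines the features before a per-pixel classifier. Every function, weight, and threshold here is a hypothetical miniature for illustration; the real FCN and FC-ResNet are deep learned models, not these one-line formulas.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def normalization_net(image, scale, shift):
    """Toy stand-in for the first network (FCN role): learns to
    rescale raw intensities instead of using hand-tuned windowing."""
    return scale * image + shift

def segmentation_net(features, w, b):
    """Toy stand-in for the second network (FC-ResNet role): a
    residual step refines the features, then a per-pixel sigmoid
    yields a probability map of lesion vs. background."""
    refined = features + 0.5 * np.tanh(features)  # residual refinement
    return sigmoid(w * refined + b)               # per-pixel probabilities

# Raw "scan" with arbitrary intensity units, as in the article's setting
raw = np.array([[0.0, 50.0], [200.0, 1000.0]])
normalized = normalization_net(raw, scale=0.001, shift=-0.1)
prob_map = segmentation_net(normalized, w=4.0, b=0.0)
mask = prob_map > 0.5  # final binary segmentation mask
```

The key design point the quote makes is the cascade: the second network never sees the raw data, only the learned normalization, so the two are trained to complement each other.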
“We fed the computer hundreds of labeled samples from lesions and healthy tissue that had been manually identified by humans,” says professor Kadoury, Canada Research Chair in Medical Imaging and Assisted Interventions. “The parameters of the neural networks are adjusted in order to match the gold standard annotations and later recognize the image without any need for further supervision.”
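The training process professor Kadoury describes, adjusting parameters until predictions match the gold-standard annotations, can be shown in miniature with a per-pixel logistic classifier trained by gradient descent. The data and learning rate are made up for the example; a real network has millions of parameters but follows the same adjust-to-match-labels loop.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical labeled data: one intensity feature per pixel,
# 1 = lesion, 0 = healthy tissue (stand-in for expert annotations)
x = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# Supervised training: nudge the parameters so the predictions
# match the gold-standard labels (logistic regression trained by
# gradient descent -- a miniature of how network weights adjust)
w, b, lr = 0.0, 0.0, 1.0
for _ in range(2000):
    p = sigmoid(w * x + b)
    grad = p - y                  # gradient of the cross-entropy loss
    w -= lr * np.mean(grad * x)
    b -= lr * np.mean(grad)

preds = sigmoid(w * x + b) > 0.5  # predictions now match the annotations
```

Once trained, applying the model to a new image is just the cheap forward pass, which is why inference takes fractions of a second even though training is long.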
The researchers compared the results of their algorithm with those of other algorithms and concluded that it performs as well as, or better than, the previously published results. The versatility of the new algorithm could make it possible to train it for other pathologies, such as lung or brain cancer. Training an algorithm is a very long process, but, once trained, the model can analyze images in fractions of a second and reach a level of detection and classification performance comparable to that of human beings. The researchers nevertheless think it will be many years until AI is fluidly at work in hospital settings.
“We are at the proof-of-concept stage,” says Tang. “It works with one dataset. If we take images from scans performed with different techniques or contrast doses, or from different manufacturers or hospitals, will the algorithm work as well? We still have a number of challenges to deal with before we can implement these algorithms on a large scale. We’re still in the research and development category. We’ll have to validate it on a large population, in different image-acquisition scenarios, to confirm the robustness of this algorithm.”
Even if clinical AI may still be years from fully coming to fruition, these advances already let researchers perform tasks at a speed no human can match. We are entering an age of remarkable possibilities, and this new deep-learning method only demonstrates that further.