Artificial Intelligence Boosts Alzheimer’s Disease Classification
An artificial intelligence framework could help providers classify Alzheimer’s disease with enhanced accuracy, potentially leading to earlier treatment.
Artificial intelligence tools were able to process brain images and classify Alzheimer’s disease with improved accuracy, which could lead to the development of better treatments, according to a study published in Alzheimer’s Research & Therapy.
Globally, the population aged 65 and older is growing faster than all other age groups. By 2050, one in six people in the world will be over age 65. Total healthcare costs for the treatment of Alzheimer’s in 2020 were estimated at $305 billion, a figure expected to grow to more than $1 trillion as the population ages. The significant burden placed on caregivers and patients often results in extreme hardship and distress.
Warning signs of Alzheimer’s disease can appear in the brain years before the first symptoms emerge, researchers noted. Spotting these clues early could allow for lifestyle changes that may delay the disease’s destruction of the brain.
"Improving the diagnostic accuracy of Alzheimer's disease is an important clinical goal. If we are able to increase the diagnostic accuracy of the models in ways that can leverage existing data such as MRI scans, then that can be hugely beneficial," said corresponding author Vijaya B. Kolachalama, PhD, assistant professor of medicine at Boston University School of Medicine (BUSM).
The team developed an advanced AI framework, based on game theory, that processes both low- and high-quality brain images. With it, the researchers built a model that classifies Alzheimer’s disease with improved accuracy.
The quality of an MRI scan depends on the scanner used. For example, a 1.5 Tesla magnet scanner produces slightly lower-quality images than a 3 Tesla magnet scanner. Magnetic field strength is a key parameter of a given scanner.
Researchers obtained brain MRI images of the same subjects, taken at the same time, from both 1.5 Tesla and 3 Tesla scanners. The team then developed an AI model that learned from both sets of images.
As the model learned from the paired 1.5 Tesla and 3 Tesla images, it generated images of higher quality than the original 1.5 Tesla scans. These generated images also predicted Alzheimer’s disease status in these individuals better than models based on 1.5 Tesla images alone.
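The paired-scan training idea can be sketched with a much simpler stand-in. The study’s framework is a game-theoretic (adversarial) deep network trained on real MRIs; the sketch below substitutes ordinary least-squares regression and synthetic feature vectors, purely to illustrate learning an enhancement map from paired low- and high-field images of the same subjects. All names and numbers here are illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data (not from the study): each "scan" is a
# flattened feature vector. Paired acquisitions are simulated by degrading
# a subject's 3T image into its 1.5T counterpart (signal loss plus noise).
n_subjects, n_voxels = 200, 64
scans_3t = rng.normal(size=(n_subjects, n_voxels))      # "high-field" images
blur = 0.6 * np.eye(n_voxels) + 0.4 / n_voxels          # crude degradation
scans_15t = scans_3t @ blur + 0.1 * rng.normal(size=scans_3t.shape)

# Fit a linear enhancement map W (1.5T -> 3T) on a training split.
# The actual study trains an adversarial network; least squares is a
# deliberately simple substitute for illustration.
train, test = slice(0, 150), slice(150, None)
W, *_ = np.linalg.lstsq(scans_15t[train], scans_3t[train], rcond=None)
enhanced = scans_15t[test] @ W

# On held-out subjects, the enhanced images should sit closer to the
# true 3T scans than the raw 1.5T inputs do.
err_raw = np.mean((scans_15t[test] - scans_3t[test]) ** 2)
err_enh = np.mean((enhanced - scans_3t[test]) ** 2)
print(f"raw 1.5T error: {err_raw:.3f}  enhanced error: {err_enh:.3f}")
```

In the same spirit, the enhanced images (rather than the raw low-field ones) would then be fed to a downstream classifier, which is where the study reports its accuracy gains.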
"Our model essentially can take 1.5 Tesla scanner derived images and generate images that are of better quality and we can also use the derived images to better predict Alzheimer's disease than what we could possibly do using just 1.5 Tesla-based images alone," said Kolachalama.
The researchers noted that it may be possible to generate images of enhanced quality for disease cohorts previously imaged with 1.5 Tesla scanners, as well as at centers that continue to rely on them.
"This would allow us to reconstruct the earliest phases of AD, and build a more accurate model of predicting Alzheimer's disease status than would otherwise be possible using data from 1.5T scanners alone," said Kolachalama.
The team expects that such advanced AI methods can help the medical imaging community advance care delivery. Clinicians can use similar frameworks to harmonize imaging data across multiple sites and develop and compare models across different populations. This could potentially lead to better ways of diagnosing Alzheimer’s disease.
“Our approach to produce high AD classification performance models using a deep learning framework could transform the way MRI scans are utilized in AD research. Our study implication is that it is possible to generate images of enhanced quality on disease cohorts that have previously used the 1.5-T scanners, and in those centers that continue to rely on 1.5-T scanners,” the team concluded.
“This would allow us to reconstruct the earliest phases of AD, and build a more accurate model of predicting cognitive status than would otherwise be possible using data from 1.5-T scanners alone. Our proposed deep learning framework can also be extended to process other medical imaging datasets and organ systems when relevant data is available for model development.”