Ebook

Chapter 4

AI to Aid Treatment of Diseases and Health Conditions


The ability of AI systems to ingest and analyze large volumes of data and produce an analysis in a very short time makes them powerful tools for aiding the treatment of diseases and health conditions. For example, incorporating AI into medical devices that integrate multiple sensors could expedite early detection of a clinical problem or provide insights that improve the quality of treatment. With AI, the vast and complex physiological data generated by the human body could potentially be interpreted more quickly and accurately to formulate a medical intervention.

A hand grasping a cup and pouring contents into a glass. The person's wrist is wrapped in a sleeve of electrodes.

An AI-based brain-machine interface enables a man with a paralyzed arm to pour the contents of a cup into a glass. (Image credit: Battelle)

Challenge

For patients with advanced amyotrophic lateral sclerosis (ALS), communication becomes increasingly difficult as the disease progresses. In many cases, ALS (also known as Lou Gehrig’s disease) leads to locked-in syndrome, in which a patient is completely paralyzed but remains cognitively intact. Eye-tracking devices and, more recently, electroencephalogram (EEG)-based brain-computer interfaces (BCIs) enable ALS patients to communicate by spelling phrases letter by letter, but even a simple message can take several minutes to convey.

Solution

Researchers at The University of Texas at Austin developed a noninvasive technology that uses wavelets, machine learning, and deep learning neural networks to decode magnetoencephalography (MEG) signals and detect entire phrases as the patient imagines speaking them. The algorithm performs in near real time: when the patient imagines a phrase, the decoded text appears almost immediately.

  • With Wavelet Toolbox™, they denoised the MEG signals and decomposed them into specific neural oscillation bands (high gamma, gamma, beta, alpha, theta, and delta brain waves) using wavelet multiresolution analysis (see the first sketch after this list).
  • The researchers then extracted a variety of statistical features from the signals using Statistics and Machine Learning Toolbox and used those features to train a support vector machine (SVM) classifier and a shallow artificial neural network (ANN) classifier on neural signals corresponding to five phrases. This approach achieved an accuracy of about 80% and served as the accuracy baseline (see the second sketch after this list).
  • Next, the team computed wavelet scalograms of the MEG signals as rich feature representations and used them as inputs to fine-tune three customized pretrained deep convolutional neural networks (AlexNet, ResNet, and Inception-ResNet) for decoding speech from MEG signals. Combining wavelets with deep learning boosted the overall accuracy to 96% (see the third sketch after this list).
  • To speed up training, the team ran it on a parallel computing server with seven GPUs using Parallel Computing Toolbox.
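
A minimal sketch of the band decomposition step in MATLAB follows. The sampling rate, the sym4 wavelet, the level count, and the band groupings are illustrative assumptions, not the study's settings.

    % Decompose one MEG channel into neural oscillation bands using
    % wavelet multiresolution analysis (MODWT/MODWTMRA).
    fs  = 1000;                        % assumed MEG sampling rate (Hz)
    meg = randn(10*fs, 1);             % placeholder for one MEG channel

    megDenoised = wdenoise(meg);       % wavelet denoising, default settings

    lev    = 8;                        % detail levels span roughly 2-500 Hz
    wcoefs = modwt(megDenoised, 'sym4', lev);
    mra    = modwtmra(wcoefs, 'sym4'); % row j covers about [fs/2^(j+1), fs/2^j] Hz

    % Approximate band groupings for fs = 1000 Hz:
    highGamma = mra(3,:);              % ~62-125 Hz
    gamma     = mra(4,:);              % ~31-62 Hz
    beta      = mra(5,:);              % ~16-31 Hz
    alpha     = mra(6,:);              % ~8-16 Hz
    theta     = mra(7,:);              % ~4-8 Hz
    delta     = sum(mra(8:end,:), 1);  % <4 Hz (last detail plus smooth)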
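The baseline step could look like the sketch below. The specific statistics, the SVM kernel, and the cross-validation scheme are assumptions for illustration; the study's exact feature set and classifier settings may differ, and the shallow ANN baseline is omitted for brevity.

    % Train a multiclass SVM on simple per-trial statistics as a baseline.
    numTrials  = 200;                        % placeholder trial count
    numSamples = 1000;                       % samples per trial
    trials = randn(numTrials, numSamples);   % placeholder band-limited trials
    labels = randi(5, numTrials, 1);         % 5 imagined phrases

    % Per-trial statistical features (Statistics and Machine Learning Toolbox)
    X = [mean(trials,2), std(trials,0,2), skewness(trials,1,2), ...
         kurtosis(trials,1,2), sqrt(mean(trials.^2,2))];  % last column: RMS

    % One-vs-one multiclass SVM with a Gaussian kernel
    tmpl = templateSVM('KernelFunction','gaussian','Standardize',true);
    mdl  = fitcecoc(X, labels, 'Learners', tmpl);

    % 5-fold cross-validated accuracy as the baseline figure of merit
    cv  = crossval(mdl, 'KFold', 5);
    acc = 1 - kfoldLoss(cv);
    fprintf('Baseline SVM accuracy: %.1f%%\n', 100*acc);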
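The scalogram-plus-CNN step might be sketched as below. The filter bank settings, folder layout, and training options are assumptions; only AlexNet is shown (it expects 227x227x3 images), and this sketch requires the alexnet support package and Image Processing Toolbox.

    % Convert trials to CWT scalogram images, then fine-tune a pretrained CNN.
    fs = 1000; numTrials = 200; numSamples = 1000;  % same placeholders as above
    trials = randn(numTrials, numSamples);
    labels = randi(5, numTrials, 1);

    fb = cwtfilterbank('SignalLength',numSamples, ...
                       'SamplingFrequency',fs, 'VoicesPerOctave',12);
    for c = 1:5
        mkdir(fullfile('scalograms', sprintf('phrase%d', c)));
    end
    for k = 1:numTrials
        cfs = abs(wt(fb, trials(k,:)));                  % scalogram magnitudes
        im  = ind2rgb(im2uint8(rescale(cfs)), jet(128)); % map to an RGB image
        imwrite(imresize(im, [227 227]), ...
            fullfile('scalograms', sprintf('phrase%d', labels(k)), ...
                     sprintf('trial%03d.png', k)));
    end

    imds = imageDatastore('scalograms', 'IncludeSubfolders',true, ...
                          'LabelSource','foldernames');

    % Replace AlexNet's final layers so it classifies the five phrases
    net    = alexnet;
    layers = net.Layers;
    layers(23) = fullyConnectedLayer(5);
    layers(25) = classificationLayer;

    opts = trainingOptions('sgdm', 'InitialLearnRate',1e-4, ...
                           'MaxEpochs',10, 'MiniBatchSize',32);
    trainedNet = trainNetwork(imds, layers, opts);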

Results

Using MATLAB, the team was able to iterate quickly among the different feature extraction methods and train several machine learning and deep learning models, reaching an overall MEG speech decoding accuracy of 96%. MATLAB allowed them to combine wavelet techniques with deep learning in a matter of minutes, significantly less time than it would have taken in other programming languages. In addition, the team was able to switch to training on multiple GPUs by changing only one line of code (see the sketch below). Using Parallel Computing Toolbox and a server with seven GPUs made training the networks about 10 times faster.
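
In MATLAB, that one-line change is typically the ExecutionEnvironment option passed to trainingOptions; the sketch below shows the idea, with the other settings as illustrative assumptions.

    % Switch training from one GPU to all local GPUs by changing one option.
    opts = trainingOptions('sgdm', ...
        'InitialLearnRate', 1e-4, ...
        'MaxEpochs', 10, ...
        'MiniBatchSize', 32, ...
        'ExecutionEnvironment', 'multi-gpu');   % the single changed line
    trainedNet = trainNetwork(imds, layers, opts);  % imds, layers as above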

A four-step left-to-right process shows MEG data collection, data processing into a scalogram, neural network data interpretation, and an output of decoded speech.

Converting brain MEG data into word phrases. (Image credit: UT Austin)