
Visualizing and Diagnosing Reduced Blood Circulation with Augmented Reality and Deep Learning

By Dr. Beril Sirmacek, University of Twente


Peripheral artery disease is a major complication of diabetes that causes narrowing of blood vessels and reduced blood flow to the legs and feet. This reduced blood flow can lead to ulcers and sores that are slow to heal and susceptible to infection. Peripheral neuropathy, another complication of diabetes, impairs sensation, making it difficult for patients to assess the severity of their condition. Left untreated, this chain of complications can result in dead tissue that sometimes requires amputation.

To help patients with diabetes and similar conditions diagnose reduced blood flow before the condition becomes severe, my research group at the University of Twente is developing a handheld device that scans tissue and produces real-time, augmented reality (AR) visualizations of blood perfusion (Figure 1). I developed MATLAB® algorithms for this device that use simultaneous localization and mapping (SLAM) to construct a 3D representation of tissue and its underlying blood circulation. The 3D representation is projected onto the skin’s surface in 2D via an AR projector installed on the device.

Figure 1. Augmented reality visualization of blood flow in the wrist and hand.

Currently, clinical systems that perform similar diagnostics cost tens of thousands of euros, require a clinical visit, and must be installed, configured, and operated by trained technicians. Our device will be affordable—early prototypes cost less than 500 euros—and suitable for in-home use.

Acquiring Data and Implementing SLAM Algorithms

SLAM algorithms are commonly used by robotics researchers to map an environment and determine a robot’s location within it. To simplify tracking, these algorithms often rely on corners, edges, and similar cues—the corners of a doorframe or the edge of a sidewalk, for example. I needed SLAM algorithms that could determine the position of the device in relation to the tissue. Because the human body does not have distinct edges and corners, I needed to modify traditional SLAM algorithms, tailoring them to my specific application. For example, I trained deep learning networks to recognize skin features that can be tracked in sequential frames. The algorithms use these features when creating a 3D representation of the skin and when localizing the camera position in relation to the reconstructed surface.
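To give a sense of the tracking step, here is a minimal sketch using the Computer Vision Toolbox KLT point tracker. The function detectSkinFeatures is a hypothetical stand-in for the trained network, assumed to return an N-by-2 array of [x y] feature locations; the file names are placeholders.

% Minimal sketch of frame-to-frame tracking with a KLT point tracker.
% detectSkinFeatures is a hypothetical stand-in for the trained network.
frame1 = rgb2gray(imread('frame001.png'));   % placeholder file names
frame2 = rgb2gray(imread('frame002.png'));

points  = detectSkinFeatures(frame1);        % N-by-2 [x y] locations
tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, points, frame1);

[trackedPoints, validity] = step(tracker, frame2);

% Keep only points reliably re-found in the new frame; these
% correspondences drive both localization and mapping.
matchedPrev = points(validity, :);
matchedCurr = trackedPoints(validity, :);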

I began developing my algorithms by importing images and data from the device’s stereo camera, thermal imaging camera, and laser perfusion imaging sensor into MATLAB. After preprocessing and filtering the images, I wrote MATLAB code to construct a 3D mesh of the tissue in the images. Computer Vision Toolbox™ greatly simplified this phase of development, providing me with functions for calibrating the device (to establish the relative positions of the cameras), performing point tracking and depth estimation, and generating a 3D point cloud. 
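As an illustration, a minimal version of that pipeline might look like the following, assuming stereoParams was estimated beforehand with the toolbox’s stereo calibration tools and that the file names are placeholders.

% Minimal sketch of the stereo pipeline with Computer Vision Toolbox.
I1 = imread('left.png');
I2 = imread('right.png');

% Rectify so that corresponding points lie on the same image rows
[J1, J2] = rectifyStereoImages(I1, I2, stereoParams);

% Estimate disparity and back-project it to 3D coordinates
% (in the units used during calibration, e.g., millimeters)
disparityMap = disparity(rgb2gray(J1), rgb2gray(J2));
xyzPoints    = reconstructScene(disparityMap, stereoParams);

% Package the result as a point cloud for meshing and visualization
ptCloud = pointCloud(xyzPoints, 'Color', J1);
pcshow(ptCloud);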

I extended the algorithms to incorporate data from the other two sensors and overlay it onto the mesh. Specifically, I incorporated temperature data from the near-infrared sensors on the thermal imaging camera and signal patterns from the laser Doppler sensors that indicate circulatory blood flow.
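A minimal sketch of that overlay step follows, assuming the thermal image has already been registered to the reference camera view so that pixel (r,c) aligns with xyzPoints(r,c,:) from the reconstruction.

% Minimal sketch of attaching registered thermal data to the 3D points.
thermalImg = im2double(imread('thermal.png'));   % placeholder file name

% Map temperatures to a color scale and attach them to the point cloud
cmap   = jet(256);
idx    = round(rescale(thermalImg, 1, 256));
colors = uint8(255 * reshape(cmap(idx, :), [size(idx) 3]));
ptCloudThermal = pointCloud(xyzPoints, 'Color', colors);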

Real-Time Augmented Reality Projection

The digital stereo camera that we use can produce images at a rate of at least 25 frames per second. Because each frame carries more information than can be processed given the real-time constraints of the system, I implemented an algorithm in MATLAB that extracts needed information from the camera data while reducing the processing workload. The algorithm creates a dense 3D reconstruction for a given area of tissue using selected key frames. Once this reconstruction is complete, the algorithm does not attempt to add more points from the remaining frames but uses these frames only for camera localization. This key frame approach reduces the computational demands of the overall algorithm while enabling it to produce a dense reconstruction of the skin’s surface, which will ultimately help doctors to make more accurate diagnoses.
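One way to express such a key-frame rule is a simple test on camera motion. The thresholds below are illustrative, not the values used in the actual system.

function tf = isKeyFrame(currLoc, lastKeyLoc, framesSinceKey)
% Hypothetical key-frame test. currLoc and lastKeyLoc are 1-by-3 camera
% positions (e.g., from estimateWorldCameraPose).
minBaselineMm = 5;   % require enough parallax for reliable triangulation
maxGap        = 25;  % force a key frame at least once per second at 25 fps
tf = norm(currLoc - lastKeyLoc) > minBaselineMm || framesSinceKey >= maxGap;
end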

All the main phases of the algorithm—data acquisition and filtering, localization, mapping, and AR projection—run in real time within MATLAB on my laptop. To create the AR projection, the algorithms calculate the 2D image of the 3D reconstruction that would be visible from the projector’s point of view and then send that image to the AR projector, which displays it on the surface of the patient’s skin.
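Treating the projector as an inverse camera, a minimal sketch of that projection step might look as follows. Here projParams, projRotation, and projTranslation are assumed to come from a separate projector calibration, and the resolution is illustrative.

% Minimal sketch of rendering the AR frame from the projector's viewpoint.
projWidth = 1280; projHeight = 720;          % assumed projector resolution

cloud = removeInvalidPoints(ptCloudThermal); % drop NaN points
imagePoints = worldToImage(projParams, projRotation, projTranslation, ...
                           cloud.Location);

% Splat each point's perfusion color into the projector frame
arFrame = zeros(projHeight, projWidth, 3, 'uint8');
px = round(imagePoints);
valid = px(:,1) >= 1 & px(:,1) <= projWidth & ...
        px(:,2) >= 1 & px(:,2) <= projHeight;
lin  = sub2ind([projHeight projWidth], px(valid,2), px(valid,1));
cols = cloud.Color(valid, :);
for c = 1:3
    plane = arFrame(:,:,c);
    plane(lin) = cols(:,c);
    arFrame(:,:,c) = plane;
end
imshow(arFrame);   % in the device, this frame is sent to the AR projector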

Deep Learning Models and Planned Enhancements

We have already demonstrated a prototype system capable of projecting an augmented reality representation of blood circulation in real time. Our long-term goal is to provide diabetic patients with a system that can detect diminished circulation even before it becomes visible. I am developing deep learning models in MATLAB that use thermal imaging and perfusion data to distinguish areas of tissue with poor blood flow from those with healthy blood flow, even before any change is visible to the naked eye. While early results from these deep learning models are promising, the training data set (from just 50 patients) is too small for us to draw definitive conclusions.
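As a rough illustration, a small classification network of this kind could be set up as follows. The folder layout, patch size, and layer choices are placeholders rather than the networks actually used.

% Minimal sketch of a patch classifier: healthy vs. reduced perfusion.
% 'perfusionPatches' is a placeholder folder of labeled image patches.
imds = imageDatastore('perfusionPatches', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[trainSet, valSet] = splitEachLabel(imds, 0.8, 'randomized');

layers = [
    imageInputLayer([64 64 1])               % e.g., thermal patches
    convolution2dLayer(3, 16, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 'same')
    reluLayer
    fullyConnectedLayer(2)                   % healthy vs. reduced flow
    softmaxLayer
    classificationLayer];

opts = trainingOptions('adam', 'MaxEpochs', 20, ...
    'ValidationData', valSet, 'Plots', 'training-progress');
net = trainNetwork(trainSet, layers, opts);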

One of the biggest advantages of using MATLAB in my research is the ability to use a single platform for all aspects of the project, including image processing and computer vision, SLAM, and deep learning. As we move from a prototype to a production-ready system, I plan to use GPU Coder™ to generate CUDA code that performs real-time processing on the device itself rather than on a laptop, while deep learning algorithms classify data collected by the device offline in the cloud.
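That step might look something like the following, where processFrame is a hypothetical entry-point function for the per-frame pipeline and the frame size is illustrative.

% Hypothetical GPU Coder invocation for the per-frame pipeline.
cfg = coder.gpuConfig('lib');   % generate a CUDA static library
codegen -config cfg processFrame -args {ones(480, 640, 3, 'uint8')}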

My group continues to make improvements to the system to support in-home use, even as we explore additional applications for the technology and the potential for incorporating new imaging sources. For example, we are working to ensure that the device operates well under a range of lighting conditions and with a range of skin pigmentations. We are also considering enhancing the algorithms to use data from MRI systems and to assist physicians with surgery planning by enabling them to visualize internal structures via AR before inserting a needle for a biopsy or performing other surgical procedures.

About the Author

Beril Sirmacek holds a Ph.D. in computer science. In 2017, she joined the Robotics and Mechatronics (RAM) group at the University of Twente, where she specializes in deep learning, simultaneous localization and mapping, and augmented reality.

Published 2018
