Single-Photon Camera Enables Video Playback at Any Timescale

Extreme Data Acquisition Illuminates New Computer Vision Applications


Computational imaging researchers at the University of Toronto captured an odd signal with their unique camera. While running experiments in the lab using the camera, a single-photon avalanche diode (SPAD), the Toronto Computational Imaging Group picked up an unexplained 80-kilohertz (kHz) flicker.

Using their powerful free-running imaging sensor, they detected each individual photon as it arrived from various light sources and logged its precise arrival time, down to a trillionth of a second. After acquiring all the data hitting each pixel, the team applied an algorithm that allowed them to create videos reconstructing the light at any given moment over an extreme range, from seconds to picoseconds.

“You can zoom in at whatever timescale you want and play back a video: 30 frames per second, a thousand, a million, a billion,” said University of Toronto Computer Science Professor Kyros Kutulakos. “You don’t have to know in advance the timescale at which you want to observe a phenomenon.”
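
The idea can be illustrated with a minimal sketch, not the group's actual pipeline: once every photon's arrival time is logged, the same data can be re-binned after the fact at whatever frame rate you choose. The file name, frame rate, and variable names below are placeholders.

```matlab
% Minimal sketch, not the group's pipeline: re-bin logged photon arrival
% times (in seconds) at an arbitrary playback timescale.
timestamps = readmatrix("photon_timestamps.csv");  % hypothetical log of arrival times

fps = 1e6;                                         % choose any timescale: 30, 1e3, 1e6, 1e9 ...
edges = min(timestamps):1/fps:max(timestamps);     % frame boundaries at that rate
brightness = histcounts(timestamps, edges);        % photons per frame ~ intensity over time

plot(edges(1:end-1), brightness)                   % "play back" the single pixel at this rate
xlabel("Time (s)"), ylabel("Photons per frame")
```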

Before this, researchers could capture light propagating through a scene over a few nanoseconds but couldn’t simultaneously image incredibly fast and slow events.

Existing technologies are specialized to particular time regimes, explained Kutulakos’ colleague, Assistant Professor David Lindell. Conventional high-speed cameras can reach speeds up to around 1 million frames per second—fast enough to capture a speeding bullet—but moving to billions or trillions of frames per second requires very specialized cameras that cannot capture events lasting longer than a microsecond or so.

With a free-running imaging sensor, each individual photon from various light sources is detected and the precise arrival time is logged. (Video credit: University of Toronto)

The University of Toronto team used MATLAB® to acquire the individual photon timestamp data and to control the moving components in their setups for their imaging technique, which the group dubbed a “microscope for time.”

The single-pixel SPAD has a 60-millimeter by 60-millimeter detection head with a tiny sensor in the middle. When the SPAD picked up the unexplained 80 kHz signal, the researchers at first wondered whether they had merely encountered an artifact. On closer inspection, the team uncovered the source.

“It turns out that we have T8 LED replacements for fluorescent bulbs in the lab. Those actually flicker at 80 kilohertz,” Kutulakos said. “We didn’t even know this was happening.”
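
A back-of-the-envelope way to see how such a flicker reveals itself in timestamp data (a hedged sketch, not how the team analyzed it): bin the photon arrivals finely and inspect the spectrum of the binned counts. The input file and bin width are assumptions.

```matlab
% Illustrative only: a periodic light source shows up as a peak in the
% spectrum of finely binned photon counts.
timestamps = readmatrix("photon_timestamps.csv");   % assumed photon arrival times in seconds

dt = 1e-6;                                           % 1 us bins -> Nyquist frequency of 500 kHz
counts = histcounts(timestamps, 0:dt:max(timestamps));

[pxx, f] = periodogram(counts - mean(counts), [], [], 1/dt);  % Signal Processing Toolbox
plot(f/1e3, 10*log10(pxx))                           % an 80 kHz LED flicker appears as a peak
xlabel("Frequency (kHz)"), ylabel("Power (dB)")
```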

At the recent International Conference on Computer Vision (ICCV) in Paris, the team’s paper on passive ultra-wideband single-photon imaging received a prestigious award given to just two of the thousands of papers submitted by computer vision experts around the globe.

Potential applications for their technique include new types of 3D imaging and lidar systems as well as scientific imaging—for example, to capture biological events across multiple timescales or for astronomical observation of brief pulses of light that coincide with fast radio bursts.

Billions of Frames per Second

Humans experience light as continuous: We can imagine using a fast camera to capture slow-motion footage, and with higher- and higher-speed cameras, perhaps we could simply keep slowing down the playback indefinitely.

“But at some point, there’s a switch,” Lindell said. “Light isn’t continuous; it’s discrete. And the way we capture light is one photon at a time.”

Single-photon avalanche diodes became more widely available off the shelf only within the past decade. Their high cost meant that very few computer science and computer vision laboratories had access to them, Kutulakos noted. Recent sensor developments prompted the University of Toronto team to ask fresh questions about problems that scientists in other fields have touched on for decades.

Astronomers specializing in gamma-ray astronomy, for example, deployed detectors that pick up individual particles and timestamp them. But those astronomers were more interested in the periodic illumination from variable stars, since that was what the physics required, than in the exact time-varying function describing a star’s brightness, Kutulakos explained.

In the Toronto Computational Imaging Group lab, postdoc Sotiris Nousias and Ph.D. student Mian Wei first used a SPAD to capture timing information for photons from light-emitting diodes.

“We had the LEDs flickering and looked at the output stream,” Nousias said. “We observed that the stream actually had the same pattern as the flicker and then worked to find the mathematical connection. That was our inspiration for the project.”

Nousias and Wei, joint first authors on the ICCV paper, wondered what information they could obtain if they had access to all the photons. Scientists tend to operate SPADs in special synchronized ways. Instead, the team operated the sensor in asynchronous mode. From there, the researchers sought to connect discrete photon arrivals with an underlying continuous function that describes the time-varying intensity of light.

“We are passively collecting photons and trying to reconstruct the contribution from all light sources in the environment, whether the variation in intensity is fast or slow,” Kutulakos said.
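
Conceptually, and only as a hedged sketch of the general idea rather than the estimator in the paper, the photon arrivals can be treated as samples of a time-varying rate: binning them finely and smoothing at different bandwidths recovers slow or fast structure from the same stream. The bin width and smoothing windows below are arbitrary choices.

```matlab
% Sketch of the general idea: recover a time-varying intensity from discrete
% photon arrivals by smoothing fine-grained counts at different bandwidths.
% Not the paper's reconstruction algorithm.
timestamps = readmatrix("photon_timestamps.csv");   % assumed arrival times for a short capture

dt = 1e-9;                                           % nanosecond bins
counts = histcounts(timestamps, 0:dt:max(timestamps));

slowFlux = smoothdata(counts, "gaussian", 1e6);      % millisecond-scale variations (e.g., bulbs)
fastFlux = smoothdata(counts, "gaussian", 50);       % ~50 ns structure (e.g., laser pulses)
```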

In one experimental setup, the team captured several light sources simultaneously and then played them back at different speeds. The SPAD observed a single point on a white surface. Light from two pulsed lasers, one strobing at 3 megahertz and the other at 40 megahertz, and from a raster-scanning laser projector passed through a diffuser and hit that point. A smart lightbulb shone overhead.

They pulled a stream of timing information from the camera indicating when each photon hit the sensor and extracted intensity information from it. The team used MATLAB to steer the beam and obtain an image, to ensure pixel alignment in time, and to automate the entire data acquisition side.

“We’ve been working with single-photon sensors for a number of years and the MATLAB acquisition pipeline we have still works,” Lindell said. “It’s completely reliable.”

Specifically, they relied on Image Processing Toolbox™, Computer Vision Toolbox™, Signal Processing Toolbox™, Data Acquisition Toolbox™, and Parallel Computing Toolbox™ for the project. Nousias added that they also made a custom graphical interface to control scanning mirrors in some of their experiments.
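
As an example of the kind of automation described above, a raster scan of a pair of galvo mirrors might be driven with Data Acquisition Toolbox analog outputs. This is a hypothetical sketch; the device name, channels, and scan pattern are assumptions, not the team’s actual hardware or code.

```matlab
% Hypothetical sketch: drive two scanning-mirror axes with analog voltages
% while photon timestamps accumulate at each position.
d = daq("ni");                                   % connect to an (assumed) National Instruments device
addoutput(d, "Dev1", "ao0", "Voltage");          % mirror X axis
addoutput(d, "Dev1", "ao1", "Voltage");          % mirror Y axis

for x = linspace(-1, 1, 64)                      % simple 64 x 64 raster pattern
    for y = linspace(-1, 1, 64)
        write(d, [x y]);                         % steer the beam to (x, y)
        pause(0.01);                             % dwell while the SPAD logs photons
    end
end
```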

It turns out that modern SPADs can support far more computational imaging than the researchers previously imagined.

“We never thought we would be able to see the laser pulses flickering asynchronously in the environment,” Kutulakos said.


Experimental setup that captures multiple light sources on a single point on a white surface. (Video credit: University of Toronto)

Zooming Through Time

The team’s reconstruction let them zoom from roughly one-second timescales, where the laser projector’s flicker is visible, down nearly nine orders of magnitude to an individual nanosecond-scale pulse from the 3-megahertz laser. Their method also enabled them to recreate a video played on a laser TV projector that wasn’t in the line of sight. In this case, it was the classic black-and-white Metro-Goldwyn-Mayer roaring lion, recreated from just the raster-scan signal.

Another setup featured a pale pink handheld battery-operated fan against a white background. To demonstrate passive ultra-wideband videography, they illuminated the spinning fan with laser pulses. Unlike active imaging techniques that blur the fan, their technique clearly freezes the fan blades, which move on a far slower timescale than the strobed light. The scene remains visible even near the gigahertz scale, where a red wavefront sweeps over the seemingly frozen blades.

“Think of it as a very fast video camera,” Nousias said. “With a mobile camera, you take a snapshot. But in our case, we take a snapshot of a snapshot of a snapshot. We’re zooming through time.”

The team was already familiar with MATLAB when they began using imaging equipment for the project eight years ago. “All the tools are very easy to integrate and provide us with quick access to the data,” Nousias said. “If something goes wrong and we have to recapture the data, it’s easy to visualize and that saves us time.”

Wei also appreciated MATLAB support. “When you get new equipment, if you don’t have the necessary support to do the interfacing then you have to leverage different GUIs to get the data you want,” he said. “MATLAB gave us a convenient way to control all the component parts to do the data acquisition.”

Lindell agreed. “The equipment we’re using is quite heterogeneous. We built a single programming interface from existing libraries,” he said. “If we didn’t have MATLAB, we’d have to fuss around with different third-party libraries or extensions that people tried to build in other programming languages.”

Passive ultra-wideband videography freezes the fan blades moving at a different timescale than the strobed light. The blades remain visible near the gigahertz scale as a red wavefront sweeps over seemingly frozen fan blades. (Video credit: University of Toronto)

Wei estimated that, without access to the technical computing software, integrating each new component could take days to weeks. “Double-checking that the components worked as intended would add even more time,” he said.

“Because of our setups we can say, ‘Okay, I have this new Windows® machine. I’ll take the code that has been running before and then port it over,’” Wei continued. “Now someone else can make use of their code and run the system as well.”

Dynamic Imaging Possibilities

Receiving an award for best paper at the International Conference on Computer Vision represented a major professional achievement for the Toronto Computational Imaging Group researchers. Conference organizers selected just two papers from 8,068 submitted, Lindell said.

“In a day and age when deep learning dominates the conversation, this was a good reminder that there still is a need for research that helps us understand basic physical phenomena and how we can sense these phenomena using emerging technologies,” he said.

The researchers hope their breakthrough inspires new advancements. One area ripe for exploration is flash lidar, a technique that produces a high-resolution depth image from a diffused pulsed laser rather than from a scanning pulsed laser. Typically, this involves sending light into the scene, measuring how long it takes for that light to return, and using this timing information to measure distance.
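
The underlying arithmetic is simple. As a quick illustration of the conventional, synchronized case (not the asynchronous approach the team is exploring), the round-trip time of a pulse maps directly to distance; the example numbers below are made up.

```matlab
% Conventional time-of-flight: round-trip time maps directly to distance.
c = 299792458;          % speed of light, m/s
tof = 6.67e-9;          % example round-trip time, ~6.67 ns
depth = c * tof / 2     % ~1 m: the light travels out and back
```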

“One of the caveats is that you need to synchronize your light source with your system for this mechanism to work,” Wei explained. “Given our ability to measure signals asynchronously, we’re interested in understanding whether we could use an asynchronous light source to do flash lidar.”

In biology, events like protein folding or binding can happen across timescales from nanoseconds to milliseconds. Lindell said the team has been discussing their imaging techniques with biology experts at the university.

“The fact that these events are happening at these timescales is interesting to us, and the potential to observe them in a way that people haven’t been able to so far is enticing,” he said.

Computational refocusing might have applications in other sciences. Astrophysicists have wanted to understand what causes fast radio bursts (FRBs) ever since a radio telescope first detected the phenomenon 20 years ago. These mysterious energy blasts occur across the sky and last only milliseconds, but each one is incredibly powerful. Computational refocusing could enable the detection of FRBs in the optical domain.

“We don’t know when they will happen,” Nousias said. “You need to monitor the sky and then go back into the data at the nanosecond timescale to observe them. Now we have the tool to do this.”

Kutulakos anticipates that their SPAD method could help detect problems or defects in engineered mechanical components that standard inspections miss. When an engine has trouble, vibrations might indicate an issue, but the exact source may prove elusive without more information. By training a camera on the engine and collecting all the data, you could find that needle in a haystack, he said.

The group is also collaborating with microelectronics researchers to expand what the cameras can do. A single-pixel camera enables the team to produce videos from repetitive motions, such as spinning fan blades, but truly dynamic events call for a 2D sensor.

Next, the team plans to work on improving their technique. Reconstructing the signals remains fairly time-consuming and computationally inefficient, Lindell noted. So, they’re working on capturing events across all the timescales in real time.

“I think there will be results that we didn’t anticipate as we point this camera around the lab or even outside,” Lindell said. “I’m excited to see what we find. It will be a new way of capturing the world.”

