Exploring the World's Largest Ecosystem - 500 Meters Below the Ocean Surface
Nat Geo Develops Underwater Robotic Camera to Explore the Deep
Ancient fishermen long worried about what lay beneath the ocean’s waves. Mythological sea monsters haunted early sailors, and folklore regularly told of immense creatures that destroyed unsuspecting ships. Little did these sailors know that a vast mass of sea creatures was rising toward the surface each night to feed, often right under their ships.
This mass, the deep scattering layer (DSL), was first detected by World War II sonar operators. It was so large that it was mistaken for the bottom of the ocean.
The ocean’s deep scattering layer confounded the experts. During the day it could be located more than 500 meters below the ocean surface. At night, it would rise almost to the surface. But what was it, and why did it move at regular intervals? Those questions stumped scientists for decades following the discovery.
Based on that movement, scientists knew the layer was made up of living organisms. Understanding those organisms would add greatly to fundamental knowledge of oceanography and of the makeup of life under the sea. The challenge, however, was how to study something that spends much of its time in the deep, dark regions of the ocean.
Early attempts to study the scattering layer involved elaborate fishing nets designed by oceanographers. First, they lowered sound equipment to measure the reverberations and identify the layer’s depth. Next, they positioned the nets at the depth from which the sound was scattering. When they hauled the nets in, they found an array of organisms, including shrimp and lanternfish.
Manned submersibles were then used during the 1960s to study the scattering layer, but these craft could operate for only a limited amount of time. These studies revealed that the DSL is composed of millions of marine organisms, such as the bioluminescent lanternfish caught in the early net hauls. While the individual creatures are small, most less than 4 inches (about 10 cm) in length, their sheer numbers caused the early confusion among ships’ sonar operators.
The DSL rises and falls each day as the organisms in the layer swim to the nutrient-rich surface waters at night to feed and then, during daylight, dive to depths of 300 to 500 meters, where there are far fewer predators. This pattern is known as diel vertical migration.
This daily traverse of the depths makes studying and understanding the layer extremely difficult. The deepest part of the layer’s daily dive is far too deep for divers. No light reaches these depths, the pressure is deadly to humans, and the temperature hovers between 0 and 5 degrees Celsius. In short, early studies provided basic insight into the DSL, but many questions remained about its complexity.
National Geographic Explores the DSL
For more than 130 years, the National Geographic Society has invested in science, exploration, storytelling, and education to further our understanding of the planet.
The exploration technology team, part of National Geographic Labs, builds and deploys breakthrough systems and hardware to accelerate exploration. Every year National Geographic gathers new insight into the complexities of the ocean, including the deep scattering layer. Unfortunately, as early explorers realized, investigating the ocean depths is far from simple.
The team at National Geographic decided to create a robotic camera to study the DSL, basing their design on an existing camera called the Dropcam. The Dropcam was designed to be “weighted and baited”: a weight was tied to the camera and bait was used to attract sea life. When image capture was complete, the robotic camera dropped the weight and floated to the surface. The Dropcam was great for imaging the inhabitants of the deepest parts of the ocean, but it would be of limited use for studying the DSL.
To acquire the desired insight, the team needed to design a camera that would move up and down through the water column in concert with the DSL. To accomplish this, they needed to track the depth of the DSL and adjust the robot’s buoyancy to match that vertical location, so that the camera would drift within the layer as it performed its daily migration. But how do you remotely change the buoyancy of a robotic camera?
For this design challenge, the team turned to a discovery from around 250 B.C.: Archimedes’ principle. In his work On Floating Bodies, Archimedes of Syracuse stated that an object immersed in a fluid is buoyed up by a force equal to the weight of the fluid displaced by the object. So, to change the buoyancy, the team would need to change the volume of the robotic camera.
The team equipped the new robotic camera, called the Driftcam, with a buoyancy engine that changes its volume. This type of depth control had the added advantage of quiet operation, reducing noise that could scare off nearby fish.
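As a rough numerical illustration of the principle, a few lines of MATLAB show how a small change in displaced volume flips the net force on a fixed-mass vehicle from sinking to rising. The mass and volumes below are assumptions chosen for illustration, not Driftcam specifications.

```matlab
% Net vertical force on a fixed-mass vehicle as its displaced volume changes
% (Archimedes' principle). All values are illustrative, not Driftcam specs.
rho_sea = 1025;                        % seawater density, kg/m^3
g       = 9.81;                        % gravitational acceleration, m/s^2
m       = 90;                          % vehicle mass, kg
V       = linspace(0.086, 0.090, 5)';  % candidate displaced volumes, m^3

F_buoy = rho_sea * g * V;              % buoyant force = weight of displaced seawater, N
F_net  = F_buoy - m * g;               % positive -> vehicle rises, negative -> it sinks

table(V, F_net, 'VariableNames', {'Volume_m3', 'NetForce_N'})
```

The candidate volumes differ by only a few liters, yet the sign of the net force changes, which is why a small pumped change in volume is enough to send a vehicle up or down the water column.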
The Driftcam
The early prototype Driftcam struggled with control issues, bobbing around in the testing tank at the University of Maryland’s Neutral Buoyancy Research Facility. One possible explanation was trapped air in the system.
To correct the control system, the team used the ideal gas law to determine the potential size of the bubble based on the data from the buoyancy engine. They ran a MATLAB® simulation and determined that the system did have an air bubble. After eliminating the trapped air, the Driftcam operated correctly.
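A minimal sketch of that kind of check, assuming an isothermal ideal gas (so P1V1 = P2V2) and a bubble trapped at surface pressure; the bubble size and depth range are illustrative values, not the team’s data:

```matlab
% Estimate how a trapped air bubble shrinks with depth under the isothermal
% ideal gas law (P1*V1 = P2*V2). All values are illustrative assumptions.
rho_sea = 1025;                        % seawater density, kg/m^3
g       = 9.81;                        % gravitational acceleration, m/s^2
P_atm   = 101325;                      % pressure at the surface, Pa
V_surf  = 50e-6;                       % assumed bubble volume at the surface, m^3 (50 mL)

depth = 0:10:500;                      % depth below the surface, m
P     = P_atm + rho_sea * g * depth;   % absolute pressure at each depth, Pa
V     = V_surf * P_atm ./ P;           % compressed bubble volume at each depth, m^3

plot(depth, V * 1e6)                   % bubble volume in mL versus depth
xlabel('Depth (m)'), ylabel('Bubble volume (mL)')
title('A trapped air bubble compresses with depth, shifting buoyancy')
```

Because a trapped bubble compresses as pressure rises, the vehicle’s displaced volume, and therefore its buoyancy, changes with depth in a way the buoyancy engine alone cannot account for; a mismatch of that kind is one way a simulation can point to trapped air.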
The Driftcam weighs over 90 kg (200 lb) and receives signals from an echosounder over a digital acoustic link to locate the DSL. The robot descends to the correct depth so that the camera can image the creatures in the layer. Employing a 2 million ISO camera, the Driftcam can take images in very low light. This is critical because the ecosystem lives at depths that sunlight barely reaches, and the team wanted to minimize its use of artificial light, which could scare away some species while attracting others and skew the observations.
“The Driftcam saves the images as RAW TIFFs,” says Berkenpas. “So, we use computer vision and image processing tools from MathWorks to transform them into a playable video.”
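A minimal sketch of what such a conversion could look like in MATLAB, assuming a folder of TIFF frames; the folder name, output file, frame rate, and normalization step are placeholders, not the team’s actual workflow:

```matlab
% Assemble a folder of TIFF frames into a playable video.
% Folder name, output file, and frame rate are assumptions for illustration.
frames = dir(fullfile('driftcam_frames', '*.tif'));       % hypothetical frame folder
vw = VideoWriter('driftcam_dsl.avi', 'Motion JPEG AVI');  % output video file
vw.FrameRate = 15;
open(vw);
for k = 1:numel(frames)
    img = imread(fullfile(frames(k).folder, frames(k).name));
    img = im2uint8(mat2gray(img));   % stretch the raw sensor range to 8 bits for display
    writeVideo(vw, img);
end
close(vw);
```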
Next Steps in Exploring the DSL
The Driftcams have been deployed more than 30 times in locations including Puerto Rico and the Gulf of California, and they have performed well. But each deployment requires a research vessel and crew to stay on site.
The next project for the team involves updating the Driftcams so they can be deployed in autonomous swarms. The goal is for the units to communicate with one another and treat locating and tracking the deep scattering layer as an optimization problem. National Geographic Labs is collaborating with the University of Maryland on this research, thanks to funding from NOAA.
Creating an autonomous swarm of cooperating cameras would eliminate the need for a boat-deployed echosounder and crew to stay on site, and help the team gather even more information on the largest ecosystem on Earth.
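As a purely conceptual sketch of that optimization framing, the simulation below has three simulated units sample acoustic backscatter at their own depths, share the readings, and converge on the depth with the strongest return. The backscatter profile, gains, and update rules are invented for illustration and do not represent the team’s actual algorithm.

```matlab
% Conceptual sketch only: three simulated units cooperatively search for the
% depth of strongest acoustic backscatter. All values and rules are invented.
dsl_depth   = 420;                                          % assumed layer depth, m
backscatter = @(z) exp(-((z - dsl_depth).^2) / (2*40^2));   % assumed intensity profile

z  = [200; 350; 500];   % initial depths of the three simulated units, m
dz = 5;                 % probe step for a local slope estimate, m
for iter = 1:100
    I = backscatter(z);                                     % each unit samples locally
    [~, best] = max(I);                                     % units share their readings
    slope = (backscatter(z(best)+dz) - backscatter(z(best)-dz)) / (2*dz);
    z = z + 0.2 * (z(best) - z);       % every unit drifts toward the best-performing depth
    z(best) = z(best) + 1500 * slope;  % the best unit also climbs its local gradient
end
disp(z)                 % all units end up near the assumed layer depth
```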