The bleeding edge: Researchers from Princeton University and the University of Washington have developed a camera the size of a large grain of salt. Micro-cameras like this typically produce poor-quality images, but this group of researchers found a way to produce sharp, full-color images comparable to those of conventional cameras 500,000 times larger.
The camera pairs its imaging hardware with computational processing to produce stunning results compared to previous state-of-the-art equipment. The main innovation is an optical technology called a "metasurface."
In traditional cameras, a series of curved lenses bends light rays into focus on the sensor. A metasurface, which can be produced in the same way as integrated circuits, is only half a millimeter wide and contains 1.6 million cylindrical posts. These tiny columns are each roughly the size of the human immunodeficiency virus.
"Each post has a unique geometry and functions as an optical antenna," notes Phys.org. "Varying the design of each post is necessary to properly shape the entire optical wavefront."
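The idea that each post's geometry sets a local phase delay, and that those delays together approximate a lens, can be sketched numerically. Everything below is an illustrative simplification: the wavelength, post height, linear effective-index model, and hyperbolic lens profile are textbook assumptions, not values or methods from the study.

```python
import numpy as np

# Toy model: light passing through a post of effective refractive index
# n_eff (set by the post's diameter) accumulates a phase delay. Real
# metasurface design requires full electromagnetic simulation; this is
# only a thin-element approximation.
WAVELENGTH = 0.55e-6   # green light, metres (assumed)
POST_HEIGHT = 0.6e-6   # post height, metres (assumed)

def phase_delay(n_eff):
    """Phase (radians) added by one post relative to free space."""
    return (2 * np.pi / WAVELENGTH) * (n_eff - 1) * POST_HEIGHT

# A flat lens is built by choosing, at each radial position r, the post
# whose delay best matches the ideal hyperbolic lens phase profile:
def lens_phase(r, focal_length):
    return -(2 * np.pi / WAVELENGTH) * (np.sqrt(r**2 + focal_length**2) - focal_length)

radii = np.linspace(0, 0.25e-3, 5)   # positions across a 0.5 mm aperture
target = lens_phase(radii, focal_length=1e-3) % (2 * np.pi)
print(np.round(target, 2))           # per-post phase targets, wrapped to [0, 2*pi)
```

In a real design each wrapped phase target would be looked up in a library of simulated post geometries; here the lookup step is omitted.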
Machine learning-based algorithms process the posts' interactions with light and produce higher-quality images with the widest field of view of any comparable metasurface camera designed to date.
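The division of labor between the tiny optics and the software can be illustrated with a toy reconstruction: the optics deliver a heavily blurred, noisy measurement, and computation inverts the blur. The actual system uses a learned neural reconstruction; the classical Wiener deconvolution below is only a stand-in, and the Gaussian blur, noise level, and regularization constant are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0                      # toy scene: a bright square

# Assumed point-spread function (PSF): a wide Gaussian blur standing in
# for the metasurface's imperfect focusing.
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / (2 * 6.0**2))
psf /= psf.sum()

# Forward model: capture = scene convolved with PSF, plus sensor noise.
H = np.fft.fft2(np.fft.ifftshift(psf))
capture = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
capture += 0.01 * rng.standard_normal(capture.shape)

# Wiener filter: a regularized inverse of the blur (stand-in for the
# learned reconstruction network).
snr = 1e-3
recovered = np.real(np.fft.ifft2(
    np.fft.fft2(capture) * np.conj(H) / (np.abs(H)**2 + snr)))

err_raw = np.mean((capture - scene)**2)        # error of the raw capture
err_rec = np.mean((recovered - scene)**2)      # error after deconvolution
```

The point is structural: a simple optic plus a good inverse algorithm can stand in for a stack of precision lenses, which is the trade the metasurface camera makes.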
Additionally, previous cameras of this type required pure laser light and other laboratory conditions to produce an image. Because its optical surface is integrated with signal processing algorithms, this device can capture images with natural light, which makes it more practical. The researchers plan to use it in non-invasive medical procedures and as compact sensors for small robots.
The scientists compared the images captured with their technology to those from previous methods, and the difference was night and day (image above). They also pitted it against a traditional camera with optics made up of six refractive lenses, and other than some blur at the edges, the images were comparable.
“It was a challenge to design and configure these little microstructures to do what you want,” said Princeton Ph.D. student Ethan Tseng, who co-led the study published in Nature Communications. “For this specific task of capturing large field-of-view RGB images, it’s a challenge because there are millions of these small microstructures, and it’s not clear how to optimally design them.”
To work out the configurations of the posts, they designed a computer simulation to test different nano-antenna layouts. However, modeling all 1.6 million posts can consume "massive" amounts of RAM and time, so they scaled down the simulation to a proxy that adequately approximates the image-rendering capabilities of the metasurface.
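A back-of-the-envelope calculation shows why a full-scale simulation is memory-hungry: the optical field must be sampled finely across all 1.6 million posts, and each sample is a complex number. The sampling density and byte counts below are illustrative assumptions, not figures from the paper.

```python
import numpy as np  # not strictly needed here, kept for consistency with the article's other sketches

def field_memory_gb(n_posts, samples_per_post):
    """Memory for one complex-valued optical field sampled over the aperture.

    complex128 = 16 bytes per sample (assumed precision).
    """
    return n_posts * samples_per_post * 16 / 1e9

# Full-resolution field over all 1.6 million posts, at an assumed
# 100 spatial samples per post:
full = field_memory_gb(1_600_000, samples_per_post=100)

# A 100x smaller proxy aperture, as a stand-in for the team's
# scaled-down simulation:
reduced = field_memory_gb(1_600_000 // 100, samples_per_post=100)

print(f"full-resolution field: ~{full:.2f} GB")    # per wavelength, per angle
print(f"scaled-down proxy:     ~{reduced:.4f} GB")
```

And a design optimization needs many such fields at once (multiple wavelengths, incidence angles, and gradient buffers), so the full-resolution cost multiplies quickly.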
The team's next goal is to add more computing capabilities to the technology. Improving image quality is the obvious priority, but they also want to incorporate object detection and other sensing capabilities to make the camera viable for medical and commercial use.
As mentioned earlier, endoscopy and robotics are just a few practical applications of metasurfaces. An arguably more exciting use would be to eliminate the camera bump on smartphones.
"We could turn individual surfaces into ultra-high-resolution cameras, so you wouldn't need three cameras on the back of your phone, but the whole back of your phone would become one giant camera," said Felix Heide, the study's lead author and assistant professor of computer science at Princeton. "We can think of completely different ways to build devices in the future."