Researchers at Princeton University and the University of Washington have created an ultracompact camera the size of a coarse grain of salt that produces crisp, full-color images on par with those of a conventional compound camera lens 500,000 times larger in volume, Princeton University reported on its website.
In a paper published November 29 in Nature Communications, the researchers posit that the system "could enable minimally invasive endoscopy with medical robots to diagnose and treat diseases, and improve imaging for other robots with size and weight constraints."
Cameras this small could help detect problems inside the human body and provide sensing for very small robots.
The camera uses a new optical system that relies on a technology called a metasurface, which can be manufactured like a computer chip. Only half a millimeter wide, the metasurface is studded with 1.6 million cylindrical posts, each roughly the size of the human immunodeficiency virus (HIV).
"Each post has a unique geometry and functions like an optical antenna. Varying the design of each post is necessary to correctly shape the entire optical wavefront. With the help of machine learning-based algorithms, the posts’ interactions with light combine to produce the highest-quality images and widest field of view for a full-color metasurface camera developed to date," the report notes.
According to Felix Heide, the study’s senior author and an assistant professor of computer science at Princeton, a key innovation in the camera’s creation was the integrated design of the optical surface and the signal processing algorithms that produce the image. This enhanced the camera’s performance in natural light conditions, in contrast to previous metasurface cameras that required the pure laser light of a laboratory or other ideal conditions to produce high-quality images.
“It’s been a challenge to design and configure these little nano-structures to do what you want,” said Ethan Tseng, a computer science PhD student at Princeton who co-led the study. “For this specific task of capturing large field of view RGB images, it was previously unclear how to co-design the millions of nano-structures together with post-processing algorithms.”
Co-lead author Shane Colburn tackled this challenge by creating a computational simulator to automate testing of different nano-antenna configurations. Because of the number of antennas and the complexity of their interactions with light, this type of simulation can use “massive amounts of memory and time,” said Colburn. He developed a model that efficiently approximates the metasurfaces’ image production capabilities with sufficient accuracy.
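The core idea behind this kind of co-design can be illustrated with a deliberately tiny sketch (an illustration of the general concept only, not the authors' simulator or method): stand in for the metasurface with a single optical parameter (a Gaussian blur width) and for the learned reconstruction network with a single algorithmic parameter (a Wiener-filter regularizer), then optimize both jointly to minimize end-to-end reconstruction error on a simulated capture. The names, parameters, and scale here are all hypothetical simplifications.

```python
import numpy as np

# Toy co-design sketch (NOT the published system): jointly optimize one
# optical parameter (blur width, standing in for millions of nano-post
# geometries) and one reconstruction parameter (a Wiener regularizer,
# standing in for a learned deconvolution network).

rng = np.random.default_rng(0)
n = 64
scene = rng.random(n)                  # 1-D stand-in for a test image
noise = 0.01 * rng.standard_normal(n)  # fixed sensor-noise sample

def otf(width):
    """Optical transfer function of a circular Gaussian blur."""
    x = np.arange(n) - n // 2
    k = np.exp(-x**2 / (2.0 * width**2))
    k /= k.sum()
    return np.fft.fft(np.fft.ifftshift(k))  # zero-phase kernel

def end_to_end_loss(width, reg):
    """Simulate capture through the optic, reconstruct, return MSE."""
    K = otf(width)
    meas = np.fft.ifft(np.fft.fft(scene) * K).real + noise
    est = np.fft.ifft(np.fft.fft(meas) * np.conj(K)
                      / (np.abs(K) ** 2 + reg)).real
    return float(np.mean((est - scene) ** 2))

width, reg = 3.0, 0.5      # deliberately poor starting point
loss0 = end_to_end_loss(width, reg)
eps, lr = 1e-4, 0.1

# Finite-difference gradient descent over BOTH parameters at once --
# the key point is that optic and algorithm improve together, so the
# optic is shaped for what the reconstruction can undo, and vice versa.
for _ in range(300):
    g_w = (end_to_end_loss(width + eps, reg)
           - end_to_end_loss(width - eps, reg)) / (2 * eps)
    g_r = (end_to_end_loss(width, reg + eps)
           - end_to_end_loss(width, reg - eps)) / (2 * eps)
    width = max(width - lr * g_w, 0.5)  # keep parameters physical
    reg = max(reg - lr * g_r, 1e-3)

loss1 = end_to_end_loss(width, reg)
print(f"loss before: {loss0:.4f}  after joint optimization: {loss1:.6f}")
```

The real system replaces this two-parameter toy with millions of nano-structure parameters, a physically accurate light-propagation model, and neural post-processing, but the optimization principle — differentiating a simulated imaging pipeline end to end — is the same one the co-design challenge refers to.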
Colburn conducted the work as a PhD student at the University of Washington Department of Electrical & Computer Engineering (UW ECE), where he is now an affiliate assistant professor.
Coauthor James Whitehead, a PhD student at UW ECE, fabricated the metasurfaces, which are based on silicon nitride, a glass-like material that is compatible with standard semiconductor manufacturing methods used for computer chips — meaning that a given metasurface design could be easily mass-produced at lower cost than the lenses in conventional cameras.
“Although the approach to optical design is not new, this is the first system that uses a surface optical technology in the front end and neural-based processing in the back,” said Joseph Mait, a consultant at Mait-Optik and a former senior researcher and chief scientist at the US Army Research Laboratory.
“The significance of the published work is completing the Herculean task to jointly design the size, shape and location of the metasurface’s million features and the parameters of the post-detection processing to achieve the desired imaging performance,” added Mait, who was not involved in the study.
Heide and his colleagues are now working to add more nuanced computational abilities to the camera. They are aiming to equip the camera with object detection and other sensing modalities beneficial for medicine and robotics.
Moreover, Heide envisions using ultracompact imagers to create “surfaces as sensors.” “We could turn individual surfaces into cameras that have ultra-high resolution, so you wouldn’t need three cameras on the back of your phone anymore, but the whole back of your phone would become one giant camera. We can think of completely different ways to build devices in the future,” he said.
Besides Tseng, Colburn, Whitehead, Majumdar and Heide, the study’s authors include Luocheng Huang, a PhD student at the University of Washington; and Seung-Hwan Baek, a postdoctoral research associate at Princeton.