
The optimisation of 3D imaging

2nd December 2015
Enaie Azambuja

MIT researchers have shown that by exploiting the polarisation of light — the physical phenomenon behind polarised sunglasses and most 3D movie systems — they can increase the resolution of conventional 3D imaging devices as much as 1,000 times. The technique could lead to high-quality 3D cameras built into cellphones, and perhaps to the ability to snap a photo of an object and then use a 3D printer to produce a replica.

Image: By combining the information from the Kinect depth frame in (a) with polarised photographs, MIT researchers reconstructed the 3D surface shown in (c). Polarisation cues can allow coarse depth sensors like the Kinect to achieve laser-scan quality (b).

“Today, they can miniaturize 3D cameras to fit on cellphones,” says Achuta Kadambi, a PhD student in the MIT Media Lab and one of the system’s developers. “But they make compromises to the 3D sensing, leading to very coarse recovery of geometry. That’s a natural application for polarisation, because you can still use a low-quality sensor, and adding a polarising filter gives you something that’s better than many machine-shop laser scanners.”

The researchers describe the new system, called Polarized 3D, in a paper presented at the International Conference on Computer Vision in December. Kadambi is the first author, and he’s joined by his thesis advisor, Ramesh Raskar, associate professor of media arts and sciences in the MIT Media Lab; Boxin Shi, who was a postdoc in Raskar’s group and is now a research fellow at the Rapid-Rich Object Search Lab; and Vage Taamazyan, a master’s student at the Skolkovo Institute of Science and Technology in Russia, which MIT helped found in 2011.

If an electromagnetic wave can be thought of as an undulating squiggle, polarisation refers to the squiggle’s orientation. It could be undulating up and down, side to side, or somewhere in between. Polarisation also affects the way in which light bounces off physical objects. If light strikes an object squarely, much of it will be absorbed, but whatever reflects back will have the same mix of polarisations that the incoming light did. At wider angles of reflection, however, light within a certain range of polarisations is more likely to be reflected. The polarisation of reflected light therefore carries information about the geometry of the objects it has struck.
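
Shape-from-polarisation methods, the family this work belongs to, make that relationship concrete. As a rough sketch (a standard model for smooth surfaces, not a formula quoted from the paper itself), the brightness of a surface point seen through a linear polariser at angle \(\phi_{\mathrm{pol}}\) varies sinusoidally:

\[
I(\phi_{\mathrm{pol}}) = \frac{I_{\max} + I_{\min}}{2}
  + \frac{I_{\max} - I_{\min}}{2}\,\cos\!\bigl(2(\phi_{\mathrm{pol}} - \varphi)\bigr),
\qquad
\rho = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}},
\]

where the phase \(\varphi\) tracks the azimuth of the surface normal (only up to a 180° flip, the ambiguity discussed below) and the degree of polarisation \(\rho\) constrains how steeply the surface is tilted, through the Fresnel equations and the material’s refractive index.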

This relationship has been known for centuries, but it has been hard to do anything with, because of a fundamental ambiguity in polarised light. Light with a particular polarisation, reflecting off a surface with a particular orientation and passing through a polarising lens, is indistinguishable from light with the opposite polarisation reflecting off a surface with the opposite orientation. To resolve this ambiguity, the researchers use coarse depth estimates provided by some other method, such as the time a light signal takes to reflect off an object and return to its source. Even with this added information, calculating surface orientation from measurements of polarised light is complicated, but it can be done in real time by a graphics processing unit, the type of special-purpose graphics chip found in most video game consoles.
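
To illustrate how a coarse depth map can break that ambiguity, here is a minimal sketch (not the authors’ published algorithm; the function names and the crude normal estimate are assumptions for illustration). Per pixel, the polarisation cue leaves two candidate orientations 180 degrees apart, and the noisy depth-derived normals are used only to vote between them:

import numpy as np

def coarse_normals(depth):
    # Crude per-pixel surface normals from a noisy depth map via finite
    # differences (an image-plane approximation; only the rough direction
    # is needed to break the 180-degree ambiguity).
    dz_dv, dz_du = np.gradient(depth.astype(np.float64))
    n = np.stack([-dz_du, -dz_dv, np.ones_like(dz_du)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def disambiguate_azimuth(pol_azimuth, coarse_depth):
    # Keep, per pixel, whichever of the two candidate azimuths
    # (pol_azimuth or pol_azimuth + pi) better agrees with the coarse normals.
    n = coarse_normals(coarse_depth)
    depth_azimuth = np.arctan2(n[..., 1], n[..., 0])
    candidates = np.stack([pol_azimuth, pol_azimuth + np.pi], axis=0)
    # Angular distance to the depth-derived azimuth, wrapped to [-pi, pi].
    diff = np.abs((candidates - depth_azimuth + np.pi) % (2 * np.pi) - np.pi)
    return np.where(diff[0] <= diff[1], candidates[0], candidates[1])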

The researchers’ experimental setup consisted of a Microsoft Kinect — which gauges depth using reflection time — with an ordinary polarising photographic lens placed in front of its camera. In each experiment, the researchers took three photos of an object, rotating the polarising filter each time, and their algorithms compared the light intensities of the resulting images. On its own, at a distance of several meters, the Kinect can resolve physical features as small as a centimeter or so across. But with the addition of the polarisation information, the researchers’ system could resolve features in the range of hundreds of micrometers, or one-thousandth the size.
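
To give a sense of what comparing the light intensities involves, here is a minimal sketch using three filter orientations (0°, 45° and 90° are a common minimal choice; the exact angles and processing in the paper may differ). The three images pin down the sinusoid that brightness traces as the filter rotates, from which the degree and angle of linear polarisation follow:

import numpy as np

def polarisation_cues(i0, i45, i90):
    # Per-pixel degree and angle of linear polarisation from three photos
    # taken with the polariser at 0, 45 and 90 degrees. Through an ideal
    # linear polariser, intensity follows 0.5 * (S0 + S1*cos(2t) + S2*sin(2t)),
    # so three angles are enough to recover the sinusoid.
    i0, i45, i90 = (np.asarray(x, dtype=np.float64) for x in (i0, i45, i90))
    s0 = i0 + i90                  # total intensity
    s1 = i0 - i90                  # polarisation along the 0/90-degree axes
    s2 = 2.0 * i45 - i0 - i90      # polarisation along the 45/135-degree axes
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of polarisation
    aolp = 0.5 * np.arctan2(s2, s1)                        # angle of polarisation
    return dolp, aolp

The recovered angle feeds the azimuth estimate (with its 180° ambiguity), while the degree of polarisation constrains how steeply the surface is tilted.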

A mechanically rotated polarisation filter would probably be impractical in a cellphone camera, but grids of tiny polarisation filters that can overlay individual pixels in a light sensor are commercially available. Capturing three pixels’ worth of light for each image pixel would reduce a cellphone camera’s resolution, but no more than the color filters that existing cameras already use.
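
For the on-sensor alternative, the idea is directly analogous to a colour filter array: each pixel sits behind its own tiny polariser, and the raw frame is split into per-orientation sub-images at reduced resolution. The 2x2 layout below is a hypothetical example for illustration, not any particular commercial sensor’s pattern:

import numpy as np

def split_polarisation_mosaic(raw):
    # Split a raw frame from a hypothetical 2x2 on-pixel polariser mosaic
    # into four quarter-resolution images, one per filter orientation,
    # much like separating the colour channels of a Bayer mosaic.
    # Assumed layout: 90/45 degrees on even rows, 135/0 degrees on odd rows.
    return {
        90:  raw[0::2, 0::2],
        45:  raw[0::2, 1::2],
        135: raw[1::2, 0::2],
        0:   raw[1::2, 1::2],
    }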

The new paper also offers the tantalizing prospect that polarisation systems could aid the development of self-driving cars. Today’s experimental self-driving cars are, in fact, highly reliable under normal illumination conditions, but their vision algorithms go haywire in rain, snow, or fog. That’s because water particles in the air scatter light in unpredictable ways, making the scene much harder for a vision system to interpret.

“The work fuses two 3D sensing principles, each having pros and cons,” says Yoav Schechner, an associate professor of electrical engineering at Technion — Israel Institute of Technology in Haifa, Israel. “One principle provides the range for each scene pixel: This is the state of the art of most 3D imaging systems. The second principle does not provide range. On the other hand, it derives the object slope, locally. In other words, per scene pixel, it tells how flat or oblique the object is.”
