Femto-photography is a term for ultra-high-speed imaging. Femto-photography of macroscopic objects was first demonstrated by a team at the MIT Media Lab led by Ramesh Raskar. Before that, the term had been used for certain proposed experimental procedures in nuclear physics.

In their publications, Raskar’s team claims to capture exposures so short that light traverses only about 0.6 mm during a single exposure (equal to only a few picoseconds, or thousands of femtoseconds), by combining available high-end research equipment with sophisticated sampling algorithms. Raskar presented his team’s feat at TEDGlobal 2012, reaching a wider audience through a viral video.
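
As a quick sanity check on those numbers (simple arithmetic with the speed of light, not a figure from the team’s publications), the time for light to cross 0.6 mm is

$$ t = \frac{d}{c} = \frac{0.6 \times 10^{-3}\ \text{m}}{3 \times 10^{8}\ \text{m/s}} \approx 2 \times 10^{-12}\ \text{s} = 2\ \text{ps}, $$

which matches the roughly two-picosecond effective exposure quoted from MIT below.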

In his demonstration, he shows a 1 mm wide light “bullet”, created by flashing a laser for one trillionth of a second, passing through a bottle in darkness. As the beam of light hits the other end of the bottle, the photons disperse and ripple through the entire body before exploding upon contact with the cap. The whole phenomenon takes place in less than a nanosecond.

The femto-photography technique allowed Raskar’s team to, among other things, show the path of a packet of light traversing a Coca-Cola bottle, and to image an object hidden behind a corner by reconstructing individual photon reflection paths.

The MIT website says:

We have built an imaging solution that allows us to visualize the propagation of light. The effective exposure time of each frame is two trillionths of a second and the resultant visualization depicts the movement of light at roughly half a trillion frames per second. Direct recording of reflected or scattered light at such a frame rate with sufficient brightness is nearly impossible. We use an indirect ‘stroboscopic’ method that records millions of repeated measurements by careful scanning in time and viewpoints. Then we rearrange the data to create a ‘movie’ of a nanosecond-long event.

The device has been developed by the MIT Media Lab’s Camera Culture group in collaboration with the Bawendi Lab in the Department of Chemistry at MIT. A laser pulse that lasts less than one trillionth of a second is used as a flash, and the light returning from the scene is collected by a camera at a rate equivalent to roughly half a trillion frames per second. However, due to very short exposure times (roughly two trillionths of a second) and the narrow field of view of the camera, the video is captured over several minutes by repeated and periodic sampling.

The new technique, which we call Femto Photography, consists of femtosecond laser illumination, picosecond-accurate detectors and mathematical reconstruction techniques. Our light source is a Titanium Sapphire laser that emits pulses at regular intervals, every ~13 nanoseconds. These pulses illuminate the scene and also trigger our picosecond-accurate streak tube, which captures the light returned from the scene. The streak camera has a reasonable field of view in the horizontal direction but a very narrow one (roughly equivalent to one scan line) in the vertical dimension. In each recording, we can only capture a ‘1D movie’ of this narrow field of view. In the movie, we record roughly 480 frames, each with a roughly 1.71 picosecond exposure time. Through a system of mirrors, we orient the view of the camera towards different parts of the object and capture a movie for each view. We maintain a fixed delay between the laser pulse and our movie start time. Finally, our algorithm uses this captured data to compose a single 2D movie of roughly 480 frames, each with an effective exposure time of 1.71 picoseconds.
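
As a rough illustration of how the per-view ‘1D movies’ could be rearranged into a 2D movie, here is a minimal Python sketch. It is my own simplification, not the team’s code: the array shapes, the viewpoint count and the function name assemble_2d_movie are assumptions, and the real pipeline also involves calibration and alignment of the scanned views.

```python
import numpy as np

def assemble_2d_movie(streak_movies):
    """Rearrange per-viewpoint 1D streak movies into one 2D movie.

    streak_movies: array of shape (num_views, num_frames, line_width), where
    each viewpoint contributes a ~480-frame movie of a single scan line.
    Stacking the scan lines that share the same time index yields one 2D
    frame per ~1.71 ps time slot.
    """
    streak_movies = np.asarray(streak_movies)
    # Move the time axis to the front: frame t is the stack of all scan
    # lines recorded at relative time t across the scanned viewpoints.
    return np.transpose(streak_movies, (1, 0, 2))  # (num_frames, num_views, line_width)

# Toy usage with made-up dimensions: 60 scanned viewpoints, 480 frames,
# a 672-pixel-wide scan line.
movie = assemble_2d_movie(np.random.rand(60, 480, 672))
print(movie.shape)  # (480, 60, 672): 480 frames, each a 60 x 672 image
```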

Beyond the potential in artistic and educational visualization, applications include industrial imaging to analyze faults and material properties, scientific imaging for understanding ultrafast processes, and medical imaging to reconstruct sub-surface elements, i.e., ‘ultrasound with light’. In addition, photon path analysis will allow new forms of computational photography, e.g., to render and re-light photos using computer graphics techniques.

Frequently Asked Questions

How can one take a photo of photons in motion at a trillion frames per second?
We use a picosecond-accurate detector: a special imager called a streak tube that behaves like an oscilloscope, with corresponding trigger and deflection of beams. A light pulse enters the instrument through a narrow slit along one direction. It is then deflected in the perpendicular direction so that photons that arrive first hit the detector at a different position compared to photons that arrive later. The resulting image forms a “streak” of light. Streak tubes are often used in chemistry or biology to observe millimeter-sized objects but rarely for free-space imaging.
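
To make the time-to-position idea concrete, here is a toy numerical model of a streak image. It is only a sketch under simplified assumptions (a perfectly linear sweep, no blur or gain noise); the sweep rate and image dimensions are borrowed from the numbers quoted earlier, and the function name streak_image is made up.

```python
import numpy as np

def streak_image(arrival_times_ps, slit_positions, ps_per_row=1.71,
                 num_rows=480, num_cols=672):
    """Toy streak-tube model: photon arrival time maps to vertical position.

    arrival_times_ps: per-photon arrival time (picoseconds after the trigger)
    slit_positions:   per-photon horizontal position along the slit, in [0, 1)
    A linear deflection sweep sends later photons to lower rows, so a single
    exposure encodes arrival time along the vertical axis of the image.
    """
    img = np.zeros((num_rows, num_cols))
    rows = (np.asarray(arrival_times_ps) / ps_per_row).astype(int)
    cols = (np.asarray(slit_positions) * num_cols).astype(int)
    keep = (rows >= 0) & (rows < num_rows) & (cols >= 0) & (cols < num_cols)
    np.add.at(img, (rows[keep], cols[keep]), 1.0)  # accumulate photon counts
    return img

# Example: a pulse arriving ~100 ps after the trigger, spread across the slit.
img = streak_image(np.random.normal(100, 2, 10_000), np.random.rand(10_000))
```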

Can you capture any event at this frame rate? What are the limitations?
We can NOT capture arbitrary events at picosecond time resolution. If the event is not repeatable, the required signal-to-noise ratio (SNR) will make it nearly impossible to capture. We exploit the simple fact that the photons will statistically trace the same path under repeated pulsed illumination. By carefully synchronizing the pulsed illumination with the capture of reflected light, we record the same pixel at the exact same relative time slot millions of times to accumulate sufficient signal. Our time resolution is 1.71 picoseconds, and hence any activity spanning less than about 0.5 mm in size will be difficult to record.
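
A minimal sketch of that accumulation idea, assuming a hypothetical capture_once() that returns one noisy streak image taken at a fixed delay after the laser trigger (this is not the team’s acquisition code, just an illustration of why repeatability matters):

```python
import numpy as np

def accumulate_repeated_pulses(capture_once, num_pulses):
    """Average many trigger-synchronized captures of a repeatable pulse.

    Because the photons statistically retrace the same paths on every pulse,
    averaging N synchronized repetitions improves the SNR by roughly sqrt(N).
    """
    total = None
    for _ in range(num_pulses):
        frame = capture_once()
        total = frame if total is None else total + frame
    return total / num_pulses

# Toy usage: a very weak signal of ~0.01 photons per pixel per pulse.
avg = accumulate_repeated_pulses(
    lambda: np.random.poisson(0.01, (480, 672)).astype(float), 1_000)
```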

How does this compare with capturing videos of bullets in motion?
About 50 years ago, Doc Edgerton created stunning images of fast-moving objects such as bullets. We follow in his footsteps. Beyond the scientific exploration, our videos could inspire artistic and educational visualizations. The key technology back then was the use of a very short-duration flash to ‘freeze’ the motion. Light travels about a million times faster than a bullet, so observing photons (light particles) in motion requires a very different approach. A bullet can be recorded in a single shot, i.e., there is no need to fire a sequence of bullets. But to observe photons, we need to send the pulse (a bullet of light) into the scene millions of times.

What is new about the Femto-photography approach?
Modern imaging technology captures and analyzes real-world scenes using 2D camera images. These images correspond to steady-state light transport and ignore the delay in the propagation of light through the scene. Each ray of light takes a distinct path through the scene, which contains a plethora of information that is lost when all the light rays are summed up at the traditional camera pixel. Light travels very fast (~1 foot in 1 nanosecond), and sampling light at these time scales is well beyond the reach of conventional sensors (fast video cameras have microsecond exposures). On the other hand, LiDAR and femtosecond imaging techniques such as optical coherence tomography, which do employ ultra-fast sensing and laser illumination, capture only the direct light (ballistic photons) coming from the scene and ignore the indirectly reflected light. We combine recent advances in ultra-fast hardware and illumination with a reconstruction technique that reveals unusual information.
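
In other words (my notation, not the paper’s): a conventional pixel integrates the time-resolved radiance over the whole exposure,

$$ I(x, y) = \int_{0}^{T} L(x, y, t)\, dt, $$

discarding the time-of-flight information carried by $L(x, y, t)$, whereas femto-photography samples $L(x, y, t)$ itself in roughly 1.71 ps slices.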

What are the challenges?
The fastest electronic sensors have exposure times of nanoseconds or hundreds of picoseconds. To capture the propagation of light in a tabletop scene we need sensor speeds of about 1 ps, or one trillion frames per second. To achieve this speed we use a streak tube. The streak camera uses a trick to capture a one-dimensional field of view at close to one trillion frames per second in a single streak image. To obtain a complete movie of the scene we stitch together many of these streak images. The resulting movie is not of one pulse but an average of many pulses, so by carefully synchronizing the laser and camera we have to make sure each of those pulses looks the same.
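
Putting the quoted numbers together (simple arithmetic, not an additional claim from the team):

$$ c \times 1.71\ \text{ps} \approx 0.51\ \text{mm of light travel per frame}, \qquad 480 \times 1.71\ \text{ps} \approx 0.82\ \text{ns}, $$

so a full ~480-frame sweep fits comfortably within the ~13 ns between successive laser pulses, which is what lets every pulse trigger a fresh, identical recording.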

How will these complicated instruments transition out of the lab?
The ultrafast imaging devices today are quite bulky. The laser sources and high-speed cameras fit on a small optical bench and need to be carefully calibrated for triggering. However, there is parallel research in femtosecond solid-state lasers, which will greatly simplify the illumination source. In addition, progress in optical communication and optical computing shows great promise for compact and fast optical sensors. Nevertheless, in the short run, we are building applications where portability is not as critical.



Video: Visualizing Light over a Fruit with a Trillion FPS Camera

Source: http://web.media.mit.edu

Post by: Shiva Chaitanya (4/4 B.Tech), MGIT
