October 16, 2013: Barmak Heshmat, Gordon Wetzstein, and Christopher Barsi

This special monthly meeting of the NES/OSA will take place at the MIT Media Lab and features speakers from three different areas of optics: THz technology, compressive cameras and displays, and time-resolved imaging.

 

THz Technology: Past, Present, and Future

THz waves are electromagnetic waves ranging from 300 GHz to 10 THz. This radiation has challenged researchers in many fields, because the semiconductor bandgap needed to emit and detect THz photons directly is extremely small. The talk first gives a broad overview of the technology and then surveys its present and future applications in health care and industry.
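
To put the bandgap remark in perspective, a THz photon carries only a few milli-electron-volts of energy, orders of magnitude below the roughly 1 eV bandgaps of common semiconductors. The short back-of-the-envelope script below is an illustration added here, not material from the talk.

```python
# Back-of-the-envelope: THz photon energies vs. a typical semiconductor bandgap.
PLANCK_EV = 4.135667e-15  # Planck constant in eV*s

for f_thz in (0.3, 1.0, 10.0):               # 300 GHz, 1 THz, 10 THz
    e_mev = PLANCK_EV * f_thz * 1e12 * 1e3   # photon energy in meV
    print(f"{f_thz:5.1f} THz -> {e_mev:6.2f} meV per photon")

# For comparison, GaAs has a bandgap of about 1.42 eV (1420 meV),
# several hundred times the energy of a 1 THz photon.
```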

 

Barmak Heshmat

Barmak is currently a Postdoctoral Associate at the MIT Media Lab. His research interests cover a wide range of topics in optics and photonics: nano-optics, THz devices, holography, acousto-optics, and imaging- and display-related optics, to name a few. He received his Ph.D. from the University of Victoria in 2013, and his B.Sc. and M.Sc. from the Isfahan University of Technology in 2006 and 2008, respectively. His Ph.D. work, on enhancing THz photoconductive switches with nanotechnology, produced 10 scientific journal publications and 3 patents. His other contributions include an invited TEDx talk on the future of THz technology and introductory work on topological metrics for self-reconfigurable robots.


Compressive Displays and Cameras

With the invention of integral imaging and parallax barriers at the beginning of the 20th century, glasses-free 3D displays became feasible. Only today, more than a century later, are glasses-free 3D displays finally emerging in the consumer market. The technologies employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays, which explore the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account.

In this talk, we explore modern approaches to glasses-free 3D display using compressive light field displays. In contrast to conventional display technology, these systems aim for a joint design of the display optics and computational processing, a concept that has been exploited for image capture in computational photography for about a decade. In addition to modern approaches to compressive light field display, we will also discuss single-shot approaches to compressive light field acquisition. Using optimized optical systems and compressive computational reconstructions, we demonstrate the first resolution-preserving computational light field camera.
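
The abstract does not spell out the reconstruction algorithm; as a generic illustration of the kind of compressive recovery such cameras and displays rely on, the sketch below recovers a sparse signal from a small number of coded linear measurements using iterative soft-thresholding (ISTA). The matrix A, signal sizes, and parameters are illustrative stand-ins, not the speaker's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressive-sensing setup: m coded measurements of an n-dimensional
# signal that is sparse in its native basis (a stand-in for a light field
# that is compressible in some transform domain).
n, m, k = 256, 96, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the optical coding
y = A @ x_true                                  # measured data

# ISTA: iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L, L = spectral norm squared
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)                    # gradient of the data term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

In a real compressive light field camera, A would model the optical coding (for example, a mask in the light path), and sparsity would typically be enforced in a learned or analytic transform domain rather than in the native basis.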

Gordon Wetzstein

Gordon Wetzstein is a Research Scientist in the Camera Culture Group at the MIT Media Lab. His research interests lie at the intersection of computer graphics, machine vision, optics, scientific computing, and perception. In 2006, Gordon graduated with honors from the Bauhaus-Universität Weimar, Germany, and in 2011 he received a Ph.D. in Computer Science from the University of British Columbia. His doctoral dissertation focused on computational light modulation for image acquisition and display and won the Alain Fournier Ph.D. Dissertation Annual Award. He organized the IEEE 2012 and 2013 International Workshops on Computational Cameras and Displays, founded displayblocks.org as a forum for sharing computational display design instructions with the DIY community, and has presented a number of courses on computational displays and computational photography at ACM SIGGRAPH. Gordon won the best paper award for "Hand-Held Schlieren Photography with Light Field Probes" at ICCP 2011 and a Laval Virtual Award in 2005.


New Approaches in Time-Resolved Imaging

Can we look around corners beyond the line of sight? Our goal is to exploit the finite speed of light to improve image capture and scene understanding. New theoretical analysis coupled with ultra-high-speed imaging techniques can lead to a new source of computational visual perception. We are developing the theoretical foundation for sensing and reasoning using femto-photography and transient light transport to experiment with scenarios in which transient reasoning exposes scene properties that are beyond the reach of traditional computational imaging.  The key idea is to time-resolve the multiple diffuse bounces of light. In addition to the ability to image hidden objects, the analysis also allows us to recover reflectance properties and sub-surface scattering. Visualization of the propagation of light provides a fascinating intuitive insight into the complex light transport.
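
The abstract's key idea is to time-resolve the multiple diffuse bounces of light; one simple way to turn such measurements into an image of a hidden object is ellipsoidal backprojection, sketched below in a deliberately naive form. The function name, array layout, and parameters are illustrative assumptions, not the speakers' actual femto-photography pipeline.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def backproject(hist, laser_pts, sensor_pts, voxels, t_res):
    """Naive ellipsoidal backprojection for non-line-of-sight imaging.

    hist[i, j, t] -- photon counts for laser spot i, sensed spot j, time bin t
    laser_pts     -- (L, 3) positions of laser spots on the visible wall
    sensor_pts    -- (S, 3) positions of sensed spots on the visible wall
    voxels        -- (V, 3) positions of candidate hidden-scene voxels
    t_res         -- width of one time bin in seconds
    """
    heat = np.zeros(len(voxels))
    n_bins = hist.shape[2]
    for i, l in enumerate(laser_pts):
        for j, s in enumerate(sensor_pts):
            # Total path length wall -> hidden voxel -> wall, for every voxel.
            d = (np.linalg.norm(voxels - l, axis=1) +
                 np.linalg.norm(voxels - s, axis=1))
            t_bin = np.round(d / (C * t_res)).astype(int)
            valid = t_bin < n_bins
            heat[valid] += hist[i, j, t_bin[valid]]
    return heat  # large values suggest a reflecting surface at that voxel
```

Each time bin constrains the hidden scatterer to lie on an ellipsoid with the laser and sensor spots as foci; summing histogram values over all laser/sensor pairs concentrates energy at voxels consistent with many such ellipsoids.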

Christopher Barsi

Christopher is a Postdoctoral Associate in the Camera Culture group at the MIT Media Lab. He earned his Ph.D. in electrical engineering from Princeton University for his work integrating nonlinear wave physics with optical imaging. A key research interest is exploiting the properties of nonlinear propagation, which couples the different spatial and spectral components of a signal, for use in computational optics. Currently, he explores ultrafast optics and advanced numerical algorithms to develop novel tools for imaging and sensing.

 

MEETING SPONSORED BY

Reservations:

Pre-registration discount DINNER reservations must be made by 6 PM on October 14, 2013, the Monday before the meeting. Full-price reservations are accepted thereafter, and walk-ins are welcome at full price. MEETING-ONLY registrations are appreciated by October 15, 2013.

Please make reservations online. Reservations may also be left on the answering machine at 617.584.0266. We no longer have an email address for reservations because of spam. When making reservation requests, please provide the following information:

  • DINNER AND MEETING or meeting only
  • Name(s) and membership status
  • Daytime phone number where you can be reached (in case of change or cancellation)

Location:

MIT Media Laboratory
MIT Building E14, 6th-floor multipurpose room

77 Massachusetts Avenue
Cambridge, MA 02139

Parking:

Visitors to MIT can park for free after 5:00 PM in several MIT parking lots, as well as on local streets. The Media Lab is located at 20 Ames St, Cambridge (MIT Building E15). The best parking lots for this location are the Hayward St lot (just off Main St in Kendall Square, between Amherst and Main Sts) or the Tang Center lot on Amherst St (near Wadsworth St and the MIT Sloan School). The Media Lab is reached by walking west on Amherst and then turning right on Ames. Memorial Drive also has plenty of on-street spaces, which open up after work hours. There are also pay parking garages in Kendall Square. Further details can be found at MIT Parking.

Networking at 5:45 PM, dinner at 6:45 PM, meeting at 7:30 PM.

 

Menu:

Vegetarian option available on request

Dinner Prices:

                                  Register on/before
                                  DINNER Reservation Date    Late Reservations
NES/OSA Members and their guests  $25.00 each                $30.00
Non-members                       $30.00 (See NOTE below)    $35.00
Students                          $5.00                      $5.00
Post-Docs                         $5.00                      $5.00


NOTE: The NES/OSA has not changed dinner prices in several years but has been facing higher costs. We will maintain the current dinner prices for those reserving dinner by the requested date, and we will still try to accommodate late reservations.

General Information on NES/OSA Meetings

Cancellations and No-shows:

If the meeting must be canceled for any reason, we will try to call you at the phone number you leave with your reservation. Official notice of cancellation will be on our answering machine.

We have to pay for the dinners reserved as of the Tuesday before the meeting, so no-shows eat into our cash reserve. If you will not be able to attend, please let us know as early as possible. Otherwise, no-shows will be billed.

Membership Rates:

Regular members $15.00
Student members free


NOTE: The extra $5.00 of the non-member dinner fee can be applied toward membership dues if the non-member joins and pays dues for the current year at the meeting.