May 21, 2009 - Edward Gilroy

Effective Time Management

As we are all aware, time is our most important resource. But how do we manage our time to use it effectively and achieve our goals?

Effective time management requires commitment and a time-management culture, and our approach affects both our business and personal lives. That approach needs to distinguish what is Urgent from what is Important. We must also identify and understand the Obstacles to using our time wisely.

With these issues in mind, this presentation will discuss the 7 Obstacles to Effective Time Management. We will also identify some aids that can help in implementing an ongoing program to manage our time commitments.


Apr. 16, 2009 - Martin Pralle

Shallow Junction Photonics: Black Silicon Enhanced Photodetectors

SiOnyx has developed a next-generation silicon-based photodetector with spectral sensitivity from 350 to 1550 nm, making silicon a very attractive infrared detector material system. These detectors exhibit photoconductive gain, routinely generating responsivities in excess of 100 A/W (>100x standard silicon detectors) throughout the visible, NIR, and SWIR bands. We enhance the spectral sensitivity of silicon in these devices by employing a femtosecond laser doping process on a CMOS-compatible platform. The energetics of the laser process enable the formation of a new material with these remarkable properties. Operating at a mere 3 V, devices made from this material rival avalanche photodiodes at much lower power and bias. When applied to imaging platforms, this detector will enhance visible-light imaging and will enable silicon to become the next-generation night-vision detector, outperforming InGaAs, Ge, and image intensifiers in nearly all nighttime light conditions.
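A quick back-of-envelope calculation puts the responsivity figure in context (my arithmetic, not from the talk): an ideal unity-gain photodiode has responsivity R = eta * q * lambda / (h * c), which for silicon tops out below 1 A/W anywhere in its band, so a measured 100 A/W implies an internal photoconductive gain well over 100. The 1064 nm wavelength below is an illustrative choice.

```python
# Back-of-envelope check (illustrative, not from the talk): the ideal
# responsivity of a unity-gain photodiode is R = eta * q * lambda / (h*c);
# a measured responsivity far above this implies internal gain.
Q = 1.602e-19   # electron charge, C
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def ideal_responsivity(wavelength_m, quantum_efficiency=1.0):
    """Responsivity (A/W) of an ideal, gain-1 detector."""
    return quantum_efficiency * Q * wavelength_m / (H * C)

def implied_gain(measured_a_per_w, wavelength_m, quantum_efficiency=1.0):
    """Photoconductive gain needed to explain a measured responsivity."""
    return measured_a_per_w / ideal_responsivity(wavelength_m, quantum_efficiency)

# At 1064 nm an ideal detector gives about 0.86 A/W, so a measured
# 100 A/W corresponds to an internal gain well over 100.
print(ideal_responsivity(1064e-9))   # ~0.86 A/W
print(implied_gain(100.0, 1064e-9))  # ~117
```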


Mar. 19, 2009 - MIT Media Labs

Three Talks at the Media Lab

This meeting will be held at MIT Media Lab in Cambridge, MA.
The program includes the following speakers.

Joint Meeting with the Boston LEOS Chapter

Computational Photography: From Epsilon to Coded Photography

Computational Photography is an emerging multi-disciplinary field at the intersection of optics, signal processing, computer graphics and vision, electronics, art, and online sharing in social networks. The field is evolving through three phases.

The first phase was about building a super-camera with enhanced performance in terms of the traditional parameters, such as dynamic range, field of view, or depth of field. I call this Epsilon Photography. Because of the limited capabilities of a camera, the scene is sampled via multiple photos, each captured by an epsilon variation of the camera parameters. It corresponds to low-level vision: estimating pixels and pixel features.

The second phase is building tools that go beyond the capabilities of this super-camera. I call this Coded Photography. The goal here is to reversibly encode information about the scene in a single photograph (or a very few photographs) so that the corresponding decoding allows powerful decomposition of the image into light fields, motion-deblurred images, or global/direct illumination components, or a distinction between geometric and material discontinuities. This corresponds to mid-level vision: segmentation, organization, and inferring shapes, materials, and edges.

The third phase will be about going beyond radiometric quantities and challenging the notion that a camera should mimic a single-chambered human eye. Instead of recovering physical parameters, the goal will be to capture the visual essence of the scene and analyze the perceptually critical components. I call this Essence Photography, and it may loosely resemble depiction of the world after high-level vision processing. It will spawn new forms of visual artistic expression and communication.

In this talk, I will focus on Coded Photography. 'Less is more' in Coded Photography. By blocking light over time or space, we can preserve more details about the scene in the recorded single photograph.

  1. Coded Exposure: By blocking light in time, fluttering the shutter open and closed in a carefully chosen binary sequence, we can preserve high spatial frequencies of fast-moving objects to support high-quality motion deblurring.
  2. Coded Aperture Optical Heterodyning: By blocking light near the sensor with a sinusoidal grating mask, we can record a 4D light field on a 2D sensor. And by blocking light with a mask at the aperture, we can extend the depth of field and achieve full-resolution digital refocusing.
  3. Coded Illumination: By observing blocked light at silhouettes, a multi-flash camera can locate depth discontinuities in challenging scenes without depth recovery.
  4. Coded Sensors: By sensing intensities with lateral inhibition, a 'Gradient Camera' can record large as well as subtle changes in intensity to recover a high-dynamic-range image.
  5. Coded Spectrum: By blocking parts of a 'rainbow', we can create cameras with a digitally programmable wavelength profile.

I will show several applications and describe emerging techniques to recover scene parameters from coded photographs.

Recent joint work with Jack Tumblin, Amit Agrawal, Ashok Veeraraghavan, and Ankit Mohan.

Ramesh Raskar

Ramesh Raskar joined the MIT Media Lab in spring 2008 as head of the Camera Culture research group. The group focuses on developing tools to help us better capture and share the visual experience. This research involves developing novel cameras with unusual optical elements, programmable illumination, digital wavelength control, and femtosecond analysis of light transport, as well as tools to decompose pixels into perceptually meaningful components. Raskar's research also involves creating a universal platform for the sharing and consumption of visual media.

Raskar received his PhD from the University of North Carolina at Chapel Hill, where he introduced "Shader Lamps," a novel method for seamlessly merging synthetic elements into the real world using projector-camera based spatial augmented reality. In 2004, Raskar received the TR100 Award from Technology Review, which recognizes top young innovators under the age of 35, and in 2003, the Global Indus Technovator Award, instituted at MIT to recognize the top 20 Indian technology innovators worldwide. He holds 35 US patents and has received four Mitsubishi Electric Invention Awards. He is currently co-authoring a book on computational photography.


Optical Brain Control: Towards New Therapies for Brain Disorders

The brain is three-dimensional, densely wired, and composed of heterogeneous primitives. Understanding how these elements work in real time to mediate behavior and computation, and how they are compromised in pathological states, is a top priority. We have recently developed methods for real-time optical activation and silencing of specific cell types, using naturally occurring molecular sensitizers such as channelrhodopsin-2 and halorhodopsin. We have also created optical hardware and algorithms for systematically testing the contribution, at the levels of both necessity and sufficiency, of brain regions, cell types, and circuit connections to behavioral function. These technologies are designed to be scalable and systematic, and to confront the detailed three-dimensional nature of the brain. We discuss the application of these technologies to the analysis of neural dynamics.

Ed Boyden

Ed Boyden leads the Media Lab's Synthetic Neurobiology group, which is inventing and applying tools for the analysis and engineering of brain circuits, with the goal of developing new strategies for systematically repairing pathology, augmenting cognition, and revealing insights into the human condition. He is integrating nanotechnological, molecular, optical, and other technologies into interfaces for the precise control of neural circuit dynamics and function. Recently, he and his colleagues created a genetically targeted way to activate neurons with millisecond-timescale pulses of light, an innovation that has been widely adopted in neuroscience and neuroengineering, and resulted in his being named one of the "Top 35 Innovators Under the Age of 35" by Technology Review.

Boyden received his PhD in neurosciences from Stanford University as a Hertz Fellow, where he discovered that the molecular mechanisms used to store a memory are determined by the content to be learned. He received an MEng in electrical engineering, and BS degrees in physics and electrical engineering, all from MIT. These pursuits, as well as an independent career as an inventor, have earned him many invited talks, awards, papers, and pending patents.


Toward Consumer Holographic Video

The MIT Media Lab has a long history of research in the computation and display of holographic video, but in recent years the focus has turned toward making holo-video a consumer product.  In my talk I review the background, present the engineering issues, and describe our approaches to making inexpensive high-bandwidth light modulators and to doing real-time holographic computation on off-the-shelf GPUs.
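The core workload behind real-time holographic computation can be sketched simply (my illustration, not the Media Lab's pipeline): each scene point contributes a Fresnel-zone fringe pattern to the hologram plane, the full hologram is the interference of all contributions with a reference wave, and the per-sample independence of this sum is what maps so well onto off-the-shelf GPUs. All numerical parameters below are illustrative.

```python
import numpy as np

# Minimal sketch of hologram-fringe synthesis: the real-valued
# interference pattern of one scene point with an on-axis plane
# reference wave, sampled on the hologram plane.
WAVELENGTH = 633e-9   # HeNe-like red wavelength, m (illustrative)
PITCH = 1e-6          # hologram sample pitch, m (illustrative)
N = 512               # samples per side

def point_hologram(x0, y0, z0):
    """Fringe pattern for one scene point at (x0, y0, depth z0)."""
    coords = (np.arange(N) - N / 2) * PITCH
    xx, yy = np.meshgrid(coords, coords)
    # Distance from the scene point to each hologram sample.
    r = np.sqrt((xx - x0) ** 2 + (yy - y0) ** 2 + z0 ** 2)
    # Interference with the plane reference wave: a Fresnel zone pattern.
    return np.cos(2 * np.pi * r / WAVELENGTH)

fringes = point_hologram(0.0, 0.0, 0.2)  # point 20 cm behind the plane
print(fringes.shape)  # (512, 512)
```

Summing such patterns over every scene point, independently per hologram sample, is an embarrassingly parallel computation, which is why commodity GPUs are a natural fit.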

Michael Bove

V. Michael Bove, Jr. holds an S.B.E.E., an S.M. in Visual Studies, and a Ph.D. in Media Technology, all from the Massachusetts Institute of Technology, where he is currently head of the Object-Based Media Group at the Media Laboratory, co-directs the Center for Future Storytelling, and directs the consumer electronics program CELab. He is the author or co-author of over 60 journal or conference papers on digital television systems, video processing hardware/software design, multimedia, scene modeling, visual display technologies, and optics. He holds patents on inventions relating to video recording, hardcopy, interactive television, and medical imaging, and has been a member of several professional and government committees. He is co-author with the late Stephen A. Benton of the book Holographic Imaging (Wiley, 2008). He is on the Board of Editors of the Journal of the Society of Motion Picture and Television Engineers, and associate editor of Optical Engineering. He served as general chair of the 2006 IEEE Consumer Communications and Networking Conference (CCNC'06), and is a member of the Board of Governors of the National Academy of Media Arts and Sciences. Bove is a fellow of the SPIE and of the Institute for Innovation, Creativity, and Capital. He was a founder of and technical advisor to WatchPoint Media, Inc. (now a part of Tandberg Television) and is technical advisor to One Laptop Per Child (creators of the XO laptop for children in developing countries).


Feb. 19, 2009 - Ben Vakoc

Optical Frequency Domain Imaging: A New Tool for Studying Tumor Biology and Evaluating Cancer Therapies

High resolution, intravital multiphoton microscopy has provided powerful mechanistic insights into health and disease, and has become a common instrument in the modern biological laboratory. The requisite high numerical aperture and exogenous contrast agents that enable multiphoton microscopy, however, result in a limited capacity to investigate substantial tissue volumes or to probe dynamic changes repeatedly over prolonged periods. Here, we introduce optical frequency domain imaging (OFDI) as an intravital microscopic tool that circumvents the technical limitations of multiphoton microscopy and, as a result, provides unprecedented access to previously unexplored, critically important aspects of tumor biology. We describe the operation of the OFDI instrument, and present measurements of tumor angiogenesis, lymphangiogenesis, and both vascular and cellular responses to therapy, thereby demonstrating the potential of OFDI to facilitate the exploration of physiological and pathological processes and the evaluation of treatment strategies.

Jan. 15, 2009 - Mordechai Rothschild

Advanced Lithography and the Semiconductor Industry Roadmap

Semiconductor devices are expected to continue their historic trend of diminishing size for at least several more device generations. Patterning at 65-, 45-, 32-nm and beyond is being enabled by new developments in optical lithography. “Conventional” 193-nm projection lithography will be replaced by liquid immersion lithography, double patterning processes, sophisticated mask design, and other technologies. Alternative lithographies, such as extreme ultraviolet (EUV) projection, electron-beam direct write, and nanoimprint, must compete with enhanced optical lithography in terms of cost, yield, and throughput. This talk will discuss the various aspects of sub-65 nm optical lithography, and place them in the broader context of the semiconductor industry roadmap. Topics will include prospects of high-index liquid immersion lithography, the applicability of double patterning, resist limitations, as well as new lithographic concepts. 
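The resolution scaling behind these choices follows the Rayleigh criterion, half-pitch = k1 * wavelength / NA: water immersion (refractive index around 1.44 at 193 nm) lifts the usable numerical aperture above 1, and double patterning effectively relaxes the k1 floor by splitting one dense layer into two exposures. The specific k1 and NA values below are my illustrative choices, not figures from the talk.

```python
# Illustrative Rayleigh-criterion arithmetic for 193-nm lithography:
# achievable half-pitch = k1 * wavelength / NA.
def half_pitch_nm(k1, wavelength_nm, na):
    """Smallest printable half-pitch (nm) for a given process factor k1."""
    return k1 * wavelength_nm / na

# Dry 193-nm tool near a practical single-exposure limit (k1 ~ 0.28):
print(half_pitch_nm(0.28, 193, 0.93))   # ~58 nm
# Water-immersion tool with NA 1.35:
print(half_pitch_nm(0.28, 193, 1.35))   # ~40 nm, reaching the 45-nm node
```

The same formula shows why high-index immersion fluids and double patterning matter: pushing NA higher, or halving the effective pitch per exposure, are the only levers left once wavelength and k1 are fixed.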

