Winter 2015, EE367/CS448I: Computational Imaging and Display

Course project presentations: March 17, 2015, 12:30-2:30 (oral presentations), 2:30-3:30 (poster and demo session), CIS-X Auditorium

Instructors: Gordon Wetzstein, Isaac Kauvar (TA)

Light field photograph of the 2015 class. Top row, from left: front focus, center focus, rear focus. Click on the images for high-resolution pictures that were refocused from the light field in post-processing. Bottom row, from left: contrast-enhanced depth map computed from the light field, and the rectified raw light field. Click on the images to see the original, full-resolution data.


List of class projects

Project title: A Content Adaptive Multispectral Projector
Team: Samuel Yang

A typical video projector has three fixed primaries, typically red, green, and blue, and operates by showing one sub-frame of a single color primary at a time. To improve the color gamut the projector can cover, one could add more color primaries (e.g. yellow, magenta); however, this would require the projector to operate at higher speeds to show the additional primaries. Here, we modify the projector for multispectral projection by adaptively choosing 3 custom primaries for each image.
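
As a rough illustration of content-adaptive primary selection (a simplification, not the factorization used in the paper below), one could compute 3 primaries per image via a nonnegative matrix factorization of the desired per-pixel spectra; the function name and the random spectra matrix here are hypothetical:

```python
import numpy as np

def choose_primaries(spectra, k=3, iters=200, eps=1e-9):
    # Factor an (n_pixels x n_wavelengths) matrix of target spectra into
    # per-pixel weights W (n_pixels x k) and k primary spectra H
    # (k x n_wavelengths), so spectra ~= W @ H, all entries nonnegative.
    rng = np.random.default_rng(0)
    W = rng.random((spectra.shape[0], k))
    H = rng.random((k, spectra.shape[1]))
    for _ in range(iters):  # Lee-Seung multiplicative updates
        H *= (W.T @ spectra) / (W.T @ W @ H + eps)
        W *= (spectra @ H.T) / (W @ H @ H.T + eps)
    return W, H  # rows of H are the image-adaptive primaries

# toy example: 1000 pixels, 31 wavelength bins of desired spectra
spectra = np.random.rand(1000, 31)
W, H = choose_primaries(spectra)
print(np.linalg.norm(spectra - W @ H) / np.linalg.norm(spectra))
```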

project report

This project directly led to the following ACM SIGGRAPH Asia 2015 technical paper: Kauvar et al., Adaptive Color Display via Perceptually-driven Factored Spectral Projection


Project title: Light field stereoscope
Team: Kevin Chen

Virtual reality can bring immersive 3D experiences, allowing for exciting games, realistic training simulations, and even better treatment for disorders such as PTSD. However, conventional head-mounted displays do not support focus cues, resulting in the vergence-accommodation conflict that causes headaches and discomfort in users. This problem prevents extended use of head-mounted displays. I will build a light field stereoscope that displays a static compressive light field which encodes focus cues, allowing the user to accommodate at different depths and providing an immersive experience.
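
For intuition, a two-layer multiplicative display can only show light fields that factor into a product of two layer patterns. The real stereoscope uses a compressive factorization over the full 4D ray mapping; this toy sketch only conveys the rank-1, two-layer idea, with made-up data:

```python
import numpy as np

def rank1_layers(L, iters=100, eps=1e-9):
    # Approximate a light field matrix L[view, pixel] by the outer
    # product of a rear-layer pattern b[view] and a front-layer
    # pattern f[pixel], both nonnegative (alternating updates).
    b = np.ones(L.shape[0])
    f = np.ones(L.shape[1])
    for _ in range(iters):
        f = np.clip(L.T @ b / (b @ b + eps), 0.0, None)
        b = np.clip(L @ f / (f @ f + eps), 0.0, None)
    return b, f

L = np.random.rand(25, 64 * 64)   # 5x5 views, 64x64 pixels, flattened
b, f = rank1_layers(L)
print(np.linalg.norm(L - np.outer(b, f)) / np.linalg.norm(L))
```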

project report

This project directly led to the following ACM SIGGRAPH 2015 technical paper: Huang et al., The Light Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Displays with Focus Cues


Project title: Depth cues in VR head mounts with focus-tunable lenses
Team: Robert Konrad and Terry Kong

With the advent of new VR and AR technologies, it has become increasingly important to implement natural focus cues as the technology reaches more and more people. All 3D VR systems are limited by how well they handle the vergence-accommodation conflict. This project aims to address the vergence-accommodation conflict by using defocus blur and focus cues to simulate the perception of depth. The novelty of this project is the use of focus-tunable lenses, which until recently have been too small to provide a usable field of view. The defocus blur will be implemented in OpenGL, and a full integration of the hardware and software will be demoed on the Oculus Rift. The long-term goal of this project is to incorporate an eye-tracking module to automate the refocusing of the lenses.
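
The rendered defocus blur can be driven by the standard thin-lens circle-of-confusion model; a minimal sketch (the focal length and pupil diameter below are illustrative, not those of the actual prototype):

```python
import numpy as np

def circle_of_confusion(depth, focus_depth, focal_length=0.017,
                        aperture=0.004):
    # Thin-lens circle-of-confusion diameter (meters on the sensor/retina)
    # for scene depths `depth`, with the lens focused at `focus_depth`.
    return (aperture * focal_length * np.abs(depth - focus_depth)
            / (depth * (focus_depth - focal_length)))

depths = np.array([0.3, 1.0, 3.0, 10.0])          # meters
print(circle_of_confusion(depths, focus_depth=1.0))  # zero at 1.0 m
```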

project report

This project directly led to the following ACM SIGCHI 2016 technical paper: Konrad et al., Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays


Project title: Nystagmus simulation and correction
Team: Philip Lee, Joe Landry, Maneeshika Madduri

This project aims to simulate nystagmus, an eye condition that causes involuntary eye movements, in MATLAB, and to take the first steps toward correcting for this condition in devices such as the Oculus Rift. Using eye-tracking hardware, we plan to simulate the motion and focus blur of nystagmus and explore various computational photography techniques, such as deconvolution and coded apertures, to correct for it.
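
If the eye tracker yields an estimate of the motion blur kernel, one candidate correction step is a Wiener deconvolution; a minimal sketch (in Python rather than the project's MATLAB, with a made-up horizontal motion PSF):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    # Wiener deconvolution in the Fourier domain. `nsr` is the assumed
    # noise-to-signal power ratio; larger values regularize more.
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)    # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# toy example: blur an image with a horizontal motion PSF, then restore it
img = np.random.rand(128, 128)
psf = np.zeros((128, 128)); psf[0, :9] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```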

project report


Project title: Holographic direct view display
Team: Stephen Hamann and Xuerong Xiao

This project aims to create a hologram for use in a fixed-view display. We will use a Lytro camera as a Shack-Hartmann wavefront sensor, transforming light field images into mutual intensity functions to be displayed on a spatial light modulator.

project report


Project title: Dielectric Gradient Metasurface Lens
Team: Dianmin Lin

The goal of this project is to develop new optics that enable 3D imaging and display functionalities unreachable by conventional optical components. In traditional optical system design, one assembles a series of optical elements and characterizes the system's transfer function to see if it satisfies the requirements. With a metasurface, we are able to 'compute' the desired function of the optical system and implement it directly in the metasurface. Here we show that metasurfaces are not only ultra-thin and easy to fabricate, but also offer entirely new functions beyond those of conventional optical elements. As an example, we developed a dielectric gradient metasurface lens that extends the depth of field. We also demonstrate the imaging abilities of the metasurface lens, which paves the way for future applications in advanced imaging and display systems.

project report N/A


Project title: WiCapture: Motion capture using WiFi
Team: Manikanta Kotaru

Imagine we could accurately capture the motion of any device that has a WiFi chip or tag on it, like a cell phone, using the existing infrastructure of WiFi access points. The applications are numerous, from innovative human-computer interfaces to occlusion-resistant tracking of VR head-mounted displays to motion capture in film. Existing systems based on visible light, infrared, or inertial sensors require specialized sensors, expensive equipment, high bandwidth, or all of these. State-of-the-art WiFi-based systems require large access points, about 3 m in length, and achieve an accuracy of a few centimeters in tracking motion. In this project, we use compressive imaging techniques to build a WiFi-based motion capture system that achieves millimeter-level accuracy in motion tracking using access points as small as 6 cm.

project report - currently unavailable


Project title: Spectral identification of nanoparticles in OCT
Team: Orly Liba

Optical coherence tomography (OCT) is an imaging modality that visualizes the light backscattered from a sample and creates a micron-resolution image of its structure. This enables very high resolution images of biological structures, such as the retina and tumors. Functional imaging of the molecular processes inside tissue has not yet been realized with OCT. One way to image molecules with OCT could be to selectively attach beacons to the molecules, for example by using antibody-labeled plasmonic nanoparticles. I aim to detect the nanoparticles in tissue with OCT owing to their plasmonic spectral peak. This is a challenging task that people have been working on for about 15 years. Its difficulty originates in the composition of the OCT signal, which encodes the scattering spectrum along with the location of the scatterers. I plan to solve this challenge by combining several images captured under various conditions and using priors on the location and spectrum of the nanoparticles along with optimization and machine learning algorithms.

project report

This project was published in Scientific Reports: Liba et al., Contrast-enhanced optical coherence tomography with picomolar sensitivity for functional in vivo imaging


Project title: Extended depth of field for fluorescence microscopy
Team: Julie Chang

High numerical aperture objectives result in extremely shallow depths of field, which may or may not be desired by the user of the microscope. When looking at extended objects, planes that aren’t necessarily perpendicular to the light path, or fluorophores that move in and out of focus, an extended depth of field (EDOF) would be useful to fully visualize the sample. PSF engineering can be applied to this problem with several advantages, including ease of implementation, no need for a focal sweep, and retention of high spatial frequencies. In this project, I will use a cubic phase mask to achieve a depth-invariant PSF and compare this to other EDOF methods in both simulations and experiments. Several deconvolution algorithms can be explored to identify the best option in terms of SNR, MSE (for simulated images), and image artifacts.
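
A quick way to build intuition for the cubic mask is to simulate the pupil function with and without the cubic phase term and compare the resulting PSFs; a minimal sketch (the mask strength alpha and grid size are arbitrary choices):

```python
import numpy as np

def cubic_phase_psf(n=256, alpha=40.0):
    # PSF of a circular pupil with a cubic phase mask exp(i*alpha*(u^3+v^3)).
    # alpha = 0 gives the ordinary diffraction-limited PSF.
    u = np.linspace(-1.0, 1.0, n)
    U, V = np.meshgrid(u, u)
    pupil = (U**2 + V**2 <= 1.0) * np.exp(1j * alpha * (U**3 + V**3))
    # defocus can be added as an extra quadratic phase term, e.g.
    # pupil *= np.exp(1j * w20 * (U**2 + V**2)), to check depth invariance
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf / psf.sum()

psf_edof = cubic_phase_psf(alpha=40.0)
psf_std = cubic_phase_psf(alpha=0.0)
```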

project report


Project title: Exploiting Non-local Low Rank Structure in Image Reconstruction
Team: Evan Levine and Tiffany Jou

Constrained image models based on linear dependence are commonly used in high-dimensional imaging and computer vision to exploit or extract structure, which can be mapped to low rank matrices. Natural images also have a self-similarity property, where features repeat themselves across the image, so linear dependence relationships may be non-local. To exploit non-local linear dependence structure for image reconstruction, we develop a novel and flexible framework using low rank matrix reconstruction and a union-of-subspaces model, which is learned using subspace clustering. Subspace clustering makes weaker assumptions about the image than previous methods for non-local low rank regularization and also scales to very large problems. We extend this approach to tensors, which are natural representations for images. We demonstrate the benefit of non-local low rank modeling for image denoising and for reconstruction of MRI and light field images.
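
The core non-local low rank step can be illustrated in a few lines: collect similar patches as the columns of a matrix and shrink its singular values. This toy sketch skips the patch search and the union-of-subspaces model that distinguish the project:

```python
import numpy as np

def lowrank_denoise_group(patches, tau):
    # Singular value thresholding on a matrix whose columns are similar
    # patches: noise spreads across all singular values, while the shared
    # structure concentrates in the largest ones.
    U, s, Vt = np.linalg.svd(patches, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

# toy example: 50 noisy copies of one 8x8 patch (rank 1 plus noise)
clean = np.random.rand(64, 1) @ np.ones((1, 50))
noisy = clean + 0.1 * np.random.randn(64, 50)
denoised = lowrank_denoise_group(noisy, tau=2.0)
print(np.linalg.norm(denoised - clean) / np.linalg.norm(noisy - clean))
```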

project report


Project title: Intrinsic image decomposition using RGBD sensors
Team: Matt Viteli

The goal of this project is to decompose a single RGBD image into a reflectance image, a direct illumination image, and an indirect illumination image. From this, we are able to fit common parametric lighting models to the direct illumination and geometrically determine the placement of lights in the scene. The end result is that users can insert computer-generated models into the image with photo-realistic lighting.
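
As an illustration of the light-placement step, a single directional light under a Lambertian model can be recovered from the direct illumination (shading) and depth-derived normals by least squares; everything below is a hypothetical sketch with synthetic normals:

```python
import numpy as np

def fit_light_direction(shading, normals):
    # Least-squares fit of a directional light under a Lambertian model:
    # shading ~= normals @ l, using lit pixels only.
    l, *_ = np.linalg.lstsq(normals, shading, rcond=None)
    return l / np.linalg.norm(l)

# toy example: random surface normals lit from a known direction
n = np.random.randn(500, 3)
n /= np.linalg.norm(n, axis=1, keepdims=True)
l_true = np.array([0.3, 0.5, 0.81]); l_true /= np.linalg.norm(l_true)
s = np.clip(n @ l_true, 0.0, None)   # Lambertian shading
lit = s > 0                          # discard shadowed pixels
print(fit_light_direction(s[lit], n[lit]))   # ~= l_true
```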

project report


Project title: Compressive sensing based single pixel hyperspectral imaging system
Team: Shikhar Shrestha and Liang Shi

The goal of this project is to combine a DLP projector with a portable spectrometer to build a single-pixel hyperspectral data acquisition system. We are also developing compressive sensing algorithms to recover hyperspectral images from the captured data. Hyperspectral images contain a wealth of information about the scene that is not captured in a regular camera image. They have been used for several interesting applications, but the hardware is still expensive and bulky. This project will help us evaluate whether a compressive sensing paradigm can be successfully applied to hyperspectral imaging to reduce the cost of the hardware while still retaining useful information in the images.
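
The recovery problem has the familiar compressive sensing form y = Φx, where the rows of Φ are the projected patterns. A minimal sketch of one standard solver (iterative soft thresholding with a DCT sparsity basis; the project's actual algorithms may differ, and the ±1 patterns and sizes here are assumptions):

```python
import numpy as np
from scipy.fft import dct, idct

def ista(y, Phi, lam=0.05, iters=300):
    # Recover a DCT-sparse x from measurements y = Phi @ x by iterative
    # soft thresholding on the DCT coefficients c (x = idct(c)).
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2
    c = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ idct(c, norm='ortho') - y)
        c -= step * dct(grad, norm='ortho')
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)
    return idct(c, norm='ortho')

# toy example: 25% measurements of a 3-sparse (in DCT) signal, using
# random +-1 patterns (realizable on a DLP as pairs of binary patterns)
n, m = 256, 64
c_true = np.zeros(n); c_true[[3, 10, 40]] = [5.0, -3.0, 2.0]
x_true = idct(c_true, norm='ortho')
Phi = (2.0 * (np.random.rand(m, n) < 0.5) - 1.0) / np.sqrt(m)
x_hat = ista(Phi @ x_true, Phi)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```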

project report


Project title: Depth invariant reflectance imaging
Team: Henryk Blasinski

Surface reflectance spectra can be estimated computationally from a sequence of monochromatic images of a scene taken under known, narrowband illuminants such as LEDs. Reflectance estimates are then found by solving an inverse problem using quadratic programming. It is commonly assumed that the illuminants are spatially uniform and invariant to scene depth. This condition holds if the light sources are far away from the scene, but fails miserably when point sources, such as LEDs, are close to the objects of interest. This situation usually arises in microscopy or endoscopy, where nearby objects appear brighter than distant ones, irrespective of their actual reflectance properties. To solve this problem we will incorporate a depth camera into the imaging system, which will allow us to derive accurate, depth-dependent light fall-off models. Once the reference light intensity is known, reliable and depth-invariant scene reflectance estimates can be computed.
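
In the simplest case (a point source co-located with the camera, normal incidence, inverse-square fall-off) the depth correction reduces to multiplying by depth squared; a toy sketch of that special case, not the project's full calibrated fall-off model:

```python
import numpy as np

def reflectance_estimate(intensity, depth, source_power=1.0):
    # Inverse-square fall-off: intensity = source_power * R / depth^2,
    # so the depth-corrected reflectance is intensity * depth^2 / power.
    return intensity * depth ** 2 / source_power

# toy example: a wall of uniform 50% reflectance at varying distance
depth = np.linspace(0.2, 2.0, 5)              # meters
intensity = 0.5 / depth ** 2                  # brighter when close
print(reflectance_estimate(intensity, depth))   # 0.5 everywhere
```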

project report


Project title: Dynamic tone mapping
Team: Matt Yu

Many real-world scenes contain a high dynamic range. While modern cameras are capable of capturing the dynamic range of these scenes, displays still only show a low dynamic range. Many tone map operators exist, but very few consider the use of head-mounted displays. We create a dynamic tone map operator for panoramic high dynamic range images that takes into account a user's head position and the resulting viewport. The tone map operator normalizes the image shown to the user by the log-average luminance of the viewport. Furthermore, we use a simple model of eye adaptation to mimic the effects of light and dark adaptation.
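
The viewport normalization is essentially a Reinhard-style global operator applied per frame; a minimal sketch (without the adaptation model, with an assumed key value of 0.18 and synthetic luminance data):

```python
import numpy as np

def tonemap_viewport(lum, key=0.18, eps=1e-6):
    # Normalize by the log-average luminance of the current viewport,
    # then compress to [0, 1).
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # log-average luminance
    scaled = key * lum / log_avg
    return scaled / (1.0 + scaled)

# a dark viewport and a bright viewport of the same toy HDR panorama
panorama = np.random.rand(512, 2048) ** 4 * 1e4
dark, bright = panorama[:, :256], panorama[:, -256:] * 10.0
print(tonemap_viewport(dark).mean(), tonemap_viewport(bright).mean())
```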

project report


Project title: Seeing subcellular structures with computation
Team: Roshni Cooper

Microtubules are dynamic structures that form the basis of cellular structure and intracellular transport. Until recently, microtubules have been difficult to study, in part because of their small size. With confocal microscopy, we can observe the intensity changes in a bundle of these subcellular structures. This signal is corrupted by the point spread function and noise of the imaging system. This project will apply convex optimization techniques to solve this inverse problem and identify the locations of individual microtubules.
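
One convex formulation of this inverse problem is nonnegative least-squares deconvolution, solvable by projected gradient descent; a sketch under an assumed Gaussian PSF (not the project's calibrated confocal PSF):

```python
import numpy as np
from scipy.signal import fftconvolve

def deconvolve_nonneg(y, psf, iters=500, step=0.5):
    # Projected gradient descent for min ||psf * x - y||^2 s.t. x >= 0,
    # useful for localizing point-like structures.
    psf_flip = psf[::-1, ::-1]                   # adjoint of convolution
    x = np.zeros_like(y)
    for _ in range(iters):
        r = fftconvolve(x, psf, mode='same') - y
        x = np.clip(x - step * fftconvolve(r, psf_flip, mode='same'),
                    0.0, None)
    return x

# toy example: two nearby point sources blurred by a Gaussian PSF
g = np.exp(-(np.arange(-7, 8)[:, None] ** 2 + np.arange(-7, 8) ** 2) / 8.0)
g /= g.sum()
truth = np.zeros((64, 64)); truth[30, 30] = truth[30, 36] = 1.0
y = fftconvolve(truth, g, mode='same')
x = deconvolve_nonneg(y, g)
```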

no report, course not taken for credit


Project title: Depth based image segmentation
Team: Nathan Loewke

In this paper I investigate light field imaging as it might relate to the problem of image segmentation in time-lapse microscopy of cell cultures. I discuss the current field of light field imaging, depth-based image segmentation, and light field microscopy. I then discuss the process of gathering data that lends itself well to this problem, calibrating depth map data against ground-truth measurements, generating heat map overlays for quick error estimation, image segmentation performance, and depth discretization. Finally, I remark on how light field imaging might be applied to the world of microscopy and, in particular, automatic cell tracking.
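
The basic idea of depth-based segmentation can be conveyed with a toy threshold on a depth map (real data would call for a learned or Otsu threshold plus cleanup; everything below is synthetic):

```python
import numpy as np

def segment_by_depth(depth_map, margin=0.05):
    # Separate raised structures (cells) from the substrate by
    # thresholding the depth map at its median plus a margin.
    return depth_map > np.median(depth_map) + margin

depth = 0.02 * np.random.randn(128, 128)   # roughly flat substrate
depth[40:60, 40:60] += 0.2                 # a raised "cell"
mask = segment_by_depth(depth)
print(mask.sum(), "pixels labeled as cell")
```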

project report


Project title: Immersive drone cinematography
Team: Botao Hu and Qian Lin

We build an immersive aerial cinematography system combining programmed aerial cinematography with route planning and preview in a 3D virtual reality (VR) scene. The user has an Oculus-Rift-based VR experience previewing the Google Earth model of the scene they plan to videotape. Switching between the camera's first-person view and a global view, as well as multi-user interaction in the virtual world, is supported. The user can specify keyframes while viewing the scene from the camera's first-person view. These keyframes are subsequently used to construct a smooth trajectory, whose GPS coordinates are streamed to a quadrotor to execute the shot in autopilot mode with GPS tracking.
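
A simple way to turn sparse keyframes into a smooth, streamable trajectory is a cubic spline in time; the keyframe times and coordinates below are made up for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# keyframes: time (s) and coordinates (latitude, longitude, altitude m)
t_key = np.array([0.0, 10.0, 25.0, 40.0])
p_key = np.array([[37.4275, -122.1697, 30.0],
                  [37.4280, -122.1690, 45.0],
                  [37.4286, -122.1684, 45.0],
                  [37.4290, -122.1702, 25.0]])

spline = CubicSpline(t_key, p_key, axis=0)  # C2-smooth path via keyframes
t = np.linspace(0.0, 40.0, 400)             # waypoints to stream
waypoints = spline(t)
velocity = spline(t, 1)                     # derivative, for speed limits
```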

project report


Project title: Adaptive Learned Denoising
Team: Dash Bodington

Most current denoising algorithms rely on regional image similarities or prior assumptions about image properties, but it may be possible to develop smarter denoising methods with machine learning. By learning image features and corresponding adaptive filters, robust denoising with a high level of detail preservation can be performed. This project develops several learned denoising methods and compares them to state-of-the-art techniques.
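
At its simplest, a learned denoiser can be a linear filter fit by regression from noisy patches to clean center pixels; a toy sketch of that baseline (not one of the project's actual methods):

```python
import numpy as np

def train_patch_denoiser(noisy, clean, psize=5, lam=1e-3):
    # Ridge regression from each noisy patch to its clean center pixel;
    # the learned weights w act as a data-adaptive denoising filter.
    r = psize // 2
    X, y = [], []
    for i in range(r, noisy.shape[0] - r):
        for j in range(r, noisy.shape[1] - r):
            X.append(noisy[i - r:i + r + 1, j - r:j + r + 1].ravel())
            y.append(clean[i, j])
    X, y = np.asarray(X), np.asarray(y)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return w  # apply by correlating each noisy patch with w

clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + 0.1 * np.random.randn(64, 64)
w = train_patch_denoiser(noisy, clean)
```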

project report


Project title: A Guided User Experience Using Subtle Gaze Direction
Team: Eli Ben-Joseph and Eric Greenstein

We are investigating how illumination modulation can be used to subtly guide a user’s gaze on a display. By taking advantage of the differences in illumination sensitivity of the eye’s rod and cone cells, modulations will be apparent in the periphery of a user’s field of view and disappear when they look at regions of interest. We will use an eye-tracking system to quantitatively measure where a user’s eyes are on the monitor to determine effectiveness. If successful, this technology can create new guided user experiences and improve training of doctors and police.
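
A sketch of the control logic: modulate luminance at the region of interest, and scale the modulation down to zero as the tracked gaze approaches it. The field-of-view conversion, fade radius, and modulation rate below are placeholders, not measured values:

```python
import numpy as np

def modulation_gain(gaze_xy, target_xy, fade_deg=5.0, px_per_deg=40.0):
    # Full-strength modulation in the periphery, zero at fixation.
    d = np.asarray(gaze_xy, float) - np.asarray(target_xy, float)
    dist_deg = np.hypot(d[0], d[1]) / px_per_deg
    return float(np.clip(dist_deg / fade_deg, 0.0, 1.0))

def modulated_luminance(t, base, gain, amp=0.1, hz=8.0):
    # Sinusoidal luminance modulation at the region of interest.
    return base * (1.0 + gain * amp * np.sin(2.0 * np.pi * hz * t))

# modulation visible while the gaze is far, gone once the user looks
print(modulation_gain((100, 100), (900, 500)))   # ~1.0 -> modulate
print(modulation_gain((898, 502), (900, 500)))   # ~0.0 -> stop
```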

project report


Project title: Modelling the Stereo-lithography Process
Team: Iretiayo Akinola

Absorption, scattering, reflections, and other physical phenomena of light all interplay in a complex way during the stereolithography process of 3D printing. In this project, we investigate a means of modelling all these factors holistically. We show through simulations and experiments that there is spatial linearity in the response of resins to light sources. Further experiments could build on these initial results to similarly characterize the curing process as a function of exposure intensity and duration.

project report


Project title: Transmission Electron Microscope Tomography of Nanometer-scale Lithography Structures
Team: Gregory Pitner

We are generating a 3D tomography image of a directed self-assembly (DSA) di-block copolymer cell. DSA is a hot technology in nanoelectronics for lithography roadmap extension. Visualization of structures in this size domain requires very specialized sample preparation, imaging conditions, and computational processing of the resulting images. We have captured these images recently, and will implement the tomography image processing pipeline in MATLAB or Avizo, thereby producing the first 3D tomographic image of this cell structure.
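
The core of such a pipeline is tomographic reconstruction from a tilt series; a minimal filtered back-projection sketch in Python with scikit-image (rather than the MATLAB/Avizo tools the project will use), with a synthetic phantom and a limited tilt range meant to mimic TEM:

```python
import numpy as np
from skimage.transform import radon, iradon

# toy phantom standing in for one slice of the DSA cell
phantom = np.zeros((128, 128))
phantom[40:88, 56:72] = 1.0               # a block-copolymer-like slab

angles = np.linspace(-70.0, 70.0, 141)    # limited tilt range, as in TEM
sinogram = radon(phantom, theta=angles, circle=False)   # projections
recon = iradon(sinogram, theta=angles, filter_name='ramp', circle=False)
```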

project report