Nitish Padmanaban

PhD Candidate, Electrical Engineering
Packard Bldg, Rm 225

About

I’m a third-year PhD candidate at Stanford EE, supported by an NSF Graduate Research Fellowship. I’m advised by Prof. Gordon Wetzstein as part of the Stanford Computational Imaging Lab.

My research currently focuses on optical and computational techniques for virtual and augmented reality. In particular, I’ve spent the past couple of years building and evaluating displays that alleviate the vergence–accommodation conflict, and have also looked into the role of visual–vestibular conflicts in causing motion sickness in VR.

I received my Master’s from Stanford EE in 2017, and a Bachelor’s in EECS from UC Berkeley in 2015. In my time at Berkeley, my coursework focused on signal processing, and I was an undergraduate researcher in the Berkeley Imaging Systems Lab.

Publications

Towards a Machine-Learning Approach for Sickness Prediction in 360° Stereoscopic Videos

Virtual reality systems are widely believed to be the next major computing platform. There are, however, some barriers to adoption that must be addressed, such as that of motion sickness – which can lead to undesirable symptoms including postural instability, headaches, and nausea. Motion sickness in virtual reality occurs as a result of moving visual stimuli that cause users to perceive self-motion while they remain stationary in the real world. There are several contributing factors to both this perception of motion and the subsequent onset of sickness, including field of view, motion velocity, and stimulus depth. We verify first that differences in vection due to relative stimulus depth remain correlated with sickness. Then, we build a dataset of stereoscopic 3D videos and their corresponding sickness ratings in order to quantify their nauseogenicity, which we make available for future use. Using this dataset, we train a machine learning algorithm on hand-crafted features (quantifying speed, direction, and depth as functions of time) from each video, learning the contributions of these various features to the sickness ratings. Our predictor generally outperforms a naïve estimate, but is ultimately limited by the size of the dataset. However, our result is promising and opens the door to future work with more extensive datasets. This and further advances in this space have the potential to alleviate developer and end user concerns about motion sickness in the increasingly commonplace virtual world.
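
The abstract describes the pipeline at a high level. As a rough illustration of the feature-based regression it implies, here is a minimal Python sketch; the three summary features, the choice of ridge regression, and the synthetic data are all assumptions made for illustration, not the paper's actual features or model.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-video features: e.g., mean speed, dominant motion
# direction, and mean stimulus depth, each summarized over time.
n_videos = 19
X = rng.normal(size=(n_videos, 3))
y = rng.uniform(1, 10, size=n_videos)  # stand-in sickness ratings

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A regularized linear model; tiny datasets overfit easily.
model = Ridge(alpha=1.0).fit(X_train, y_train)

# Naive baseline: always predict the mean training rating.
naive = np.full_like(y_test, y_train.mean())

print("model MSE:", mean_squared_error(y_test, model.predict(X_test)))
print("naive MSE:", mean_squared_error(y_test, naive))

With a dataset this small, the comparison against the naive baseline carries most of the evidential weight, which is consistent with the dataset-size limitation the abstract notes.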

Citation

Padmanaban, N.*, Ruban, T.*, Sitzmann, V., Norcia, A. M., & Wetzstein, G. (2018). Towards a Machine-Learning Approach for Sickness Prediction in 360° Stereoscopic Videos. IEEE Transactions on Visualization and Computer Graphics, 24(4), 1594–1603.

BibTeX

@article{Padmanaban:2018:Sickness, 
    author={Padmanaban, Nitish and Ruban, Timon and Sitzmann, Vincent and Norcia, Anthony M and Wetzstein, Gordon},
    journal={IEEE Transactions on Visualization and Computer Graphics},
    title={Towards a Machine-Learning Approach for Sickness Prediction in 360$^\circ$ Stereoscopic Videos}, 
    year={2018},
    volume={24},
    number={4}, 
    pages={1594--1603}
}

Accommodation-Invariant Computational Near-Eye Displays

Although emerging virtual and augmented reality (VR/AR) systems can produce highly immersive experiences, they can also cause visual discomfort, eyestrain, and nausea. One of the sources of these symptoms is a mismatch between vergence and focus cues. In current VR/AR near-eye displays, a stereoscopic image pair drives the vergence state of the human visual system to arbitrary distances, but the accommodation, or focus, state of the eyes is optically driven towards a fixed distance. In this work, we introduce a new display technology, dubbed accommodation-invariant (AI) near-eye displays, to improve the consistency of depth cues in near-eye displays. Rather than producing correct focus cues, AI displays are optically engineered to produce visual stimuli that are invariant to the accommodation state of the eye. The accommodation system can then be driven by stereoscopic cues, and the mismatch between vergence and accommodation state of the eyes is significantly reduced. We validate the principle of operation of AI displays using a prototype display that allows for the accommodation state of users to be measured while they view visual stimuli using multiple different display modes.
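
To build intuition for what "invariant to the accommodation state" means, here is a toy geometric-blur model in Python. It is a deliberate simplification of my own, not the paper's optical analysis: it compares a fixed-focus display against one whose focal distance is swept rapidly across a dioptric range, so that the time-averaged retinal blur changes far less as the eye's accommodation state varies.

import numpy as np

PUPIL_MM = 4.0  # assumed pupil diameter

def blur(accom_D, display_D):
    """Geometric blur (arbitrary units) ~ dioptric defocus x pupil."""
    return PUPIL_MM * np.abs(accom_D - display_D)

accom = np.linspace(0.0, 3.0, 7)  # eye accommodation states, in diopters

# Conventional fixed-focus display at 1.5 D (about 0.67 m).
fixed = blur(accom, 1.5)

# Accommodation-invariant mode: focus swept quickly over 0-3 D; the
# eye integrates over the sweep, so average the blur across it.
sweep = np.linspace(0.0, 3.0, 100)
swept = np.array([blur(a, sweep).mean() for a in accom])

for a, f, s in zip(accom, fixed, swept):
    print(f"accommodation {a:.1f} D: fixed {f:.2f}, swept {s:.2f}")

In this toy model the fixed-focus blur swings from zero to its maximum as accommodation changes, while the swept-focus blur stays within a much narrower band, which is the sense in which the stimulus becomes approximately invariant to accommodation.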

Citation

Konrad, R., Padmanaban, N., Molner, K., Cooper, E. A., & Wetzstein, G. (2017). Accommodation-Invariant Computational Near-Eye Displays. ACM Transactions on Graphics (SIGGRAPH), 36(4), 88:1–88:12.

BibTeX

@article{Konrad:2017:Accommodation,
    title={Accommodation-Invariant Computational Near-Eye Displays},
    author={Konrad, Robert and Padmanaban, Nitish and Molner, Keenan and Cooper, Emily A and Wetzstein, Gordon},
    journal={ACM Transactions on Graphics (TOG)},
    volume={36},
    number={4},
    pages={88:1--88:12},
    year={2017}
}

Evaluation of Accommodation Response to Monovision for Virtual Reality

Emerging virtual and augmented reality (VR/AR) systems can produce highly immersive experiences, but also induce visual discomfort, eyestrain, and nausea for some users. One of the sources of these symptoms is the lack of natural focus cues in all current VR/AR near-eye displays. These displays project stereoscopic image pairs, driving the vergence state of the eyes to arbitrary distances. However, the accommodation, or focus, state of the eyes is optically driven to a fixed distance. In this work, we empirically evaluate monovision: a simple yet unconventional method for potentially driving the accommodation state of the eyes to two distances by allowing each eye to drive focus to a different distance.
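
The dioptric arithmetic behind monovision is simple; the sketch below uses illustrative distances, not the study's actual settings, to show the per-eye focal offsets.

# Monovision: drive each eye's focal distance independently.

def diopters(distance_m: float) -> float:
    """Dioptric demand of a target at distance_m (infinity -> 0 D)."""
    return 0.0 if distance_m == float("inf") else 1.0 / distance_m

left_focus_m = float("inf")  # one eye focused at optical infinity
right_focus_m = 0.5          # the other focused at 0.5 m

print(f"left eye:  {diopters(left_focus_m):.2f} D")
print(f"right eye: {diopters(right_focus_m):.2f} D")  # a 2 D offset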

Citation

Padmanaban, N., Konrad, R., & Wetzstein, G. (2017, June). Evaluation of Accommodation Response to Monovision for Virtual Reality. In 3D Image Acquisition and Display: Technology, Perception and Applications (paper DM2F.3). Optical Society of America.

BibTeX

@inproceedings{Padmanaban:2017:Evaluation,
    title={Evaluation of Accommodation Response to Monovision for Virtual Reality},
    author={Padmanaban, Nitish and Konrad, Robert and Wetzstein, Gordon},
    booktitle={3D Image Acquisition and Display: Technology, Perception and Applications},
    pages={DM2F.3},
    year={2017},
    month={June},
    organization={Optical Society of America}
}

Optimizing Virtual Reality for All Users through Gaze-Contingent and Adaptive Focus Displays

From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
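
As a back-of-the-envelope sketch of the gaze-contingent update described above (my own simplification with assumed parameters, not the prototype's control code): estimate the fixation distance from the eyes' convergence angle, then set the focus-tunable lens to the corresponding dioptric power, folding in the user's refractive error.

import math

IPD_M = 0.064  # assumed interpupillary distance, in meters

def vergence_distance_m(convergence_angle_rad: float) -> float:
    """Fixation distance implied by the binocular convergence angle."""
    return IPD_M / (2.0 * math.tan(convergence_angle_rad / 2.0))

def lens_setting_D(fixation_m: float, refractive_error_D: float = 0.0) -> float:
    """Tunable-lens power: match the fixation depth, then fold in the
    user's prescription (e.g., -2.0 D for a two-diopter myope)."""
    return 1.0 / fixation_m + refractive_error_D

angle = math.radians(3.7)  # roughly a 1 m fixation for a 64 mm IPD
d = vergence_distance_m(angle)
print(f"fixation {d:.2f} m -> lens {lens_setting_D(d, -2.0):+.2f} D")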

Citation

Padmanaban, N., Konrad, R., Stramer, T., Cooper, E. A., & Wetzstein, G. (2017). Optimizing Virtual Reality for All Users through Gaze-Contingent and Adaptive Focus Displays. Proceedings of the National Academy of Sciences of the United States of America, 114(9), 2183–2188.

BibTeX

@article{Padmanaban:2017:Optimizing,
    title={Optimizing Virtual Reality for All Users through Gaze-Contingent and Adaptive Focus Displays},
    author={Padmanaban, Nitish and Konrad, Robert and Stramer, Tal and Cooper, Emily A and Wetzstein, Gordon},
    journal={Proceedings of the National Academy of Sciences},
    volume={114}, 
    number={9}, 
    pages={2183--2188}, 
    year={2017}
}

Abstracts

Computational Focus-Tunable Near-Eye Displays

Immersive virtual and augmented reality systems (VR/AR) are entering the consumer market and have the potential to profoundly impact our society. Applications of these systems range from communication, entertainment, education, collaborative work, simulation and training to telesurgery, phobia treatment, and basic vision research. In every immersive experience, the primary interface between the user and the digital world is the near-eye display. Thus, developing near-eye display systems that provide a high-quality user experience is of the utmost importance. Many characteristics of near-eye displays that define the quality of an experience, such as resolution, refresh rate, contrast, and field of view, have been significantly improved in recent years. However, a significant source of visual discomfort prevails: the vergence–accommodation conflict (VAC). This visual conflict results from the fact that vergence cues, but not focus cues, are simulated in near-eye display systems. Indeed, natural focus cues are not supported by any existing near-eye display. Afforded by focus-tunable optics, we explore unprecedented display modes that tackle this issue in multiple ways with the goal of increasing visual comfort and providing more realistic visual experiences.

Citation

Konrad, R., Padmanaban, N., Cooper, E. A., & Wetzstein, G. (2016, July). Computational Focus-Tunable Near-Eye Displays. In ACM SIGGRAPH 2016 Emerging Technologies (pp. 3:1–3:2). ACM.

BibTeX

@inproceedings{Konrad:2016:Computational,
    title={Computational Focus-Tunable Near-Eye Displays},
    author={Konrad, Robert and Padmanaban, Nitish and Cooper, Emily A and Wetzstein, Gordon},
    booktitle={ACM SIGGRAPH 2016 Emerging Technologies},
    pages={3:1--3:2},
    year={2016},
    month={July},
    organization={ACM}
}

Active Feedback Real Time MPI Control Software

Real-time MPI (magnetic particle imaging) has the potential to serve as a noninvasive alternative to X-ray angiography. To achieve real-time MPI, we must (a) generate the vector drive-field waveforms at a location governed in real time by the physician, (b) acquire the MPI image data in real time, and (c) reconstruct the MPI images in real time. We have designed our MPI data acquisition and control (DAQ) system to enable all of these steps in real time.
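
The three requirements map naturally onto a pipelined control loop. The Python sketch below is a structural outline only; get_physician_target, generate_drive_waveform, acquire_chunk, reconstruct_image, and display are hypothetical placeholders, not the actual DAQ's API.

import queue
import threading

stop = threading.Event()
raw_data = queue.Queue(maxsize=8)  # buffer between acquisition and recon

def control_loop(get_physician_target, generate_drive_waveform, acquire_chunk):
    """(a) Generate drive-field waveforms for the operator's requested
    location and (b) acquire the resulting MPI data, pushing it downstream."""
    while not stop.is_set():
        target = get_physician_target()        # live physician input
        waveform = generate_drive_waveform(target)
        raw_data.put(acquire_chunk(waveform))  # blocks if recon lags

def recon_loop(reconstruct_image, display):
    """(c) Reconstruct and display images as data arrives."""
    while not stop.is_set():
        display(reconstruct_image(raw_data.get()))

A bounded queue between acquisition and reconstruction keeps the operator-facing loop responsive even when reconstruction momentarily falls behind.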

Citation

Padmanaban, N., Orendorff, R. D., Konkle, J. J., Goodwill, P. W., & Conolly, S. M. (2015, March). Active Feedback Real Time MPI Control Software. In 2015 5th International Workshop on Magnetic Particle Imaging (IWMPI) (pp. 1–1). IEEE.

BibTeX

@inproceedings{Padmanaban:2015:Active,
    title={Active Feedback Real Time MPI Control Software},
    author={Padmanaban, Nitish and Orendorff, Ryan D and Konkle, Justin J and Goodwill, Patrick W and Conolly, Steven M},
    booktitle={Magnetic Particle Imaging (IWMPI), 2015 5th International Workshop on},
    pages={1--1},
    year={2015},
    month={March},
    organization={IEEE}
}

Presentations

Build Your Own VR Display: An Introduction to VR Display Systems for Hobbyists and Educators

Electronic Imaging Short Courses, January 2018

Optimizing VR for All Users Through Adaptive Focus Displays

ACM SIGGRAPH Talks, July 2017

Build Your Own VR System: An Introduction to VR Displays and Cameras for Hobbyists and Educators

ACM SIGGRAPH Courses, July 2017

Gaze-Contingent Adaptive Focus Near-Eye Displays

SID Display Week Invited Talk, May 2017

Computational Focus-Tunable Near-Eye Displays

NVIDIA GPU Technology Conference, May 2017

Panel: Frontiers in Technology

Stanford mediaX – Sensing and Tracking for 3D Narratives, October 2016