
Robert Konrad

PhD Candidate, Electrical Engineering, Stanford University

rkkonrad [at] stanford [dot] edu

350 Serra Mall
Packard Building, Room 225
Stanford, CA 94305



About

I am a 5th year PhD candidate in the Electrical Engineering Department at Stanford University, advised by Professor Gordon Wetzstein as part of the Stanford Computational Imaging Lab. My research interests lie at the intersection of computational displays and human physiology, with a specific focus on virtual and augmented reality systems. For such systems, I have worked on supporting various depth cues, with a particular interest in focus cues, as well as on computationally efficient cinematic VR capture systems. I received my Bachelor's Degree from the ECE department at the University of Toronto in 2014 and my Master's Degree from the EE Department at Stanford University in 2016.

Publications

Autofocals: Evaluating gaze-contingent eyeglasses for presbyopes

(project page, Science Advances, full text)

As humans age, they gradually lose the ability to accommodate, or refocus, to near distances because of the stiffening of the crystalline lens. This condition, known as presbyopia, affects nearly 20% of people worldwide. We design and build a new presbyopia correction, autofocals, to externally mimic the natural accommodation response, combining eye tracker and depth sensor data to automatically drive focus-tunable lenses. We evaluated 19 users on visual acuity, contrast sensitivity, and a refocusing task. Autofocals exhibit better visual acuity when compared to monovision and progressive lenses while maintaining similar contrast sensitivity. On the refocusing task, autofocals are faster and, compared to progressives, also significantly more accurate. In a separate study, a majority of users (23 of 37) ranked autofocals as the best correction in terms of ease of refocusing. Our work demonstrates the superiority of autofocals over current forms of presbyopia correction and could affect the lives of millions.

N. Padmanaban, R. Konrad, G. Wetzstein. "Autofocals: Evaluating gaze-contingent eyeglasses for presbyopes", Science Advances 5, eaav6187, 2019.
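
At its core, the autofocal control loop maps an estimate of where the user is looking to a lens power. The Python sketch below is a minimal, hypothetical version of that loop, not the actual system: it looks up a depth map at the tracked gaze point, falls back to a vergence-based estimate when the depth sample is invalid, and converts the fixation distance to diopters for the focus-tunable lenses. All names and the fallback policy are illustrative assumptions.

```python
import numpy as np

def fixation_distance_m(gaze_px, depth_map, vergence_dist_m=None):
    """Estimate the fixation distance in meters.

    gaze_px:         (x, y) gaze point in depth-camera pixel coordinates.
    depth_map:       HxW array of scene depths in meters.
    vergence_dist_m: fallback distance triangulated from binocular gaze.
    """
    x, y = int(gaze_px[0]), int(gaze_px[1])
    h, w = depth_map.shape
    if 0 <= x < w and 0 <= y < h and np.isfinite(depth_map[y, x]):
        return float(depth_map[y, x])
    # No valid depth sample at the gaze point: fall back to vergence.
    return vergence_dist_m if vergence_dist_m is not None else 2.0

def lens_power_diopters(distance_m, max_power=3.0):
    """Lens power needed to focus at distance_m (thin-lens approximation)."""
    return min(1.0 / max(distance_m, 1e-3), max_power)

# One iteration of the control loop: a scene 0.5 m away everywhere.
depth_map = np.full((480, 640), 0.5)
power = lens_power_diopters(fixation_distance_m((320, 240), depth_map))
print(f"set focus-tunable lens to {power:.2f} D")  # -> 2.00 D
```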


Gaze-Contingent Ocular Parallax Rendering for Virtual Reality

(project page, arXiv, full text)

Immersive computer graphics systems strive to generate perceptually realistic user experiences. Current-generation virtual reality (VR) displays are successful in accurately rendering many perceptually important effects, including perspective, disparity, motion parallax, and other depth cues. In this paper we introduce ocular parallax rendering, a technology that accurately renders small amounts of gaze-contingent parallax capable of improving depth perception and realism in VR. Ocular parallax describes the small amounts of depth-dependent image shifts on the retina that are created as the eye rotates. The effect occurs because the centers of rotation and projection of the eye are not the same. We study the perceptual implications of ocular parallax rendering by designing and conducting a series of user experiments. Specifically, we estimate perceptual detection and discrimination thresholds for this effect and demonstrate that it is clearly visible in most VR applications. Additionally, we show that ocular parallax rendering provides an effective ordinal depth cue and it improves the impression of realistic depth in VR.

R. Konrad, A. Angelopoulos, G. Wetzstein. "Gaze-Contingent Ocular Parallax Rendering for Virtual Reality", arXiv preprint, 2019.
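
The geometry behind the effect fits in a few lines. The sketch below assumes a roughly 6 mm offset between the eye's center of rotation and its center of projection (the exact value is an assumption here and varies per user): each frame, the virtual camera's center of projection is displaced along the current gaze direction, and the renderer rebuilds its view matrix from that gaze-contingent position. Because near and far scene points shift by different amounts under this small translation, the rendered parallax is depth dependent.

```python
import numpy as np

def gaze_dir(yaw_deg, pitch_deg):
    """Unit gaze vector from yaw/pitch in degrees; looks down -z at rest."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([np.sin(yaw) * np.cos(pitch),
                     np.sin(pitch),
                     -np.cos(yaw) * np.cos(pitch)])

def center_of_projection(rot_center, yaw_deg, pitch_deg, d_cop=0.006):
    """Center of projection d_cop meters in front of the eye's center of
    rotation, along the current gaze direction (d_cop is illustrative)."""
    return rot_center + d_cop * gaze_dir(yaw_deg, pitch_deg)

# The center of projection translates slightly as the eye rotates; feeding
# it into the per-eye view matrix produces the ocular parallax cue.
rot_center = np.zeros(3)
for yaw in (0.0, 10.0, 20.0):
    cop = center_of_projection(rot_center, yaw, 0.0)
    print(f"yaw {yaw:4.1f} deg -> COP {cop.round(4)}")
```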


SpinVR: Towards Live-Streaming 3D Virtual Reality Video

(project page, ACM, full text)

Live-streaming 360° video is an appealing way to experience remote events, but live capture has largely been limited to 2D content because stitching the output of multi-camera rigs into stereo panoramas is computationally expensive. SpinVR sidesteps the stitching step by capturing the omnidirectional stereo (ODS) projection directly with rotating cameras, acquiring the panorama one column at a time, which brings live-streamed 3D virtual reality video within reach.

R. Konrad*, D. G. Dansereau*, A. Masood, G. Wetzstein. “SpinVR: Towards Live-Streaming 3D Virtual Reality Video”, ACM SIGGRAPH Asia (Transactions on Graphics 36, 6), 2017.
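
The ODS projection that SpinVR samples directly is compact enough to sketch. In the standard ODS model, every panorama column corresponds to a pair of rays tangent to a viewing circle whose radius is about half the interpupillary distance; a spinning rig can sweep out these rays physically instead of synthesizing them from stitched views. The code below writes out that ray geometry under one common sign convention; the radius and convention are assumptions, not the paper's exact calibration.

```python
import numpy as np

def ods_column_rays(theta, r=0.032):
    """Left/right-eye ray (origin, direction) for one ODS panorama column.

    theta: azimuth of the column in radians.
    r:     viewing-circle radius, roughly half the interpupillary distance.
    Rays originate on the viewing circle and leave it tangentially, with
    the two eyes offset to opposite sides of the view direction.
    """
    direction = np.array([np.cos(theta), np.sin(theta), 0.0])
    rays = {}
    for eye, sign in (("left", +1), ("right", -1)):
        origin = r * np.array([np.cos(theta + sign * np.pi / 2),
                               np.sin(theta + sign * np.pi / 2), 0.0])
        rays[eye] = (origin, direction)
    return rays

# A full spin of the rig visits every column azimuth once per revolution.
for theta in np.linspace(0, np.pi / 2, 3):
    origin, d = ods_column_rays(theta)["left"]
    print(f"theta {np.degrees(theta):5.1f} deg: "
          f"origin {origin.round(3)}, dir {d.round(3)}")
```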


Accommodation-invariant Computational Near-eye Displays

(project page, ACM, full text)

Conventional near-eye displays present all content at a single, fixed focal distance, creating the vergence-accommodation conflict that contributes to visual discomfort in VR. Rather than dynamically refocusing the display, this work engineers the display itself to be accommodation-invariant: a focus-tunable lens sweeps through a range of focal powers faster than the eye can respond, so the retinal image remains approximately equally sharp regardless of where the viewer accommodates. With image sharpness decoupled from the eye's focus state, vergence is left to drive the accommodation response toward natural behavior.

R. Konrad, N. Padmanaban, K. Molner, E. A. Cooper, G. Wetzstein. “Accommodation-invariant Computational Near-eye Displays”, ACM SIGGRAPH (Transactions on Graphics 36, 4), 2017.
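
A toy thin-lens simulation shows why a focal sweep flattens the dependence of retinal blur on accommodation. Below, the blur-circle size of a fixed-focus display varies strongly with the eye's focus state, while the blur integrated over a fast 0-3 D sweep varies far less over most of the range; the pupil size, sweep range, and blur model are illustrative assumptions, not the paper's psychophysical model.

```python
import numpy as np

def blur_diameter(accom_D, display_D, pupil_mm=4.0):
    """Retinal blur-circle size (arbitrary units, thin-lens model) when the
    eye accommodates to accom_D diopters and the displayed image sits at
    display_D diopters."""
    return pupil_mm * np.abs(accom_D - display_D)

accom = np.linspace(0.0, 3.0, 7)              # eye focus states, in diopters

# Fixed-focus display at 1.5 D: sharp only when the eye focuses at 1.5 D.
fixed = blur_diameter(accom, 1.5)

# Focal sweep: display power sweeps 0..3 D within each frame, faster than
# the eye responds; the retina integrates blur over the whole sweep.
sweep = np.linspace(0.0, 3.0, 61)
swept = np.array([blur_diameter(a, sweep).mean() for a in accom])

print("accommodation (D):", accom)
print("fixed-focus blur :", fixed.round(2))   # 6.0 .. 0.0 .. 6.0
print("swept-focus blur :", swept.round(2))   # much flatter across states
```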


Evaluation of Accommodation Response to Monovision for Virtual Reality

(OSA, full text)

Emerging virtual and augmented reality (VR/AR) systems can produce highly immersive experiences, but they can also induce visual discomfort, eyestrain, and nausea for some users. One source of these symptoms is the lack of natural focus cues in all current VR/AR near-eye displays. These displays project stereoscopic image pairs, driving the vergence state of the eyes to arbitrary distances, while the accommodation, or focus, state of the eyes is optically driven to a fixed distance. In this work, we empirically evaluate monovision: a simple yet unconventional method for potentially driving the accommodation state of the eyes to two distances by allowing each eye to drive focus to a different distance.

N. Padmanaban, R. Konrad, G. Wetzstein. "Evaluation of Accommodation Response to Monovision for Virtual Reality", in 3D Image Acquisition and Display: Technology, Perception and Applications, OSA Technical Digest (Optical Society of America), paper DM2F.3, 2017.
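
The principle is just dioptric arithmetic: each eye is corrected to a different focal distance, so each eye has its own range of distances it sees sharply, with a possible gap in between. A toy example, with illustrative focal settings and depth-of-focus tolerance:

```python
# Monovision: each eye focused to a different distance (values illustrative).
near_eye_D, far_eye_D = 2.5, 0.5      # 2.5 D -> 0.4 m, 0.5 D -> 2.0 m

def in_focus(eye_D, object_dist_m, depth_of_focus_D=0.3):
    """True if an object is acceptably sharp for an eye focused at eye_D,
    given a simple depth-of-focus tolerance in diopters."""
    return abs(1.0 / object_dist_m - eye_D) <= depth_of_focus_D

for d in (0.4, 0.7, 3.0):
    print(f"{d:3.1f} m -> near eye sharp: {in_focus(near_eye_D, d)}, "
          f"far eye sharp: {in_focus(far_eye_D, d)}")
# 0.4 m: near eye only; 3.0 m: far eye only; 0.7 m falls in the gap where
# neither eye is within its depth of focus.
```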


Optimizing virtual reality for all users through gaze-contingent and adaptive focus display

(project page, PNAS, full text)

From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current-generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.

N. Padmanaban, R. Konrad, T. Stramer, E. A. Cooper, G. Wetzstein. "Optimizing virtual reality for all users through gaze-contingent and adaptive focus display", Proceedings of the National Academy of Sciences, 2017.
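
The dioptric bookkeeping behind gaze-contingent, prescription-aware focus is simple to sketch. Assuming a thin-lens model and a spherical refractive error only, the display should present the virtual image at the simulated object's dioptric distance, shifted by the user's prescription; how that target maps to focus-tunable lens power or display actuation depends on the specific optical design, so the function below is illustrative rather than the prototypes' calibration.

```python
def adaptive_image_distance_D(gaze_depth_m, prescription_D=0.0):
    """Dioptric distance at which to present the virtual image.

    gaze_depth_m:   depth of the fixated virtual object, from gaze tracking.
    prescription_D: user's spherical refractive error (negative = myopia).
    """
    simulated_D = 1.0 / max(gaze_depth_m, 1e-3)
    return simulated_D - prescription_D

# A -2 D myope viewing a simulated object at optical infinity needs the
# image presented at ~2 D (0.5 m), inside their unaided sharp range.
print(adaptive_image_distance_D(100.0, 0.0))   # emmetrope, far object -> ~0 D
print(adaptive_image_distance_D(100.0, -2.0))  # 2 D myope -> ~2 D
print(adaptive_image_distance_D(0.5, -2.0))    # near object -> 4 D (0.25 m)
```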


Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays

(project page, ACM, full text)

Emerging virtual reality (VR) displays must overcome the prevalent issue of visual discomfort to provide high-quality and immersive user experiences. In particular, the mismatch between vergence and accommodation cues inherent to most stereoscopic displays has been a long-standing challenge. In this paper, we evaluate several adaptive display modes afforded by focus-tunable optics or actuated displays that have the promise to mitigate visual discomfort caused by the vergence-accommodation conflict and to improve performance in VR environments. We also explore monovision as an unconventional mode that allows each eye of an observer to accommodate to a different distance. While this technique is common practice in ophthalmology, we are the first to report on its effectiveness for VR applications, using a custom-built setup. We demonstrate that monovision and other focus-tunable display modes can provide better user experiences and improve user performance in terms of reaction times and accuracy, particularly for nearby simulated distances in VR.

R. Konrad, E. A. Cooper, G. Wetzstein. "Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays", Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI'16), 2016.
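
The display modes compared in this study can be summarized as per-eye focal-power policies. The sketch below contrasts a conventional fixed focal plane, a dynamic mode that refocuses to the fixated distance, and monovision; the mode names and specific powers are illustrative stand-ins, not the paper's exact experimental conditions.

```python
def focal_power_D(mode, eye, fixation_dist_m):
    """Per-eye display focal power (diopters) under three display modes.

    mode:            'conventional', 'dynamic', or 'monovision' (illustrative).
    eye:             'left' or 'right'.
    fixation_dist_m: distance of the fixated object, e.g. from the rendered
                     scene or gaze tracking.
    """
    if mode == "conventional":                 # fixed focal plane, both eyes
        return 1.5                             # ~0.67 m
    if mode == "dynamic":                      # focal plane follows fixation
        return 1.0 / max(fixation_dist_m, 1e-3)
    if mode == "monovision":                   # each eye fixed, but different
        return 2.5 if eye == "left" else 0.5
    raise ValueError(f"unknown mode: {mode}")

for mode in ("conventional", "dynamic", "monovision"):
    powers = [focal_power_D(mode, e, 0.5) for e in ("left", "right")]
    print(f"{mode:12s} at 0.5 m fixation -> L/R powers (D): {powers}")
```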


A GPU-Accelerated Physical Layer for Simulating Wireless Networks

(ACM)

In recent years, graphics processing units (GPUs) have been leveraged to speed up massively parallel computations. Knowing the path loss between nodes in a wireless network is crucial to accurately simulating physical layer effects in wireless network simulators and emulators. Because path loss and interference calculations are repeated for every transmitter and receiver pair, leveraging GPU computing to parallelize these calculations can significantly reduce processing time. In this paper, we present an implementation of a high-fidelity GPU-accelerated PHY that calculates path loss and interference over time for every receiver/transmitter pair using realistically defined node antenna patterns. We compare performance against traditional CPU calculations and demonstrate that offloading these parallel computations to the GPU yields significant gains for wireless network simulation and emulation. Additionally, we present GPU limitations and design considerations to aid future GPU-based wireless simulation implementations.

R. Konrad, B. Hamilton, B. Cheng. "A GPU-Accelerated Physical Layer for Simulating Wireless Networks", Proceedings of the 17th Communications & Networking Simulation Symposium (CNS'14), 2014.
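
The structure that makes this problem GPU-friendly is that every transmitter/receiver pair is independent. The NumPy sketch below evaluates a free-space path-loss model for all pairs at once via broadcasting, the same one-thread-per-pair data parallelism the paper maps onto the GPU; the free-space model here is a simplified stand-in for the paper's higher-fidelity model with realistic antenna patterns.

```python
import numpy as np

def pairwise_path_loss_db(tx_pos, rx_pos, freq_hz=2.4e9):
    """Free-space path loss (dB) for every transmitter/receiver pair.

    tx_pos: (T, 3) transmitter positions in meters.
    rx_pos: (R, 3) receiver positions in meters.
    Returns a (T, R) matrix; broadcasting evaluates all pairs at once.
    """
    d = np.linalg.norm(tx_pos[:, None, :] - rx_pos[None, :, :], axis=-1)
    d = np.maximum(d, 1.0)                     # clamp unrealistic near-field
    c = 299_792_458.0                          # speed of light, m/s
    return (20 * np.log10(d) + 20 * np.log10(freq_hz)
            + 20 * np.log10(4 * np.pi / c))

rng = np.random.default_rng(0)
tx = rng.uniform(0, 1000, size=(64, 3))       # 64 transmitters
rx = rng.uniform(0, 1000, size=(256, 3))      # 256 receivers
print(pairwise_path_loss_db(tx, rx).shape)    # (64, 256), one loss per pair
```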


Media