About Me

I am a Research Scientist in the Department of Psychiatry and Behavioral Sciences at Stanford University, working with Dr. Kilian M. Pohl and Dr. Edith V. Sullivan at the intersection of machine learning and translational neuroimaging research. I was a postdoctoral research fellow in the same lab from 2017 to 2019.

I received my Ph.D. from the Department of Computer Science at the University of North Carolina at Chapel Hill, where my principal advisor was Prof. Stephen Pizer and I was co-advised by Prof. Marc Niethammer and Prof. Ron Alterovitz. Before joining UNC, I received my B.S. degree from Shanghai Jiao Tong University.

I am generally interested in using image analysis and machine learning techniques to improve the detection, diagnosis, and treatment of diseases. My research interests lie in the areas of non-linear statistics, machine learning, and computational neuroscience.

Contact: qingyuz -at- stanford dot edu
[Google Scholar][CV][LinkedIn][Github]

Recent Work

Battling Confounders of Deep Learning Models

The presence of confounding effects is one of the most critical challenges in applying deep learning techniques to medical applications. Confounders are extraneous variables that influence both the input and output variables and can thereby distort both the training and the interpretation of deep learning models. While confounder control has long been central to traditional statistical modeling, the topic is largely overlooked in the surge of deep learning applications, where attention has focused on designing deeper and more powerful network architectures. To address this issue, we propose to learn features impartial to confounders (unbiased features) through adversarial training [CF-Net, Nature Communications][BR-Net][github], and we propose a visualization method that disentangles the confounding effects falsely learned by the model. [Confounder-aware Visualization][github]
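The core idea can be illustrated with a toy numpy sketch (this is an illustration of the adversarial principle under simplified linear assumptions, not the actual CF-Net/BR-Net implementation): a linear encoder is trained to stay predictive of the outcome while an ideal linear adversary that tries to recover the confounder from the learned feature is penalized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each "scan" x mixes an outcome signal y with a confounder c (e.g., age).
n, d = 300, 6
y = rng.normal(size=n)                       # variable of interest
c = rng.normal(size=n)                       # confounder
a, b = rng.normal(size=d), rng.normal(size=d)
x = np.outer(y, a) + np.outer(c, b) + 0.1 * rng.normal(size=(n, d))

def corr2(u, v):
    """Squared Pearson correlation between two 1-D variables."""
    u, v = u - u.mean(), v - v.mean()
    return (u @ v) ** 2 / ((u @ u) * (v @ v))

lam = 2.0  # weight of the adversarial penalty

def loss(w):
    f = x @ w  # 1-D feature from a linear "encoder"
    # Reward predictiveness of y; penalize what a linear adversary
    # could recover about the confounder c from the feature f.
    return -corr2(f, y) + lam * corr2(f, c)

# Minimize the objective by (numerical) gradient descent on the unit sphere.
w, eps, lr = np.ones(d) / np.sqrt(d), 1e-5, 0.3
for _ in range(500):
    g = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                  for e in np.eye(d)])
    w -= lr * g
    w /= np.linalg.norm(w)

f = x @ w
print(round(corr2(f, y), 2), round(corr2(f, c), 3))
```

In this toy setting the learned feature remains highly correlated with the outcome while its correlation with the confounder is driven close to zero; the cited works achieve the analogous effect for deep networks by alternating updates between the encoder and a trained adversary.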

Interpretable Deep Learning Models for Neuroimaging Applications

When deep learning methods are applied to neuroimaging and computational neuroscience, it is important not only to predict disease outcomes but also to understand the underlying reasons why each subject is assigned a specific diagnosis. Such interpretation contributes to a mechanistic understanding of the diseases and may help clinicians design new therapeutic procedures. To this end, we conducted several research projects on the interpretability and explainability of deep learning methods applied to neuroimages [VAE-R for T1-w MRI][github] [ST-GCN for rs-fMRI][github]. We proposed 3D Convolutional Neural Networks and exploited their model parameters to tailor the end-to-end architecture for diagnosing different diseases from MRIs. Based on the learned models, we identified disease biomarkers and validated the results by exploring the importance of brain regions (or voxels) to the model prediction [Visualizing sex differences during pre-adolescence] and by relating the findings to the clinical literature.
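One generic way to score region (or voxel) importance is occlusion sensitivity: mask a patch, re-run the model, and record how much the prediction drops. The sketch below is a minimal 2D illustration with a hypothetical toy "model", not the specific visualization methods cited above.

```python
import numpy as np

def occlusion_map(predict, image, patch=4, baseline=0.0):
    """Importance of each patch = drop in model output when that patch is masked.
    `predict` maps an image to a scalar score (e.g., disease probability)."""
    h, w = image.shape
    ref = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = ref - predict(masked)
    return heat

# Toy "model": the score depends only on the top-left quadrant's mean intensity.
rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))
score = lambda x: x[:8, :8].mean()

heat = occlusion_map(score, img)
# Only patches inside the top-left quadrant receive nonzero importance.
print(np.abs(heat[2:, :]).max() == 0 and np.abs(heat[:2, 2:]).max() == 0)  # → True
```

For 3D MRIs the same loop runs over volumetric patches, and the resulting heat map can be overlaid on an atlas to relate salient regions to the clinical literature.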

Translational Neuroimaging Study on the Impact of Alcohol Use

► [Association of Heavy Drinking With Deviant Fiber Tract Development in Frontal Brain Systems in Young Adolescents, JAMA Psychiatry 2020]

► [Jacobian Mapping Reveals Converging Brain Substrates of Disruption and Repair in Response to Ethanol Exposure and Abstinence in 2 Strains of Rats, ACER 2020]

► [Adolescent alcohol use disrupts functional neurodevelopment in sensation seeking girls, Addiction Biology 2020]

► [Accelerated aging and motor control deficits are related to regional deformation of central cerebellar white matter in alcohol use disorder, Addiction Biology 2019]

► [Jacobian Maps Reveal Under-reported Brain Regions Sensitive to Extreme Binge Ethanol Intoxication in the Rat, Frontiers in Neuroanatomy, 2018]

► [Alcohol use effects on adolescent brain development revealed by simultaneously removing confounding factors, identifying morphometric patterns, and classifying individuals, Scientific Reports, 2018]

Learning Underlying Geometry of Neuroimaging Data via Deep Generative Models

One central challenge in neuroimaging studies is that data often follow complex distributions in a high-dimensional image space. Learning the underlying low-dimensional latent space is therefore critical for uncovering neuroscientific findings. Leveraging recent advances in deep learning, we have explored novel generative models that equip the latent space with the capability of disentangling features and characterizing multi-modal distributions. The resulting models adapt to both supervised and unsupervised applications based on structural and functional MRI data, such as understanding how brain structures change with age in both healthy aging and neurodegenerative disease [LSSL][VAE-R][github], and discovering major patterns of functional brain connectivity [tGM-VAE][github].

Longitudinal Tools for rs-fMRI Analysis

Longitudinal neuroimaging studies have become increasingly prevalent. Longitudinal analysis of the structural and functional organization of the brain, however, still relies on cross-sectional computational procedures, which neglect the intra-subject dependencies of longitudinal MRI data. We developed novel methods for characterizing macrostructural [LSSL][Longitudinal Pooling] and functional neurodevelopment [L-ICA][github] that reflect biologically plausible longitudinal effects. We then improved the analysis of longitudinal trajectories of connectivity patterns by studying the Riemannian manifold of the positive-definite cone (which underlies the connectivity data), incorporating theories of parallel transport and Lie group actions [Riemannian Geometry][github]. These methods were shown to have improved statistical power in detecting group differences in the functional development of the brain.
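To make the geometric machinery concrete, the sketch below implements the standard parallel transport of a tangent vector between two symmetric positive-definite (SPD) connectivity matrices under the affine-invariant metric. This is the textbook formula, assumed here for illustration rather than taken from the cited implementation.

```python
import numpy as np

def spd_power(S, p):
    """Matrix power of a symmetric positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * w ** p) @ V.T

def transport(Sigma1, Sigma2, V):
    """Parallel-transport tangent vector V from Sigma1 to Sigma2 under the
    affine-invariant metric: Gamma(V) = E V E^T with E = (Sigma2 Sigma1^{-1})^{1/2}."""
    S1h, S1ih = spd_power(Sigma1, 0.5), spd_power(Sigma1, -0.5)
    E = S1h @ spd_power(S1ih @ Sigma2 @ S1ih, 0.5) @ S1ih
    return E @ V @ E.T

def ai_norm(Sigma, V):
    """Norm of tangent vector V at Sigma under the affine-invariant metric."""
    Sih = spd_power(Sigma, -0.5)
    return np.linalg.norm(Sih @ V @ Sih, "fro")

rng = np.random.default_rng(0)
def rand_spd(d):
    A = rng.normal(size=(d, d))
    return A @ A.T + d * np.eye(d)  # well-conditioned SPD matrix

S1, S2 = rand_spd(4), rand_spd(4)      # two "connectivity matrices"
V = rng.normal(size=(4, 4)); V = V + V.T  # symmetric tangent vector at S1
W = transport(S1, S2, V)
print(np.allclose(ai_norm(S1, V), ai_norm(S2, W)))  # transport is an isometry → True
```

Transporting subject-specific connectivity changes to a common reference point in this way allows longitudinal trajectories from different subjects to be compared in a single tangent space.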

Endoscopic Video 3D Reconstruction and Registration with CT

The clinical problem we tackle here is transferring tumor information from a 2D endoscopic video frame into the 3D CT space for radiation treatment planning. Our solution is to first reconstruct a 3D surface model from the video and then register that surface to the CT image. We developed methods for fusing single-frame reconstructions into a complete surface based on physical and statistical models [Geometry Fusion][Anisotropic Stiffness Learning][Joint Disparity-estimation/Registration]. The reconstructed surface was then mapped to the CT space based on spectral and physical models [Thin Shell Demons] [Spectral Graph Theory].

2D/3D Registration for Abdomen Radiation Treatment Planning

2D/3D registration is often used in Image-Guided Radiation Therapy (IGRT) to track target motion during treatment delivery. A challenge in disease sites subject to respiratory motion is that organs may deform, requiring the registration process to account for deformation. We designed an improved metric-learning algorithm for this purpose and were the first to study this problem in the abdomen. [Local Metric Learning]