I'm an assistant professor in the Department of Electrical Engineering at Stanford University.

Prior to joining Stanford, I was an assistant professor of Electrical Engineering and Computer Science at the University of Michigan. In 2017, I was a Math+X postdoctoral fellow working with Emmanuel Candès at Stanford University. I received my Ph.D. in Electrical Engineering and Computer Science from UC Berkeley in 2016. My Ph.D. advisors were Laurent El Ghaoui and Martin Wainwright, and my studies were partially supported by a Microsoft Research PhD Fellowship. I obtained my B.S. and M.S. degrees in Electrical Engineering from Bilkent University, where I worked with Orhan Arikan and Erdal Arikan.

Research Interests: Optimization, Machine Learning, Neural Networks, Signal Processing, Information Theory

Contact


E-mail:
pilanci[at]stanford.edu
Address:
350 Jane Stanford Way
Packard Building, Room 255
Stanford, CA 94305-9510

Teaching


Spring 2022 - Stanford University

EE 364b — Convex Optimization II

Fall 2021 - Stanford University

EE 269 — Signal Processing for Machine Learning

Spring 2021 - Stanford University

EE 364b — Convex Optimization II

Winter 2021 - Stanford University

EE 270 — Large Scale Matrix Computation, Optimization and Learning


Research group members


Tolga Ergen

Rajarshi Saha

Neo Charalambides

Arda Sahiner

Yifei Wang

Aaron Mishkin

Emi Zeger

Alumni


Burak Bartan

Jonathan Lacotte

Qijia Jiang



Recent Presentations


The Hidden Convex Optimization Landscape of Deep Neural Networks, 2021 (slides) (video) (code)

Computational Polarization: An Information-Theoretic Method for Resilient Computing, 2020 (slides) (video)

Randomized Sketching for Convex and Non-Convex Optimization, 2018 (slides) (video)

Publications


2022


 
R. Saha, M. Pilanci, A. J. Goldsmith
Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget
Accepted to the IEEE Journal on Selected Areas in Information Theory, 2022
optimization quantization source coding neural networks information theory
PDF code
 
B. Ozturkler, A. Sahiner, T. Ergen, A. D. Desai, C. M. Sandino, S. Vasanawala, J. Pauly, M. Mardani, M. Pilanci
GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction
Preprint, 2022
deep neural networks greedy learning compressed sensing MRI reconstruction
arXiv code
 
N. Charalambides, M. Pilanci, A. O. Hero III
Secure Linear MDS Coded Matrix Inversion
Preprint, 2022
distributed computing security error resilience
arXiv
 
S. Tang, X. Hu, L. Atlas, A. Khanzada, M. Pilanci
Hierarchical Multi-modal Transformer for Automatic Detection of COVID-19
International Conference on Signal Processing and Machine Learning (SPML), 2022
deep learning transformers COVID-19
PDF
 
B. Gunel, A. Sahiner, A. D. Desai, A. S. Chaudhari, S. Vasanawala, M. Pilanci, J. Pauly
Scale-Equivariant Unrolled Neural Networks for Data-Efficient Accelerated MRI Reconstruction
International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2022
neural networks deep learning MRI reconstruction
arXiv
 
Y. Wang, P. Chen, M. Pilanci, W. Li
Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization
Preprint, 2022
Bayesian inference neural networks convex optimization semidefinite programming
arXiv
 
B. Bartan, M. Pilanci
Neural Fisher Discriminant Analysis: Optimal Neural Network Embeddings in Polynomial Time
International Conference on Machine Learning (ICML), 2022
neural networks dimension reduction nonlinear discriminant supervised learning
PDF
 
A. Sahiner, T. Ergen, B. Ozturkler, J. Pauly, M. Mardani, M. Pilanci
Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers
International Conference on Machine Learning (ICML), 2022
transformer neural networks attention mechanism MLP mixers transfer learning convex optimization
PDF arXiv
 
A. Mishkin, A. Sahiner, M. Pilanci
Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
International Conference on Machine Learning (ICML), 2022
neural networks convex optimization accelerated proximal methods convex cones
arXiv code
 
T. Z. Baharav, G. Cheng, M. Pilanci, D. Tse
Approximate Function Evaluation via Multi-Armed Bandits
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
multi-armed bandits Monte Carlo method adaptivity lower bounds
arXiv
 
B. Bartan, M. Pilanci
Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds
Preprint, 2022
sketching distributed optimization concentration of measure privacy
arXiv
 
R. Saha, M. Pilanci, A. J. Goldsmith
Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms
Preprint, 2022
quantization Hadamard embeddings minimax bounds information theory
arXiv
 
E. Haritaoglu, N. Rasmussen, D. Tan, J. Ranjani, J. Xiao, G. Chaudhari, A. Rajput, P. Govindan, C. Canham, W. Chen, M. Yamaura, L. Gomezjurado, A. Khanzada, M. Pilanci
Using Deep Learning with Large Aggregated Datasets for COVID-19 Classification from Cough
Preprint, 2022
deep learning signal processing self-supervised learning COVID-19
arXiv
 
N. Charalambides, H. Mahdavifar, M. Pilanci, A. O. Hero III
Orthonormal Sketches for Secure Coded Regression
IEEE International Symposium on Information Theory (ISIT), 2022
sketching security distributed computing error resilience
arXiv
 
J. Lacotte, M. Pilanci
Adaptive and Oblivious Randomized Subspace Methods for High-Dimensional Optimization: Sharp Analysis and Lower Bounds
IEEE Transactions on Information Theory, 2022
dimension reduction convex optimization randomized SVD kernel methods lower-bounds
PDF arXiv
 
M. Pilanci
Computational Polarization: An Information-Theoretic Method for Resilient Computing
IEEE Transactions on Information Theory, 2022
Polar Codes error correcting codes distributed and cloud computing error resilience martingales
PDF arXiv
 
Y. Wang, J. Lacotte, M. Pilanci
The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions
International Conference on Learning Representations (ICLR), 2022 (oral presentation)
neural networks convex analysis non-convex non-smooth optimization
arXiv
 
A. Sahiner, T. Ergen, B. Ozturkler, B. Bartan, J. Pauly, M. Mardani, M. Pilanci
Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions
International Conference on Learning Representations (ICLR), 2022
generative adversarial networks convex optimization
arXiv code
 
T. Ergen, A. Sahiner, B. Ozturkler, J. Pauly, M. Mardani, M. Pilanci
Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization
International Conference on Learning Representations (ICLR), 2022
neural networks deep learning non-convex optimization convexity
arXiv
 
Y. Wang, M. Pilanci
The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program
International Conference on Learning Representations (ICLR), 2022
neural networks non-convex optimization gradient flow convex geometry
arXiv

2021


 
B. Ozturkler, A. Sahiner, T. Ergen, A. D. Desai, J. M. Pauly, S. Vasanawala, M. Mardani, M. Pilanci
Greedy Learning for Large-Scale Neural MRI Reconstruction
NeurIPS Workshop on Deep Learning and Inverse Problems, 2021
deep neural networks MRI reconstruction
PDF
 
Y. Wang, T. Ergen, M. Pilanci
Parallel Deep Neural Networks Have Zero Duality Gap
Preprint, 2021
deep neural networks non-convex optimization convex duality
arXiv
 
T. Ergen, M. Pilanci
Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
Preprint, 2021
deep neural networks convex analysis non-convex optimization
arXiv
 
T. Ergen, M. Pilanci
Convex Geometry and Duality of Over-parameterized Neural Networks
Journal of Machine Learning Research (JMLR), 2021
neural networks convex analysis non-convex optimization
PDF

M. Dereziński, J. Lacotte, M. Pilanci, M. W. Mahoney
Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update
Neural Information Processing Systems (NeurIPS), 2021 (spotlight presentation)
randomized algorithms optimization Newton's method
arXiv
 
V. Gupta, B. Bartan, T. Ergen, M. Pilanci
Convex Neural Autoregressive Models: Towards Tractable, Expressive, and Theoretically-Backed Models for Sequential Forecasting and Generation
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021 (Outstanding Student Paper Award)
generative models neural networks convex optimization
PDF

J.A. Oscanoa, F. Ong, Z. Li, C.M. Sandino, D.M. Ennis, M. Pilanci, S. Vasanawala
Coil Sketching for fast and memory-efficient iterative reconstruction
Proceedings of the 29th Annual Meeting of the International Society for Magnetic Resonance in Medicine (ISMRM), 2021
magnetic resonance imaging randomized algorithms sketching inverse problems
PDF
 

B. Bartan, M. Pilanci
Training Quantized Neural Networks to Global Optimality via Semidefinite Programming
International Conference on Machine Learning (ICML), 2021
neural networks quantization semidefinite programming Grothendieck inequality
arXiv
 
T. Ergen, M. Pilanci
Global Optimality Beyond Two Layers: Training Deep ReLU Networks via Convex Programs
International Conference on Machine Learning (ICML), 2021
deep neural networks convex optimization convex regularization
PDF
 
J. Lacotte, Y. Wang, M. Pilanci
Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality
International Conference on Machine Learning (ICML), 2021
convex optimization Newton's method randomized algorithms sketching
PDF code
 
T. Ergen, M. Pilanci
Revealing the Structure of Deep Neural Networks via Convex Duality
International Conference on Machine Learning (ICML), 2021
deep neural networks convex analysis non-convex optimization
arXiv
 
L. Liu, X. Zhan, R. Wu, X. Guan, Z. Wang, W. Zhang, M. Pilanci, Y. Wang, Z. Luo, G. Li
Boost AI Power: Data Augmentation Strategies with Unlabelled Data and Conformal Prediction, a Case in Alternative Herbal Medicine Discrimination with Electronic Nose
IEEE Sensors Journal, 2021
deep neural networks convex analysis non-convex optimization
PDF
 
J. Lacotte, M. Pilanci
Fast Convex Quadratic Optimization Solvers with Adaptive Sketching-based Preconditioners
Preprint, 2021
linear solvers optimization preconditioning randomized algorithms
arXiv
 
R. Saha, M. Pilanci, A. J. Goldsmith
Distributed Learning and Democratic Embeddings: Polynomial-Time Source Coding Schemes Can Achieve Minimax Lower Bounds for Distributed Gradient Descent under Communication Constraints
Preprint, 2021
optimization quantization source coding information theory
arXiv
 
E. J. Candès, Q. Jiang, M. Pilanci
Randomized Alternating Direction Methods for Efficient Distributed Optimization
Preprint, 2021
randomized algorithms distributed optimization
PDF
 
B. Bartan, M. Pilanci
Neural Spectrahedra and Semidefinite Lifts: Global Convex Optimization of Polynomial Activation Neural Networks in Fully Polynomial-Time
Preprint, 2021
neural networks non-convex optimization computational complexity semidefinite programming
PDF arXiv code
 
T. Ergen, M. Pilanci
Implicit Convex Regularizers of CNN Architectures: Convex Optimization of Two- and Three-Layer Networks in Polynomial Time
International Conference on Learning Representations (ICLR), 2021 (spotlight presentation)
convolutional neural networks convex optimization deep learning
arXiv
 
A. Sahiner, T. Ergen, J. Pauly, M. Pilanci
Vector-output ReLU Neural Network Problems are Copositive Programs: Convex Analysis of Two Layer Networks and Polynomial-time Algorithms
International Conference on Learning Representations (ICLR), 2021
neural networks non-convex optimization copositive programming
arXiv
 
A. Sahiner, M. Mardani, B. Ozturkler, M. Pilanci, J. Pauly
Convex Regularization behind Neural Reconstruction
International Conference on Learning Representations (ICLR), 2021
neural networks inverse problems convex duality
arXiv
 
L. Kim, R. Goel, J. Liang, M. Pilanci, P. Paredes
Linear Predictive Coding as a Valid Approximation of a Mass Spring Damper Model for Acute Stress Prediction from Computer Mouse Movement
43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2021
signal processing passive sensing neural networks
arXiv

2020

 
J. Lacotte, M. Pilanci
Adaptive and Oblivious Randomized Subspace Methods for High-Dimensional Optimization: Sharp Analysis and Lower Bounds
Preprint, 2020
randomized algorithms high dimensional optimization lower-bounds
arXiv
 
I.K. Ozaslan, M. Pilanci, O. Arikan
M-IHS: An Accelerated Randomized Preconditioning Method Avoiding Costly Matrix Decompositions
Preprint, 2020
randomized algorithms iterative linear solvers preconditioning
arXiv
 
N. Charalambides, M. Pilanci, A. O. Hero III
Approximate Weighted CR Coded Matrix Multiplication
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021
distributed computing randomized linear algebra
arXiv
 
J. Lacotte, M. Pilanci
Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization
Neural Information Processing Systems (NeurIPS), 2020 (oral presentation)
randomized algorithms random projection random matrix theory
arXiv code
 
M. Derezinski, B. Bartan, M. Pilanci, M. Mahoney
Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization
Neural Information Processing Systems (NeurIPS), 2020
randomized algorithms distributed optimization determinantal point processes
arXiv

J. Lacotte, S. Liu, E. Dobriban, M. Pilanci
Limiting Spectrum of Randomized Hadamard Transform and Optimal Iterative Sketching Methods
Neural Information Processing Systems (NeurIPS), 2020
random matrix theory free probability randomized algorithms
arXiv
 
S. Sridhar, M. Pilanci, A. Ozgur
Lower Bounds and a Near-Optimal Shrinkage Estimator for Least Squares using Random Projections
IEEE Journal on Selected Areas in Information Theory, 2020
randomized algorithms Stein's paradox Fisher Information
arXiv
 
J. Lacotte, M. Pilanci
All Local Minima are Global for Two-Layer ReLU Neural Networks: The Hidden Convex Optimization Landscape
preprint, 2020
neural networks convex analysis non-convex non-smooth optimization
arXiv
 
M. Pilanci, T. Ergen
Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-Layer Networks
International Conference on Machine Learning (ICML), 2020
neural networks convex analysis non-convex optimization
PDF arXiv code
 
S. Ahn, A. Ozgur, M. Pilanci
Global Multiclass Classification from Heterogeneous Local Models
IEEE Journal on Selected Areas in Information Theory, 2020
information theory federated learning
arXiv
 
N. Charalambides, M. Pilanci, A. O. Hero III
Straggler Robust Distributed Matrix Inverse Approximation
Preprint, 2020
distributed computing error resilience
arXiv
 
E. Chai, M. Pilanci, B. Murmann
Separating the Effects of Batch Normalization on CNN Training Speed and Stability Using Classical Adaptive Filter Theory
Asilomar Conference on Signals, Systems, and Computers, 2020
convolutional neural networks adaptive filters
arXiv
 
J. Lacotte, M. Pilanci
Optimal Randomized First-Order Methods for Least-Squares Problems
International Conference on Machine Learning (ICML), 2020
convex optimization randomized algorithms orthogonal polynomials
arXiv
 
T. Ergen, M. Pilanci
Convex Geometry of Two-Layer ReLU Networks: Implicit Autoencoding and Interpretable Models
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
neural networks convex analysis non-convex optimization
PDF
 
B. Bartan, M. Pilanci
Distributed Averaging Methods for Randomized Second Order Optimization
Preprint, 2020
distributed optimization randomized algorithms serverless computing
arXiv
 
B. Bartan, M. Pilanci
Distributed Sketching Methods for Privacy Preserving Regression
Preprint, 2020
machine learning randomized algorithms serverless computing
arXiv
   
A. d'Aspremont, M. Pilanci
Global Convergence of Frank Wolfe on One Hidden Layer Networks
Preprint, 2020
neural networks non-convex optimization
arXiv
 
N. Charalambides, M. Pilanci, A.O. Hero III
Weighted Gradient Coding with Leverage Score Sampling
Preprint, 2020
coding theory distributed optimization
arXiv

2019


T. Ergen, E. J. Candès, M. Pilanci
Randomized Methods for Fitting Non-Convex Models
NeurIPS Workshop on Beyond First Order Methods in Machine Learning, 2019
non-convex optimization neural nets phase retrieval
PDF

J. Lacotte, M. Pilanci
Faster Least Squares Optimization
Preprint, 2019
optimization machine learning computational complexity randomized algorithms
arXiv
 
J. Lacotte, M. Pilanci, M. Pavone
High-Dimensional Optimization in Adaptive Random Subspaces
Advances in Neural Information Processing Systems (NeurIPS), 2019
optimization machine learning kernel methods randomized algorithms
arXiv
 
B. Bartan, M. Pilanci
Distributed Black-Box Optimization via Error Correcting Codes
57th Annual Allerton Conference on Communication, Control, and Computing, 2019
optimization error correcting codes deep learning adversarial examples cloud computing
arXiv
 
B. Bartan, M. Pilanci
Straggler Resilient Serverless Computing Based on Polar Codes
57th Annual Allerton Conference on Communication, Control, and Computing, 2019
polar codes information theory distributed optimization cloud computing
arXiv
 
B. Bartan, M. Pilanci
Convex Relaxations of Convolutional Neural Nets
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019
neural nets convex optimization
arXiv

2018


M. Pilanci
Information-Theoretic Bounds on Sketching
Book chapter, Information-Theoretic Methods in Data Science, Cambridge University Press, 2021
random projection information theory
PDF
 
I.K. Ozaslan, M. Pilanci, O. Arikan
Iterative Hessian Sketch with Momentum
Preprint, 2018
random projection momentum least squares
PDF
 

2017


M. Pilanci and M. J. Wainwright
Newton Sketch: A Linear-time Optimization Algorithm with Linear-Quadratic Convergence
SIAM Journal on Optimization, 27(1), 205–245, 2017
randomized algorithms newton's method interior point method convex optimization linear program logistic regression
DOI PDF arXiv

2016


M. Pilanci and M. J. Wainwright
Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares
Journal of Machine Learning Research (JMLR), 17, 2016
information-theoretic lower-bounds sketching l1 regularized least-squares nuclear norm regularization
PDF arXiv
 
M. Pilanci
Fast Randomized Algorithms for Convex Optimization and Statistical Estimation
PhD Thesis, University of California, Berkeley, 2016
machine learning convex optimization convex relaxations randomized algorithms
PDF

2015


M. Pilanci, M. J. Wainwright, and L. El Ghaoui
Sparse learning via Boolean relaxations
Mathematical Programming, 151(1), 2015
sparse regression sparse classification randomized rounding
DOI PDF

M. Pilanci and M.J. Wainwright
Randomized Sketches of Convex Programs With Sharp Guarantees
IEEE Transactions on Information Theory, 61(9), 2015
random projection regression compressed sensing data privacy
DOI arXiv

Y. Yang, M. Pilanci, and M. J. Wainwright
Randomized sketches for kernels: Fast and optimal non-parametric regression
Annals of Statistics, 45(3), 991–1023, 2017
kernel regression smoothing random projection Rademacher complexity
DOI PDF arXiv

2014


M. Pilanci and M. J. Wainwright
Randomized sketches of convex programs with sharp guarantees
IEEE International Symposium on Information Theory (ISIT), 2014
random projection convex optimization
DOI arXiv

2012


M. Pilanci, L. El Ghaoui, and V. Chandrasekaran
Recovery of sparse probability measures via convex programming
Advances in Neural Information Processing Systems (NIPS), 2012
probability measures convex relaxation data clustering
PDF
 
A. C. Gurbuz, M. Pilanci, and O. Arikan
Expectation maximization based matching pursuit
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012
sparse approximation compressed sensing
DOI PDF

2011


M. Pilanci and O. Arikan
Recovery of sparse perturbations in least squares problems
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011
robust optimization regression sparsity
DOI

A. C. Gurbuz, M. Pilanci, and O. Arikan
Sparse signal reconstruction with ellipsoid enlargement
IEEE 19th Signal Processing and Communications Applications Conference (SIU), 2011
sparsity compressed sensing signal processing
DOI

2010


M. Pilanci, O. Arikan, and M. C. Pinar
Structured least squares problems and robust estimators
IEEE Transactions on Signal Processing, 58(5), 2010
robust optimization robust estimation
DOI

M. Pilanci
Uncertain linear equations
MS Thesis, Bilkent University, 2010
robust optimization compressed sensing coding and information theory polar codes
PDF

M. Pilanci, O. Arikan, and E. Arikan
Polar compressive sampling: A novel technique using Polar codes
IEEE 18th Signal Processing and Communications Applications Conference (SIU), 2010
compressed sensing coding and information theory polar codes signal processing
 
B. Guldogan, M. Pilanci, and O. Arikan
Compressed sensing on ambiguity function domain for high resolution detection
IEEE 18th Signal Processing and Communications Applications Conference (SIU), 2010
compressed sensing signal processing radar

M. Pilanci and O. Arikan
Compressive sampling and adaptive multipath estimation
IEEE 18th Signal Processing and Communications Applications Conference (SIU), 2010
wireless communications compressed sensing

2009


M. Pilanci, O. Arikan, B. Oguz, and M. Pinar
A novel technique for a linear system of equations applied to channel equalization
IEEE 17th Signal Processing and Communications Applications Conference (SIU), 2009
wireless communications robust optimization
DOI

M. Pilanci, O. Arikan, B. Oguz, and M. Pinar
Structured least squares with bounded data uncertainties
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2009
robust optimization regression statistical estimation
DOI