I'm an assistant professor in the Department of Electrical Engineering at Stanford University.

Prior to joining Stanford, I was an assistant professor of Electrical Engineering and Computer Science at the University of Michigan. In 2017, I was a Math+X postdoctoral fellow working with Emmanuel Candès at Stanford University. I received my Ph.D. in Electrical Engineering and Computer Science from UC Berkeley in 2016. My Ph.D. advisors were Laurent El Ghaoui and Martin Wainwright, and my studies were partially supported by a Microsoft Research PhD Fellowship. I obtained my B.S. and M.S. degrees in Electrical Engineering from Bilkent University, where I worked with Orhan Arikan and Erdal Arikan.

Research Interests: Optimization, Machine Learning, Neural Networks, Signal Processing, Information Theory

Contact


E-mail:
pilanci[at]stanford.edu
Address:
350 Jane Stanford Way
Packard Building, Room 255
Stanford, CA 94305-9510

Teaching


Autumn 2023 - Stanford University

EE 269 — Signal Processing for Machine Learning

Spring 2023 - Stanford University

EE 364b — Convex Optimization II

Winter 2021 - Stanford University

EE 270 — Large Scale Matrix Computation, Optimization and Learning


Research Group


PhD Students


Rajarshi Saha

Yifei Wang

Aaron Mishkin

Emi Zeger

Fangzhao Zhang

Rajat Dwaraknath

Postdocs


Sara Fridovich-Keil

Alumni


Burak Bartan

Jonathan Lacotte

Qijia Jiang

Arda Sahiner

Neo Charalambides

Tolga Ergen



Recent Presentations


Randomized Embeddings and Neural Networks, Simons Institute, UC Berkeley, 2023 (slides) (video)

The Hidden Convex Optimization Landscape of Deep Neural Networks, ISL Seminar, Stanford, 2023 (slides) (video) (code)

Computational Polarization: An Information-Theoretic Method for Resilient Computing, Conference on Information Sciences and Systems, Princeton, 2020 (slides) (video)

Randomized Sketching for Convex and Non-Convex Optimization, 2018 (slides) (video)

Selected Publications


E. Zeger, Y. Wang, A. Mishkin, T. Ergen, E. J. Candès, M. Pilanci
A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features
preprint, 2024
deep neural networks convex geometry signal processing Lasso
PDF arXiv
F. Zhang, M. Pilanci
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models
preprint, 2024
fine-tuning large language models Stable Diffusion Riemannian optimization
arXiv code
M. Pilanci
From Complexity to Clarity: Analytical Expressions of Deep Neural Network Weights via Clifford's Geometric Algebra and Convexity
preprint, 2023
deep neural networks Clifford's Geometric Algebra convex optimization convex geometry
PDF arXiv
 
M. Pilanci
Computational Polarization: An Information-Theoretic Method for Resilient Computing
IEEE Transactions on Information Theory, 2022
Polar Codes error correcting codes distributed and cloud computing error resilience martingales
DOI arXiv
 
M. Pilanci
Information-Theoretic Bounds on Sketching
Book chapter, Information-Theoretic Methods in Data Science, Cambridge University Press, 2021
random projection information theory
PDF
 
M. Pilanci, T. Ergen
Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-Layer Networks
International Conference on Machine Learning (ICML), 2020
neural networks convex analysis non-convex optimization
PDF arXiv code

All Publications


2024


E. Zeger, Y. Wang, A. Mishkin, T. Ergen, E. J. Candès, M. Pilanci
A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features
preprint, 2024
deep neural networks convex geometry signal processing Lasso
PDF arXiv
 
S. Kim, M. Pilanci
Convex Relaxations of ReLU Neural Networks Approximate Global Optima in Polynomial Time
preprint, 2024
ReLU neural networks global optimization convex analysis
PDF
 
F. Zhang, M. Pilanci
Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models
preprint, 2024
fine-tuning large language models Stable Diffusion Riemannian optimization
arXiv code
 
F. Zhang, M. Pilanci
Analyzing Neural Network-Based Generative Diffusion Models through Convex Optimization
preprint, 2024
generative diffusion models Langevin Monte Carlo neural networks convex optimization
PDF
 
A. Sahiner, T. Ergen, B. Ozturkler, J. Pauly, M. Mardani, M. Pilanci
Scaling Convex Neural Networks with Burer-Monteiro Factorization
accepted to the International Conference on Learning Representations (ICLR), 2024
deep neural networks layerwise learning convex optimization semidefinite programming
PDF
 
E. Craig, M. Pilanci, T. Le Menestrel, B. Narasimhan, M. Rivas, R. Dehghannasiri, J. Salzman, J. Taylor, R. Tibshirani
Pretraining and the Lasso
preprint, 2024
Lasso supervised learning transfer learning sparse recovery
PDF
 
S. Hor, Y. Qian, M. Pilanci, A. Arbabian
Adaptive Inference: Theoretical Limits and Unexplored Opportunities
preprint, 2024
adaptive inference computer vision large language models
PDF
 
A.E. Tzikas, L. Romao, M. Pilanci, A. Abate, M.J. Kochenderfer
Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers
preprint, 2024
Markov Chain Monte Carlo sampling distributed optimization
PDF

2023


 
M. Pilanci
From Complexity to Clarity: Analytical Expressions of Deep Neural Network Weights via Clifford's Geometric Algebra and Convexity
preprint, 2023
deep neural networks Clifford's Geometric Algebra convex optimization convex geometry
PDF arXiv
 
T. Ergen, M. Pilanci
The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models
preprint, 2023
ReLU neural networks global optimization Lasso stationary points zonotope sampling
PDF
 
R. Saha, V. Srivastava, M. Pilanci
Matrix Compression via Randomized Low Rank and Low Precision Factorization
Neural Information Processing Systems (NeurIPS), 2023
low precision arithmetic low rank models sketching large language models
PDF
 
R. Dwaraknath, T. Ergen, M. Pilanci
Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs
Neural Information Processing Systems (NeurIPS), 2023
deep neural networks convex analysis non-convex optimization
PDF
 
T. Ergen, M. Pilanci
Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
Neural Information Processing Systems (NeurIPS), 2023
deep neural networks convex analysis non-convex optimization
PDF arXiv
 
B. Bartan, M. Pilanci
Randomized Polar Codes for Anytime Distributed Machine Learning
accepted to the IEEE Journal on Selected Areas in Information Theory, 2023
error correcting codes distributed machine learning information theory polar codes
PDF

 
I.K. Ozaslan, M. Pilanci, O. Arikan
M-IHS: An Accelerated Randomized Preconditioning Method Avoiding Costly Matrix Decompositions
accepted to Linear Algebra and Its Applications, 2023
randomized algorithms iterative linear solvers preconditioning
PDF DOI
 
A. Mishkin, M. Pilanci
Optimal Sets and Solution Paths of ReLU Networks
International Conference on Machine Learning (ICML), 2023
neural networks convex optimization group Lasso network compression
PDF

 
F. Zhang, M. Pilanci
Optimal Shrinkage for Distributed Second-Order Optimization
International Conference on Machine Learning (ICML), 2023
distributed optimization Newton's method random matrix theory
PDF

 
Y. Wang, M. Pilanci
Sketching the Krylov Subspace: Faster Computation of the Entire Ridge Regularization Path
Springer Journal of Supercomputing, 2023
convex optimization ridge regression randomized algorithms
PDF DOI
 
J.A. Oscanoa, F. Ong, S.S. Iyer, Z. Li, C.M. Sandino, D.M. Ennis, M. Pilanci, S. Vasanawala
Coil Sketching for Computationally-Efficient MR Iterative Reconstruction
accepted to Wiley Magnetic Resonance in Medicine, 2023
MRI reconstruction sketching inverse problems
PDF

 
B. Bartan, M. Pilanci
Distributed Sketching for Randomized Optimization: Exact Characterization, Concentration and Lower Bounds
IEEE Transactions on Information Theory, 2023
sketching distributed optimization concentration of measure privacy
PDF
 
T. Ergen, H. I. Gulluk, J. Lacotte, M. Pilanci
Globally Optimal Training of Neural Networks with Threshold Activation Functions
International Conference on Learning Representations (ICLR), 2023
deep neural networks threshold networks convex optimization
PDF code
 
Y. Wang, T. Ergen, M. Pilanci
Parallel Deep Neural Networks Have Zero Duality Gap
International Conference on Learning Representations (ICLR), 2023
deep neural networks non-convex optimization convex duality
PDF
 
N. Charalambides, M. Pilanci, A. O. Hero III
Secure Linear MDS Coded Matrix Inversion
IEEE Journal on Selected Areas in Information Theory, 2023
distributed computing security error resilience
arXiv
 
R. Saha, M. Pilanci, A. J. Goldsmith
Low Precision Representations for High Dimensional Models
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023
quantization model compression minimax lower-bounds
PDF
 
B. Bartan, M. Pilanci
Convex Optimization of Deep Polynomial and ReLU Activation Neural Networks
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023
deep neural networks non-convex optimization convex duality
PDF
 
L. Atlas, N. Rasmussen, F. Schwock, M. Pilanci
Complex Clipping for Improved Generalization in Machine Learning
preprint, 2023
spectrogram signal processing signal classification
arXiv

2022


 
Y. Wang, Y. Hua, E. J. Candès, M. Pilanci
Overparameterized ReLU Neural Networks Learn the Simplest Model: Neural Isometry and Phase Transitions
preprint, 2022
neural networks deep learning generalization phase transitions compressed sensing sparse recovery
arXiv code
 
R. Saha, M. Pilanci, A. J. Goldsmith
Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget
IEEE Journal on Selected Areas in Information Theory, 2022
optimization quantization source coding neural networks information theory
PDF DOI code
 
B. Ozturkler, A. Sahiner, T. Ergen, A. D. Desai, C. M. Sandino, S. Vasanawala, J. Pauly, M. Mardani, M. Pilanci
GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction
preprint, 2022
deep neural networks greedy learning compressed sensing MRI reconstruction
arXiv code
 
S. Hor, M. Pilanci, A. Arbabian
A Data-Driven Waveform Adaptation Method for Mm-Wave Gait Classification at the Edge
IEEE Signal Processing Letters, 2022
deep neural networks reinforcement learning radar sensor adaptation
PDF DOI
 
S. Tang, X. Hu, L. Atlas, A. Khanzada, M. Pilanci
Hierarchical Multi-modal Transformer for Automatic Detection of COVID-19
International Conference on Signal Processing and Machine Learning (SPML), 2022
deep learning transformers COVID-19
PDF
 
B. Gunel, A. Sahiner, A. D. Desai, A. S. Chaudhari, S. Vasanawala, M. Pilanci, J. Pauly
Scale-Equivariant Unrolled Neural Networks for Data-Efficient Accelerated MRI Reconstruction
International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 2022
neural networks deep learning MRI reconstruction
arXiv
 
Y. Wang, P. Chen, M. Pilanci, W. Li
Optimal Neural Network Approximation of Wasserstein Gradient Direction via Convex Optimization
preprint, 2022
Bayesian inference neural networks convex optimization semidefinite programming
arXiv
 
B. Bartan, M. Pilanci
Neural Fisher Discriminant Analysis: Optimal Neural Network Embeddings in Polynomial Time
International Conference on Machine Learning (ICML), 2022
neural networks dimension reduction nonlinear discriminant supervised learning
PDF
 
A. Sahiner, T. Ergen, B. Ozturkler, J. Pauly, M. Mardani, M. Pilanci
Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers
International Conference on Machine Learning (ICML), 2022
transformer neural networks attention mechanism MLP mixers transfer learning convex optimization
PDF arXiv
 
A. Mishkin, A. Sahiner, M. Pilanci
Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
International Conference on Machine Learning (ICML), 2022
neural networks convex optimization accelerated proximal methods convex cones
arXiv code
 
T. Z. Baharav, G. Cheng, M. Pilanci, D. Tse
Approximate Function Evaluation via Multi-Armed Bandits
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
multi-armed bandits Monte Carlo method adaptivity lower bounds
arXiv
 
R. Saha, M. Pilanci, A. J. Goldsmith
Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms
preprint, 2022
quantization Hadamard embeddings minimax bounds information theory
arXiv
 
E. Haritaoglu, N. Rasmussen, D. Tan, J. Ranjani, J. Xiao, G. Chaudhari, A. Rajput, P. Govindan, C. Canham, W. Chen, M. Yamaura, L. Gomezjurado, A. Khanzada, M. Pilanci
Using Deep Learning with Large Aggregated Datasets for COVID-19 Classification from Cough
preprint, 2022
deep learning signal processing self-supervised learning COVID-19
arXiv
 
N. Charalambides, H. Mahdavifar, M. Pilanci, A. O. Hero III
Orthonormal Sketches for Secure Coded Regression
IEEE International Symposium on Information Theory (ISIT), 2022
sketching security distributed computing error resilience
arXiv
 
J. Lacotte, M. Pilanci
Adaptive and Oblivious Randomized Subspace Methods for High-Dimensional Optimization: Sharp Analysis and Lower Bounds
IEEE Transactions on Information Theory, 2022
dimension reduction convex optimization randomized SVD kernel methods lower-bounds
PDF arXiv
 
M. Pilanci
Computational Polarization: An Information-Theoretic Method for Resilient Computing
IEEE Transactions on Information Theory, 2022
Polar Codes error correcting codes distributed and cloud computing error resilience martingales
DOI arXiv
 
Y. Wang, J. Lacotte, M. Pilanci
The Hidden Convex Optimization Landscape of Two-Layer ReLU Neural Networks: an Exact Characterization of the Optimal Solutions
International Conference on Learning Representations (ICLR), 2022 (oral presentation)
neural networks convex analysis non-convex non-smooth optimization
arXiv
 
A. Sahiner, T. Ergen, B. Ozturkler, B. Bartan, J. Pauly, M. Mardani, M. Pilanci
Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions
International Conference on Learning Representations (ICLR), 2022
generative adversarial networks convex optimization
arXiv code
 
T. Ergen, A. Sahiner, B. Ozturkler, J. Pauly, M. Mardani, M. Pilanci
Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization
International Conference on Learning Representations (ICLR), 2022
neural networks deep learning non-convex optimization convexity
arXiv
 
Y. Wang, M. Pilanci
The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program
International Conference on Learning Representations (ICLR), 2022
neural networks non-convex optimization gradient flow convex geometry
arXiv

2021


 
B. Ozturkler, A. Sahiner, T. Ergen, A. D. Desai, J. M. Pauly, S. Vasanawala, M. Mardani, M. Pilanci
Greedy Learning for Large-Scale Neural MRI Reconstruction
NeurIPS Workshop on Deep Learning and Inverse Problems, 2021
deep neural networks MRI reconstruction
PDF
 
B. Ozturkler, A. Sahiner, M. Pilanci, S. Vasanawala, J. M. Pauly, M. Mardani
Scalable and Interpretable Neural MRI Reconstruction via Layer-Wise Training
Proceedings of the 29th Annual Meeting of the International Society for Magnetic Resonance in Medicine (ISMRM), 2021
deep neural networks convex optimization MRI reconstruction
PDF
 
T. Ergen, M. Pilanci
Convex Geometry and Duality of Over-parameterized Neural Networks
Journal of Machine Learning Research (JMLR), 2021
neural networks convex analysis non-convex optimization
PDF
 
M. Dereziński, J. Lacotte, M. Pilanci, M. W. Mahoney
Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update
Neural Information Processing Systems (NeurIPS), 2021 (spotlight presentation)
randomized algorithms optimization Newton's method
arXiv
 
V. Gupta, B. Bartan, T. Ergen, M. Pilanci
Convex Neural Autoregressive Models: Towards Tractable, Expressive, and Theoretically-Backed Models for Sequential Forecasting and Generation
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021 (Outstanding Student Paper Award)
generative models neural networks convex optimization
PDF
 
J.A. Oscanoa, F. Ong, Z. Li, C.M. Sandino, D.M. Ennis, M. Pilanci, S. Vasanawala
Coil Sketching for fast and memory-efficient iterative reconstruction
Proceedings of the 29th Annual Meeting of the International Society for Magnetic Resonance in Medicine (ISMRM), 2021
magnetic resonance imaging randomized algorithms sketching inverse problems
PDF
 

B. Bartan, M. Pilanci
Training Quantized Neural Networks to Global Optimality via Semidefinite Programming
International Conference on Machine Learning (ICML), 2021
neural networks quantization semidefinite programming Grothendieck inequality
arXiv
 
T. Ergen, M. Pilanci
Global Optimality Beyond Two Layers: Training Deep ReLU Networks via Convex Programs
International Conference on Machine Learning (ICML), 2021
deep neural networks convex optimization convex regularization
PDF
 
J. Lacotte, Y. Wang, M. Pilanci
Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality
International Conference on Machine Learning (ICML), 2021
convex optimization Newton's method randomized algorithms sketching
PDF code
 
T. Ergen, M. Pilanci
Revealing the Structure of Deep Neural Networks via Convex Duality
International Conference on Machine Learning (ICML), 2021
deep neural networks convex analysis non-convex optimization
arXiv
 
L. Liu, X. Zhan, R. Wu, X. Guan, Z. Wang, W. Zhang, M. Pilanci, Y. Wang, Z. Luo, G. Li
Boost AI Power: Data Augmentation Strategies with Unlabelled Data and Conformal Prediction, a Case in Alternative Herbal Medicine Discrimination with Electronic Nose
IEEE Sensors Journal, 2021
deep neural networks convex analysis non-convex optimization
PDF
 
J. Lacotte, M. Pilanci
Fast Convex Quadratic Optimization Solvers with Adaptive Sketching-based Preconditioners
Preprint, 2021
linear solvers optimization preconditioning randomized algorithms
arXiv
 
R. Saha, M. Pilanci, A. J. Goldsmith
Distributed Learning and Democratic Embeddings: Polynomial-Time Source Coding Schemes Can Achieve Minimax Lower Bounds for Distributed Gradient Descent under Communication Constraints
Preprint, 2021
optimization quantization source coding information theory
arXiv
 
E. J. Candès, Q. Jiang, M. Pilanci
Randomized Alternating Direction Methods for Efficient Distributed Optimization
Preprint, 2021
randomized algorithms distributed optimization
PDF
 
B. Bartan, M. Pilanci
Neural Spectrahedra and Semidefinite Lifts: Global Convex Optimization of Polynomial Activation Neural Networks in Fully Polynomial-Time
Preprint, 2021
neural networks non-convex optimization computational complexity semidefinite programming
PDF arXiv code
 
T. Ergen, M. Pilanci
Implicit Convex Regularizers of CNN Architectures: Convex Optimization of Two- and Three-Layer Networks in Polynomial Time
International Conference on Learning Representations (ICLR), 2021 (spotlight presentation)
convolutional neural networks convex optimization deep learning
arXiv
 
A. Sahiner, T. Ergen, J. Pauly, M. Pilanci
Vector-output ReLU Neural Network Problems are Copositive Programs: Convex Analysis of Two Layer Networks and Polynomial-time Algorithms
International Conference on Learning Representations (ICLR), 2021
neural networks non-convex optimization copositive programming
arXiv
 
A. Sahiner, M. Mardani, B. Ozturkler, M. Pilanci, J. Pauly
Convex Regularization behind Neural Reconstruction
International Conference on Learning Representations (ICLR), 2021
neural networks inverse problems convex duality
arXiv
 
L. Kim, R. Goel, J. Liang, M. Pilanci, P. Paredes
Linear Predictive Coding as a Valid Approximation of a Mass Spring Damper Model for Acute Stress Prediction from Computer Mouse Movement
43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2021
signal processing passive sensing neural networks
arXiv

2020

 
J. Lacotte, M. Pilanci
Adaptive and Oblivious Randomized Subspace Methods for High-Dimensional Optimization: Sharp Analysis and Lower Bounds
Preprint, 2020
randomized algorithms high dimensional optimization lower-bounds
arXiv
 
N. Charalambides, M. Pilanci, A. O. Hero III
Approximate Weighted CR Coded Matrix Multiplication
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021
distributed computing randomized linear algebra
arXiv
 
J. Lacotte, M. Pilanci
Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization
Neural Information Processing Systems (NeurIPS), 2020 (oral presentation)
randomized algorithms random projection random matrix theory
arXiv code
 
M. Derezinski, B. Bartan, M. Pilanci, M. Mahoney
Debiasing Distributed Second Order Optimization with Surrogate Sketching and Scaled Regularization
Neural Information Processing Systems (NeurIPS), 2020
randomized algorithms distributed optimization determinantal point processes
arXiv
 
J. Lacotte, S. Liu, E. Dobriban, M. Pilanci
Limiting Spectrum of Randomized Hadamard Transform and Optimal Iterative Sketching Methods
Neural Information Processing Systems (NeurIPS), 2020
random matrix theory free probability randomized algorithms
arXiv
 
S. Sridhar, M. Pilanci, A. Ozgur
Lower Bounds and a Near-Optimal Shrinkage Estimator for Least Squares using Random Projections
IEEE Journal on Selected Areas in Information Theory, 2020
randomized algorithms Stein's paradox Fisher Information
arXiv
 
J. Lacotte, M. Pilanci
All Local Minima are Global for Two-Layer ReLU Neural Networks: The Hidden Convex Optimization Landscape
preprint, 2020
neural networks convex analysis non-convex non-smooth optimization
arXiv
 
M. Pilanci, T. Ergen
Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-Layer Networks
International Conference on Machine Learning (ICML), 2020
neural networks convex analysis non-convex optimization
PDF arXiv code
 
S. Ahn, A. Ozgur, M. Pilanci
Global Multiclass Classification from Heterogeneous Local Models
IEEE Journal on Selected Areas in Information Theory, 2020
information theory federated learning
arXiv
 
N. Charalambides, M. Pilanci, A. O. Hero III
Straggler Robust Distributed Matrix Inverse Approximation
Preprint, 2020
distributed computing error resilience
arXiv
 
E. Chai, M. Pilanci, B. Murmann
Separating the Effects of Batch Normalization on CNN Training Speed and Stability Using Classical Adaptive Filter Theory
Asilomar Conference on Signals, Systems, and Computers, 2020
convolutional neural networks adaptive filters
arXiv
 
J. Lacotte, M. Pilanci
Optimal Randomized First-Order Methods for Least-Squares Problems
International Conference on Machine Learning (ICML), 2020
convex optimization randomized algorithms orthogonal polynomials
arXiv
 
T. Ergen, M. Pilanci
Convex geometry of two-layer ReLU networks: Implicit autoencoding and interpretable models
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
neural networks convex analysis non-convex optimization
PDF
 
B. Bartan, M. Pilanci
Distributed Averaging Methods for Randomized Second Order Optimization
Preprint, 2020
distributed optimization randomized algorithms serverless computing
arXiv
 
B. Bartan, M. Pilanci
Distributed Sketching Methods for Privacy Preserving Regression
Preprint, 2020
machine learning randomized algorithms serverless computing
arXiv
   
A. d'Aspremont, M. Pilanci
Global Convergence of Frank Wolfe on One Hidden Layer Networks
Preprint, 2020
neural networks non-convex optimization
arXiv
 
N. Charalambides, M. Pilanci, A.O. Hero III
Weighted Gradient Coding with Leverage Score Sampling
Preprint, 2020
coding theory distributed optimization
arXiv

2019


T. Ergen, E. J. Candès, M. Pilanci
Randomized Methods for Fitting Non-Convex Models
NeurIPS Workshop on Beyond First Order Methods in Machine Learning, 2019
non-convex optimization neural nets phase retrieval
PDF
 
J. Lacotte, M. Pilanci
Faster Least Squares Optimization
Preprint, 2019
optimization machine learning computational complexity randomized algorithms
arXiv
 
J. Lacotte, M. Pilanci, M. Pavone
High-Dimensional Optimization in Adaptive Random Subspaces
Advances in Neural Information Processing Systems (NeurIPS), 2019
optimization machine learning kernel methods randomized algorithms
arXiv
 
B. Bartan, M. Pilanci
Distributed Black-Box Optimization via Error Correcting Codes
57th Annual Allerton Conference on Communication, Control, and Computing, 2019
optimization error correcting codes deep learning adversarial examples cloud computing
arXiv
 
B. Bartan, M. Pilanci
Straggler Resilient Serverless Computing Based on Polar Codes
57th Annual Allerton Conference on Communication, Control, and Computing, 2019
polar codes information theory distributed optimization cloud computing
arXiv
 
B. Bartan, M. Pilanci
Convex Relaxations of Convolutional Neural Nets
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019
neural nets convex optimization
arXiv

2018


M. Pilanci
Information-Theoretic Bounds on Sketching
Book chapter, Information-Theoretic Methods in Data Science, Cambridge University Press, 2021
random projection information theory
PDF
 
I.K. Ozaslan, M. Pilanci, O. Arikan
Iterative Hessian Sketch with Momentum
Preprint, 2018
random projection momentum least squares
PDF
 

2017


M. Pilanci and M. J. Wainwright
Newton Sketch: A Linear-time Optimization Algorithm with Linear-Quadratic Convergence
SIAM Journal on Optimization, 27(1), 205–245, 2017
randomized algorithms Newton's method interior point method convex optimization linear program logistic regression
DOI PDF arXiv

2016


M. Pilanci and M. J. Wainwright
Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares
Journal of Machine Learning Research (JMLR), 17, 2016
information-theoretic lower-bounds sketching l1 regularized least-squares nuclear norm regularization
PDF arXiv
 
M. Pilanci
Fast Randomized Algorithms for Convex Optimization and Statistical Estimation
PhD Thesis, University of California, Berkeley, 2016
machine learning convex optimization convex relaxations randomized algorithms
PDF

2015


M. Pilanci, M. J. Wainwright, and L. El Ghaoui
Sparse learning via Boolean relaxations
Mathematical Programming, 151(1), 2015
sparse regression sparse classification randomized rounding
DOI PDF

M. Pilanci and M.J. Wainwright
Randomized Sketches of Convex Programs With Sharp Guarantees
IEEE Transactions on Information Theory, 61(9), 2015
random projection regression compressed sensing data privacy
DOI arXiv

Y. Yang, M. Pilanci, and M. J. Wainwright
Randomized sketches for kernels: Fast and optimal non-parametric regression
Annals of Statistics, 45(3), 991–1023, 2017
kernel regression smoothing random projection Rademacher complexity
DOI PDF arXiv

2014


M. Pilanci and M. J. Wainwright
Randomized sketches of convex programs with sharp guarantees
IEEE International Symposium on Information Theory (ISIT), 2014
random projection convex optimization
DOI arXiv

2012


M. Pilanci, L. El Ghaoui, and V. Chandrasekaran
Recovery of sparse probability measures via convex programming
Advances in Neural Information Processing Systems (NIPS), 2012
probability measures convex relaxation data clustering
PDF
 
A. C. Gurbuz, M. Pilanci, and O. Arikan
Expectation maximization based matching pursuit
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012
sparse approximation compressed sensing
DOI PDF

2011


M. Pilanci and O. Arikan
Recovery of sparse perturbations in least squares problems
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011
robust optimization regression sparsity
DOI

A. C. Gurbuz, M. Pilanci, and O. Arikan
Sparse signal reconstruction with ellipsoid enlargement
IEEE 19th Conference on Signal Processing and Communications Applications (SIU), 2011
sparsity compressed sensing signal processing
DOI

2010


M. Pilanci, O. Arikan, and M. C. Pinar
Structured least squares problems and robust estimators
IEEE Transactions on Signal Processing, 58(5), 2010
robust optimization robust estimation
DOI

M. Pilanci
Uncertain linear equations
MS Thesis, Bilkent University, 2010
robust optimization compressed sensing coding and information theory polar codes
PDF

M. Pilanci, O. Arikan, and E. Arikan
Polar compressive sampling: A novel technique using Polar codes
IEEE 18th Conference on Signal Processing and Communications Applications (SIU), 2010
compressed sensing coding and information theory polar codes signal processing
 
B. Guldogan, M. Pilanci, and O. Arikan
Compressed sensing on ambiguity function domain for high resolution detection
IEEE 18th Conference on Signal Processing and Communications Applications (SIU), 2010
compressed sensing signal processing radar

M. Pilanci and O. Arikan
Compressive sampling and adaptive multipath estimation
IEEE 18th Conference on Signal Processing and Communications Applications (SIU), 2010
wireless communications compressed sensing

2009


M. Pilanci, O. Arikan, B. Oguz, and M. Pinar
A novel technique for a linear system of equations applied to channel equalization
IEEE 17th Conference on Signal Processing and Communications Applications (SIU), 2009
wireless communications robust optimization
DOI

M. Pilanci, O. Arikan, B. Oguz, and M. Pinar
Structured least squares with bounded data uncertainties
IEEE International Conference on Acoustics, Speech and Signal Processing, 2009
robust optimization regression statistical estimation
DOI