Yifei Wang

Yifei Wang (汪祎非)
Department of Electrical Engineering
Stanford University


About me

I am a fourth-year Ph.D. candidate in the Department of Electrical Engineering at Stanford University, where I am co-advised by Prof. Mert Pilanci and Prof. David Tse. Prior to joining Stanford, I obtained my B.S. degree in Computational and Applied Mathematics from the School of Mathematical Sciences at Peking University, where I was advised by Prof. Zaiwen Wen. My research interests include applications of convex neural networks to the training of large language models and the design of consensus protocols.

Recent News

  • 2024/03 Our paper A Library of Mirrors: Deep Neural Nets in Low Dimensions are Convex Lasso Models with Reflection Features is available on arXiv. TL;DR: training a neural network on a 1D dataset is equivalent to solving a Lasso problem, and the equivalence extends to deep neural networks with up to four layers (a schematic of the Lasso form is sketched after this list).

  • 2024/02 Our paper A Circuit Approach to Constructing Blockchains on Blockchains is available on arXiv. TL;DR: we build a more secure overlay blockchain by reading from and writing to a given set of blockchains.

  • 2023/10 Our paper Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes is available on arXiv. TL;DR: we use max-cut and zonotopes to classify the difficulty of training two-layer ReLU neural networks.

  • 2023/09 I finished my internship at Babylon Chain. Thanks a lot to David and Sankha for hosting me. My main responsibilities included user behavior analysis of Bitcoin, an extensive survey of Proof-of-Stake (PoS) liquid staking models and their related incentive programs, and a thorough tokenomics investigation of emerging blockchains, including Akash.

  • 2023/04 Our paper Sketching the Krylov Subspace: Faster Computation of the Entire Ridge Regularization Path (with code) has been accepted by the Journal of Supercomputing (2023). TL;DR: we use polynomial expansion and iterative Hessian sketching to compute the entire regularization path of ridge regression.

  • 2023/03 Our preprint Overparameterized ReLU Neural Networks Learn the Simplest Models: Neural Isometry and Exact Recovery has been updated. TL;DR: we identify the phase transition for recovering a two-layer ReLU neural network in the teacher-student setting.

  • 2023/02 Our paper A Decomposition Augmented Lagrangian Method for Low-rank Semidefinite Programming is to appear in SIAM Journal on Optimization (2023). TL;DR: we provide a fast solver for general non-smooth semidefinite programs with low-rank structure.

  • 2023/01 Our paper Parallel Deep Neural Networks Have Zero Duality Gap has been accepted as a poster (2023). In short, we show that the convex duality gap of deep neural networks becomes zero when a parallel architecture is used: for a parallel neural network with m branches, the output is a linear combination of the outputs of m standard fully connected neural networks, as sketched below.
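
A minimal sketch of the parallel architecture, in my own placeholder notation rather than the paper's (f_j denotes the j-th fully connected branch with parameters \theta_j, and \alpha_j its combination weight):

    % Sketch only: the parallel network sums m independent fully connected
    % branches f_j, each with its own parameters \theta_j and weight \alpha_j.
    f(x) \;=\; \sum_{j=1}^{m} \alpha_j \, f_j(x;\, \theta_j)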

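For the Lasso equivalence announced above (2024/03), here is a rough schematic, again in my own notation rather than the paper's: training the network on a 1D dataset reduces to a convex problem of the form

    % Sketch only: A is a fixed dictionary matrix built from the 1D training inputs
    % (including the paper's reflection features), y the labels, z the Lasso
    % coefficients, and \lambda the regularization strength.
    \min_{z} \; \tfrac{1}{2} \, \| A z - y \|_2^2 \;+\; \lambda \, \| z \|_1

so, under the paper's assumptions, the nonconvex training problem can be handled by a standard Lasso solve.
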
Contact

If you find any of my research interesting or have any questions, feel free to reach out to me via email!

Email    wangyf18 at stanford dot edu

LinkedIn

Google Scholar

My GitHub

Pilanci Research Group's GitHub

CV

My CV

Last update

March 22nd, 2024