FREDERIC SALA

Assistant Professor, University of Wisconsin CS
fredsala@cs.wisc.edu

I study the fundamentals of data-driven systems.

I am also a research scientist at Snorkel AI, where we are building a data-first approach to AI.

Previously, I was a postdoctoral researcher in Stanford CS, affiliated with the Hazy Research group. I completed my Ph.D. in electrical engineering at UCLA, where I worked with the LORIS and StarAI groups.

Bio | CV | Google Scholar | Twitter | Interests

Publications

Preprints

Train and You'll Miss It: Interactive Model Iteration with Weak Supervision and Pre-Trained Embeddings
Mayee F. Chen, Daniel Y. Fu, Frederic Sala, Sen Wu, Ravi Teja Mullapudi, Fait Poms, Kayvon Fatahalian, Christopher Ré
arXiv | code | video

2020

Fast and Three-rious: Speeding Up Weak Supervision with Triplet Methods
Daniel Y. Fu*, Mayee F. Chen*, Frederic Sala, Sarah M. Hooper, Kayvon Fatahalian, Christopher Ré
International Conference on Machine Learning (ICML), 2020
arXiv | code | video | blog

Ivy: Instrumental Variable Synthesis for Causal Inference
Zhaobin Kuang, Frederic Sala, Nimit Sohoni, Sen Wu, Aldo Cordova-Palomera, Jared Dunnmon, James Priest, Christopher Ré
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
arXiv | tutorial | video

Low-Dimensional Hyperbolic Knowledge Graph Embeddings
Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi, Christopher Ré
Annual Conference of the Association for Computational Linguistics (ACL), 2020
arXiv | code | video

2019

Multi-Resolution Weak Supervision for Sequential Data
Frederic Sala*, Paroma Varma*, Shiori Sagawa, Jason Fries, Daniel Y. Fu, Saelig Khattar, Ashwini Ramamoorthy, Ke Xiao, Kayvon Fatahalian, James Priest, Christopher Ré
Neural Information Processing Systems (NeurIPS), 2019
paper

Learning Mixed-Curvature Representations in Products of Model Spaces
Albert Gu, Frederic Sala, Beliz Gunel, Christopher Ré
International Conference on Learning Representations (ICLR), 2019
paper | code | blog

Learning Dependency Structures for Weak Supervision Models
Paroma Varma*, Frederic Sala*, Ann He, Alexander Ratner, Christopher Ré
International Conference on Machine Learning (ICML), 2019
arXiv | code

Training Complex Models with Multi-Task Weak Supervision
Alexander J. Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash Pandey, Christopher Ré
AAAI Conference on Artificial Intelligence, 2019
arXiv | code

Context-Aware Resiliency: Unequal Message Protection for Random-Access Memories
Clayton Schoeny, Frederic Sala, Mark Gottscho, Irina Alam, Puneet Gupta, Lara Dolecek
IEEE Transactions on Information Theory, 2019
paper

Codes Correcting Two Deletions
Ryan Gabrys and Frederic Sala
IEEE Transactions on Information Theory, 2019
arXiv

2018

Representation Tradeoffs for Hyperbolic Embeddings
Frederic Sala, Christopher De Sa, Albert Gu, Christopher Ré
International Conference on Machine Learning (ICML), 2018
paper | code | blog

Signal Processing and Coding Techniques for 2-D Magnetic Recording: An Overview
Shayan Srinivasa Garani, Lara Dolecek, John Barry, Frederic Sala, Bane Vasić
Proceedings of the IEEE, 2018
paper

2017

Exact Reconstruction from Insertions in Synchronization Codes
Frederic Sala, Ryan Gabrys, Clayton Schoeny, Lara Dolecek
IEEE Transactions on Information Theory, 2017
arXiv

On Nonuniform Noisy Decoding for LDPC Codes with Application to Radiation-Induced Errors
Frederic Sala, Clayton Schoeny, Shahroze Kabir, Dariush Divsalar, Lara Dolecek
IEEE Transactions on Communications, 2017
paper

Coded Machine Learning: Joint Informed Replication and Learning for Linear Regression
Shahroze Kabir, Frederic Sala, Guy Van den Broeck, Lara Dolecek
Allerton Conference on Communication, Control, and Computing, 2017
paper

2016

Synchronizing Files from a Large Number of Insertions and Deletions
Frederic Sala, Clayton Schoeny, Nicolas Bitouze, Lara Dolecek
IEEE Transactions on Communications, 2016
paper

Codes Correcting Erasures and Deletions for Rank Modulation
Ryan Gabrys, Eitan Yaakobi, Farzad Farnoud, Frederic Sala, Jehoshua Bruck, Lara Dolecek
IEEE Transactions on Information Theory, 2016
paper

Robust Channel Coding Strategies for Machine Learning Data
Kayvon Mazooji, Frederic Sala, Guy Van den Broeck, Lara Dolecek
Allerton Conference on Communication, Control, and Computing, 2016
paper

A Unified Framework for Error Correction Techniques in On-Chip Memories
Frederic Sala, Henry Duwe, Lara Dolecek, Rakesh Kumar
Workshop on Silicon Errors in Logic - System Effects (SELSE-12), 2016
Best of SELSE-12, selected for special session at International Conference on Dependable Systems and Networks (DSN)
paper

older
Research Interests
Weak supervision for machine learning models
ICML '20 | NeurIPS '19 | ICML '19 | AAAI '19 | blog

Obtaining large amounts of labeled data is such a bottleneck that practitioners have increasingly turned to weaker forms of supervision. We study efficient algorithms, with theoretical guarantees, for synthesizing labels from weak supervision sources; efficient ways to learn the dependency structure among such sources; and new ways to tackle labeling data for large-scale video and other applications.
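
As a toy illustration of the label-synthesis setting (and not the label models developed in the papers above), the sketch below combines votes from a few noisy labeling sources by plain majority vote; the function and variable names are illustrative.

```python
import numpy as np

def majority_vote(votes):
    """Combine noisy labeling-source votes (+1/-1, with 0 = abstain)
    into one label per example. A baseline shown only for illustration,
    not the label models developed in the papers above."""
    totals = np.asarray(votes).sum(axis=1)   # abstains contribute nothing
    return np.sign(totals)                   # 0 = sources tied or all abstained

# Three weak sources voting on four unlabeled examples.
votes = [[+1, +1,  0],
         [-1, +1, -1],
         [ 0,  0,  0],
         [+1, -1, -1]]
print(majority_vote(votes))  # -> [ 1 -1  0 -1]
```

The research above replaces this naive vote with learned estimates of each source's accuracy and of the dependencies between sources, without ever seeing ground-truth labels.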

Geometry and structure of data
ACL '20 | ICLR '19 | ICML '18 | blog 1 | blog 2

Modern ML methods require first embedding data into a continuous space, traditionally Euclidean space. However, many types of data (like hierarchies!) have structure that Euclidean space represents poorly. My work shows that non-Euclidean spaces like hyperbolic space (and other manifolds!) are better suited for embedding such data, and studies the limits and tradeoffs of these techniques.
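
For intuition about the geometry involved, the sketch below computes the textbook Poincaré-ball distance; this is a standard formula rather than the specific methods of the papers above, and the function name is illustrative.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Distance between two points in the Poincaré ball model of
    hyperbolic space (a standard formula, shown only to illustrate
    the kind of geometry these embeddings use)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    sq = np.sum((u - v) ** 2)
    denom = (1 - np.sum(u ** 2)) * (1 - np.sum(v ** 2))
    return np.arccosh(1 + 2 * sq / max(denom, eps))

# Distances blow up near the boundary of the ball, which is what lets
# trees and hierarchies embed with low distortion.
print(poincare_distance([0.0, 0.0], [0.5, 0.0]))   # ~1.10
print(poincare_distance([0.9, 0.0], [0.0, 0.9]))   # ~5.2, much larger
```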

Efficient data synchronization and reconstruction
IT '19 | IT '17 | TCOM '16

What is the least amount of information we must exchange to synchronize two versions of a file, or to reconstruct a core piece of data from noisy samples? My work studies bounds and algorithms for these problems.
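
As a rough illustration of the quantities involved (and not the synchronization protocols themselves), the sketch below computes the minimum number of insertions and deletions separating two versions of a string, a toy measure of how far the versions have drifted; the protocols studied in these papers exchange far less than either full file.

```python
def deletion_insertion_distance(a, b):
    """Minimum number of single-symbol insertions/deletions turning
    string a into string b. Illustrative only: a classic dynamic
    program via dist = len(a) + len(b) - 2 * LCS(a, b)."""
    m, n = len(a), len(b)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            lcs[i + 1][j + 1] = (lcs[i][j] + 1 if a[i] == b[j]
                                 else max(lcs[i][j + 1], lcs[i + 1][j]))
    return m + n - 2 * lcs[m][n]

print(deletion_insertion_distance("synchronize", "synchrnoize"))  # -> 2
```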

Reliable data storage & next-gen memories
TCOM '17 | TCOM '13 | CL '14 | SELSE '16 (best of) | book

Modern memories offer speed and efficiency but suffer from physical limitations that lead to errors and corruption. New reliability and error-correcting techniques are critical to the future of these devices. My work develops new data representations and coding techniques, makes algorithms more robust to such errors, and builds theoretical frameworks for evaluating broad ranges of ECCs.
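
As a classical point of reference (not the specialized codes or memory models in the work above), the sketch below encodes four data bits with a textbook (7,4) Hamming code and corrects a single injected bit flip.

```python
import numpy as np

# Textbook (7,4) Hamming code: 4 data bits -> 7-bit codeword, any single
# bit flip is correctable. Shown purely as a classical ECC illustration.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data):                        # data: 4 bits
    return (np.array(data) @ G) % 2

def correct(word):                       # word: 7 received bits
    syndrome = (H @ np.array(word)) % 2
    for pos in range(7):                 # syndrome matches the column of the flipped bit
        if np.array_equal(H[:, pos], syndrome):
            word = word.copy()
            word[pos] ^= 1               # flip the erroneous bit back
            break
    return word

codeword = encode([1, 0, 1, 1])
noisy = codeword.copy()
noisy[2] ^= 1                            # inject a single-bit error
print(np.array_equal(correct(noisy), codeword))  # True
```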

Awards
  • Top Reviewer, NeurIPS (2018, 2019)
  • Outstanding Ph.D. Dissertation Award, UCLA Department of Electrical Engineering (Signals & Systems Track) (2017)
  • UCLA Dissertation Year Fellowship (2015-2016)
  • Qualcomm Innovation Fellowship Finalist (2015)
  • Edward K. Rice Outstanding Master's Student Award, UCLA Henry Samueli School of Engineering & Applied Science (2013)
  • Outstanding M.S. Thesis Award, UCLA Department of Electrical Engineering (Signals & Systems Track) (2013)
  • National Science Foundation Graduate Research Fellowship (NSF GRFP) (2012-2015)