Scaling Expert Language Models with Unsupervised Domain Discovery
Suchin Gururangan*, Margaret Li*, Mike Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer
in submission // [paper] [code]
*Equal contribution

Editing Models with Task Arithmetic
Gabriel Ilharco, Marco Tulio Ribeiro, Mitchell Wortsman, Suchin Gururangan, Ludwig Schmidt, Hannaneh Hajishirzi, Ali Farhadi
ICLR 2023 // [paper] [code]

lo-fi: distributed fine-tuning without communication
Mitchell Wortsman, Suchin Gururangan, Shen Li, Ali Farhadi, Ludwig Schmidt, Michael Rabbat, Ari S. Morcos
TMLR // [paper]

M2D2: A Massively Multi-Domain Language Modeling Dataset
Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer
EMNLP 2022 // [paper] [code]

Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection
Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Wang, Blarry Wang, Luke Zettlemoyer, and Noah A. Smith
EMNLP 2022 // [paper] [code]

Nearest Neighbor Zero-Shot Inference
Weijia Shi, Julian Michael, Suchin Gururangan, and Luke Zettlemoyer
EMNLP 2022 // [paper] [code]

Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models
Margaret Li*, Suchin Gururangan*, Tim Dettmers, Mike Lewis, Noah A. Smith, and Luke Zettlemoyer
in submission // [paper] [code]
*Equal contribution

Time Waits for No One! Analysis and Challenges of Temporal Misalignment
Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Karishma Mandyam, and Noah A. Smith
NAACL 2022 // [paper] [code]

DEMix Layers: Disentangling Domains for Modular Language Modeling
Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, and Luke Zettlemoyer
NAACL 2022 // [paper] [model code] [data code]

All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text
Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith
ACL 2021 // [paper]
🔥 Outstanding Paper Award 🔥

Expected Validation Performance and Estimation of a Random Variable’s Maximum
Jesse Dodge, Suchin Gururangan, Roy Schwartz, Dallas Card, and Noah A. Smith
EMNLP Findings 2021 // [paper]

Detoxifying Language Models Risks Marginalizing Minority Voices
Albert Xu, Eshaan Pathak, Eric Wallace, Suchin Gururangan, Maarten Sap, and Dan Klein
NAACL 2021 // [paper]

RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
Sam Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith
EMNLP Findings 2020 // [paper] [code] [demo]
Press: [Wired] [IEEE] [GeekWire] [Nature]

Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith
ACL 2020 // [paper] [code]
🔥 Honorable Mention for Best Overall Paper 🔥

Variational Pretraining for Semi-supervised Text Classification
Suchin Gururangan, Tam Dang, Dallas Card, and Noah A. Smith
ACL 2019 // [paper] [code]

Show Your Work: Improved Reporting of Experimental Results
Jesse Dodge, Suchin Gururangan, Roy Schwartz, Dallas Card, and Noah A. Smith
EMNLP 2019 // [paper] [code]
Press: [Wired]
Basis for the Reproducibility Checklist of major NLP conferences

Emergent coordination underlying learning to reach to grasp with a brain-machine interface
with many authors 🙂
Journal of Neurophysiology 2019 // [paper]

Annotation Artifacts in Natural Language Inference Data
Suchin Gururangan*, Swabha Swayamdipta*, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith
NAACL 2018 // [paper]
*Equal contribution

Analysis of Graph Invariants in Functional Neocortical Circuitry Reveals Generalized Features Common to Three Areas of Sensory Cortex
Suchin Gururangan, Alex Sadovsky, and Jason MacLean
PLOS Comp Bio 2014 // [paper]