Stronger Than You Think: Benchmarking Weak Supervision on Realistic Tasks

University of Wisconsin-Madison
NeurIPS 2024

Abstract

Weak supervision (WS) is a popular approach for label-efficient learning, leveraging diverse sources of noisy but inexpensive weak labels to automatically annotate training data. Despite its wide use, WS and its practical value are challenging to benchmark due to the many knobs in its setup, including data sources, labeling functions (LFs), aggregation techniques (called label models), and end model pipelines. Existing evaluation suites tend to be limited, focusing on particular components or specialized use cases. Moreover, they often involve simplistic benchmark tasks or de facto LF sets that are suboptimally written, producing insights that may not generalize to real-world settings. We address these limitations by introducing a new benchmark, BoxWRENCH, designed to more accurately reflect real-world usage of WS. This benchmark features tasks with (1) higher class cardinality and imbalance, (2) notable domain expertise requirements, and (3) linguistic variations found in parallel corpora. For all tasks, LFs are written using a careful procedure aimed at mimicking real-world settings. In contrast to existing WS benchmarks, we show that supervised learning requires a substantial number (1000+) of labeled examples to match WS in many settings.
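
To make the pipeline concrete, below is a minimal sketch of the generic WS workflow the abstract describes: labeling functions vote (or abstain) on unlabeled examples, a label model aggregates the votes into weak labels, and an end model then trains on those labels. The toy task, LF names, and majority-vote aggregator are all illustrative assumptions, not BoxWRENCH's actual components.

import numpy as np
from collections import Counter

ABSTAIN = -1  # LFs emit this when they cannot confidently vote

# Hypothetical toy task: label short texts as SPORTS (1) or OTHER (0).
def lf_sports_keywords(text):
    return 1 if any(w in text.lower() for w in ("game", "score", "team")) else ABSTAIN

def lf_election_keyword(text):
    return 0 if "election" in text.lower() else ABSTAIN

def lf_short_snippet(text):
    # Weak heuristic: very short snippets in this toy corpus are headlines -> OTHER
    return 0 if len(text.split()) < 4 else ABSTAIN

LFS = [lf_sports_keywords, lf_election_keyword, lf_short_snippet]

def label_matrix(texts):
    # Apply every LF to every example: an (n_examples, n_LFs) vote matrix.
    return np.array([[lf(t) for lf in LFS] for t in texts])

def majority_vote(L):
    # Simplest possible label model: per-example majority over non-abstaining LFs.
    labels = []
    for row in L:
        votes = [v for v in row if v != ABSTAIN]
        labels.append(Counter(votes).most_common(1)[0][0] if votes else ABSTAIN)
    return np.array(labels)

texts = [
    "The team won the game with a late score",
    "The election results were announced today",
    "Breaking news",
]
L = label_matrix(texts)
y_weak = majority_vote(L)
print(y_weak)  # aggregated weak labels; an end model would now train on (texts, y_weak)

In practice, label models are more sophisticated than majority vote, e.g., weighting each LF by its estimated accuracy and correlations as in Snorkel, and benchmarks in the WRENCH family compare many such aggregators across tasks.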

BibTeX

@inproceedings{zhang2024boxwrench,
title={Stronger Than You Think: Benchmarking Weak Supervision on Realistic Tasks},
author={Zhang, Tianyi and Cai, Linrong and Li, Jeffrey and Roberts, Nicholas and Guha, Neel and Sala, Frederic},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024}
}