UMass Amherst & Google Improve Few-Shot Learning on NLP Benchmarks via Task Augmentation and Self-Training | Synced


Source: Synced | AI Technology & Industry Review

A team from the University of Massachusetts Amherst and Google Research proposes STraTA (Self-Training with Task Augmentation), an approach that combines task augmentation and self-training to leverage unlabelled data and improve sample efficiency and performance on NLP tasks.
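The self-training component follows the standard teacher-student recipe: fit a model on the small labelled set, pseudo-label unlabelled examples the model is confident about, and retrain on the union. The sketch below illustrates that generic loop on toy data with a scikit-learn classifier; the dataset, confidence threshold, and number of rounds are illustrative assumptions, not the authors' STraTA pipeline (which fine-tunes pretrained language models).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy binary task: two well-separated Gaussian blobs (illustrative only).
X_labeled = np.vstack([rng.normal(-2, 1, (10, 2)), rng.normal(2, 1, (10, 2))])
y_labeled = np.array([0] * 10 + [1] * 10)
X_unlabeled = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])

# Teacher: trained on the small labelled set only.
model = LogisticRegression().fit(X_labeled, y_labeled)

for _ in range(3):  # a few self-training rounds
    probs = model.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) > 0.9           # keep high-confidence predictions
    pseudo_y = probs.argmax(axis=1)[confident]    # use them as pseudo-labels
    X_aug = np.vstack([X_labeled, X_unlabeled[confident]])
    y_aug = np.concatenate([y_labeled, pseudo_y])
    # Student: retrained on labelled + pseudo-labelled data.
    model = LogisticRegression().fit(X_aug, y_aug)

print(model.score(X_labeled, y_labeled))
```

The confidence threshold controls the trade-off between how much unlabelled data is used and how much label noise the student absorbs; STraTA's contribution is pairing this loop with task augmentation so the base model is stronger before self-training begins.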