Are Multilingual Language Models Fragile? IBM Adversarial Attack Strategies Cut MBERT QA Performance by 85%



Source: Synced | AI Technology & Industry Review

An IBM research team proposes four multilingual adversarial attack strategies and uses them to attack large multilingual pretrained language models (e.g., MBERT) across seven languages in a zero-shot setting, reducing average performance by up to 85.6 percent.
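To make the setting concrete, the sketch below shows a generic character-swap perturbation applied to a question fed to a multilingual QA pipeline. It is not one of the paper's four attack strategies, only an illustration of how a small input perturbation can change a model's answer; the checkpoint name is a placeholder for an MBERT model fine-tuned on English QA data (the zero-shot scenario described above), and the `swap_chars` helper is a hypothetical perturbation for demonstration.

```python
# Illustrative sketch only: a generic character-swap perturbation against a
# multilingual QA pipeline. The model name is a placeholder; a real experiment
# would use an MBERT checkpoint fine-tuned on English QA and evaluated
# zero-shot on other languages.
import random

from transformers import pipeline

MODEL = "bert-base-multilingual-cased"  # placeholder checkpoint (assumption)

qa = pipeline("question-answering", model=MODEL, tokenizer=MODEL)

# A Spanish context/question pair, standing in for one of the seven
# zero-shot target languages.
context = "La Torre Eiffel se encuentra en París y fue terminada en 1889."
question = "¿Cuándo se terminó la Torre Eiffel?"


def swap_chars(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Swap a fraction of adjacent letter pairs to simulate an adversarial typo attack."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


clean = qa(question=question, context=context)
attacked = qa(question=swap_chars(question), context=context)

print("clean question   :", question)
print("attacked question:", swap_chars(question))
print("clean answer     :", clean["answer"], f'(score={clean["score"]:.3f})')
print("attacked answer  :", attacked["answer"], f'(score={attacked["score"]:.3f})')
```

Comparing the answer spans and confidence scores for the clean and perturbed questions over a full evaluation set is the kind of measurement behind the paper's reported performance drop.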