Integrating Self-Attention and Convolution: Tsinghua, Huawei & BAAI’s ACmix Achieves SOTA Performance on CV Tasks With Minimum Cost | Synced


In the new paper On the Integration of Self-Attention and Convolution, a research team from Tsinghua University, Huawei Technologies Ltd. and the Beijing Academy of Artificial Intelligence proposes ACmix, a hybrid model that leverages the benefits of both self-attention and convolution for computer vision representation tasks while incurring minimal computational overhead compared to pure convolution or pure self-attention counterparts.
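The paper's central observation is that both operators begin with the same expensive step, per-position 1x1 projections, and differ only in a cheap aggregation stage, so the projections can be computed once and shared. The toy 1-D sketch below illustrates that idea; it is an assumption-laden simplification, not the authors' implementation, and the projection weights, kernel, and mixing scalars `alpha`/`beta` are placeholder values chosen for illustration.

```python
import math

def project(x, w):
    """Shared 'stage I': a 1x1 projection (per-position linear map)."""
    return [w * v for v in x]

def conv_aggregate(feats, kernel):
    """Conv-style 'stage II': weighted sum over a local window (zero padding)."""
    r = len(kernel) // 2
    out = []
    for i in range(len(feats)):
        s = 0.0
        for k, wk in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(feats):
                s += wk * feats[j]
        out.append(s)
    return out

def attn_aggregate(q, k, v):
    """Attention-style 'stage II': softmax(q*k) weighted sum over all positions."""
    out = []
    for i in range(len(q)):
        scores = [q[i] * k[j] for j in range(len(k))]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        out.append(sum(e / z * v[j] for j, e in enumerate(exps)))
    return out

def acmix(x, alpha=0.5, beta=0.5):
    # The 1x1 projections dominate the cost of both operators, so computing
    # them once and reusing them for both paths keeps the overhead minimal.
    q, k, v = project(x, 0.9), project(x, 1.1), project(x, 1.0)
    conv_out = conv_aggregate(v, kernel=[0.25, 0.5, 0.25])
    attn_out = attn_aggregate(q, k, v)
    # Scalars alpha/beta mix the two paths; learned in the paper, fixed here.
    return [alpha * c + beta * a for c, a in zip(conv_out, attn_out)]

print(acmix([1.0, 2.0, 3.0, 4.0]))
```

In the real model both aggregation stages are lightweight relative to the shared projections, which is why the combined operator costs little more than either path alone.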