Multiscale Collaborative Deep Models for Neural Machine Translation

Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng, Weihua Luo


Machine Translation Long Paper

Session 1A: Jul 6 (05:00-06:00 GMT)
Session 3B: Jul 6 (13:00-14:00 GMT)
Abstract: Recent evidence reveals that Neural Machine Translation (NMT) models with deeper neural networks can be more effective but are difficult to train. In this paper, we present a MultiScale Collaborative (MSC) framework to ease the training of NMT models that are substantially deeper than those used previously. We explicitly boost gradient back-propagation from top to bottom levels by introducing a block-scale collaboration mechanism into deep NMT models. Then, instead of forcing the whole encoder stack to directly learn a desired representation, we let each encoder block learn a fine-grained representation and enhance it by encoding spatial dependencies using a context-scale collaboration. We provide empirical evidence showing that MSC nets are easy to optimize and can obtain improvements in translation quality from considerably increased depth. On IWSLT translation tasks with three translation directions, our extremely deep models (with 72-layer encoders) surpass strong baselines by +2.2~+3.1 BLEU points. In addition, our deep MSC achieves a BLEU score of 30.56 on the WMT14 English-to-German task, which significantly outperforms state-of-the-art deep NMT models. We have included the source code in the supplementary materials.
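The abstract describes two mechanisms: a block-scale collaboration that shortens the gradient path from the top of the network down to the lower encoder blocks, and a context-scale collaboration that lets each encoder block refine a fine-grained representation using context accumulated from earlier blocks. The PyTorch-style sketch below is only one way such an encoder might be organized; it is not the authors' released implementation, and the gated fusion, the 12-blocks-of-6-layers grouping, and all class names are assumptions made for illustration.

```python
# Hypothetical sketch of a multiscale collaborative encoder; NOT the paper's code.
import torch
import torch.nn as nn


class EncoderBlock(nn.Module):
    """A group of standard Transformer encoder layers (the grouping is assumed)."""

    def __init__(self, d_model, n_heads, n_layers):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.layers = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):
        return self.layers(x)


class MultiscaleEncoder(nn.Module):
    """Deep encoder organized into blocks. A running context is fused with each
    block's output (context-scale collaboration), and every block's representation
    is returned so the decoder side can draw on all of them (block-scale
    collaboration), shortening the gradient path to the lower blocks."""

    def __init__(self, d_model=512, n_heads=8, n_blocks=12, layers_per_block=6):
        super().__init__()
        self.blocks = nn.ModuleList(
            EncoderBlock(d_model, n_heads, layers_per_block) for _ in range(n_blocks)
        )
        # Gated fusion of the running context with each block output (assumed form).
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, x):
        context = x                      # (batch, src_len, d_model), already embedded
        block_outputs = []
        for block in self.blocks:
            h = block(context)
            # Context-scale collaboration: mix the new block representation
            # with the context carried over from earlier blocks.
            g = torch.sigmoid(self.gate(torch.cat([context, h], dim=-1)))
            context = g * h + (1.0 - g) * context
            block_outputs.append(context)
        # Block-scale collaboration: expose every block's representation to the
        # decoder side instead of only the topmost one.
        return block_outputs


if __name__ == "__main__":
    enc = MultiscaleEncoder(n_blocks=12, layers_per_block=6)  # 72 layers in total
    src = torch.randn(2, 7, 512)          # toy, already-embedded source batch
    outs = enc(src)
    print(len(outs), outs[-1].shape)       # 12 torch.Size([2, 7, 512])
```

How the returned per-block representations are consumed on the decoder side (for example, by attending over all of them) is left open here, since the abstract does not specify it.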

Similar Papers

Norm-Based Curriculum Learning for Neural Machine Translation
Xuebo Liu, Houtim Lai, Derek F. Wong, Lidia S. Chao
Content Word Aware Neural Machine Translation
Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita
Lipschitz Constrained Parameter Initialization for Deep Transformers
Hongfei Xu, Qiuhui Liu, Josef van Genabith, Deyi Xiong, Jingyi Zhang
Multi-Task Neural Model for Agglutinative Language Translation
Yirong Pan, Xiao Li, Yating Yang, Rui Dong