From SPMRL to NMRL: What Did We Learn (and Unlearn) in a Decade of Parsing Morphologically-Rich Languages (MRLs)?

Reut Tsarfaty, Dan Bareket, Stav Klein, Amit Seker


Theme Track, Long Paper

Session 12B: Jul 8 (09:00-10:00 GMT)
Session 13B: Jul 8 (13:00-14:00 GMT)
Abstract: It has been exactly a decade since the establishment of SPMRL, a research initiative unifying multiple research efforts to address the peculiar challenges of Statistical Parsing for Morphologically-Rich Languages (MRLs). Here we reflect on parsing MRLs in that decade, highlight the solutions and lessons learned for the architectural, modeling and lexical challenges of the pre-neural era, and argue that similar challenges re-emerge in neural architectures for MRLs. We then suggest that incorporating symbolic ideas proposed in SPMRL terms into today's neural architectures has the potential to push NLP for MRLs to a new level. We sketch strategies for designing Neural Models for MRLs (NMRL), and showcase preliminary support for these strategies by investigating the task of multi-tagging in Hebrew, a morphologically rich, high-fusion language.
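To make the multi-tagging task concrete: in a high-fusion language like Hebrew, a single space-delimited token can fuse several morphemes, each carrying its own part-of-speech tag, so a tagger must emit a sequence of tags per token rather than exactly one. The following is a minimal, hypothetical Python sketch of this input/output structure; the lexicon, tag names, and multi_tag function are illustrative assumptions, not the paper's model or data:

    # Toy illustration of multi-tagging for a high-fusion language (Hebrew).
    # A space-delimited token may fuse several morphemes, so the output is a
    # *sequence* of POS tags per token, not a single tag.
    # TOY_LEXICON is an illustrative stand-in, not the paper's model or data.

    TOY_LEXICON = {
        # "uvabayit" = "and in the house": w- (and) + b- (in) + ha- (the, covert) + bayit (house)
        "ובבית": ["CCONJ", "ADP", "DET", "NOUN"],
        # "habayit" = "the house": ha- (the) + bayit (house)
        "הבית": ["DET", "NOUN"],
        # "yafe" = "beautiful": a single morpheme, a single tag
        "יפה": ["ADJ"],
    }

    def multi_tag(sentence: str) -> list[tuple[str, list[str]]]:
        """Map each surface token to its (possibly multi-element) tag sequence."""
        return [(tok, TOY_LEXICON.get(tok, ["UNK"])) for tok in sentence.split()]

    if __name__ == "__main__":
        # "habayit yafe" = "the house (is) beautiful"
        for token, tags in multi_tag("הבית יפה"):
            print(token, "->", "+".join(tags))
        # "uvabayit" = "and in the house": one token, four tags
        for token, tags in multi_tag("ובבית"):
            print(token, "->", "+".join(tags))

Because the number of tags per token is not known in advance, a standard one-tag-per-token sequence labeler does not apply directly; this token/morpheme misalignment is the kind of challenge the abstract refers to.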

Similar Papers

Learning Architectures from an Extended Search Space for Language Modeling
Yinqiao Li, Chi Hu, Yuhao Zhang, Nuo Xu, Yufan Jiang, Tong Xiao, Jingbo Zhu, Tongran Liu, Changliang Li (paper main.592)
Should All Cross-Lingual Embeddings Speak English?
Antonios Anastasopoulos, Graham Neubig (paper main.766)
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
Su Lin Blodgett, Solon Barocas, Hal Daumé III, Hanna Wallach (paper main.485)