Abstract
Extending semantic parsing systems to new domains and languages is a highly expensive, time-consuming process, so making effective use of existing resources is critical. In this paper, we describe a transfer learning method using cross-lingual word embeddings in a sequence-to-sequence model. On the NLmaps corpus, our approach achieves state-of-the-art accuracy of 85.7% for English. Most importantly, we observe a consistent improvement for German compared with several baseline domain adaptation techniques. As a by-product of this approach, our models trained on a combination of English and German utterances perform reasonably well on code-switching utterances containing a mixture of English and German, even though the training data does not contain any code-switching. As far as we know, this is the first study of code-switching in semantic parsing. We manually construct a set of code-switching test utterances for the NLmaps corpus and achieve 78.3% accuracy on this dataset.
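To make the idea in the abstract concrete, the sketch below shows one way a sequence-to-sequence encoder can be initialized with shared cross-lingual word embeddings, so that English, German, and code-switched inputs are mapped into the same vector space. This is a minimal illustrative sketch, not the authors' released implementation; the vocabulary, the `CROSSLINGUAL_VECTORS` tensor, and the `CrossLingualEncoder` class are hypothetical placeholders.

```python
# Illustrative sketch only: a seq2seq encoder whose embedding layer is
# initialized from pretrained cross-lingual word embeddings, so English and
# German tokens share one embedding space. All names below are hypothetical.
import torch
import torch.nn as nn

# Hypothetical shared EN/DE vocabulary and stand-in cross-lingual vectors
# (in practice these would come from pretrained cross-lingual embeddings).
vocab = {"<pad>": 0, "<unk>": 1, "where": 2, "wo": 3, "restaurants": 4, "gibt": 5}
emb_dim = 300
CROSSLINGUAL_VECTORS = torch.randn(len(vocab), emb_dim)  # placeholder vectors

class CrossLingualEncoder(nn.Module):
    """LSTM encoder over tokens embedded in a shared cross-lingual space."""
    def __init__(self, pretrained: torch.Tensor, hidden_dim: int = 256):
        super().__init__()
        # freeze=False lets the shared embeddings be fine-tuned on both languages
        self.embed = nn.Embedding.from_pretrained(pretrained, freeze=False)
        self.rnn = nn.LSTM(pretrained.size(1), hidden_dim, batch_first=True)

    def forward(self, token_ids: torch.Tensor):
        # token_ids: (batch, seq_len) indices into the shared EN/DE vocabulary
        return self.rnn(self.embed(token_ids))

encoder = CrossLingualEncoder(CROSSLINGUAL_VECTORS)
# A code-switched utterance mixes English and German tokens, but because both
# languages live in the same embedding space the encoder can still process it.
mixed = torch.tensor([[2, 3, 5, 4]])  # e.g. "where wo gibt restaurants"
outputs, (h, c) = encoder(mixed)
print(outputs.shape)  # torch.Size([1, 4, 256])
```

Because the decoder (not shown) only sees encoder states in this shared space, a model trained on separate English and German utterances can plausibly generalize to code-switched inputs, which is the effect the paper reports.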
| Original language | English |
| --- | --- |
| Title of host publication | The 21st Conference on Computational Natural Language Learning (CoNLL 2017): Proceedings of the Conference, August 3-4, 2017, Vancouver, Canada |
| Publisher | The Association for Computational Linguistics |
| Pages | 379-389 |
| Number of pages | 11 |
| ISBN (Print) | 9781945626548 |
| Publication status | Published - 2017 |
| Event | Conference on Computational Natural Language Learning - Duration: 3 Aug 2017 → … |
Conference
| Conference | Conference on Computational Natural Language Learning |
| --- | --- |
| Period | 3/08/17 → … |
Keywords
- English language
- German language
- parsing (computer grammar)
- semantics