Domain Adaptation for Syntactic and Semantic Dependency Parsing Using Deep Belief Networks

Haitong Yang, Tao Zhuang and Chengqing Zong


Abstract

In current systems for syntactic and semantic dependency parsing, people usually define a very high-dimensional feature space to achieve good performance. However, these systems often suffer severe performance drops on out-of-domain test data due to the divergence of features across domains. This paper focuses on how to alleviate this domain adaptation problem with the help of unlabeled target-domain data. We propose a deep learning method to adapt both syntactic and semantic parsers. With additional unlabeled target-domain data, our method learns a latent feature representation (LFR) that is beneficial to both domains. Experiments on the English data of the CoNLL 2009 shared task show that our method largely reduces the performance drop on out-of-domain test data. Moreover, on the out-of-domain test set we obtain a Macro F1 score that is 2.36 points higher than that of the best system in the CoNLL 2009 shared task.
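To make the general idea concrete, the following is a minimal sketch (not the authors' implementation) of DBN-style unsupervised pretraining: stacked RBMs trained with one-step contrastive divergence on binary feature vectors pooled from source- and target-domain data, yielding a shared latent feature representation. All names, layer sizes, and the randomly generated stand-in features are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """A minimal binary restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.rng = rng

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def train(self, data, lr=0.05, epochs=5, batch=64):
        # One-step contrastive divergence (CD-1).
        for _ in range(epochs):
            for i in range(0, len(data), batch):
                v0 = data[i:i + batch]
                h0 = self.hidden_probs(v0)
                h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
                v1 = sigmoid(h0_sample @ self.W.T + self.b_v)
                h1 = self.hidden_probs(v1)
                self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
                self.b_v += lr * (v0 - v1).mean(axis=0)
                self.b_h += lr * (h0 - h1).mean(axis=0)

# Pool unlabeled binary feature vectors from both domains
# (random stand-ins here; real input would be sparse parser features).
rng = np.random.default_rng(1)
source_feats = (rng.random((500, 200)) < 0.1).astype(float)
target_feats = (rng.random((500, 200)) < 0.1).astype(float)
pooled = np.vstack([source_feats, target_feats])

# Greedy layer-wise pretraining: each RBM's hidden activations feed the next layer.
layer_sizes = [200, 100, 50]
layers, inputs = [], pooled
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    rbm = RBM(n_vis, n_hid)
    rbm.train(inputs)
    inputs = rbm.hidden_probs(inputs)
    layers.append(rbm)

# 'inputs' now holds a latent feature representation shared by both domains.
print(inputs.shape)  # (1000, 50)
```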