Perceptually Grounded Selectional Preferences

Ekaterina Shutova, Niket Tandon, Gerard de Melo


Abstract

Selectional preferences (SPs) are widely used in NLP as a rich source of semantic information. While SPs have traditionally been induced from textual data, human lexical acquisition is known to rely on both linguistic and perceptual experience. We present the first SP learning method that simultaneously draws knowledge from text, images and videos, producing a perceptually grounded SP model. Our results show that it outperforms the linguistic and visual models in isolation, as well as existing SP induction approaches.