Morphological Derivation in Context Configurations for Class-Specific Word Embeddings

Speakers:
Silvie Cinková, Iveta Kršková, Vincent Kríž
Abstract:
We explore how knowledge of morphological derivation relations between individual English words affects the performance of a word embedding model (in our case word2vecf; Levy and Goldberg, 2014). A word embedding model represents the words of a corpus as vectors in a semantic vector space, and the similarity between vectors is taken to reflect the semantic similarity or relatedness of the corresponding words. This idea draws on the Distributional Hypothesis of Zellig S. Harris (1970), according to which two words that occur in more similar contexts are more semantically related than two words that occur in less similar contexts. The Distributional Hypothesis has a less frequently cited counterpart in Harris' theory of linguistic transformations, which elaborates on various aspects of context similarity with many examples and which we found particularly interesting. Since Harris' proposed transformations are numerous and cut across all levels of linguistic description, we narrowed our scope to transformations involving morphological derivation (e.g. to sing aloud - a loud singer, or to love cats - a cat lover). We drew on CELEX, a publicly available database of English morphological derivation (Baayen et al., 1995), to extract word pairs connected by derivational relations. We then tested several experimental setups for adding this derivation information to word2vecf's regular text input. The parsing scheme was Universal Dependencies (Agić et al., 2015). Our baseline was the system reported by Vulić et al. (2017), which incorporates information on syntactic dependencies between words.
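To make the setup concrete, here is a minimal sketch, not the authors' reported pipeline, of how derivation pairs could be merged into word2vecf's input. It assumes that word2vecf consumes one word-context pair per line, that dependency contexts are extracted from a CoNLL-U parse in the style of Levy and Goldberg (2014), and that a hypothetical file celex_pairs.tsv lists derivationally related word pairs extracted from CELEX. The DERIV_ context label and the helper names (derivation_pairs, dependency_contexts, write_training_pairs) are likewise illustrative choices, not the configuration evaluated in the talk.

```python
# Minimal sketch: build word2vecf-style training pairs from a UD parse and
# augment them with morphological-derivation contexts from CELEX.
# File names, the DERIV_ label, and the exact pair format are assumptions.

def derivation_pairs(path="celex_pairs.tsv"):
    """Yield (base, derived) pairs, e.g. ('sing', 'singer')."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            base, derived = line.rstrip("\n").split("\t")
            yield base, derived

def dependency_contexts(conllu_path):
    """Yield (word, context) pairs labeled with the dependency relation,
    in the style of Levy and Goldberg (2014): 'rel_head' on the child
    side, 'relI_child' (inverse) on the head side."""

    def flush(sent):
        for _, form, head, rel in sent:
            if head > 0:
                head_form = sent[head - 1][1]
                yield form, f"{rel}_{head_form}"
                yield head_form, f"{rel}I_{form}"

    sent = []
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                yield from flush(sent)
                sent = []
                continue
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:
                continue  # skip multiword-token ranges and empty nodes
            sent.append((int(cols[0]), cols[1], int(cols[6]), cols[7]))
    yield from flush(sent)  # last sentence if the file lacks a final blank line

def write_training_pairs(conllu_path, out_path="train.pairs"):
    with open(out_path, "w", encoding="utf-8") as out:
        # Regular dependency-based contexts.
        for word, ctx in dependency_contexts(conllu_path):
            out.write(f"{word} {ctx}\n")
        # One possible setup: each derivation relation becomes an extra
        # context feature on both members of the word pair.
        for base, derived in derivation_pairs():
            out.write(f"{base} DERIV_{derived}\n")
            out.write(f"{derived} DERIV_{base}\n")
```

The resulting pairs file could then be passed to word2vecf together with the word and context vocabularies that its counting utility builds from the same file.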
Length:
00:50:22
Date:
20/11/2017
Views: 1136

Attachments: (video, slides, etc.)