Multimodal Representations for Natural Language Meaning
Speaker:
James Pustejovsky
Abstract:
In this talk, I discuss recent research my group has been
conducting on multimodal representations for modeling natural language
meaning. This approach, Multimodal Semantic Simulation (MSS), assumes a
rich formal model of events and their participants, as well as a
modeling language for constructing 3D visualizations of objects and
events denoted by natural language expressions. The Dynamic Event
Model (DEM) encodes events as programs in a dynamic logic with an
operational semantics, while VoxML (Visual Object Concept Modeling
Language) serves as the platform for multimodal semantic simulations
in the context of human-computer communication.
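To give a flavor of the events-as-programs idea, here is a minimal
Python sketch, not DEM's actual formalism: states are variable
assignments, and an event such as rolling to a goal location is an
iterated one-step change of place, built from dynamic-logic-style
combinators (sequencing and guarded iteration). All names and the
step semantics below are illustrative assumptions.

    # A rough sketch, not DEM itself: events as programs over states,
    # in the spirit of a dynamic logic with an operational semantics.
    from typing import Any, Callable, Dict

    State = Dict[str, Any]
    Program = Callable[[State], State]

    def assign(var: str, f: Callable[[State], Any]) -> Program:
        """Atomic program: update one state variable."""
        def run(s: State) -> State:
            s2 = dict(s)
            s2[var] = f(s)
            return s2
        return run

    def seq(*ps: Program) -> Program:
        """Sequencing (p1 ; p2): run subprograms in order."""
        def run(s: State) -> State:
            for p in ps:
                s = p(s)
            return s
        return run

    def star(test: Callable[[State], bool], p: Program) -> Program:
        """Guarded iteration ((test? ; p)*): repeat p while test holds."""
        def run(s: State) -> State:
            while test(s):
                s = p(s)
            return s
        return run

    # "The ball rolls to location 5": an iterated one-step change of place.
    roll = seq(
        assign("moving", lambda s: True),
        star(lambda s: s["loc"] < 5, assign("loc", lambda s: s["loc"] + 1)),
        assign("moving", lambda s: False),
    )
    print(roll({"loc": 0, "moving": False}))  # {'loc': 5, 'moving': False}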
Within the context of embodiment and simulation semantics, a rich
extension of Generative Lexicon's qualia structure has been developed
into a notion of situational context, called a habitat.
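One way to picture a habitat is as a set of situational constraints
under which an object affords certain actions. The sketch below is
hypothetical; the class and field names are invented for illustration
and do not reproduce the actual habitat structure.

    # Hypothetical habitat sketch; names are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Habitat:
        name: str
        constraints: Dict[str, str]  # situational conditions that must hold
        affords: List[str]           # actions enabled when they do

    # An upright cup resting on a surface affords filling and drinking.
    cup_upright = Habitat(
        name="cup_upright",
        constraints={"orientation": "up", "support": "on_surface"},
        affords=["fill", "drink_from"],
    )
    print(cup_upright.affords)  # ['fill', 'drink_from']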
Visual representations of concepts are called voxemes, and they are
stored in a voxicon. The linked structures of a lexicon and a voxicon
constitute a multimodal lexicon, which is accessed for natural
language parsing, generation, and simulation.
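As an illustrative sketch of that linkage (the field names, types, and
asset path below are assumptions, not the actual lexicon/voxicon data
model), a multimodal lexicon can be pictured as two aligned tables
consulted through a single lookup:

    # Illustrative linkage only: a lexical entry paired with its voxeme.
    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class LexEntry:
        lemma: str
        sem_type: str                # e.g. a physical-object type

    @dataclass
    class Voxeme:
        lemma: str
        geometry: str                # pointer to a 3D asset for simulation
        affordances: List[str]

    lexicon: Dict[str, LexEntry] = {"cup": LexEntry("cup", "physobj")}
    voxicon: Dict[str, Voxeme] = {
        "cup": Voxeme("cup", "assets/cup.obj", ["fill", "drink_from"]),
    }

    def multimodal_lookup(lemma: str) -> Tuple[LexEntry, Voxeme]:
        """One access point serving parsing, generation, and simulation."""
        return lexicon[lemma], voxicon[lemma]

    entry, vox = multimodal_lookup("cup")
    print(entry.sem_type, vox.geometry)  # physobj assets/cup.obj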