Multimodal Representations for Natural Language Meaning

Speaker:
James Pustejovsky
Abstract:
In this talk, I discuss the recent research my group has been conducting on multimodal representations for modeling natural language meaning. This approach, Multimodal Semantic Simulation (MSS), assumes a rich formal model of events and their participants, as well as a modeling language for constructing 3D visualizations of the objects and events denoted by natural language expressions. The Dynamic Event Model (DEM) encodes events as programs in a dynamic logic with an operational semantics, while the language VoxML (Visual Object Concept Modeling Language) serves as the platform for multimodal semantic simulations in the context of human-computer communication. Within the context of embodiment and simulation semantics, a rich extension of Generative Lexicon's qualia structure has been developed into a notion of situational context, called a habitat. Visual representations of concepts are called voxemes and are stored in a voxicon. The linked structures of a lexicon and a voxicon form a multimodal lexicon, which is accessed for natural language parsing, generation, and simulation.
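The abstract's closing idea, lexical entries linked to visual object concepts (voxemes) in a jointly queried multimodal lexicon, can be sketched in Python. This is a hypothetical illustration only: the class names (`Voxeme`, `Lexeme`, `MultimodalLexicon`), fields, and the sample "cup" entry are assumptions for exposition, not the actual VoxML data model.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of a lexicon/voxicon linkage; names and fields
# are illustrative assumptions, not the VoxML specification.

@dataclass
class Voxeme:
    """Visual object concept: geometry plus situational context (habitat)."""
    name: str
    geometry: str                                   # placeholder path to a 3D model
    habitat: dict = field(default_factory=dict)     # situational constraints
    affordances: list = field(default_factory=list) # actions the object supports

@dataclass
class Lexeme:
    """Lexical entry paired with its visual counterpart, if any."""
    lemma: str
    pos: str
    voxeme: Optional[Voxeme] = None

class MultimodalLexicon:
    """Linked lexicon and voxicon, queried by lemma for parsing,
    generation, or simulation."""
    def __init__(self):
        self._entries: dict[str, Lexeme] = {}

    def add(self, lexeme: Lexeme) -> None:
        self._entries[lexeme.lemma] = lexeme

    def lookup(self, lemma: str) -> Optional[Lexeme]:
        return self._entries.get(lemma)

# Example entry: "cup" with an illustrative habitat and affordances.
cup_vox = Voxeme("cup", "cup.obj",
                 habitat={"orientation": "concave side up"},
                 affordances=["grasp", "contain"])
lex = MultimodalLexicon()
lex.add(Lexeme("cup", "N", cup_vox))
print(lex.lookup("cup").voxeme.affordances)  # ['grasp', 'contain']
```

A simulation component could then resolve a parsed noun to its voxeme and use the habitat and affordance fields to place and manipulate the object in a 3D scene.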
Length:
01:13:48
Date:
07/10/2016
Attachments: (video, slides, etc.)
- 102 MB (19 downloads)
- 118 MB (19 downloads)
- 789 MB (22 downloads)
- 161 MB (18 downloads)
- 326 MB (19 downloads)