Sequence-to-sequence natural language generation for spoken dialogue systems

Speaker:
Ondřej Dušek
Abstract:
The task of natural language generation for spoken dialogue systems is to convert dialogue acts (consisting of speech acts, such as "inform" or "request", and a list of domain-specific attributes and their values) into fluent and relevant natural language sentences. We present three of our recent experiments with applying sequence-to-sequence (seq2seq) neural network models to this problem: First, we compare direct sentence generation with two-step generation via deep syntax trees and show that it is possible to train seq2seq generators from very little data. Second, we enhance the seq2seq model so that it takes previous dialogue context into account and produces contextually appropriate responses. Finally, we evaluate a few simple extensions to the model designed for generating morphologically rich languages, such as Czech.
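To make the input side of the task concrete: a dialogue act as described above can be linearized into a flat token sequence before it is fed to a seq2seq encoder. The sketch below is a minimal illustration of one possible linearization with delexicalized slot values; the slot names, placeholder scheme, and function name are illustrative assumptions, not the speaker's exact representation.

# Minimal sketch (assumed format, not the speaker's exact one) of turning a
# dialogue act into a flat token sequence for a seq2seq encoder.

def linearize_dialogue_act(act_type, slots):
    """Flatten a dialogue act into tokens, e.g.
    inform(name=Cafe Rouge, food=French) ->
    ['inform', 'name', '=', 'X-name', 'food', '=', 'X-food'].
    Slot values are delexicalized to placeholders so the generator
    only has to learn sentence structure, not individual proper names."""
    tokens = [act_type]
    for slot, value in slots:
        tokens += [slot, "=", "X-" + slot]
    return tokens

if __name__ == "__main__":
    da = ("inform", [("name", "Cafe Rouge"), ("food", "French")])
    print(linearize_dialogue_act(*da))
    # -> ['inform', 'name', '=', 'X-name', 'food', '=', 'X-food']
    # A seq2seq decoder would then produce a template such as
    # "X-name serves X-food food .", and the placeholders are
    # replaced with the original slot values in a post-processing step.

In this setup the decoder output is a delexicalized sentence, so the same trained model covers any restaurant name or cuisine without having seen it during training.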
Length:
01:15:10
Date:
20/03/2017
Views: 1695

Attachments: (video, slides, etc.)