Semantic parsing, the task of mapping utterances to semantic
representations (e.g. logical forms), has its roots in the early
natural language understanding systems of the 1960s. These rule-based
systems were representationally sophisticated, but brittle, and thus
fell out of favor as the statistical revolution swept NLP. Since the
late 1990s, however, there has been a resurgence of interest in
semantic parsing from the statistical perspective, where the
representations are logical but the learning is not. Most recently,
there have been efforts to learn these logical forms automatically from
denotations, a much more realistic but also more challenging setting.
The learning perspective has not only produced practical large-scale
semantic parsers; interestingly, it also has implications for the
semantic representations themselves.