Understanding, Meaning, and Representations in Large Language Models

Speaker:
Tomáš Musil (ÚFAL MFF UK)
Abstract:
Large Language Models (LLMs) have become indispensable across diverse domains of contemporary language processing, from improving chatbot interactions and language translation to code generation and the creation of imaginative content. The question of whether LLMs truly understand language has been posed repeatedly, often eliciting a resolute negative response. Nevertheless, recent literature offers more nuanced perspectives on the notion of understanding in LLMs. In this presentation, we will explore these alternative viewpoints, illustrating how different concepts of understanding and meaning can reshape our perception of how LLMs function. Finally, we will show how this philosophical exploration informs our empirical research into the interpretation of vector language representations in LLMs.
Length:
01:02:40
Date:
04/12/2023

Attachments: (video, slides, etc.)
99.0 MB