Building Linguistically Generalizable NLP Systems
09:15 - 10:00  Invited Talk (Presenter: Aurelie Herbelot)

Title:  How to refer like a god: aligning speaker-dependent meaning through formal distributional representations
Abstract: Reference -- the ability to talk about things -- is one of the most fundamental features of human language. Still, we are far from understanding how this ability comes about. Formal semantics defines reference precisely but fails to provide a generic account of its learnability. In data-driven approaches such as distributional semantics, the very notions of entity and event remain undefined, making those accounts unsuitable even to conceptualise reference. In this talk, I will hypothesise omniscient individuals with a fully shared lexicon and formal semantic model, showing that in such a 'godly' setup, reference can trivially be defined as perfect semantic alignment between interlocutors. Turning to a more realistic setting, I will then propose that distributional semantics provides the necessary tools to infer speaker-dependent models that are as close as possible to that idealisation. The suggested framework requires a tight integration of formal and distributional accounts at the representational level, whilst capitalising on the learning algorithms specific to distributional approaches.

11:00 - 11:45   Invited Talk (Presenter: Grzegorz Chrupała)

Title: Linguistic interpretability in neural models of grounded language learning
Abstract: Modeling language learning with neural networks and analyzing the nature of the emerging representations have a long tradition going back to the seminal papers by Elman in the early 1990s. In this talk I will present the modern take on this enterprise. Specifically, I will focus on the setting where language is learned in a visually grounded scenario, from naturalistic text or speech coupled with visually correlated input. This task of learning language in a multisensory setting, with weak and noisy supervision, is of interest to scientists trying to understand the human mind as well as to engineers trying to build smart conversational agents or robots.

I will discuss what representations recurrent neural network models learn in this setting and present analysis tools to better understand them. I will show to what extent these representations encode structures posited by linguists, such as phonemes, words and constructions, and explore the role of network depth in the encoding of these different levels of linguistic abstraction.

Research carried out in collaboration with Afra Alishahi, Ákos Kádár, Lieke Gelderloos and Marie Barking.
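As a rough illustration of the kind of analysis tools mentioned in the abstract, the sketch below trains a simple diagnostic probe: a linear classifier that tries to predict a token-level linguistic label (for example a phoneme or word-class identity) from a recurrent layer's hidden states. All data, dimensions and label sets here are synthetic placeholders, not the models or corpora used in this research.

    # A toy diagnostic probe on recurrent hidden states. Everything below is
    # synthetic: the hidden states, labels and dimensions stand in for the
    # real grounded-language models and annotations discussed in the talk.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Pretend per-token hidden states from one recurrent layer (256-dimensional).
    hidden_states = rng.normal(size=(5000, 256))
    # Pretend token-level labels, e.g. 10 phoneme or word-class categories.
    labels = rng.integers(0, 10, size=5000)

    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, labels, test_size=0.2, random_state=0)

    # The logic of the probe: if a simple linear classifier can recover the
    # label from the hidden state, the layer is taken to encode that linguistic
    # distinction; repeating this per layer shows how the encoding changes
    # with network depth.
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("probe accuracy:", probe.score(X_test, y_test))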

14:00 - 14:45   Invited Talk (Presenter: Martha Palmer)

Title: Beyond English Semantics
Abstract: Abstract Meaning Representations (AMRs) provide a single, graph-based semantic representation that abstracts away from the word order and syntactic structure of a sentence, resulting in a more language-neutral representation of its meaning. AMR implements a simplified, standard neo-Davidsonian semantics. A word in a sentence either maps to a concept or a relation, or is omitted if it is already inherent in the representation or if it conveys inter-personal attitude (e.g., stance or distancing). The basis of AMR is PropBank’s lexicon of coarse-grained senses of verb, noun and adjective relations, as well as the roles associated with each sense (each lexicon entry is a ‘roleset’). By marking the appropriate roles for each sense, this level of annotation provides information about who is doing what to whom. However, unlike PropBank, AMR also provides a deeper level of representation of discourse relations, non-relational noun phrases, prepositional phrases, quantities and time expressions (which PropBank largely leaves unanalyzed), as well as Named Entity tags with Wikipedia links. Additionally, AMR makes a greater effort to abstract away from language-particular syntactic facts. The latest version of AMR adds coreference links across sentences, including links to implicit arguments. This talk will explore the differences between PropBank and AMR, the current and future plans for AMR annotation, and the potential of AMR as a basis for machine translation. It will end with a discussion of areas of semantic representation that AMR does not currently address, which remain open challenges.
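To make the shape of the representation concrete, the snippet below gives a small, hand-constructed AMR for the sentence "The boy wants to go", written in the usual PENMAN notation and read with the open-source penman Python library. The example sentence, the frame labels and the choice of library are illustrative assumptions added for this page, not material from the talk.

    # A minimal, illustrative AMR for "The boy wants to go", written in
    # PENMAN notation. Frame labels (want-01, go-02) follow PropBank-style
    # roleset naming but are chosen here purely for illustration.
    # Requires the open-source `penman` package (pip install penman).
    import penman

    amr_string = """
    (w / want-01
       :ARG0 (b / boy)
       :ARG1 (g / go-02
                :ARG0 b))
    """

    graph = penman.decode(amr_string)

    # Each triple is (source, role, target); :ARG0 and :ARG1 mark who is
    # doing what to whom, and the reused variable b shows that the wanter
    # and the goer are the same entity.
    for triple in graph.triples:
        print(triple)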

