Supported by: Toyota Technological Institute (Japan), Toyota Technological Institute at Chicago, RIKEN Center for AIP, AIST AI Research Center, and Osaka University

Additional cooperation from: The Institute of Statistical Mathematics and Tokyo Institute of Technology

Fifth International Workshop on Symbolic-Neural Learning (SNL-2021)

SNL2021 will be held online due to the COVID-19 pandemic.

Note that SNL2021 is free of charge.

June 29-July 2, 2021

Symbolic-neural learning involves deep learning methods in combination with symbolic structures. A "deep learning method" is taken to be a learning process based on gradient descent on real-valued model parameters. A "symbolic structure" is a data structure involving symbols drawn from a large vocabulary; for example, sentences of natural language, parse trees over such sentences, databases (with entities viewed as symbols), and the symbolic expressions of mathematical logic or computer programs.
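The combination described above, symbols from a vocabulary mapped to real-valued parameters that are trained by gradient descent, can be illustrated with a minimal sketch. The toy task, vocabulary, and learning rate below are invented for illustration and are not part of the workshop materials.

```python
# Toy "symbolic-neural" sketch: each symbol from a vocabulary is mapped to a
# real-valued parameter (a 1-D embedding), trained by gradient descent so
# that its embedding matches a target score. (Illustrative example only.)

vocab = {"good": 1.0, "great": 1.0, "bad": -1.0, "awful": -1.0}  # symbol -> target
emb = {word: 0.0 for word in vocab}  # one real-valued parameter per symbol
lr = 0.1  # learning rate

for step in range(100):
    for word, target in vocab.items():
        # Squared loss L = (emb - target)^2, so dL/demb = 2 * (emb - target).
        grad = 2.0 * (emb[word] - target)
        emb[word] -= lr * grad  # gradient descent update
```

After training, each symbol's embedding converges toward its target score; the same principle, scaled up to high-dimensional embeddings and deep networks, underlies the systems discussed at the workshop.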
A distinctive feature of symbolic-neural learning is that it allows modeling interactions between different modalities: speech, vision, and language. Such multimodal information processing is crucial for applying research outcomes in the real world.
Reflecting the growing need for and attention to multimodal research, this year's SNL workshop features research on "Beyond modality: Research across speech, vision, and language boundaries."
Topics of interests include, but are not limited to, the following areas:

  • Speech, vision, and natural language interactions in robotics
  • Multimodal and grounded language processing
  • Multimodal QA and translation
  • Dialogue systems
  • Language as a mechanism to structure and reason about visual perception
  • Image caption generation and image generation from text
  • General knowledge question answering
  • Reading comprehension
  • Textual entailment
Deep learning systems across these areas share various architectural ideas, including word and phrase embeddings, self-attention networks, recurrent neural networks (LSTMs and GRUs), and various memory mechanisms. Certain linguistic and semantic resources may also be relevant across these applications; examples include dictionaries, thesauri, WordNet, FrameNet, Freebase, DBpedia, parsers, named entity recognizers, coreference systems, knowledge graphs, and encyclopedias.
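As a concrete example of one of the shared architectural ideas above, the following is a minimal sketch of scaled dot-product self-attention. For clarity it uses the input vectors directly as queries, keys, and values (i.e., identity projection matrices); real systems learn separate projections. The function names and the example sequence are invented for illustration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention over a list of d-dimensional vectors.
    Queries, keys, and values are the inputs themselves (identity projections)."""
    d = len(X[0])
    out = []
    for q in X:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        weights = softmax(scores)
        # Each output is a convex combination of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, X)) for j in range(d)])
    return out

seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attended = self_attention(seq)
```

Each output vector mixes information from the whole sequence according to learned (here, fixed) similarity weights, which is the mechanism underlying Transformer-style models.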

The workshop consists of invited oral presentations.

Organizing Committee:

Yasushi Yagi (Chair) Osaka University, Osaka, Japan
David McAllester Toyota Technological Institute at Chicago, Chicago, USA
Tomoko Matsui The Institute of Statistical Mathematics, Tokyo, Japan
Yutaka Sasaki (Treasurer) Toyota Technological Institute, Nagoya, Japan
Koichi Shinoda Tokyo Institute of Technology, Tokyo, Japan
Masashi Sugiyama RIKEN Center for AIP and the University of Tokyo, Tokyo, Japan
Jun'ichi Tsujii AIST AI Research Center, Tokyo, Japan and
the University of Manchester, Manchester, UK

Program Committee:

Yuki Arase (Chair) Osaka University, Osaka, Japan
Nakamasa Inoue Tokyo Institute of Technology, Tokyo, Japan
Daichi Mochihashi The Institute of Statistical Mathematics, Tokyo, Japan
David McAllester Toyota Technological Institute at Chicago, Chicago, USA
Hiroya Takamura AIST AI Research Center, Tokyo, Japan and Tokyo Institute of Technology, Tokyo, Japan
Norimichi Ukita Toyota Technological Institute, Nagoya, Japan
Kazuyoshi Yoshii RIKEN Center for AIP and Kyoto University, Kyoto, Japan

Local Arrangements Committee:

Shuqiong Wu (Local Chair), Osaka University, Osaka, Japan

Previous Workshops: