15 Dec 2017 Moscow - On December 8, 2017, the Conference on Neural Information Processing Systems (NIPS) hosted the finals of the Conversational Intelligence Challenge, organized through a collaboration of AI research universities.
Conversational artificial intelligence (AI) is the task of holding meaningful dialogue with a user while behaving as human-like as possible. This challenging and complex problem is considered one of the core tasks that true AI should pass. Dialogue agents - programs capable of holding a conversation - surround us everywhere: from bots we chat with in messengers to personal assistants and voice control interfaces. While dialogue systems are becoming more widespread, a great deal of work remains to make conversational intelligence more sophisticated.
To create dialogue systems, we need two things: more data to train on (examples of conversations), and reliable, fast ways to evaluate the quality of these dialogues. The competition gathered chatbots to determine which techniques make a chatbot appear intelligent. It also aimed to work out approaches for evaluating chatbots and to collect human-to-machine dialogues for further end-to-end dialogue system training.
During the competition, chatbots talked with users, who then assessed the quality of these conversations. The competition was held in two stages: in the first round, organized by the Neural Networks and Deep Learning Lab this summer, hackathon participants talked to the systems. The final stage took place at the NIPS conference, where the scientific community tested the chatbots. Reflecting the wider AI boom, over the past few years the NIPS conference has grown from a small gathering of a few hundred academics to a sprawling event with thousands of attendees, big-name corporate recruiters, and lavish parties.
During the Challenge, the bots conducted about 3,500 dialogues, about 400 of which users judged to be meaningful. The dialogues will be examined further to find out what factors are involved in maintaining a meaningful conversation and how its quality can be measured. This knowledge will help improve tools for creating dialogue systems.
Six teams took part in the competition finals: from the University of Wroclaw, the Moscow Institute of Physics and Technology, McGill University, KAIST & AIBrain & Crosscert, UMass Lowell's Text Machine Lab & Trinity College, Hong Kong Polytechnic University, and Fudan University.
The first prize was awarded to two winning teams: the Moscow Institute of Physics and Technology team (Idris Yusupov, Yurii Kuratov) and the University of Wroclaw team (Jan Chorowski, Adrian Lancucki, Szymon Malik, Maciej Pawlikowski, Pawel Rychlikowski, Pawel Zykowski).
Both teams used similar approaches: they developed modules that handle different scenarios - making small talk, answering factoid questions asked by the user, engaging the user in talking about themselves, processing unrecognized input - plus a command module that decides which routine should step in. This is now considered a state-of-the-art technique for chatbot development. Even though these chatbots do not contain any novel methods - i.e., methods not published anywhere before - they are good proofs of concept for existing approaches. Moreover, these solutions will be open-sourced and can serve as a starting point for further research in dialogue systems.
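The modular architecture described above can be sketched in miniature: several scenario modules each score their confidence for an incoming utterance, and a command (dispatcher) module routes the utterance to whichever routine should step in. All module names and the keyword-based scoring below are illustrative assumptions, not the winning teams' actual code; in the real systems the scorers would be trained models rather than keyword rules.

```python
# Minimal sketch of a modular chatbot with a command/dispatcher module.
# Each scenario module pairs a confidence scorer with a response handler.
# Names and scoring heuristics are hypothetical, for illustration only.

def small_talk(utterance):
    return "Nice to meet you! How is your day going?"

def factoid_qa(utterance):
    return "Let me look that up for you."

def fallback(utterance):
    # Handles unrecognized input, as described in the article.
    return "Sorry, I didn't catch that. Could you rephrase?"

def score_small_talk(utterance):
    # Naive keyword heuristic standing in for a trained classifier.
    greetings = {"hi", "hello", "hey"}
    return 1.0 if any(w in greetings for w in utterance.lower().split()) else 0.0

def score_factoid(utterance):
    # Treat questions starting with a wh-word as factoid questions.
    question_words = {"who", "what", "when", "where", "why", "how"}
    words = utterance.lower().rstrip("?").split()
    return 0.8 if words and words[0] in question_words else 0.0

# Registry of (scorer, handler) pairs the dispatcher chooses among.
MODULES = [
    (score_small_talk, small_talk),
    (score_factoid, factoid_qa),
]

def dispatch(utterance):
    """Command module: route the utterance to the highest-scoring routine,
    falling back when no module is confident."""
    scorer, handler = max(MODULES, key=lambda m: m[0](utterance))
    if scorer(utterance) > 0.0:
        return handler(utterance)
    return fallback(utterance)
```

For example, `dispatch("Hello there")` would be routed to the small-talk module, while `dispatch("Who wrote Hamlet?")` would go to the factoid module; anything neither scorer recognizes falls through to the fallback handler.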
The Conversational Intelligence Challenge attracted the interest of leaders in conversational AI - Google AI, Facebook Research, Amazon, and Microsoft - and will be held annually.
The competition was organized by Moscow Institute of Physics and Technology, Université de Montréal, McGill University, and Carnegie Mellon University in partnership with Facebook Research, Flint Capital, IVADO, Maluuba, and Element AI.