The PASCAL challenge is now officially closed and the results are available here. The challenge instructions and data remain on this site for the benefit of groups wishing to compare their algorithms with those that have been submitted. If citing the challenge, please use the following reference:

Barker, J., Vincent, E., Ma, N., Christensen, H. and Green, P. (2013) The PASCAL CHiME speech separation and recognition challenge. Computer Speech and Language 27(3):621-633 doi:10.1016/j.csl.2012.09.001


  • New! RESULTS of the PASCAL CHiME challenge are now available
  • Proceedings of the CHiME 2011 Workshop are now online.
  • 16 Mar 2011. Complete final test set available for download


University of Sheffield, UK, and INRIA Rennes, France


Welcome to the PASCAL 'CHiME' Challenge

In 2006 the PASCAL network funded the 1st Speech Separation Challenge, which addressed the problem of separating and recognising speech mixed with speech. We are now launching a successor to this challenge that aims to tackle speech separation and recognition in more typical everyday listening conditions.

The challenge employs background noise that has been collected from a real family living room using binaural microphones. Target speech commands have been mixed into the environment at a fixed position using genuine room impulse responses. The task is to separate the speech and recognise the commands being spoken, using systems that have been trained on noise-free commands and room noise recordings.
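The mixing procedure described above can be sketched as follows. This is an illustrative toy example only, not the official CHiME mixing tool: the speech signal is convolved with a left and a right room impulse response to place it at a fixed position, and the resulting binaural signal is added to the recorded noise background. All signal values and function names here are hypothetical placeholders.

```python
def convolve(signal, impulse_response):
    """Direct-form convolution: y[n] = sum_k h[k] * x[n - k]."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(impulse_response):
            out[n + k] += x * h
    return out

def mix(speech, rir_left, rir_right, noise_left, noise_right, gain=1.0):
    """Spatialise speech through binaural RIRs, then add the noise background.

    `gain` scales the reverberated speech relative to the noise (a crude
    stand-in for placing the mixture at a chosen SNR)."""
    left = convolve(speech, rir_left)
    right = convolve(speech, rir_right)
    n = min(len(left), len(noise_left), len(right), len(noise_right))
    return ([gain * left[i] + noise_left[i] for i in range(n)],
            [gain * right[i] + noise_right[i] for i in range(n)])

# Toy usage: a single-impulse "speech" signal through a two-tap impulse
# response (direct path plus one reflection), mixed with constant noise.
speech = [1.0, 0.0, 0.0]
rir = [0.5, 0.25]
noise = [0.1, 0.1, 0.1, 0.1]
left, right = mix(speech, rir, rir, noise, noise)
```

In the real challenge data the impulse responses were measured in the room itself and the noise is a genuine multisource recording, so the mixtures exhibit reverberation and nonstationary interference that this toy example only hints at.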

On this web site you will find everything you need to get started. The background section explains the general motivation for the challenge. The instructions section describes the separation/recognition task in more detail and what you need to do in order to take part. The datasets are already available for download, and the evaluation tools will become available by the end of October. Further important dates are listed in the schedule.

This is a multidisciplinary challenge. We hope to encourage participants from the machine learning, source separation and speech recognition communities. Although the ultimate evaluation will be through speech recognition scores, participants may submit separated signals, robust feature extractors or complete recognition systems. All entrants will be invited to submit papers describing their work to a dedicated satellite workshop hosted at Interspeech 2011. Participants will also be invited to submit longer versions of their work to a Special Issue of Computer Speech and Language that has been organised on the theme of Machine Listening in Multisource Environments.

If you have any questions regarding the challenge please do not hesitate to contact us.

Table of contents