Instructions

In order to reach a broad audience, we have tried to avoid setting rules that might artificially disadvantage one research community over another. However, to keep the task as close as possible to a real application scenario, and to keep systems broadly comparable, there are some guidelines that we expect participants to follow.

Which information can I use?

You are allowed to use the fact that the four classes of acoustic environments (BUS, CAF, PED, STR) are shared across datasets.

You are also allowed to use the environment and speaker labels in the training data, and the speaker labels in the development and test data.

You are encouraged to use the embedded training and development data and the corresponding noise-only recordings in any way that may help, e.g., to learn models of the acoustic environments and use them to recognize the test environment and/or to enhance the signal. The embedded test data may also be used, within the limit of the immediate acoustic context of each test utterance, that is, the 5 s preceding the utterance. Note that these 5 s may also contain speech, which is not always annotated.
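
For instance, assuming the utterance start time (in seconds) within the embedded recording is known from the annotations, the context could be extracted along the following lines. This is a minimal sketch using the soundfile package; the file path and the way you obtain the start time are placeholders.

    import soundfile as sf

    def extract_context(embedded_wav, utt_start_s, context_s=5.0):
        """Return up to `context_s` seconds of audio preceding an utterance.

        embedded_wav -- path to the continuous (embedded) recording
        utt_start_s  -- utterance start time in seconds, from the annotations
        """
        fs = sf.info(embedded_wav).samplerate
        start = max(0, int((utt_start_s - context_s) * fs))
        stop = int(utt_start_s * fs)
        context, _ = sf.read(embedded_wav, start=start, stop=stop)
        return context  # may contain unannotated speech as well as noise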

Which information shall I not use?

Systems should not exploit the following information when transcribing a given test utterance:

  • the environment label,
  • more than 5 s of context.
Automatic identification of the environment of the test utterance and automatic exploitation of the immediate acoustic context are allowed, though. The rationale is that a commercial ASR system deployed on a tablet should work in any environment immediately after the tablet has been switched on.

Similarly, manual refinement of the speech start and end times and manual annotation of the unannotated speech data are not allowed, but automatic refinement and automatic detection of the speech data in the 5 s context are allowed.
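
As an illustration, speech in the 5 s context could be detected automatically with something as simple as an energy-based detector along these lines (x is a mono numpy array; the frame sizes and threshold are arbitrary choices, not part of the rules):

    import numpy as np

    def detect_speech_frames(x, fs, frame_ms=25, hop_ms=10, threshold_db=-35.0):
        """Crude energy-based voice activity detection on a mono signal x.

        Returns one boolean per frame: True where the frame energy exceeds
        `threshold_db` relative to the loudest frame.
        """
        frame = int(fs * frame_ms / 1000)
        hop = int(fs * hop_ms / 1000)
        n_frames = 1 + max(0, (len(x) - frame) // hop)
        energy = np.array([np.sum(x[i * hop:i * hop + frame] ** 2)
                           for i in range(n_frames)])
        energy_db = 10.0 * np.log10(energy / (energy.max() + 1e-12) + 1e-12)
        return energy_db > threshold_db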

All parameters should be tuned on the training set or the development set. Systems should not use different tuning parameters for different environments or data types (real or simulated). For example, the baseline script tunes a single language model weight, which is optimized for the average WER over all recognition results on the development set, pooling all environments and data types.
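
As an illustration, such a tuning loop boils down to the following sketch, where score_dev is a hypothetical placeholder for your own decoding-plus-scoring pipeline and the subset names mirror the baseline's development sets:

    # Sketch: tune one language model weight shared across all conditions.
    def score_dev(subset, lm_weight):
        """Placeholder: decode the given development subset with this LM
        weight and return its WER (%). Wrap your own pipeline here."""
        raise NotImplementedError

    DEV_SUBSETS = ["dt05_real", "dt05_simu"]  # all environments pooled in each

    def average_dev_wer(lm_weight):
        # Single tuning criterion: the average over both data types (and,
        # implicitly, over all four environments contained in each subset).
        return sum(score_dev(s, lm_weight) for s in DEV_SUBSETS) / len(DEV_SUBSETS)

    best_weight = min(range(9, 21), key=average_dev_wer)  # one shared weight

The chosen weight is then used unchanged for every environment and data type.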

Which results should I report?

For every tested system, you should report 4 WERs (%), namely:

  • the WER on the real development set
  • the WER on the simulated development set
  • the WER on the real test set
  • the WER on the simulated test set
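Each of these numbers is the word error rate pooled over all utterances of the set: the total number of substitutions, deletions and insertions (a word-level edit distance) divided by the total number of reference words. The baseline scoring scripts compute this for you; the sketch below merely illustrates the metric.

    def edit_distance(ref, hyp):
        """Word-level Levenshtein distance between two lists of words."""
        d = list(range(len(hyp) + 1))          # d[j]: distance ref[:i] vs hyp[:j]
        for i in range(1, len(ref) + 1):
            prev_diag, d[0] = d[0], i
            for j in range(1, len(hyp) + 1):
                cur = d[j]
                d[j] = min(d[j] + 1,                                 # deletion
                           d[j - 1] + 1,                             # insertion
                           prev_diag + (ref[i - 1] != hyp[j - 1]))   # substitution
                prev_diag = cur
        return d[-1]

    def wer(pairs):
        """pairs: (reference_words, hypothesis_words) for every utterance."""
        errors = sum(edit_distance(r, h) for r, h in pairs)
        return 100.0 * errors / sum(len(r) for r, _ in pairs)

For example, wer([("the bus is late".split(), "the bus late".split())]) yields 25.0: one deletion out of four reference words.
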
For instance, here are the 4 WERs (%) achieved by each of the baseline GMM and DNN+sMBR systems. These results were obtained for one run on one machine. If you run the baseline yourself, you will probably obtain slightly different results (up to several percent absolute for large WERs) due to random initialisation and to machine-specific issues.

Acoustic model   Test data   Training data   Dev real   Dev simu   Test real   Test simu
GMM              noisy       clean              55.65      50.25       79.84       63.30
GMM              noisy       noisy              18.70      18.71       33.23       21.59
GMM              enhanced    clean              41.88      21.72       78.12       25.63
GMM              enhanced    enhanced           20.55       9.79       37.36       10.59
DNN+sMBR         noisy       noisy              16.13      14.30       33.43       21.51
DNN+sMBR         enhanced    enhanced           17.72       8.17       33.76       11.19

Such results will make it possible to assess whether simulated data are a reliable way of predicting ASR performance on real data, for development and/or for test. This currently appears to be approximately true for noisy data, but not for enhanced data due to the limitations of the acoustic simulation baseline. You are encouraged to address these limitations, so that real and simulated ASR performance become more similar after enhancement.

In the end, only the results of the best system on the real test set will be taken into account in the final WER ranking of all systems. The best system is taken to be the one that performs best on the real development set.

For that system, you should report 16 WERs: one for each of the four sets (real/simulated, development/test) in each of the four environments. Participants should also provide the recognized transcriptions for all sets, with time alignment information when applicable (if the format of the transcriptions is not standard, it must be described).

For instance, here are the WERs achieved by the best baseline GMM system (with noisy training and test data).

Environment   Dev real   Dev simu   Test real   Test simu
BUS              26.12      18.91       49.90       18.58
CAF              17.82      23.13       34.09       24.02
PED              13.01      15.53       27.97       22.54
STR              17.83      17.29       20.99       21.20

Can I use different features, a different recogniser or more data?

You are entirely free in the development of your system, from the front end to the back end and beyond, and you may even use extra data, including clean data, additional noisy data created by running the provided simulation baseline (or an improved version thereof), or any other data.

However, you should provide enough information, results and comparisons for one to understand where the performance gains of your system come from. For example, if your system is made of multiple blocks, we encourage you to evaluate and report the influence of each block on performance separately.

Specifically:

  • if you use extra training data, please also report the results of your system using the official training set, which consists of 1,600 real and 7,138 simulated utterances; each utterance of the official training set may be used in as many versions as needed (clean, noisy, enhanced...); you may even modify the acoustic simulation baseline, provided that you mix each speech signal with the same noise signal as in the original simulated set, i.e., only the impulse responses may change, not the noise instance (see the sketch after this list);
  • similarly, if you use extra development data, please also report the results of your system using the official development set, which consists of 410 real utterances and 410 simulated utterances for each environment;
  • any language model or language model rescoring technique (e.g., MBR, DLM) can be used and reported as an official result as long as it is trained on official training data only, i.e., the data in CHiME3/data/WSJ0/wsj0/doc/lng_modl/lm_train/. If you do decide to change the language model, please also report the result of your system with the provided baseline language model and without rescoring;
  • if your system can be split into a front end and a back end and your front end differs from the baseline one, please also report the results obtained by combining the baseline front end with your back end;
  • similarly, if your back end differs from the baseline one, please also report the results obtained by combining your front end with the baseline back end.
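As an illustration of the noise-instance constraint in the first bullet above, a modified simulation would only swap the convolutive part, along these lines. This is a single-channel sketch: how you estimate the new impulse response rir is up to you, and the file paths are placeholders.

    import soundfile as sf
    from scipy.signal import fftconvolve

    def resimulate(clean_wav, noise_wav, rir, out_wav):
        """Mix a clean utterance, convolved with a new impulse response,
        with the exact noise instance of the original simulated set."""
        speech, fs = sf.read(clean_wav)
        noise, fs_noise = sf.read(noise_wav)
        assert fs == fs_noise, "sampling rates must match"
        reverberated = fftconvolve(speech, rir)
        n = min(len(reverberated), len(noise))
        # Only the impulse response may change; the noise signal must be
        # the one used in the original mixture.
        sf.write(out_wav, reverberated[:n] + noise[:n], fs)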

The interface between the front end and the back end is taken to be either at the signal level or at the feature level, depending on whether your front end operates in the signal or the feature domain.

Only the results obtained using the official training and development sets (including possible modifications of the acoustic simulation baseline as specified above) will be taken into account in the final WER ranking of all systems.