Instructions
To reach a broad audience, we have tried to avoid setting rules that might artificially disadvantage one research community over another. However, to keep the task as close as possible to an application scenario, and to allow systems to be broadly comparable, there are some guidelines that we expect participants to follow.
Which information can I use?
You are allowed to use the fact that the four classes of acoustic environments (BUS, CAF, PED, STR) are shared across datasets.
You are also allowed to use the environment and speaker labels in the training data, and the speaker labels in the development and test data.
You are encouraged to use the embedded training and development data and the corresponding noise-only recordings in any way that may help, e.g., to learn models of the acoustic environments and use them to recognize the test environment and/or to enhance the signal. The embedded test data may also be used, within the limit of the immediate acoustic context of each test utterance, that is, the 5 s preceding the utterance. Note that these 5 s may also contain speech, which is not always annotated.
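For illustration, here is a minimal sketch of extracting this context, assuming the embedded recording is available as a WAV file and the utterance start time (in seconds) is known from the provided annotations; the function and argument names are hypothetical:

```python
import soundfile as sf

def extract_context(embedded_wav, utt_start_s, context_s=5.0):
    """Return up to `context_s` seconds of audio preceding the utterance."""
    audio, fs = sf.read(embedded_wav)  # shape: (n_samples, n_channels)
    start = max(0, int(round((utt_start_s - context_s) * fs)))
    end = int(round(utt_start_s * fs))
    return audio[start:end], fs
```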
Which information shall I not use?
Systems should not exploit the following information when transcribing a given test utterance:
- the environment label,
- more than 5 s of context.
Similarly, manual refinement of the speech start and end times and manual annotation of the unannotated speech data are not allowed, but automatic refinement and automatic detection of the speech in the 5 s context are allowed.
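As one possible illustration of such automatic detection, the sketch below applies a simple frame-energy criterion to one channel of the 5 s context; the frame sizes and threshold are arbitrary assumptions, not part of the baseline:

```python
import numpy as np

def energy_vad(x, fs, frame_s=0.025, hop_s=0.010, threshold_db=-40.0):
    """Flag frames whose energy lies within `threshold_db` of the loudest frame.

    `x` is a single-channel signal, e.g. one channel of the 5 s context.
    Returns one boolean per frame (True = likely speech or loud noise).
    """
    frame, hop = int(frame_s * fs), int(hop_s * fs)
    n_frames = max(0, 1 + (len(x) - frame) // hop)
    if n_frames == 0:
        return np.zeros(0, dtype=bool)
    energy = np.array([np.sum(x[i * hop:i * hop + frame] ** 2)
                       for i in range(n_frames)])
    energy_db = 10.0 * np.log10(energy + 1e-12)
    return energy_db > energy_db.max() + threshold_db
```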
All parameters should be tuned on the training set or the development set. The system should not use different tuning parameters for different noisy environments or data types (real or simulated). For example, the baseline script tunes the system with a single language model weight, optimized on the average WER over the entire development set, including all noisy environments and data types.
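Here is a minimal sketch of this single-weight tuning, assuming a hypothetical decode_and_score(weight, subset) helper that decodes one development subset at a given language model weight and returns its word error and word counts (the subset names follow baseline-style naming but are assumptions here):

```python
def tune_lm_weight(weights, decode_and_score,
                   subsets=("dt05_real", "dt05_simu")):
    """Pick a single language model weight by the average WER over the whole
    development set (all environments and both data types pooled)."""
    best_weight, best_wer = None, float("inf")
    for w in weights:
        errors, words = 0, 0
        for subset in subsets:
            e, n = decode_and_score(w, subset)  # hypothetical helper
            errors, words = errors + e, words + n
        avg_wer = 100.0 * errors / words
        if avg_wer < best_wer:
            best_weight, best_wer = w, avg_wer
    return best_weight

# e.g., best = tune_lm_weight(range(9, 21), decode_and_score)
```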
Which results should I report?
For every tested system, you should report 4 WERs (%; a computation sketch follows this list), namely:
- the WER on the real development set
- the WER on the simulated development set
- the WER on the real test set
- the WER on the simulated test set
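As a reminder, WER is the number of word substitutions, deletions, and insertions in the best alignment, divided by the number of reference words; a minimal sketch via edit distance:

```python
import numpy as np

def wer(ref_words, hyp_words):
    """Word error rate in %: (substitutions + deletions + insertions) / len(ref)."""
    d = np.zeros((len(ref_words) + 1, len(hyp_words) + 1), dtype=int)
    d[:, 0] = np.arange(len(ref_words) + 1)
    d[0, :] = np.arange(len(hyp_words) + 1)
    for i in range(1, len(ref_words) + 1):
        for j in range(1, len(hyp_words) + 1):
            sub = d[i - 1, j - 1] + (ref_words[i - 1] != hyp_words[j - 1])
            d[i, j] = min(sub, d[i - 1, j] + 1, d[i, j - 1] + 1)
    return 100.0 * d[-1, -1] / len(ref_words)

# wer("the cat sat".split(), "the cat sat down".split())  # -> 33.33 (1 insertion / 3 words)
```

For reference, the baseline systems achieve the following WERs (%):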
| Acoustic model | Test data | Training data | Dev. set (real) | Dev. set (simu.) | Test set (real) | Test set (simu.) |
|---|---|---|---|---|---|---|
| GMM | noisy | clean | 55.65 | 50.25 | 79.84 | 63.30 |
| GMM | noisy | noisy | 18.70 | 18.71 | 33.23 | 21.59 |
| GMM | enhanced | clean | 41.88 | 21.72 | 78.12 | 25.63 |
| GMM | enhanced | enhanced | 20.55 | 9.79 | 37.36 | 10.59 |
| DNN+sMBR | noisy | noisy | 16.13 | 14.30 | 33.43 | 21.51 |
| DNN+sMBR | enhanced | enhanced | 17.72 | 8.17 | 33.76 | 11.19 |
Such results will make it possible to assess whether simulated data are a reliable way of predicting ASR performance on real data, for development and/or for test. This currently appears to be approximately true for noisy data, but not for enhanced data due to the limitations of the acoustic simulation baseline. You are encouraged to address these limitations, so that real and simulated ASR performance become more similar after enhancement.
Ultimately, only the results of the best system on the real test set will be taken into account in the final WER ranking of all systems. The best system is taken to be the one that performs best on the real development set.
For that system, you should report 16 WERs (one for each of the four development/test sets and each of the four environments). Participants should also provide the recognized transcriptions for all the sets, with time alignment information where applicable (if the format of the transcriptions is not standard, it must be described).
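A minimal sketch of this tabulation, assuming per-utterance scoring results are available as (set, environment, reference, hypothesis) tuples (a hypothetical structure) and reusing an edit-distance function such as the one sketched above:

```python
from collections import defaultdict

def per_condition_wer(results, edit_distance):
    """Pool errors over utterances for each (set, environment) pair:
    4 sets x 4 environments = 16 WERs."""
    errors, words = defaultdict(int), defaultdict(int)
    for subset, env, ref, hyp in results:  # e.g., ("dt05_real", "BUS", [...], [...])
        errors[subset, env] += edit_distance(ref, hyp)
        words[subset, env] += len(ref)
    return {key: 100.0 * errors[key] / words[key] for key in errors}
```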
For instance, here are the WERs achieved by the best baseline GMM system (with noisy training and test data).
| Environment | Dev. set (real) | Dev. set (simu.) | Test set (real) | Test set (simu.) |
|---|---|---|---|---|
| BUS | 26.12 | 18.91 | 49.90 | 18.58 |
| CAF | 17.82 | 23.13 | 34.09 | 24.02 |
| PED | 13.01 | 15.53 | 27.97 | 22.54 |
| STR | 17.83 | 17.29 | 20.99 | 21.20 |
Can I use different features, a different recogniser or more data?
You are entirely free in the development of your system, from the front end to the back end and beyond, and you may even use extra data, including clean data, additional noisy data created by running the provided simulation baseline (or an improved version thereof), or any other data.
However, you should provide enough information, results, and comparisons for one to understand where the performance gains of your system come from. For example, if your system is made of multiple blocks, we encourage you to evaluate and report the influence of each block on performance separately.
Specifically:
- if you use extra training data, please also report the results of your system using the official training set, which consists of 1,600 real utterances and 7,138 simulated utterances; each utterance of the official training set may be used in as many versions as needed (clean, noisy, enhanced...); you may even modify the acoustic simulation baseline, provided that you mix each speech signal with the same noise signal as in the original simulated set, i.e., only the impulse responses can change, not the noise instance (see the remixing sketch below);
- similarly, if you use extra development data, please also report the results of your system using the official development set, which consists of 410 real utterances and 410 simulated utterances for each environment;
- any language model or language model rescoring technique (e.g., MBR, DLM) can be used and reported as an official result as long as it is trained using official training data only, i.e., data in CHiME3/data/WSJ0/wsj0/doc/lng_modl/lm_train/. If you decide to change the language model, please also report the results of your system with the provided baseline language model and without rescoring;
- if your system can be split into a front end and a back end and your front end differs from the baseline, please also report the results obtained by combining the baseline front end with your back end;
- similarly, if your back end differs from the baseline, please also report the results obtained by combining your front end with the baseline back end.
The interface between the front end and the back end is taken to be either at the signal level or at the feature level, depending on whether your front end operates in the signal domain or in the feature domain.
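As an illustration of the allowed modification, the sketch below re-mixes a clean utterance with the same noise instance as in the original simulated set, so that only the (new, hypothetical) impulse response changes; the file names and the availability of the extracted noise instance are assumptions:

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def remix_with_original_noise(clean_wav, noise_wav, new_rir, out_wav):
    """Convolve clean speech with a new impulse response, then add the SAME
    noise instance used for this utterance in the original simulated set."""
    clean, fs = sf.read(clean_wav)        # single-channel clean utterance
    noise, fs_noise = sf.read(noise_wav)  # noise instance from the original mix
    assert fs == fs_noise
    reverberated = fftconvolve(clean, new_rir)[:len(noise)]
    # Pad in case the reverberated signal is shorter than the noise segment.
    reverberated = np.pad(reverberated, (0, len(noise) - len(reverberated)))
    sf.write(out_wav, reverberated + noise, fs)
```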
Only the results obtained using the official training and development sets (including possible modifications of the acoustic simulation baseline as specified above) will be taken into account in the final WER ranking of all systems.