Handling Unexpected Acoustic Data

Organizer: Hynek Hermansky (IDIAP)
Panelists: Li Deng, Jeff Bilmes, Jordan Cohen, Ralf Schlueter, Herve Bourlard

Most of the "knowledge" in ASR comes from training data (be it text or acoustic data). How should we handle unexpected acoustic inputs that have not been seen in the training data and are not supported by prior knowledge? Such inputs do not severely impair human speech communication and are often of high information value, yet they present significant problems for ASR. Examples include out-of-vocabulary, out-of-language, and out-of-domain words, accented speech, children's speech, unexpected noises, etc. Do we expect that gradual improvement of the current stochastic approach to ASR will alleviate this problem?