
Statistical models for noise-robust speech recognition

dc.contributor: Gales, Mark
dc.creator: van Dalen, Rogier Christiaan
dc.date.accessioned: 2018-11-24T13:11:22Z
dc.date.available: 2012-01-23T12:23:40Z
dc.date.available: 2018-11-24T13:11:22Z
dc.date.issued: 2011-11-08
dc.identifier: http://www.dspace.cam.ac.uk/handle/1810/241107
dc.identifier: https://www.repository.cam.ac.uk/handle/1810/241107
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/123456789/2939
dc.description.abstract: A standard way of improving the robustness of speech recognition systems to noise is model compensation. This replaces a speech recogniser's distributions over clean speech with ones over noise-corrupted speech. For each clean speech component, model compensation techniques usually approximate the corrupted-speech distribution with a diagonal-covariance Gaussian distribution. This thesis looks into improving on this approximation in two ways: first, by estimating full-covariance Gaussian distributions; second, by approximating corrupted-speech likelihoods without any parameterised distribution.

The first part of this work is about compensating for within-component feature correlations under noise. For this, the covariance matrices of the computed Gaussians should be full instead of diagonal. The estimation of off-diagonal covariance elements turns out to be sensitive to approximations. A popular approximation is the one that state-of-the-art compensation schemes, like VTS compensation, use for dynamic coefficients: the continuous-time approximation. Standard speech recognisers contain both per-time-slice (static) coefficients and dynamic coefficients, which represent signal changes over time and are normally computed from a window of static coefficients. To remove the need for the continuous-time approximation, this thesis introduces a new technique: it first compensates a distribution over the window of statics, and then applies the same linear projection that extracts dynamic coefficients. Within this framework, it introduces a number of methods that address the correlation changes that occur in noise. The next problem is decoding speed with full covariances. This thesis re-analyses the previously-introduced predictive linear transformations, and shows how they can model feature correlations at low and tunable computational cost.

The second part of this work removes the Gaussian assumption completely. It introduces a sampling method that, given speech and noise distributions and a mismatch function, in the limit calculates the corrupted-speech likelihood exactly. To do this, it transforms the integral in the likelihood expression and then applies sequential importance resampling. Though too slow to use for recognition, this method enables a more fine-grained assessment of compensation techniques, based on the KL divergence to the ideal compensation for one component. The KL divergence proves to predict the word error rate well. This technique also makes it possible to evaluate the impact of approximations that standard compensation schemes make.
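The construction the abstract describes in its first part — compensating a distribution over the window of static coefficients, then applying the linear projection that extracts dynamic coefficients — rests on the fact that a Gaussian pushed through a linear map stays Gaussian: if x ~ N(μ, Σ) and y = Dx, then y ~ N(Dμ, DΣDᵀ). A minimal sketch of that step, not the thesis's implementation: the window length, the regression-style delta weights, and the function names here are illustrative assumptions.

```python
import numpy as np

def delta_projection(half_window=2):
    """Build a projection matrix mapping a window of 2*half_window+1
    scalar static coefficients to [static, delta]. The delta row uses
    the standard regression-style weights w_t / sum(w_t**2)."""
    w = np.arange(-half_window, half_window + 1, dtype=float)
    D = np.zeros((2, 2 * half_window + 1))
    D[0, half_window] = 1.0       # static: the central coefficient
    D[1, :] = w / np.sum(w ** 2)  # delta: weighted slope over the window
    return D

def project_gaussian(mu, Sigma, D):
    """Push a Gaussian N(mu, Sigma) over the static window through the
    linear projection D, giving N(D mu, D Sigma D^T) over [static, delta]."""
    return D @ mu, D @ Sigma @ D.T

# A (hypothetical) compensated distribution over a 5-frame static window:
D = delta_projection(half_window=2)
mu_window = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Sigma_window = np.eye(5)
mu_sd, Sigma_sd = project_gaussian(mu_window, Sigma_window, D)
# mu_sd[0] is the central static mean; mu_sd[1] is the delta mean,
# and Sigma_sd is in general full, not diagonal.
```

In this framing, noise compensation is applied once to the window distribution, and the dynamic-coefficient statistics fall out of the projection exactly, with no continuous-time approximation needed.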
dc.language: en
dc.publisher: University of Cambridge
dc.publisher: Department of Engineering
dc.subject: Speech recognition
dc.subject: Noise-robustness
dc.title: Statistical models for noise-robust speech recognition
dc.type: Thesis


Files in this item

Files | Size | Format
thesis_online.pdf | 3.256 MB | application/pdf

