Connectionist Speech Recognition: A Hybrid Approach
Springer Science & Business Media, Dec 6, 2012 - 313 pages

Connectionist Speech Recognition: A Hybrid Approach describes the theory and implementation of a method for incorporating neural network approaches into state-of-the-art continuous speech recognition systems based on hidden Markov models (HMMs) to improve their performance. In this framework, neural networks (in particular, multilayer perceptrons, or MLPs) are restricted to well-defined subtasks of the whole system, namely HMM emission probability estimation and feature extraction.

The book describes a successful five-year international collaboration between the authors. The lessons learned form a case study that demonstrates how hybrid systems can be developed to combine neural networks with more traditional statistical approaches. The book illustrates both the advantages and limitations of neural networks in the framework of a statistical system. Using standard databases and comparisons with some conventional approaches, it is shown that MLP probability estimation can improve recognition performance. Other approaches are discussed, though there is no such unequivocal experimental result for these methods.

Connectionist Speech Recognition is of use to anyone intending to use neural networks for speech recognition, whether on their own or within the framework provided by an existing successful statistical approach. This includes research and development groups working in the field of speech recognition, with both standard and neural network approaches, as well as other pattern recognition and/or neural network researchers. The book is also suitable as a text for advanced courses on neural networks or speech processing.
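The core idea of the hybrid approach summarized above can be sketched in a few lines: an MLP trained to classify acoustic frames into phone classes produces posterior probabilities P(q|x), and dividing these by the class priors P(q) yields scaled likelihoods that can stand in for the emission densities of a conventional HMM. The sketch below is a minimal illustration under invented assumptions; the network sizes, random weights, and uniform priors are hypothetical, not taken from the book.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax: outputs are positive and sum to 1,
    # so each row can be read as class posteriors P(q | x).
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mlp_posteriors(x, W1, b1, W2, b2):
    # One sigmoid hidden layer with a softmax output layer: a toy
    # stand-in for the MLP phone classifier discussed in the book.
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return softmax(h @ W2 + b2)

def scaled_likelihoods(posteriors, priors):
    # Bayes' rule: P(x | q) is proportional to P(q | x) / P(q).
    # These scaled likelihoods replace the usual HMM emission densities.
    return posteriors / priors

rng = np.random.default_rng(0)
n_features, n_hidden, n_phones = 13, 32, 4    # illustrative sizes only
W1 = rng.normal(size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_phones))
b2 = np.zeros(n_phones)

x = rng.normal(size=(5, n_features))          # 5 acoustic feature frames
post = mlp_posteriors(x, W1, b1, W2, b2)      # each row sums to 1
priors = np.full(n_phones, 1.0 / n_phones)    # uniform priors for the sketch
lik = scaled_likelihoods(post, priors)        # one score per frame and phone
```

With uniform priors the division only rescales the posteriors; with realistic, unequal phone priors it reweights frequent and rare classes differently, which is the point of the correction.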
From inside the book
Page vii
... ANNs • 5.3 Valid Reasons for Using ANNs • 5.4 Neural Nets and Time Sequences • 5.4.1 Static Networks with Buffered Input • 5.4.2 Recurrent Networks • 5.4.3 Partial Feedback of Context Units • 5.4.4 Approximating Recurrent Networks by MLPs • 5.4.5 ...
Page 3
... (ANNs) have been used for difficult problems in pattern recognition [Viglione, 1970]. Some of these problems, such as the pattern analysis of brain waves, have been characterized by a low signal-to-noise ratio; in some cases ...
Page 4
... ANNs into an ASR system for continuous speech. This has been done for an important recognition subtask, phonetic probability estimation. These probabilities are used as parameters for Hidden Markov Models (HMMs), currently the ...
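To make concrete how such phonetic probabilities serve as HMM parameters, here is a minimal Viterbi decoder working in log space. The 3-state example, its uniform transition matrix, and the emission scores are invented for illustration; in the hybrid system the per-frame emission scores would come from the MLP's scaled likelihoods.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    # log_emit: (T, S) per-frame log emission scores (e.g. log scaled
    # likelihoods from an MLP); log_trans: (S, S) log transition
    # probabilities; log_init: (S,) log initial-state probabilities.
    T, S = log_emit.shape
    delta = log_init + log_emit[0]          # best score ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # (S, S): prev state -> next state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]
    path = [int(delta.argmax())]            # backtrace from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy 3-state example: each frame strongly favors one state in turn,
# so the best path visits the states in order.
log_emit = np.log(np.array([[0.8, 0.1, 0.1],
                            [0.1, 0.8, 0.1],
                            [0.1, 0.1, 0.8]]))
log_trans = np.log(np.full((3, 3), 1.0 / 3.0))  # uniform transitions
log_init = np.log(np.full(3, 1.0 / 3.0))
best_path = viterbi(log_emit, log_trans, log_init)  # [0, 1, 2]
```

Working in log space avoids the numerical underflow that multiplying many small probabilities would otherwise cause over long utterances.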
Page 5
... detected fairly easily, and since the words are not strongly coarticulated. Continuous Speech Recognition (CSR) systems can recognize a ...
Page 10
... ANNs without a clear understanding of the underlying principles and tasks. After a basic review of statistical pattern classification (Chapter 2), Chapter 3 recalls the main features and underlying hypotheses of Hidden Markov ...
Contents

STATISTICAL PATTERN CLASSIFICATION | 15 |
CONCLUSIONS | 16 |
MULTILAYER PERCEPTRONS | 59 |
STATISTICAL INFERENCE IN MLPs | 115 |
Network Outputs Sum to One | 125 |
Methods | 161 |
Segmentation of Training Data | 170 |
CONTEXT-DEPENDENT MLPs | 201 |
TRAINING HARDWARE AND SOFTWARE | 223 |
HIDDEN MARKOV MODELS | 243 |
FINAL SYSTEM OVERVIEW | 267 |
Bibliography | 281 |
Acronyms | 311 |
Other editions - View all
Connectionist Speech Recognition: A Hybrid Approach | Hervé A. Bourlard, Nelson Morgan | Limited preview - 1994 |
Connectionist Speech Recognition: A Hybrid Approach | Hervé A. Bourlard, Nelson Morgan | No preview available - 2012 |