Connectionist Speech Recognition: A Hybrid Approach
Springer Science & Business Media, 6 Dec 2012 - 313 pages

Connectionist Speech Recognition: A Hybrid Approach describes the theory and implementation of a method that incorporates neural network approaches into state-of-the-art continuous speech recognition systems based on hidden Markov models (HMMs) to improve their performance. In this framework, neural networks (in particular, multilayer perceptrons, or MLPs) are restricted to well-defined subtasks of the whole system, namely HMM emission probability estimation and feature extraction. The book describes a successful five-year international collaboration between the authors. The lessons learned form a case study demonstrating how hybrid systems can be developed to combine neural networks with more traditional statistical approaches. The book illustrates both the advantages and limitations of neural networks within the framework of a statistical system. Using standard databases and comparisons with some conventional approaches, it shows that MLP probability estimation can improve recognition performance. Other approaches are also discussed, although no equally unequivocal experimental results exist for these methods. Connectionist Speech Recognition is of use to anyone intending to use neural networks for speech recognition, or to work within the framework provided by an existing successful statistical approach. This includes research and development groups working in the field of speech recognition, both with standard and neural network approaches, as well as other pattern recognition and/or neural network researchers. The book is also suitable as a text for advanced courses on neural networks or speech processing.
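The core idea summarized above — using MLP outputs as HMM emission probabilities — rests on dividing the network's posterior estimates P(q|x) by the class priors P(q) to obtain scaled likelihoods usable in Viterbi decoding. A minimal sketch of that conversion (not from the book; function and variable names are illustrative):

```python
def scaled_likelihoods(posteriors, priors):
    """Convert MLP posteriors P(q|x) into scaled likelihoods p(x|q)/p(x)
    by dividing by the class priors P(q), as done in hybrid HMM/MLP decoding."""
    return [p / pr for p, pr in zip(posteriors, priors)]

# Hypothetical 3-class example: posteriors from an MLP output layer,
# priors estimated from relative frame counts in the training set.
posteriors = [0.7, 0.2, 0.1]
priors = [0.5, 0.3, 0.2]
print(scaled_likelihoods(posteriors, priors))  # ≈ [1.4, 0.667, 0.5]
```

The constant p(x) cancels when paths are compared during Viterbi search, which is why the scaled likelihood can replace a true emission density.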
From inside the book
Results 1–5 of 49
Page xiv
... Hidden layer is used for continuous (real-valued) input X, no hidden layer required for discrete (binary) X. In the simplest case, X is the feature vector from a single frame, but it can include features from surrounding ...
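The contextual input mentioned in this snippet — a feature vector that also includes features from surrounding frames — can be sketched as a simple stacking operation (illustrative code, not from the book; edge frames are assumed to be handled by repeating the boundary frame):

```python
def context_window(frames, c):
    """Stack each frame with its c left and c right neighbours into one
    input vector, repeating the boundary frame at the utterance edges."""
    out = []
    n = len(frames)
    for t in range(n):
        window = []
        for d in range(-c, c + 1):
            idx = min(max(t + d, 0), n - 1)  # clamp to valid frame indices
            window.extend(frames[idx])
        out.append(window)
    return out

# Three 1-dimensional frames, one frame of context on each side.
frames = [[1.0], [2.0], [3.0]]
print(context_window(frames, 1))
# [[1.0, 1.0, 2.0], [1.0, 2.0, 3.0], [2.0, 3.0, 3.0]]
```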
Page xv
... hidden units and d output units. The size of the output layer was kept fixed at 50 units, corresponding to the 50 phonemes to be recognized. ... 6.3 Phonetic classification rates at the frame level obtained from ...
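A frame-level phonetic classification rate, as referenced in this snippet, is simply the fraction of frames whose largest MLP output matches the reference phoneme label. An illustrative sketch (names and data are hypothetical, not from the book):

```python
def frame_classification_rate(mlp_outputs, labels):
    """Fraction of frames whose argmax MLP output equals the reference label."""
    correct = sum(
        1 for out, y in zip(mlp_outputs, labels)
        if max(range(len(out)), key=out.__getitem__) == y
    )
    return correct / len(labels)

# Four frames scored over 3 classes (a full system would use 50 outputs,
# one per phoneme, as the snippet above describes).
outputs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.6, 0.3, 0.1]]
labels = [0, 1, 2, 1]
print(frame_classification_rate(outputs, labels))  # 0.75
```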
Page xxi
... hidden layer of an MLP, given x̂_n at the input; l = 1, ..., n · h_l(n): augmented activation vector of the l-th hidden layer of an MLP (to take the bias into account) · F(·): sigmoid function (applied componentwise if ...
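The notation in this snippet — an activation vector augmented with a constant component so the bias folds into the weight matrix, and a sigmoid F(·) applied componentwise — can be illustrated for a single hidden layer (a hypothetical sketch, not the book's code):

```python
import math

def sigmoid(v):
    """F(.): the logistic sigmoid, applied componentwise to a vector."""
    return [1.0 / (1.0 + math.exp(-x)) for x in v]

def hidden_layer(x, W, b):
    """One MLP hidden layer. Appending a constant 1 to x and folding the
    bias column into W is equivalent to computing sigmoid(Wx + b)."""
    augmented = x + [1.0]                                # augmented input vector
    weights = [row + [bi] for row, bi in zip(W, b)]      # bias folded into W
    pre = [sum(w * a for w, a in zip(row, augmented)) for row in weights]
    return sigmoid(pre)

# Zero input and zero bias give pre-activations of 0, so each unit outputs 0.5.
print(hidden_layer([0.0, 0.0], [[1.0, 2.0], [3.0, 4.0]], [0.0, 0.0]))  # [0.5, 0.5]
```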
Page xxiii
... layer perceptron networks containing more than 150,000 weights were trained and integrated into a state-of-the-art hidden Markov model (HMM) recognizer to provide improved acoustic-phonetic modeling and improved recognition ...
Page 64
Contents
STATISTICAL PATTERN CLASSIFICATION | 15
CONCLUSIONS | 16
MULTILAYER PERCEPTRONS | 59
STATISTICAL INFERENCE IN MLPs | 115
Network Outputs Sum to One | 125
Methods | 161
Segmentation of Training Data | 170
CONTEXT-DEPENDENT MLPs | 201
TRAINING HARDWARE AND SOFTWARE | 223
HIDDEN MARKOV MODELS | 243
FINAL SYSTEM OVERVIEW | 267
Bibliography | 281
Acronyms | 311
Other editions - View all
Connectionist Speech Recognition: A Hybrid Approach — Hervé A. Bourlard, Nelson Morgan — Limited preview - 1994
Connectionist Speech Recognition: A Hybrid Approach — Hervé A. Bourlard, Nelson Morgan — No preview available - 2012
Common terms and phrases
acoustic vectors ANNs associated assumptions autoregressive Bayes Bourlard & Wellekens cepstral Chapter class qk computation connectionist constraints context context-dependent contextual input continuous speech recognition convergence covariance matrix criterion cross-validation database described discriminant functions dynamic emission probabilities error feature vector Forward-Backward algorithm frame level Gaussian given gradient hidden layer Hidden Markov Models hidden units hybrid HMM/MLP approach improve input features input field input pattern input vector iteration MAP probabilities matrix Maximum Likelihood minimization MLP outputs MLP training Multilayer Perceptrons neural networks nonlinear function number of parameters optimal output layer output units output values perceptron performance phone models phoneme possible posterior probabilities prior probabilities probability density functions problem prototype vector RBFs recurrent Section segmentation shown sigmoid function speaker-independent speech units standard HMMs statistical techniques test set topology training data training patterns training set transition probabilities triphone Viterbi algorithm Viterbi search weight word recognition