Advances in Neural Information Processing Systems 8: Proceedings of the 1995 Conference

David S. Touretzky, Michael C. Mozer, Michael E. Hasselmo
MIT Press, 1996 - 1098 pages

The past decade has seen greatly increased interaction between theoretical work in neuroscience, cognitive science and information processing, and experimental work requiring sophisticated computational modeling. The 152 contributions in NIPS 8 focus on a wide variety of algorithms and architectures for both supervised and unsupervised learning. They are divided into nine parts: Cognitive Science, Neuroscience, Theory, Algorithms and Architectures, Implementations, Speech and Signal Processing, Vision, Applications, and Control.

Chapters describe how neuroscientists and cognitive scientists use computational models of neural systems to test hypotheses and generate predictions to guide their work. This work includes models of how networks in the owl brainstem could be trained for complex localization function, how cellular activity may underlie rat navigation, how cholinergic modulation may regulate cortical reorganization, and how damage to parietal cortex may result in neglect. Additional work concerns development of theoretical techniques important for understanding the dynamics of neural systems, including formation of cortical maps, analysis of recurrent networks, and analysis of self-supervised learning.

Chapters also describe how engineers and computer scientists have approached problems of pattern recognition or speech recognition using computational architectures inspired by the interaction of populations of neurons within the brain. Examples are new neural network models that have been applied to classical problems, including handwritten character recognition and object recognition, and exciting new work that focuses on building electronic hardware modeled after neural systems.

A Bradford Book


Contents

Learning the Structure of Similarity  3
Human Reading and the Curse of Dimensionality  17
Harmony Networks Do Not Work  31
Rapid Quality Estimation of Neural Network Input Representations  45
Modeling Interactions of the Rat's Place and Head Direction Systems  61
Information through a Spiking Neuron  75
A Dynamical Model of Context Dependencies for the Vestibulo-Ocular Reflex  89
When Is an Integrate-and-fire Neuron like a Poisson Neuron?  103
The Geometry of Eye Rotations and Listing's Law  117
Cholinergic Suppression of Transmission May Allow Combined Associative  131
Independent Component Analysis of Electroencephalographic Data  145
Plasticity of Center-Surround Opponent Receptive Fields in Real and Artificial  159
Statistical Theory of Overtraining: Is Cross-Validation Asymptotically  176
How Overfitting Can Be Useful  190
Neural Networks with Quadratic VC Dimension  197
On the Computational Power of Noisy Spiking Neurons  211
Stable Dynamic Parameter Adaptation  225
Recursive Estimation of Dynamic Modular RBF Networks  239
Modern Analytic Techniques to Solve the Dynamics of Recurrent Neural  253
Generalisation of a Class of Continuous Neural Networks  267
Optimization Principles for the Neural Code  281
Active Learning in Multilayer Perceptrons  295
Worst-case Loss Bounds for Single Neurons  309
Adaptive Back-Propagation in On-line Learning of Multilayer Networks  323
Quadratic-type Lyapunov Functions for Competitive Neural Networks with  337
Bayesian Methods for Mixtures of Experts  351
Geometry of Early Stopping in Linear Networks  365
Adaptive Mixture of Probabilistic Transducers  381
Recurrent Neural Networks for Missing or Asynchronous Data  395
Discriminant Adaptive Nearest Neighbor Classification and Regression  409
Generalized Learning Vector Quantization  423
Symplectic Nonlinear Component Analysis  437
Universal Approximation and Learning of Trajectories Using Oscillators  451
EM Optimization of Latent-Variable Density Models  465
Boosting Decision Trees  479
Hierarchical Recurrent Neural Networks for Long-term Dependencies  493
Using Pairs of Data Points to Define Splits for Decision Trees  507
YOBD YOBS  523
W. Opitz, J. W. Shavlik  536
Explorations with the Dynamic Wave Model  549
Not All Weights Are Created Equal  563
Investment Learning with Hierarchical PSOMs  570
T. Lin, B. G. Horne, P. Tiño, C. L. Giles  584
A Practical Monte Carlo Implementation of Bayesian Learning  598
Finite State Automata that Recurrent Cascade-Correlation Cannot Represent  612
Benchmarks in Combinatorial Optimization  626
Is Learning the n-th Thing Any Easier Than Learning the First?  640
Learning Sparse Perceptrons  654
Improved Silicon Cochlea Using Compatible Lateral Bipolar Transistors  671
NeuronMOS Temporal Winner Search Hardware for Fully-parallel Data  685
Silicon Models for Auditory Scene Analysis  699
Model Matching and SFMD Computation  713
Onset-based Sound Segmentation  729
Forward-backward Retraining of Recurrent Neural Networks  743
A New Learning Algorithm for Blind Signal Separation  757
B. Lemarié, M. Gilloux, M. Leroux  771
The Gamma MLP for Speech Phoneme Recognition  785
A Framework for Non-rigid Matching and Correspondence  795
Unsupervised Pixel-prediction  809
Classifying Facial Action  823
A Model of Transparent Motion and Non-transparent Motion Aftereffects  837
Empirical Entropy Manipulation for Real-world Problems  851
A View-based Approach to 3D Object Recognition Using Multiple  865
Improving Committee Diagnosis with Resampling Techniques  882
Memory-based Learning of  896
Visual Gesture-based Robot Guidance with a Modular Neural System  903
Prediction of Beta Sheets in Proteins  917
Using Feedforward Neural Networks to Monitor Alertness from Changes in  931
A Memory-based Reinforcement Learning Approach  945
Rankprop and Multitask Learning  959
Experiments with Neural Networks for Real Time Implementation of Control  973
A Dynamical Systems Approach for a Learnable Autonomous Robot  989
Learning Fine Motion by Markov Mixtures of Experts  1003
Improving Elevator Performance Using Reinforcement Learning  1017
Competence Acquisition in an Autonomous Mobile Robot Using Hardware  1031
Stable Linear Approximations to Dynamic Programming for Stochastic Control  1045
Improving Policies without Measuring Merits  1059
Temporal Difference Learning in Continuous Time and Space  1073
Author Index  1087
Copyright



About the Authors (1996)

Michael C. Mozer is a Professor in the Department of Computer Science and the Institute of Cognitive Science at the University of Colorado, Boulder. In 1990 he received the Presidential Young Investigator Award from the National Science Foundation. Michael E. Hasselmo is Professor of Psychology and Director of the Computational Neurophysiology Laboratory at Boston University, where he is also a faculty member in the Center for Memory and Brain and the Program in Neuroscience, and a principal investigator on grants from the National Institute of Mental Health and the Office of Naval Research.
