11. Learning Equations

Neurons in the central nervous system form a complex network with a high
degree of plasticity. In the previous chapter we discussed synaptic
plasticity from a phenomenological point of view. We now ask: "What are the
consequences for the connectivity between neurons if synapses are plastic?"
To do so we consider a scenario known as unsupervised learning. We assume
that some of the neurons in the network are stimulated by input with certain
statistical properties. Synaptic plasticity generates changes in the
connectivity pattern that reflect the statistical structure of the input. The
relation between the input statistics and the synaptic weights that evolve
under Hebbian plasticity is the topic of this chapter. We start in
Section 11.1 with a review of unsupervised learning in
a rate-coding paradigm. The analysis is extended to spike-timing-dependent
synaptic plasticity in Section 11.2. We will see that spike-based learning
naturally accounts for spatial *and* temporal correlations in the input
and can overcome some of the problems of a simple rate-based learning rule.
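To make the rate-based setting concrete before the formal treatment in Section 11.1, the following sketch simulates a linear rate neuron whose weights evolve under a normalized Hebbian rule. The specific rule shown (Oja's rule), the learning rate, and the input statistics are illustrative assumptions, not taken from the text; they serve only to demonstrate the central claim that Hebbian plasticity drives the weight vector toward the principal eigenvector of the input correlation matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate input rate patterns with one dominant correlation direction
# (the mixing matrix and its scaling are arbitrary choices for illustration).
n_inputs, n_patterns = 4, 5000
mixing = rng.normal(size=(n_inputs, n_inputs))
mixing[:, 0] *= 5.0                       # make one input direction dominate
patterns = rng.normal(size=(n_patterns, n_inputs)) @ mixing.T

# Hebbian learning with Oja's normalization: dw ∝ y*x - y^2*w,
# where y = w·x is the postsynaptic rate of a linear neuron.
w = rng.normal(size=n_inputs)
w /= np.linalg.norm(w)
eta = 1e-3                                # learning rate (assumed)
for x in patterns:
    y = w @ x                             # postsynaptic rate
    w += eta * y * (x - y * w)            # Hebbian term plus decay term

# Compare with the leading eigenvector of the correlation matrix C = <x x^T>.
C = patterns.T @ patterns / n_patterns
eigvals, eigvecs = np.linalg.eigh(C)
pc1 = eigvecs[:, -1]                      # first principal component
alignment = abs(w @ pc1) / np.linalg.norm(w)
print(f"alignment with first principal component: {alignment:.3f}")
```

The decay term in Oja's rule keeps the weight vector bounded (its norm approaches one), foreshadowing the weight-normalization issue treated in Section 11.1.3.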

- 11.1 Learning in Rate Models
- 11.1.1 Correlation Matrix and Principal Components
- 11.1.2 Evolution of Synaptic Weights
- 11.1.3 Weight Normalization
- 11.1.4 Receptive Field Development

- 11.2 Learning in Spiking Models
- 11.2.1 Learning Equation
- 11.2.2 Spike-Spike Correlations
- 11.2.3 Relation of Spike-Based to Rate-Based Learning
- 11.2.4 Static-Pattern Scenario
- 11.2.5 Distribution of Synaptic Weights

- 11.3 Summary

Cambridge University Press, 2002
