Abstract
Experience constantly shapes neural circuits through a variety of plasticity mechanisms. While the functional roles of some plasticity mechanisms are well understood, it remains unclear how changes in neural excitability contribute to learning. Here, we develop a normative interpretation of intrinsic plasticity (IP) as a key component of unsupervised learning. We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities. We show analytically that inference and learning for our generative model can be achieved by a neural circuit with intensity-sensitive neurons equipped with a specific form of IP. Numerical experiments verify our analytical derivations and show robust behavior for artificial and natural stimuli. Our results link IP to non-trivial input statistics, in particular the statistics of stimulus intensities for classes to which a neuron is sensitive. More generally, our work paves the way toward new classification algorithms that are robust to intensity variations.
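The following is a minimal, hypothetical sketch of the idea the abstract describes: units compete for stimuli drawn from classes with class-specific intensity distributions, synaptic weights learn the class directions, and an IP-like excitability parameter tracks the intensity statistics of the stimuli each unit responds to. The data generation, update rules, and parameter names here are illustrative assumptions, not the circuit or learning rule derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative): two input classes, each a fixed direction in input
# space scaled by a class-specific (log-normal) intensity distribution.
directions = np.array([[1.0, 0.0], [0.0, 1.0]])
mean_log_intensity = np.array([1.0, 2.0])  # class-specific intensity statistics

def sample_stimulus():
    c = rng.integers(2)
    intensity = np.exp(mean_log_intensity[c] + 0.1 * rng.standard_normal())
    return intensity * directions[c]

# Two intensity-sensitive units: synaptic weights W plus an intrinsic
# excitability parameter b that shifts each unit's activation (the IP variable).
W = rng.normal(scale=0.1, size=(2, 2))
b = np.zeros(2)
eta_w, eta_b = 0.05, 0.05

for _ in range(5000):
    x = sample_stimulus()
    u = W @ x + b                       # net input plus intrinsic excitability
    p = np.exp(u - u.max())
    p /= p.sum()                        # soft winner-take-all responsibilities
    k = rng.choice(2, p=p)              # unit assigned to this stimulus
    # Hebbian-like update: weights move toward the winning stimulus direction
    W[k] += eta_w * p[k] * (x / np.linalg.norm(x) - W[k])
    # IP-like update: excitability tracks the log-intensity of stimuli this
    # unit responds to -- the intensity statistics the abstract links to IP.
    b[k] += eta_b * p[k] * (np.log(np.linalg.norm(x)) - b[k])

# Each unit's excitability reflects the intensity statistics of the class it
# came to prefer; the weights reflect that class's direction.
print("learned excitabilities:", b)
print("learned weights:\n", W)
```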
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 4285-4293 |
| Number of pages | 9 |
| Journal | Advances in Neural Information Processing Systems |
| Volume | 29 |
| Publication status | Published - 2016 |
Keywords
- mathematical models
- neural networks (neurobiology)
- neuroplasticity
- stimulus intensity