TY - JOUR
T1 - Neurons equipped with intrinsic plasticity learn stimulus intensity statistics
AU - Monk, Travis
AU - Savin, Cristina
AU - Lücke, Jörg
PY - 2016
Y1 - 2016
N2 - Experience constantly shapes neural circuits through a variety of plasticity mechanisms. While the functional roles of some plasticity mechanisms are well understood, it remains unclear how changes in neural excitability contribute to learning. Here, we develop a normative interpretation of intrinsic plasticity (IP) as a key component of unsupervised learning. We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities. We show analytically that inference and learning for our generative model can be achieved by a neural circuit with intensity-sensitive neurons equipped with a specific form of IP. Numerical experiments verify our analytical derivations and show robust behavior for artificial and natural stimuli. Our results link IP to non-trivial input statistics, in particular the statistics of stimulus intensities for classes to which a neuron is sensitive. More generally, our work paves the way toward new classification algorithms that are robust to intensity variations.
KW - mathematical models
KW - neural networks (neurobiology)
KW - neuroplasticity
KW - stimulus intensity
UR - http://handle.westernsydney.edu.au:8081/1959.7/uws:47499
UR - https://papers.nips.cc/paper/6582-neurons-equipped-with-intrinsic-plasticity-learn-stimulus-intensity-statistics
M3 - Article
SN - 1049-5258
VL - 29
SP - 4285
EP - 4293
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
ER -