NIPS 2015
Enforcing balance allows local supervised learning in spiking recurrent networks


To predict sensory inputs or control motor trajectories, the brain must constantly learn temporal dynamics based on error feedback. However, it remains unclear how such supervised learning is implemented in biological neural networks. Learning in recurrent spiking networks is notoriously difficult because local changes in connectivity may have an unpredictable effect on the global dynamics. The most commonly used learning rules, such as temporal back-propagation, are not local and thus not biologically plausible. Furthermore, reproducing the Poisson-like statistics of neural responses requires the use of networks with balanced excitation and inhibition. Such balance is easily destroyed during learning. Using a top-down approach, we show how networks of integrate-and-fire neurons can learn arbitrary linear dynamical systems by feeding back their error as a feed-forward input. The network uses two types of recurrent connections: fast and slow. The fast connections learn to balance excitation and inhibition using a voltage-based plasticity rule. The slow connections are trained to minimize the error feedback using a current-based Hebbian learning rule. Importantly, the balance maintained by fast connections is crucial to ensure that global error signals are available locally in each neuron, in turn resulting in a local learning rule for the slow connections. This demonstrates that spiking networks can learn complex dynamics using purely local learning rules, using E/I balance as the key rather than an additional constraint. The resulting network implements a given function within the predictive coding scheme, with minimal dimensions and activity.
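The abstract builds on the predictive-coding spike-coding framework, in which each neuron's membrane voltage tracks a projection of the network's coding error, so the global error is locally available, which is the property the fast balancing connections are said to preserve. The sketch below is an illustrative toy, not the paper's method: it simulates a tightly balanced spiking readout of a 1-D signal with the connections fixed at their analytically derived values rather than learned, and all names and parameter values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20        # number of integrate-and-fire neurons (illustrative)
dt = 1e-3     # simulation time step (s)
lam = 10.0    # leak rate of the decoder / readout (1/s)
T = 2.0       # simulated duration (s)

D = rng.normal(0.0, 0.1, size=N)   # decoding weights (mixed signs)
thresh = D**2 / 2                  # spike thresholds of the balanced network

steps = int(T / dt)
t = np.arange(steps) * dt
x = np.sin(2 * np.pi * 1.0 * t)    # target 1-D signal to represent

xhat = 0.0                         # decoded estimate read out from spikes
xhat_trace = np.empty(steps)
for k in range(steps):
    # Each voltage is the coding error projected onto that neuron's weight:
    # the "global" error signal is locally available in every neuron.
    V = D * (x[k] - xhat)
    i = np.argmax(V - thresh)
    if V[i] > thresh[i]:
        # A spike instantly corrects the readout; because V depends on xhat,
        # this acts like the fast recurrent connections -D D^T that keep
        # excitation and inhibition tightly balanced.
        xhat += D[i]
    xhat *= (1.0 - lam * dt)       # leaky decoder dynamics
    xhat_trace[k] = xhat

# RMS tracking error; small here because balance keeps voltages (errors) bounded
err = float(np.sqrt(np.mean((x - xhat_trace) ** 2)))
```

In this toy, spiking is greedy (at most one spike per step, from the neuron whose voltage most exceeds threshold), and the estimate `xhat` tracks `x` with an error bounded by roughly half a decoding weight, which is the sense in which tight balance keeps the coding error, and hence any error-feedback term, small and locally accessible. The paper's contribution, learning the fast and slow connections with local voltage- and current-based rules rather than fixing them, is not reproduced here.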



Thursday December 10, 2015 11:00 - 15:00 EST
210 C #4
