10.5061/DRYAD.KV5R4
Burbank, Kendra S.
University of Chicago
Data from: Mirrored STDP implements autoencoder learning in a network of
spiking neurons
Dryad
dataset
2016
2016-12-07T00:00:00Z
2016-12-07T00:00:00Z
en
https://doi.org/10.1371/journal.pcbi.1004566
2763921502 bytes
1
CC0 1.0 Universal (CC0 1.0) Public Domain Dedication
The autoencoder algorithm is a simple but powerful unsupervised method for
training neural networks. Autoencoder networks can learn sparse
distributed codes similar to those seen in cortical sensory areas such as
visual area V1, but they can also be stacked to learn increasingly
abstract representations. Several computational neuroscience models of
sensory areas, including Olshausen & Field’s Sparse Coding
algorithm, can be seen as autoencoder variants, and autoencoders have seen
extensive use in the machine learning community. Despite their power and
versatility, autoencoders have been difficult to implement in a
biologically realistic fashion. The challenges include their need to
calculate differences between two neuronal activities and their
requirement for learning rules which lead to identical changes at
feedforward and feedback connections. Here, we study a biologically
realistic network of integrate-and-fire neurons with anatomical
connectivity and synaptic plasticity that closely match those observed in
cortical sensory areas. Our choice of synaptic plasticity rules is
inspired by recent experimental and theoretical results suggesting that
learning at feedback connections may have a different form from learning
at feedforward connections, and our results depend critically on this
novel choice of plasticity rules. Specifically, we propose that plasticity
rules at feedforward versus feedback connections are temporally opposed
versions of spike-timing dependent plasticity (STDP), leading to a
symmetric combined rule we call Mirrored STDP (mSTDP). We show that with
mSTDP, our network follows a learning rule that approximately minimizes an
autoencoder loss function. When trained with whitened natural image
patches, the learned synaptic weights resemble the receptive fields seen
in V1. Our results use realistic synaptic plasticity rules to show that
the powerful autoencoder learning algorithm could be within the reach of
real biological networks.
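The mirrored-STDP idea described above can be illustrated with a minimal sketch (this is not the archived simulation code; the kernel shape, amplitudes `a_plus`/`a_minus`, and time constant `tau` are illustrative assumptions). A classic asymmetric STDP kernel governs the feedforward synapse, while the feedback synapse uses the temporally opposed kernel in its own pre/post coordinates; because the pre/post roles are also swapped at the feedback connection, reciprocal weights receive identical updates, which is what makes the combined rule symmetric:

```python
import numpy as np

def stdp_kernel(dt, a_plus=0.01, a_minus=0.01, tau=20.0):
    """Classic asymmetric STDP, written as a function of
    dt = t_hidden - t_input (ms): potentiation when the input
    spike precedes the hidden spike, depression otherwise.
    Parameter values here are illustrative, not from the paper."""
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

dt = 5.0  # the hidden neuron fired 5 ms after the input neuron

# Feedforward (input -> hidden) synapse: standard STDP.
dw_ff = stdp_kernel(dt)

# Feedback (hidden -> input) synapse: its local timing difference is
# -dt (its presynaptic neuron is the hidden one), but its kernel is the
# time-reversed one, so the two sign flips cancel and the update equals
# stdp_kernel(dt). Reciprocal weights therefore move together, keeping
# the feedback weight matrix approximately the transpose of the
# feedforward one -- the tied-weights condition of an autoencoder.
dw_fb = stdp_kernel(dt)

assert np.isclose(dw_ff, dw_fb)
```

Under these assumptions, a pair of reciprocally connected neurons experiences identical weight changes at both synapses for every spike pairing, which is the property the abstract refers to as "identical changes at feedforward and feedback connections."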
Archive of code and parameter files (Archive.zip): A zip file containing
the code used to run the simulations, as well as the specific parameter
files used to generate the data shown in the paper.

MNIST main params network over time (MNIST_pp_0p001.hdf5): The data file
in which network state is recorded throughout the simulation training
period, for the MNIST parameters used for the main results in the paper.

MNIST main params calculated values (MNIST_pp_0p03.pkl.pkl): A data file
containing calculated values measuring quantities such as reconstruction
performance and average neuronal activity at different points in the
training sequence. These values were used directly to generate the
figures in the manuscript.

Natural images main params network over time (Natim.hdf5): The data file
in which network state is recorded throughout the simulation training
period, for the natural image parameters used for the main results in the
paper.

Natural images very sparse params network over time (pp_0p001.hdf5): The
data file in which network state is recorded throughout the simulation
training period, for the very sparse natural image parameters used for
Figure 10b in the paper.

Natural images very sparse params calculated values (pp_0p001.pkl): A
data file containing calculated values measuring quantities such as
reconstruction performance and average neuronal activity at different
points in the training sequence. These values were used directly to
generate Figure 10b in the manuscript.

MNIST very sparse params network over time (MNIST_pp_0p0010.hdf5): The
data file in which network state is recorded throughout the simulation
training period, for the very sparse MNIST parameters used for Figure 10a
in the paper.

MNIST very sparse params calculated values (MNIST_pp_0p001.pkl): A data
file containing calculated values measuring quantities such as
reconstruction performance and average neuronal activity at different
points in the training sequence. These values were used directly to
generate Figure 10a in the manuscript.

MNIST non-sparse params network over time (MNIST_pp_0p30.hdf5): The data
file in which network state is recorded throughout the simulation
training period, for the MNIST parameters used for Figure 10c in the
paper.

MNIST non-sparse params calculated values (MNIST_pp_0p3.pkl): A data file
containing calculated values measuring quantities such as reconstruction
performance and average neuronal activity at different points in the
training sequence. These values were used directly to generate Figure 10c
in the manuscript.

Natural images main params calculated values (pp_0p02_main_results.pkl):
A data file containing calculated values measuring quantities such as
reconstruction performance and average neuronal activity at different
points in the training sequence. These values were used directly to
generate the figures in the manuscript.