10.5061/DRYAD.S38N5
Ellefsen, Kai Olav
Norwegian University of Science and Technology
Mouret, Jean-Baptiste
French National Centre for Scientific Research
Clune, Jeff
University of Wyoming
Data from: Neural modularity helps organisms evolve to learn new skills without forgetting old skills
Dryad
dataset
2016
artificial neural network
catastrophic forgetting
evolutionary algorithm
neural modularity
2016-03-18T00:00:00Z
2016-03-18T00:00:00Z
en
https://doi.org/10.1371/journal.pcbi.1004128
322863641 bytes
1
CC0 1.0 Universal (CC0 1.0) Public Domain Dedication
A long-standing goal in artificial intelligence is creating agents that
can learn a variety of different skills for different problems. In the
artificial intelligence subfield of neural networks, a barrier to that
goal is that when agents learn a new skill they typically do so by losing
previously acquired skills, a problem called catastrophic forgetting. That
occurs because, to learn the new task, neural learning algorithms change
connections that encode previously acquired skills. How networks are
organized critically affects their learning dynamics. In this paper, we
test whether catastrophic forgetting can be reduced by evolving modular
neural networks. Modularity intuitively should reduce learning
interference between tasks by separating functionality into physically
distinct modules in which learning can be selectively turned on or off.
Modularity can further improve learning by having a reinforcement learning
module separate from sensory processing modules, allowing learning to
happen only in response to a positive or negative reward. In this paper,
learning takes place via neuromodulation, which allows agents to
selectively change the rate of learning for each neural connection based
on environmental stimuli (e.g. to alter learning in specific locations
based on the task at hand). To produce modularity, we evolve neural
networks with a cost for neural connections. We show that this connection
cost technique causes modularity, confirming a previous result, and that
such sparsely connected, modular networks have higher overall performance
both because they learn new skills faster while retaining old skills better
and because they have a separate reinforcement learning module. Our results
suggest (1) that encouraging modularity in neural networks may help us
overcome the long-standing barrier of networks that cannot learn new
skills without forgetting old ones, and (2) that one benefit of the
modularity ubiquitous in the brains of natural animals might be to
alleviate the problem of catastrophic forgetting.
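The core mechanism described above, neuromodulation gating where learning happens, can be illustrated with a minimal sketch. The snippet below assumes a simple modulated Hebbian rule in which each connection's weight change is scaled by a per-connection modulation signal; the function name, learning rate, and weight bounds are illustrative choices, not the exact formulation used in the paper.

```python
import numpy as np

def neuromodulated_hebbian_update(w, pre, post, modulation, eta=0.1):
    """One plasticity step: the Hebbian change for each connection is
    gated by a per-connection modulation signal (0 = learning off)."""
    hebbian = np.outer(post, pre)            # correlation of pre/post activity
    w_new = w + eta * modulation * hebbian   # modulation near 0 freezes weights
    return np.clip(w_new, -1.0, 1.0)         # keep weights bounded

# Usage: turn learning on only in one "module" (row 0 of the weight matrix),
# so connections encoding other skills are left untouched.
w = np.zeros((2, 3))
pre = np.array([1.0, 0.5, 0.0])
post = np.array([1.0, -1.0])
m = np.zeros((2, 3))
m[0, :] = 1.0                                # learning enabled in row 0 only
w = neuromodulated_hebbian_update(w, pre, post, m)
```

Because the modulation matrix zeroes the update outside the active module, weights in the frozen rows are unchanged, which is the intuition for how modular, selectively modulated networks can reduce catastrophic forgetting.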
data_and_plotting_scripts: The data were generated by experiments in the
Sferes evolutionary algorithm framework. Source code for our experiments
is available at: http://evolvingai.org/code/modularity_forgetting.tar.gz