Rodrigo Laje, Ph.D.

Title
Professor
Department
Department of Science and Technology
Institution
National University of Quilmes
Address
R.S. Peña 352, Bernal (B1876BXD)
City
Bernal
Country
Argentina
Email
[email protected]
Website
www.lajelab.com.ar
Research field
Neuroscience
Award year
2010
Country of origin
Argentina
Mentor name
Dean V. Buonomano, Ph.D.

Research

The brain's ability to tell time is of fundamental importance for sensory and motor processing, including speech and music perception and motor coordination. Yet the neural bases of the generation of timed motor responses on the scale of seconds remain unknown. Few biologically plausible computational models have been proposed, and a mechanistic explanation of timing in terms of the dynamics of actual neuronal populations is missing.

This work builds on the proposal that time is inherently encoded in the complex, continuously changing pattern of self-sustained activity in recurrent neural networks, a “population clock”. A readout unit can be trained to detect a particular state of activity in the network and thus spike at a given time. Training the network, that is, modifying its initial weights so that the readout unit reproduces a target function or spikes at a desired time, is usually done by applying a learning algorithm to the readout weights. Although it is known that the synaptic weights of recurrent cortical networks are plastic, it has proven challenging to incorporate plasticity into the recurrent network. Furthermore, some models have suggested that changing either the recurrent weights or the weights onto feedback units is equally effective. To date, the approach to training recurrent weights has been to teach the recurrent units some version of the desired output behavior. This procedure, however, commonly changes the internal dynamics of the network completely, often making it difficult for the training to converge. Similar results are obtained when the modifications are applied only to the readout weights but in the presence of a feedback loop.

My work focuses on training the internal weights of the network to robustly reproduce its own natural or “innate” dynamics; this approach leads to more stable dynamics and better timing behavior than more traditional approaches in which the internal dynamics of the network changes after training.
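As an illustration of the population-clock idea described above, the sketch below simulates a random recurrent rate network, records its self-sustained activity, and fits a linear readout to emit a pulse at a chosen time. It shows only the baseline setup in which the recurrent weights stay fixed and only the readout is trained, not the innate-training procedure itself; the network size, time constants, Gaussian target, and ridge-regression fit are illustrative assumptions rather than details taken from this work.

# Minimal sketch of a "population clock": a random recurrent rate network
# whose linear readout is trained (here by ridge regression) to emit a
# pulse at a target time. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

N = 200          # number of recurrent units
g = 1.5          # gain of recurrent weights (self-sustained regime)
dt = 1e-3        # integration step (s)
tau = 10e-3      # unit time constant (s)
T = 1.0          # trial duration (s)
steps = int(T / dt)

W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # fixed recurrent weights

# Run the network once from a fixed initial condition and record the
# trajectory of firing rates r(t) = tanh(x(t)); each time point is a
# distinct population state that a readout can learn to detect.
x = rng.standard_normal(N)
rates = np.empty((steps, N))
for t in range(steps):
    r = np.tanh(x)
    rates[t] = r
    x += dt / tau * (-x + W @ r)

# Target output: a Gaussian bump centered at 0.5 s, i.e. the readout
# should respond half a second after trial onset.
time = np.arange(steps) * dt
target = np.exp(-((time - 0.5) ** 2) / (2 * 0.02 ** 2))

# Train only the readout weights by ridge regression on the recorded
# states; the recurrent weights are left untouched in this baseline.
lam = 1e-3
w_out = np.linalg.solve(rates.T @ rates + lam * np.eye(N), rates.T @ target)

output = rates @ w_out
print("readout peaks at t =", time[np.argmax(output)], "s")

In this baseline the timed response is only as reliable as the recorded trajectory itself; the innate-training approach described above instead adjusts the recurrent weights so that the network reproduces its own trajectory robustly, for example under noise or perturbations.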
