René Doursat
 PhD, Habil.

Professor of Complex Systems & Deputy
   Head, Informatics Research Centre,
   School of Computing, Math & Digital Tech,
   Manchester Metropolitan University, UK

Research Affiliate, BioEmergences Lab,
   CNRS (USR3695), Gif s/Yvette, France

Steering Committee & Fmr. Director,
   Complex Systems Institute, Paris (ISC)

Officer (Secretary), Board of Directors,
   International Society for Artificial Life



Books
   • Growing Adaptive (Springer 2014)
   • Springer 2012
   • Peter Lang 2011

Edited Proceedings
   • Artificial Life: ALife'14, ECAL'15 (MIT Press 2014, 2015)
   • Evolutionary Computation: GECCO'12, '13 (ACM 2012, 2013)
   • Artificial Life (MIT Press 2011)
   • Swarm Intelligence (Springer 2010)
   • IT Revolutions (Springer 2009)

PhD Dissertation  
A contribution to the study of representations in the nervous system and in artificial neural networks
The central theme of my 1991 doctoral thesis, written under the guidance of Elie Bienenstock, was the relationship between the neural code and mental representation. If we assume that all mental "entities" (sensation, perception, concept, word, external object, action, etc.) are represented in the nervous system as states of neuronal activity, then a fundamental problem of cognitive neuroscience is to elucidate the structure and properties of such representational states.
I conducted three different, yet interrelated studies advocating Christoph von der Malsburg's theory of temporal correlations as the basis of the neural code: a handwritten character classifier (see 2. Elastic Matching), a model of cortical self-organization (see 3. Synfire Chains), and a review of the limits of statistical learning in neural networks (see 1. Bias/Variance). More →
1. Bias/Variance  
The bias/variance dilemma in formal neural networks
The first part, in collaboration with Stuart Geman, does not offer a new method or algorithm but rather brings to light general problems and limitations encountered by statistical learning processes, especially of the general-purpose or "nonparametric" kind. The main goal of this study is to stress the crucial importance of identifying the right format of representation, and to give that concern priority over the "adaptability" or generalization power of a learning system. It became an oft-cited paper, published in Neural Computation in 1992.
In this work, we addressed the issue of representation within the framework of statistical estimation theory. During the renewed interest in connectionist models in the 1980s, the great majority of neural network methods focused on classification or estimation problems, especially regression. More →
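The tradeoff the paper formalizes, squared bias versus variance of an estimator, can be illustrated numerically. The following Python sketch is mine, not the thesis code; the target function, noise level, and polynomial degrees are arbitrary illustrative choices. It refits polynomials of increasing degree on many independent training sets and measures how far the average prediction is from the truth (bias) versus how much predictions fluctuate across training sets (variance):

```python
import numpy as np

# Illustrative Monte Carlo estimate of the bias/variance decomposition
# for polynomial regression, in the spirit of Geman, Bienenstock &
# Doursat (1992). All concrete choices below are arbitrary examples.

rng = np.random.default_rng(0)

def target(x):
    return np.sin(2 * np.pi * x)              # true regression E[y|x]

def fit_and_predict(degree, x_test, n_train=30, noise=0.3, n_trials=500):
    """Refit a polynomial of the given degree on many independent
    training sets; estimate bias^2 and variance at the test points."""
    preds = np.empty((n_trials, len(x_test)))
    for t in range(n_trials):
        x = rng.uniform(0.0, 1.0, n_train)
        y = target(x) + rng.normal(0.0, noise, n_train)
        coeffs = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coeffs, x_test)
    mean_pred = preds.mean(axis=0)
    bias2 = np.mean((mean_pred - target(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias2, variance

# a rigid model underfits (high bias, low variance); a flexible one
# tracks the noise of each particular training set (high variance)
x_test = np.linspace(0.05, 0.95, 50)
for degree in (1, 3, 9):
    b2, var = fit_and_predict(degree, x_test)
    print(f"degree {degree}: bias^2 = {b2:.4f}, variance = {var:.4f}")
```

Raising the degree drives the bias term down while the variance term grows, which is the dilemma in miniature: neither extreme generalizes well.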
2. Elastic Matching  
Elastic matching for handwritten character recognition
We put the first part's recommendation into practice in the second part by designing a handwritten character classification method based on order-2 correlations. Images are represented by 2-D deformable lattices instead of unstructured lists of pixels, and the "distance" between two input images is defined as the cost functional of a graph-matching process. The success rates achieved by this criterion are superior to those of feed-forward neural classifiers, which are implicitly based on Hamming or Euclidean metrics.
In this part we described a concrete implementation of a shape recognition model inspired by von der Malsburg (1981), who proposed an original format of representation in the nervous system based on order-2 neural coding. More →
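The idea of a graph-matching cost functional can be sketched in a few lines. This is a hypothetical simplification, not the dissertation's algorithm: each image is a small lattice of nodes carrying local features, and a candidate correspondence is scored by a feature-mismatch term plus a penalty for distorting the lattice's neighborhood relations (the weight `alpha` and all shapes are illustrative):

```python
import numpy as np

def matching_cost(features_a, features_b, positions_a, positions_b,
                  edges, alpha=1.0):
    """Cost of matching node i of lattice A onto node i of the
    (possibly deformed) lattice B.
    features_*: (n, d) local feature vectors at each node
    positions_*: (n, 2) node coordinates in the image plane
    edges: list of (i, j) neighbor pairs of the lattice
    alpha: weight of the deformation penalty (illustrative)."""
    # data term: mismatch of local features at corresponding nodes
    data_term = np.sum((features_a - features_b) ** 2)
    # deformation term: how much each lattice edge is stretched
    deform_term = 0.0
    for i, j in edges:
        da = positions_a[j] - positions_a[i]
        db = positions_b[j] - positions_b[i]
        deform_term += np.sum((da - db) ** 2)
    return data_term + alpha * deform_term

# a 2x2 lattice matched onto an undeformed copy of itself costs 0
grid = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
feats = np.zeros((4, 3))
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(matching_cost(feats, feats, grid, grid, edges))
```

A full matcher would minimize this cost over all admissible correspondences, and the image "distance" would be that minimum, so similar shapes remain close even under moderate deformation, unlike a pixel-wise Hamming or Euclidean metric.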
3. Synfire Chains  
An epigenetic development model of the nervous system
The third part approached the issue of neural representation from a more abstract and speculative viewpoint. We wanted to address the compositionality of cognitive processes and language, i.e., the faculty of assembling elementary constituent features into complex representations. Answering Fodor and Pylyshyn's (1988) influential criticism about the lack of structured representations in neural networks, we showed that compositionality can arise from the simultaneous self-organization of connectivity and activity in an initially random network.
Already apparent in invariant perceptual tasks, where objects are categorized according to the relationships among their parts, compositionality is particularly striking in language and is also referred to as constituency. Language is often described as a "building block" system, in which the operative objects are symbols endowed with an internal combinatorial structure. More →
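The basic dynamics behind a synfire chain can be conveyed with a toy simulation. This is an assumed, much-simplified model, not the thesis implementation: pools of binary neurons are wired feed-forward, a neuron fires when enough neurons in the preceding pool fired at the previous time step, and a synchronous packet of activity travels down the chain (all pool sizes, probabilities, and thresholds below are arbitrary):

```python
import numpy as np

# Toy synfire-chain propagation: a volley ignited in the first pool
# advances one pool per time step through random 0/1 feed-forward
# connectivity, as long as each neuron's drive reaches threshold.

def simulate_chain(n_pools=8, pool_size=10, p_connect=0.8,
                   threshold=5, steps=10, seed=1):
    rng = np.random.default_rng(seed)
    # random binary connectivity between consecutive pools
    W = (rng.random((n_pools - 1, pool_size, pool_size))
         < p_connect).astype(np.int64)
    active = np.zeros((steps, n_pools, pool_size), dtype=bool)
    active[0, 0] = True                       # ignite the first pool
    for t in range(1, steps):
        for p in range(1, n_pools):
            # drive = number of active presynaptic neurons at t - 1
            drive = W[p - 1] @ active[t - 1, p - 1].astype(np.int64)
            active[t, p] = drive >= threshold
    return active

# count of active neurons per pool at each step: the synchronous
# packet should advance one pool per time step
act = simulate_chain()
for t, frame in enumerate(act):
    print(t, [int(pool.sum()) for pool in frame])
```

In the self-organization scenario of the thesis, such chains are not prewired but emerge jointly with the activity they carry; the fixed random connectivity here only illustrates the propagation of a synchronous volley.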