Theory of spike initiation, sensory systems, autonomous behavior, epistemology
Editor Romain Brette
Because we are so used to thinking of the brain as a computer, some physical thing implementing computations (hence the name of the field, "computational neuroscience"), a typical reaction to criticisms of that view is: "what else could it be, if not computation?". The ambition of this paper (and also of van Gelder, 1998) is to provide an alternative view. Specifically, the author argues that another valuable way to understand cognition is as a dynamical system. An algorithm is a type of dynamical system (more precisely, a computer executing an algorithm is), but not all dynamical systems are algorithmic. An important type of such systems is control systems, which are dynamical systems with a function, and which might therefore be relevant to understanding cognition. He gives the example of the centrifugal governor, a physical control system whose variables are related not by algorithmic steps but by physical laws. There is an interesting definition of homuncularity, which he attributes to algorithmic descriptions: "Homuncularity is a special kind of breakdown of a system into parts or components, each of which is responsible for a particular subtask. Homuncular components are ones that, like departments or committees within bureaucracies, interact by communication (that is, by passing meaningful messages)." Algorithmic systems are homuncular, but a dynamical system (including a control system) need not be.
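To make the contrast concrete, here is a minimal sketch (mine, not from the paper) of a Watt-style governor reduced to two coupled variables, with all constants chosen for illustration. The point is that the "control law" is not a sequence of sense-compute-act steps but a pair of coupled differential equations: each variable continuously shapes the other, and no component ever passes a message to another.

```python
# Hypothetical, simplified governor dynamics (illustrative constants):
#   omega : engine speed
#   theta : governor arm angle, which closes the valve as it rises

def governor(t_end=20.0, dt=0.01):
    omega, theta = 0.0, 0.0
    k_torque, k_friction = 1.0, 0.5   # engine: torque from valve, minus friction
    gain, tau = 0.5, 0.2              # arm angle tracks speed with lag tau
    for _ in range(int(t_end / dt)):
        valve = 1.0 - theta                          # arm rises -> valve closes
        d_omega = k_torque * valve - k_friction * omega
        d_theta = (gain * omega - theta) / tau
        omega += dt * d_omega                        # Euler integration of the
        theta += dt * d_theta                        # coupled physical laws
    return omega

# The speed settles at k_torque / (k_friction + k_torque * gain) = 1.0,
# without any variable ever being "read", "stored", or "operated on".
omega = governor()
```

The Euler loop here is only a simulation device; the physical governor needs no discretization at all, which is precisely the point.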
Another very important point is the question of temporality: the temporality of an algorithm is unrelated to that of a physical dynamical system like the centrifugal governor, which is coupled to the environment with which it interacts. The discrete temporality of an algorithm is abstract and arbitrary, with no intrinsic relation to the temporality of the environment it is supposed to interact with.
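One way to see this (again my sketch, not the paper's): give the same plant an algorithmic controller, i.e., a sense-compute-act loop that samples the speed and holds its output until the next step. The step period is an arbitrary design choice imposed from outside, and the very same algorithm succeeds or fails depending on it, whereas the governor's temporality is fixed by the physics itself. All dynamics and constants below are hypothetical.

```python
def simulate(sample_period, t_end=20.0, dt=0.01):
    """Hypothetical plant: d(omega)/dt = v - 0.5 * omega.
    Algorithmic controller: every `sample_period` seconds, read omega
    and hold the output v = 2 * (1 - omega) until the next step."""
    omega, v = 0.0, 0.0
    t, next_sample = 0.0, 0.0
    while t < t_end:
        if t >= next_sample:
            v = 2.0 * (1.0 - omega)     # discrete sense-compute-act step
            next_sample += sample_period
        omega += dt * (v - 0.5 * omega)  # the plant evolves continuously
        t += dt
    return omega

fast = simulate(0.1)  # step clock matched to the plant's time scale: settles
slow = simulate(2.0)  # same algorithm, slower (equally "valid") clock: unstable
```

With a 0.1 s period the speed settles near its target (0.8 here); with a 2 s period the identical algorithm oscillates with growing amplitude. Nothing in the algorithm itself determines which clock is right; it must be tuned to the environment by the designer.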
Thus, algorithmic systems have two key properties that make them questionable metaphors for cognition: homuncularity and arbitrary temporality. It seems clear that the brain is a kind of dynamical system coupled to the environment, but it is not clear at all that it can be mapped to algorithmic descriptions. Thus, the brain can only be said to "compute" in a loose sense, i.e., it solves problems. In that same sense, a bubble computes a minimal surface. But this does not mean the bubble implements an algorithm; nor does it mean that the brain implements algorithms (some more thoughts on my blog).