Theory of spike initiation, sensory systems, autonomous behavior, epistemology
Editor: Romain Brette
Big data and the industrialization of neuroscience: A safe roadmap for understanding the brain? (2017)
Yves Frégnac
PubMed: 29074766 DOI: 10.1126/science.aan8866
In this essay, Yves Frégnac criticizes the industrial-scale projects that have recently appeared in neuroscience, for example the Human Brain Project and the BRAIN Initiative. This type of criticism is often heard among scientists but more rarely read in academic journals. The essay uses several angles of attack. The first is that these large-scale data-mining projects are driven by technology rather than by concepts, and while technological tools are obviously useful in science, the risk is that a lot of data will be produced, but quite possibly not the right data. This is a very tangible risk, since no one seems to have any clear idea of what to do with the data. The data-driven view rests on the epistemological fallacy that data preexist theory (see my blog series on computational neuroscience). This is wrong: data can be surprising, but they always rely on a choice of what to measure, and that choice is theoretically motivated. Here is one example from the historical literature on excitability. To demonstrate the ionic theory of action potentials, Hodgkin thought of the following experiment: immerse an axon in oil and measure conduction velocity; it should decrease because of the increase in extracellular resistance (it does; see the cable-theory sketch below). You might record the electrical activity of the whole brain with voltage sensors on every neuron, and you would still not have those data.
A second argument is that purely reductionist (or “bottom-up”) approaches are not appropriate for studying complex systems: such a study must be guided by an understanding of higher levels (e.g., Marr’s “algorithmic” or “computational” levels). Here is a relevant quote: “the danger of the large-scale neuroscience initiatives is to produce purely descriptive ersatz of brain, sharing some of the internal statistics of the biological counterpart, at best the two first order moments (mean and variance), but devoid of self-generated cognitive abilities.” Such approaches are probably doomed to fail.
A third argument is that studies in invertebrates suggest that bottom-up approaches vastly underestimate the complexity of the problem; for example, we know from those studies that neuromodulators can completely change the function of neural circuits, so that knowing the connectome will not be sufficient to infer function (see for example this talk by Vladimir Brezina). More generally, industrial-scale bottom-up approaches will not work because we do not have the beginnings of a strong brain theory, which is necessary both to produce the relevant data and to subsequently integrate them.
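To make the logic of Hodgkin's prediction explicit, here is a minimal cable-theory sketch (the notation is mine, not the essay's, and this is only a first-order argument): for fixed membrane properties, the length constant of the axon, and hence approximately the conduction velocity, shrinks when the extracellular axial resistance increases.
\[
\lambda = \sqrt{\frac{r_m}{r_i + r_o}}, \qquad v \;\propto\; \lambda \;\propto\; \frac{1}{\sqrt{r_i + r_o}}
\]
Here $r_m$ is the membrane resistance times unit length, and $r_i$ and $r_o$ are the intracellular and extracellular axial resistances per unit length. Immersing the axon in oil removes the low-resistance bath and leaves only a thin conducting film of fluid outside the membrane, so $r_o$ increases and the predicted velocity drops; this is the theoretically motivated measurement the example is about.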
One of the dangers identified in this article is that funds will be captured by these large-scale efforts. I think there is a broader threat: these projects will also shape the criteria for hiring academics, and through those incentives push all young scientists toward that kind of science, whether or not they are funded by the large-scale efforts themselves. Given the attraction of “high-impact” journals to flashy techniques, with papers showcasing impressive technologies but limited scientific results, this already seems to be happening.