
A recent comment in the journal Nature makes a bold proposal – to form a true multi-lab cooperative to perform collective research into the deep questions of neuroscience. There are two aspects of this proposal that are extremely interesting: the potential to make significant progress in answering the biggest questions in neuroscience, and the collaborative approach to research being proposed.

How does the brain work?

It is often difficult to convey the current state of scientific knowledge to the general public, because doing so requires a lot of background knowledge about science itself. As a result I find two common tropes – that scientific knowledge is binary (we either understand something or we don’t) and that we “have no idea” how something works, even when we have lots of ideas. Communicating about the brain almost always falls prey to these tropes.

Scientific knowledge is better understood using a metaphor of depth. The real question is not whether or not we understand how the brain works, but how deep our knowledge of brain function is. Because the brain is so complex, we can simultaneously say that we know a great deal about brain function and that there is a great deal we don’t understand.

We know basically how neurons work, how they communicate with each other, and how they store information. We understand how the brain is organized and mostly what different parts of the brain do. We have a very good idea how the organization of some parts of the brain relates to the information they store and process (the visual cortex, for example, is very well understood).

Using modern tools we are building a map of all the connections in the brain, the connectome. We already have low resolution connectomes, and as the research progresses our map of the human brain will get higher and higher resolution. So again this will not be a binary question, but one of detail and depth.

The authors of the current proposal want to take our understanding of neuroscience to the next level:

The plan now proposed by Zach Mainen, director of research at the Champalimaud Centre for the Unknown, in Lisbon, Portugal; Michael Häusser, professor of Neuroscience at University College London, United Kingdom; and Alexandre Pouget, professor of neuroscience at the University of Geneva, Switzerland, is inspired by the way particle physics teams nowadays mount their huge accelerator experiments to discover new subatomic particles and ultimately to understand the evolution of the Universe.

They propose that researchers focus on one very specific question, for example how the mouse brain controls foraging for food. They will then try to understand this function at every level, from the neuron, to brain circuits, to behavior. Ultimately the researchers want to understand how the brain generates consciousness, which is probably the ultimate question of neuroscience.

Their approach is to work from the neuron up to try to understand how higher level phenomena emerge from lower level functions.

Collaborative science

Just as interesting as the neuroscience question, in my opinion, is the call for truly collaborative research:

To help push neuroscience research to take the leap into the future, the three neuroscientists propose some simple principles, at least in theory: “focus on a single brain function”; “combine experimentalists and theorists”; “standardize tools and methods”; “share data”; “assign credit in new ways”. And one of the fundamental premises to make this possible is to “engender a sphere of trust within which it is safe [to share] data, resources and plans”, they write.

They want to make researchers comfortable with sharing their data as it is gathered. They want experiments to be performed at different labs simultaneously, and for the creative process of research to be fully collaborative. They also want to combine different types of researchers so they can approach the problem from different perspectives while working together.

This all sounds great, but it will require, as they say, new ways to assign credit. This means they will need buy-in from the academic institutions at which the researchers work. I am curious to see how this will work out. I certainly hear a lot from academia about valuing collaboration, and this will be the ultimate test of that commitment.

I am excited about this idea because I think it has the potential to improve many of the current problems within the institutions of science. We write about these issues all the time here – the publish or perish environment encourages one-off studies of new ideas that are likely to be wrong, and does not sufficiently support the replications that are necessary to know if the results are reliable.

While the system grinds forward slowly, there is a lot of waste, in my opinion. In medicine this is particularly troubling because we have to decide what treatments to use while the research is going through this messy process, and our patients are hearing about exciting but probably wrong findings from the media on a regular basis.

It sounds like the current proposal will build replication into the research process itself. It is an attempt, it seems, to optimize the research infrastructure for actually answering questions, rather than getting published, advancing careers, bringing glory to institutions, and selling journals.

Further, as science advances and our questions get deeper and more complex, we may need a better approach to research in order to answer those questions. Medicine is a great example. It may simply be getting too complex for individual researchers or practitioners to fully grapple with that complexity. We need more collaborative models.

Humans in general perform better when they “crowd source” problems. One person will likely plug holes in the knowledge and vision of another person. Get many people together, and more holes will be plugged. Creativity can also be a collaborative process – people play off each other and the result is greater than the sum of the parts.

We often call such efforts a “Manhattan Project” and I think the analogy is apt. The Manhattan Project was a group of diverse theorists and experimentalists brought together for a common purpose, without regard for academic credit or anything other than achieving their goal. So, in a way, this is a proven model.

As Mainen says:

“By collaboration, we don’t mean business as usual; we really mean it. We’ll have 10 labs doing the same experiments, with the same gear, the same computer programs. The data we will obtain will go into the cloud and be shared by the 20 labs. It’ll be almost as a global lab, except it will be distributed geographically.”

Punching through to the next level of understanding of how our brains work may just require this level of collaboration. Obviously, not every research question is this complex and will require such a model. The big questions of science, however, likely do.

Posted by Steven Novella

Founder and currently Executive Editor of Science-Based Medicine Steven Novella, MD is an academic clinical neurologist at the Yale University School of Medicine. He is also the president and co-founder of the New England Skeptical Society, the host and producer of the popular weekly science podcast, The Skeptics’ Guide to the Universe, and the author of the NeuroLogicaBlog, a daily blog that covers news and issues in neuroscience, but also general science, scientific skepticism, philosophy of science, critical thinking, and the intersection of science with the media and society. Dr. Novella also contributes every Sunday to The Rogues Gallery, the official blog of the SGU.
