These days, I spend a lot of time in Moore, the hulking building that Dartmouth has constructed for its ever-growing Psychological and Brain Sciences Department. My favorite time to be there is on weekend afternoons. I climb the four flights of stairs up into the pristine, white hallway with “Center for Cognitive Neuroscience” posted in metal letters on one end. Motion-activated lights illuminate the remainder of the hallway as I round the corner, glancing at posters from past research projects that decorate the stretch of otherwise blank wall—fuzzy fMRI images of brains, models of computer displays from face perception experiments, and neat little conclusions to go along with the diagrams. My lab looks north, out over the top of Geisel and into the woods. Sitting there in the silence with nothing but the soft hum of the air conditioning, I learned to question everything.

Neuroscience research is a lot like searching for a needle in a haystack in the dark with all of your fingers duct-taped together. Just this past week, a group of neuroscientists wrote a letter to the European group funding the Human Brain Project, one of the world's largest efforts to expand our knowledge of the brain, alleging that the project is asking all of the wrong questions. “The controversy serves as a reminder that we scientists are not only far from a comprehensive explanation of how the brain works; we’re also not even in agreement about the best way to study it, or what questions we should be asking,” a New York Times op-ed on the controversy explained. Nearly every neuroscientist has a different idea as to what those questions actually are.

There isn’t much that I can be sure of after a couple of years of Psychological and Brain Sciences classes and work in two neuroscience labs. Every professor that I have had has a different idea about what is most important in the brain, about what we should take away. They encourage us to ask our own questions, but most of us, like most of them, remain clueless about where to begin. The brain poses its own unique problem when it comes to research; our entire society, everything that we are, has been created by masses of neurons masquerading as people. So extracting anything unbiased, unaffected by the inner workings of the brain, from the mess that is the world around us is one of the most challenging problems that scientists face.

The fundamental problem with brain science is that the brain is reliably unreliable. There are countless quirks that scientists have attempted to unravel over the years: why our brain so often sees what it wants rather than what it should, why we regularly defy rules that are considered “logical,” why our brain takes in so much information but makes only a fraction of it available to our consciousness. But even the scientists studying our brains are limited by the organs residing within their own skulls, and by the nature of our world, one that cannot be separated from the brains of the humans that have populated it.

Our problem seems to be that we are imposing models created by our own brains upon brain science. Take the three questions above. When we decide that our brains are “fooling us,” that we are seeing something that isn’t real, it is only because someone else’s brain perceived it another way. Yes, we could use strength in numbers: the minority must be hallucinating, for the physical world should appear the same to all of us, or so we think. But seeing as it is completely impossible at this point in time to know exactly how someone else perceives something in comparison to our own perception, we could never be sure. We are limited in studying another’s mental experience by the confines of our own.

The same could be said for imposing logical rules upon the brain. The budding field of neuroeconomics (a fairly self-explanatory attempt to reconcile the fields of neuroscience and economics) seeks to explain human decision making, among other things, with neuroscience. We compare what we perceive as “logical” with what the brain actually does, and then we proceed to try to find explanations for any errant behavior. But the logic that we use as the backbone for our research comes from observation of the world around us, a world created by the interactions of billions of similarly flawed brains. The endless cycle of deduction and induction means that we may not really know anything about how our brain makes decisions at all. We cannot impose logic created by other brains on our studies of neuroscience without getting trapped in the weakness of our own circular logic.

The way in which our brain takes in information, only allowing us to be aware of some of it, poses an issue in itself. How can we study what we cannot be consciously aware of? We will forever be restricted by our own inability to know what is happening subconsciously within our own minds. Computers may be able to mitigate some of the issues with the limited capacity of our consciousness, but even computers are limited in the sense that they are human creations. What is perhaps most intriguing is how much is happening in the brain beyond our awareness. The popular claim that we use only ten percent of our brains is a myth—imaging studies show activity throughout the organ—but it gestures at something real: only a small fraction of what the brain does ever surfaces in consciousness. Having conscious access to so little of what is happening within the brain makes trying to study the organ as a whole problematic.

So I often wonder why we do it. I spend hours and hours a week poring over data from my experiments, staring at computer screens, and running pilot studies. What is it all for? In some ways, I am glad that the neuroscientists are challenging the Human Brain Project to do better. But I also wonder what better looks like. Since there is so little consensus on what a good study of the brain looks like, is it best not to do anything until there is? Considering the factors that may keep us from reaching agreement for a very long time, I think something is better than nothing. Sure, the Human Brain Project might not be perfect. But we are all plagued by the imperfections of our involvement in neuroscience research, and time might not change that. Fumbling in the dark certainly has its flaws, but with neuroscience, flaws are simply a part of the game.