Neuroscience Newspeak, Or How to Publish Meaningless Facts.

[Image: The Texas sharpshooter fitting his data to his model: a post hoc definition of “sharp-shooting.” (Cartoon by Dirk-Jan Hoek)]

Mainstream neuroscience has long abandoned conceptual thinking in favor of isolated “correlation studies” that, as Konrad Kording has pointed out, mean “precious little.” At the same time, the language used to describe these futile exercises conveys an entirely different and misleading picture. Below I list some of the terms co-opted and implicitly redefined by the field as, essentially, their opposites.

Predict

Among the most basic of these is “predict.” We all know what it means in normal usage: to assert that some event will happen in the future. A successful prediction is one that actually comes to pass after we have claimed it will.

In neuroscience (and not only neuroscience), “predict” has acquired the opposite meaning. It means that you have, after the fact, “linked” your experimental conditions with your data via algorithms employing multiple free parameters and various untested, ad hoc assumptions, so that, for a particular study and only that study, you can plug in one of your independent variables and it will spit out (even if ever so approximately) your dependent variable. (This is my rough description; I believe it is accurate in principle.)

The “prediction,” in other words, is entirely retrospective, and the methods used to achieve it are Procrustean. Not surprisingly, it is widely understood that these fitted “models” are not even capable of predicting the results of identical experiments, let alone generalizing with respect to any derived principles. Again, this is distinctly unsurprising; but Mehler and Kording (2018) feel the need to point out at length, for the benefit of contemporary neuroscientists (another euphemism, to tell the truth), that correlation doesn’t license claims about causation when confounds are legion.
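To make this concrete, here is a minimal sketch in Python (entirely made-up data, standing in for no particular study): a model with enough free parameters will retrospectively “predict” its own data set from pure noise, and then fail on a fresh run of the identical “experiment.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "experiment": the independent variable has no real
# relationship to the dependent variable at all.
x = rng.uniform(-1, 1, size=20)
y = rng.normal(size=20)

# Post hoc "model": a polynomial with many free parameters,
# fit to this one data set and only this one.
coeffs = np.polyfit(x, y, deg=12)

# In-sample, the fitted curve "predicts" y from x impressively...
y_fit = np.polyval(coeffs, x)
r2_in = 1 - np.sum((y - y_fit) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"in-sample R^2: {r2_in:.2f}")        # typically high

# ...but on a fresh run of the identical "experiment" it fails badly.
x2 = rng.uniform(-1, 1, size=20)
y2 = rng.normal(size=20)
y2_fit = np.polyval(coeffs, x2)
r2_out = 1 - np.sum((y2 - y2_fit) ** 2) / np.sum((y2 - y2.mean()) ** 2)
print(f"out-of-sample R^2: {r2_out:.2f}")   # typically near or below zero
```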

Explain

The word “explain” is similarly used in neuroscience to denote the ability of a fitted algorithm to link (very approximately) conditions to data for one particular experiment; no concepts, principles, or references to mechanism are involved. This is clearly a very impoverished, barren definition of “explain.”

Both “predict” and “explain” imply that investigators have uncovered a reliable structure to phenomena, the latter involving hypotheses describing unseen mechanisms, leading to a new ability to control events and produce formerly unpredicted or unpredictable outcomes. This is clearly not a fair description of post hoc correlation-fishing.

Drive, Alter, Promote, etc.

Mehler and Kording (2018) list additional terms used inappropriately to imply that products of data-mining reflect causal relations:

“The current publication system almost forces authors to make causal statements using filler verbs (e.g. to drive, alter, promote) as a form of storytelling (Gomez-Marin, 2017); without such a statement they are often accused of just collecting meaningless facts. In the light of our discussion this is a major mistake, which incites the field to misrepresent its findings.”

So the current publication system almost forces authors to lie. (Going the rest of the way is up to them.) One might note that if authors’ activities were actually discovering principles of brain function, they wouldn’t need to lie. They should be grateful that the current publication system tolerates this cheap linguistic cover-up.

Causality

The term “causality” itself is being exploited and misused to make statistical techniques appear to have value in discerning the way things work:

“Anil Seth, a pioneer of Granger Causality approaches to neuroscience, explicitly states on Twitter ‘I am NOT saying that Granger Causality (GC) indicates causality in the colloquial sense of “if I did A then B would happen”’ (Seth, 2018).” (Quoted in Mehler & Kording, 2018)

It’s a good thing Seth clarified this on Twitter; it would have been better yet had he not lied by implicitly redefining an important term in the first place. What he refers to as “the colloquial sense” is the dictionary definition of causal relationships, not some slang used on the streets.
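For readers unfamiliar with the technique, here is a toy illustration of what Granger “causality” actually computes (my own sketch with invented data, not anything from Seth). The question it answers is purely statistical: do past values of x improve a regression forecast of y beyond y’s own past values? No intervention, and no mechanism, is anywhere in sight.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy time series: x simply leads y by one step.
n = 500
x = rng.normal(size=n)
y = np.roll(x, 1) + 0.5 * rng.normal(size=n)

def lagged_residual_var(target, predictors, lags=2):
    """Variance of residuals after regressing target on lagged predictors."""
    T = len(target)
    cols = [np.ones(T - lags)]
    for p in predictors:
        for k in range(1, lags + 1):
            cols.append(p[lags - k : T - k])   # values at time t - k
    X = np.column_stack(cols)
    t = target[lags:]
    beta, *_ = np.linalg.lstsq(X, t, rcond=None)
    return (t - X @ beta).var()

# "x Granger-causes y" just means: lags of x shrink the purely
# statistical forecast error for y beyond y's own lags. Nothing more.
restricted = lagged_residual_var(y, [y])
full = lagged_residual_var(y, [y, x])
print(f"residual variance, y's own lags only: {restricted:.3f}")
print(f"residual variance, adding lags of x:  {full:.3f}")
```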

Computer scientist Judea Pearl, considered a pioneer of AI and “credited for developing a theory of causal and counterfactual inference based on structural models,” also redefines “causality” in a manner that drains it of its meaning; for him it means asking “counterfactual questions – what if we did things differently?” This is not a “why” question in the normal sense of the word. The (unanswered) why problem is just transferred to “why does whatever happens if we do things differently happen?” (So I guess we can add “why” to this list.)
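To be fair to the distinction Pearl is drawing, it can be stated in a few lines of simulation (a toy structural model of my own invention, not Pearl’s code): conditioning on a variable and intervening to set it give different answers when a hidden confounder is present. But note that neither computation answers a “why” question; it just reports what the model outputs under different manipulations.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Toy structural model with a hidden confounder:
# U -> A and U -> B, while A itself has no effect on B.
U = rng.normal(size=n)
A = (U + rng.normal(size=n) > 0).astype(float)
B = U + rng.normal(size=n)

# Observational conditioning: B "depends" on A, via U.
print(B[A == 1].mean() - B[A == 0].mean())   # clearly nonzero

# Pearl-style intervention do(A=a): set A by fiat, severing U -> A.
A_do = rng.integers(0, 2, size=n).astype(float)
B_do = U + rng.normal(size=n)                # unchanged: A never caused B
print(B_do[A_do == 1].mean() - B_do[A_do == 0].mean())  # approximately zero
```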

Double-Blind

The above misuses of words may seem like nuanced subterfuge compared to the outright lie told by authors investigating the effects of mind-altering drugs.

In the Methods section of their recent article in the Journal of Neuroscience, Gabay et al. (2018) describe their study of the mind-altering drug Ecstasy (MDMA) as follows:

“This study followed a double-blind, placebo-controlled, cross-over, counter-balanced design.”

But in their Discussion section, they point out that:

“A limitation of the current study is the use of an inactive placebo. Given the clear subjective effects of MDMA, participants became aware that they had been given the active compound.”

They might as well just have said, “We were lying when we called this study ‘double-blind’; our participants knew what was up.” The study was nevertheless hyped by Nature as a “Research highlight.”

The literature is full of studies of MDMA calling themselves “double-blind.”

This is called “scientific peer review.”

Neuroscience

It occurs to me that the word “neuroscience” itself has become hollowed out. I say this because there is an institute at Stanford University called the “Center for Reproducible Neuroscience.” This implies that the findings of “neuroscience” aren’t necessarily reproducible. But “irreproducible” science isn’t really science in any meaningful sense.


Adding to the collection…

I see that even the word “neuron” has been redefined as a piece of computer code. From Twitter:

“Interesting to see that after fine-tuning to ImageNet lots of neurons become dog detectors due to dataset bias.”

“Highly tuned neurons [e.g., strong ‘cat detectors’] in Deep Neural Networks are not super important to object classification.”

No real neuron need apply for the position of “neuron.” The term “neural” is similarly misused.
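For the record, the “neuron” in these tweets is a few lines of arithmetic. A minimal sketch (hypothetical weights and inputs):

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A deep-learning "neuron": a weighted sum passed through a
    nonlinearity. No membrane, no spikes, no neurotransmitters."""
    return max(0.0, float(np.dot(inputs, weights) + bias))  # ReLU activation

# A "dog detector" in this vocabulary is simply a unit like this one,
# deep inside a network, whose output happens to correlate with dog images.
print(neuron(np.array([0.2, -0.5, 1.0]), np.array([1.5, 0.3, -0.7]), 0.1))
```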

Durschmid et al. (2018) pump up their language to announce in Cerebral Cortex that they have found “Direct Evidence for Prediction Signals in Frontal Cortex…” I don’t even know how to interpret this term; is there really any such thing as direct evidence? In any case, the authors here employ post hoc data analysis with liberal selection of data, which seems, on the contrary, the most indirect and non-credible of methods.
