Why contemporary neuroscience experiments ALWAYS WORK the first time (and ALWAYS FAIL the second).

The title and text of this post are part of an attempt to clarify and amplify a point I’ve been hammering on in previous posts, i.e. that neuroscience, as it is practiced today, is a pseudoscience, largely because it relies on post hoc correlation-fishing. For this reason, studies (so-called) have no path to failure the first time they are performed, and always fail the second.

As previously detailed, practitioners simply record some neural activity within a particular time frame; describe some events going on in the lab during the same time frame; then fish around for correlations between the events and the “data” collected. Correlations, of course, will always be found. Even if, instead of neural recordings and “stimuli” or “tasks,” we simply used two sets of random numbers, we would find correlations, simply due to chance. What’s more, the bigger the dataset, the more chance correlations we’ll turn up (Calude & Longo (2016)). So this type of exercise will always yield “results;” and since all we’re called on to do is count and correlate, there’s no way we can fail. Maybe some of our correlations are “true,” i.e. represent reliable associations; but we have no way of knowing; and in the case of complex systems, it’s extremely unlikely. It’s akin to flipping a coin a number of times, recording the results, and making fancy algorithms linking, e.g., the third throw with the sixth and the hundredth, or describing some involved pattern between odd and even throws, etc. The possible constructs, or “models,” we could concoct are endless. But if you repeat the flips, your results will certainly be different, and your algorithms invalid.
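To make the point concrete, here is a minimal sketch (in Python, with made-up numbers; it reproduces no particular study’s pipeline): generate purely random “neural” traces and purely random “event” regressors, correlate everything against everything, and count the “hits” at the conventional p < .05 threshold. Roughly five percent of the pairs come out “significant” every single time, from nothing but noise.

```python
# Minimal sketch (not any particular study's pipeline): correlate random
# "neural" traces against random "event" regressors and count the "hits".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_timepoints = 200
n_neurons = 1000          # hypothetical recorded units
n_events = 20             # hypothetical stimulus/task regressors

neural = rng.standard_normal((n_neurons, n_timepoints))
events = rng.standard_normal((n_events, n_timepoints))

hits = 0
for trace in neural:
    for regressor in events:
        r, p = stats.pearsonr(trace, regressor)
        if p < 0.05:
            hits += 1

total = n_neurons * n_events
print(f"{hits} of {total} correlations 'significant' at p < .05 "
      f"({100 * hits / total:.1f}%), from pure noise.")
```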

Which is why the popular type of study I’ve just described is known not to replicate. And while a lot of ink has been spilled (not least in the pages of Nature) over the ongoing “replication crisis” in neuroscience; while we even have a “Center for Reproducible Neuroscience” at Stanford; while paper after paper has pointed out the barrenness of the procedure (Jonas & Kording’s (2017) “Could a neuroscientist understand a microprocessor?” was a popular one); while the problems with post hoc inferences have been known to philosophers and scientists for hundreds of years; the technique remains the dominant one. As Konrad Kording has admitted, practitioners get around the non-replication problem simply by avoiding doing replications.

So there you have it; a sure-fire method for learning…nothing.

 

Rising neurosci star Steve Ramirez admits comparing two brain states probably isn’t like flipping a coin twice. But like his colleagues, he’s willing to pretend it is.

“We think of it as the brain’s way of flipping a coin twice and landing at heads twice, but you’re spot on that there’s no a priori reason that brain indeed operates in such a quantitative manner.”

By a happy accident, I was able to slip into PubPeer the type of comment that moderators have, for several months now, been routinely censoring sans explanation. What happened next shows quite clearly that these comments aren’t being censored because they lack relevance or substance, but because they hit too close to the mark.

In reading over a paper on hippocampal activity by Chen et al (2019)/Current Biology, I had difficulty finding any reference to sample sizes. I thought this was odd, and posted a brief comment about it on PubPeer. Because I’m grey-listed (which happened without warning or explanation), my comments never post right away, if at all. Soon after, I realized that the authors had actually provided sample sizes in figure captions, whereupon I deleted (or so I thought) the comment from my still-awaiting-moderation post. But it ended up posting. So I edited it, modifying it to ask a different sample-size-related question.

I was also curious about how the behavioral data were collected – I didn’t think the text gave enough information on certain issues. Since the article says that data are available on request, I emailed Steve Ramirez, the senior and corresponding author, to make the request.

Surprisingly, his initial response didn’t refer to this request at all; rather, it consisted of a reaction to my PubPeer comment, as follows:

Thank you so much for your email and question! We posted the individual N, stats, and so on in our figure legends, and chose our N values for the histology and behavior based on previous engram papers that demonstrated such N provided sufficient statistical power (e.g. Liu et al, Nature, 2012; Denny et al. Neuron, 2014; Tanaka et al. Neuron, 2014). These N values and the corresponding stats were also taken as a standard for circuit level / behavioral optogenetic papers (e.g. Tye et al. Nature, 2012; Stuber et al. Nature, 2011) in which we compare across animals for histology and utilize, for instance, a T-test, or across animals and across light on-off-on-off epochs for behavioral data and utilize two-way anovas with repeated measures.

That said, I’d be more than happy to help in any capacity hereafter and thank you again! Other groups have analyzed their data with both similar and diverging sets of statistics and corresponding justifications for such analyses that, too, have yielded pleasantly nuanced results that I’m always thrilled to chat about and brainstorm over. I hope you had a wonderful Thanksgiving and upcoming holiday season as well!

I didn’t find his answer very satisfactory – all he was doing was passing the buck – but what I really wanted was the dataset, so I simply thanked him for his comments and asked again, saying:

Thanks for your reply, I’d also be happy to chat about the issues you mention, but in this email I was just asking for the info that, at the end of your article, under “Data and software availability,” you say can be made available:

“For full behavioral datasets and cell counts, please contact the Lead Contact, Dr. Steve Ramirez (dvsteve@bu.edu).”

Steve replied:

Absolutely! Are there any in particular I can send your way? They’ll all be straightforward and annotated excel files too to make life easier — always happy to share and help!

I replied that I’d be interested in the behavioral data.

Meanwhile, having been able to edit my PubPeer comment once, and in light of Steve’s enthusiastic reaction to my first one, I went back in and added some more comments. These edits apparently flew under the moderators’ radar; they typically would have nipped such comments in the bud (as, in fact, they did a little while later).

Steve responded enthusiastically both to my request and to my new comments, which he evidently found valuable. What follows are his complete responses, which included excerpts from my PubPeer comments (which I’ve placed in italics) and his replies. I’ve bolded a few sections for emphasis, and added some reactions.

Absolutely. I’ll send [the dataset] over shortly (on my commute to work) when I’m back on my work laptop, and in the meantime I’d be very happy to clarify some points raised on pubpeer — thank you so much for the comments, as these always help us to continue to perform as rigorous of science as possible. Very much appreciated!

Steve delayed, then left the country without sending the dataset, assuring me when I followed up that he would send it when he got back. I think he may have realized that I might actually intend to look at it critically. (After not receiving it I emailed Current Biology, but it looks like they’ve decided not to respond either.) He did, however, respond to my comments point by point:

In order:

The authors say that: “No statistical methods were used to determine sample size; the number of subjects per group were based on those in previously published studies and are reported in figure captions.” To which previously published studies the researchers are referring, and on what basis do they consider those studies’ sample sizes to be valid? If they (or their editors) don’t feel that sample sizes should be selected based on some type of statistical test, then why mention this issue at all? If it is important, then the reference to other, unnamed previous publications is rather inadequate.

Addressed in previous email – thank you again! [See above]

As mentioned earlier, all the email does is pass the buck – and if you dig back you find that there’s no there there, either.

“Exploration of the context while off Dox increased eYFP-expressing (eYFP+) cells in both the dorsal and ventral DG relative to on-Dox controls (Figures 1G, 1H, 1J, and 1K). The following day, mice that explored the same context showed a significant increase in the number of overlapping eYFP+ (i.e., cells labeled by the 1st exposure) and c-Fos+ cells (i.e., cells labeled by the 2nd exposure) in the dorsal but not ventral DG (relative to chance overlap)…”

How is “chance” ascertained here? The brain is highly condition-sensitive, to both external and internal events, in ways we don’t understand. There’s really no possible “no context” condition; and there was no “explored a different context” control. When the mice are returned to their normal cages, they are also returning to familiar territory; when they are being moved from one place to another, these may also be familiar experiences. All of this affects the brain. Given our level of ignorance about the brain and the countless confounds, I don’t see what the authors could validly be using as their “chance” baseline.

I 100% agree that chance is a tricky thing when it comes to the brain, since it’s a statistical measure applied to a system as complex as the brain, in which we don’t know what true chance would look like. So in that sense, we took the next best approach and utilized statistical chance, i.e. the odds of a set of cells being labeled by one fluorophore (N[number of cells labeled] / Total number of cells in the area) multiplied by the odds a set of cells are labeled by the second flourophore (N[number of cells labeled] / Total number of cells in the area), and we use the resulting number as statistical chance. We think of it as the brain’s way of flipping a coin twice and landing at heads twice, but you’re spot on that there’s no a priori reason that brain indeed operates in such a quantitative manner.

What Ramirez is admitting here is truly, and I mean truly, astonishing. He’s saying that two sets of measurements are being compared on the assumption that these measurements of a bunch of neurons – whichever bunch of neurons we choose to record from, at whatever time we choose to record from them, out of the billions of neurons in an organ we don’t understand – will be distributed in the same random, decontextualized way the results of a series of coin flips would be distributed. We have no reason for making the assumption – but hey, we’ll just do it anyway! Steve doesn’t seem to think there’s anything wrong with that, or anything embarrassing about admitting it.
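For concreteness, the “statistical chance” baseline Ramirez describes reduces to the arithmetic below (the numbers are mine, invented purely for illustration): multiply the two labeling proportions and treat the product as the expected overlap under independence – the “two coin flips” assumption at issue.

```python
# Sketch of the "statistical chance" overlap baseline described in the email,
# with made-up numbers purely for illustration.
n_total = 10000   # hypothetical total cells counted in the region
n_eyfp = 1500     # hypothetical cells labeled by the 1st fluorophore (eYFP+)
n_cfos = 1200     # hypothetical cells labeled by the 2nd fluorophore (c-Fos+)

p_eyfp = n_eyfp / n_total              # proportion labeled by the 1st fluorophore
p_cfos = n_cfos / n_total              # proportion labeled by the 2nd fluorophore

chance_overlap_rate = p_eyfp * p_cfos  # the "two coin flips" assumption
expected_overlap = chance_overlap_rate * n_total

print(f"chance overlap rate: {chance_overlap_rate:.4f}")
print(f"expected overlapping cells under independence: {expected_overlap:.0f}")
# The observed overlap is then compared against this number; the comparison is
# only as meaningful as the assumption that the two labeling events are
# independent and uniformly "random" across cells.
```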

“Together, these data demonstrate that the dorsal DG is reactivated following retrieval of both a neutral or aversive context memory, whereas cells in the ventral DG show reactivation only in a shock-paired environment.” Sentences like this should raise alarm bells. They are passive (they don’t corroborate or falsify predictions made before examining the data) descriptions of correlations in a sample examined post hoc, meaning they can’t count as evidence of causal connections. In other words, data analyzed in this way can’t be taken to “demonstrate” anything. Claims are as speculative post hoc as they were prior to the experiment.

I again totally agree here and thank you for the great point! I have no idea what causality truly would look like in the brain or what a ground truth looks like when it comes to a principle of the brain, and we hesitate to use the word “causality” for that reason. I believe most of our data our correlative or can be interpreted as a result of a given perturbation, but this by no means has to equate to causal. I do believe that term gets thrown around a lot these days with optogenetic / chemogenetic studies, and the reality is that once we perturb an area and networks respond accordingly, perhaps causality can be observed as a brain-wide phenomenon which we’re just started to test out thankfully.

He has no idea how the brain works (“what causality would truly look like in the brain”), yet in the paper he is making assertions about how one thing affects another – assertions about causal relationships. He doesn’t seem to see the contradiction.

In addition: “Post hoc analyses (Newman-Keuls) were used to characterize treatment and interaction effects, when statistically significant (alpha set at p < 0.05, two-tailed).” According to graphpad.com, maker of Prism, one of the software packages employed in this study: “It is difficult to articulate exactly what null hypotheses the Newman-Keuls test actually tests, so difficult to interpret its results.” And from the same source: “Although the whole point of multiple comparison post tests is to keep the chance of a Type I error in any comparison to be 5%, in fact the Newman-Keuls test doesn’t do this.”

“Acute stimulation of a fear memory via either the dorsal or the ventral DG drove freezing behavior and promoted place avoidance (Figures 2I, 2J, 2L, and 2M). Acute stimulation in the female exposure groups promoted place preference but did not affect fear behavior (Figures 2I, 2J, 2L, and 2M).”

While the hippocampus clearly has a role in enabling memory formation and/or retrieval, it cannot possibly contain memories. Even if we take Chen et al’s report at face value, all we would be able to say is that certain cells display a certain type of activity when the mouse has a sharp fear reaction, and also may instigate an acute fear reaction when stimulated. The claim that this activity is causing the mouse to experience mental imagery corresponding to some particular “context” – out of the infinite number and variety of contexts a normal mouse may experience in its lifetime – is a bridge too far. There aren’t enough cells in the hippocampus to accommodate all possible “contexts.” The authors need to be a little more modest in their claims.

I couldn’t agree more! I personally think that memories are a distributed brain-wide phenomenon in which circuits and networks utilize spatial-temporal codes to process information, as opposed to having a memory localized to a single X-Y-Z coordinate point. Even within the hippocampus with over 1M cells, the permutations possible of a defined set of cells utilizing a temporal code to process contexts is an astronomically, perhaps wonderfully, big number of experiences that it can be involved in — not to say that therefore memories are located in the hippocampus because it technically can process them, but that it’s contribution to enabling numerous memories I don’t believe has an upper limit that we know of, but this is total speculation on my end! However, I do actually believe that our set of experiments beginning with Liu et al. 2012 up to 2019 really hint that we can partly predict what the animal’s internal “experience” (used very loosely here) is actually like (see Joselyn et al. 2015 Nature Neuroscience) for a fantastic review. In short, we “tag” cells active during a defined period of time, say, exposure to context A, and we’ve done numerous experiments that suggest these cells are specific to that environment with minimal “noise”, i.e. without other contexts that the animal experiences spilling over, given the time period of our tagging system. And when we manipulate these sets of cells, the animals show fear responses specific to that context, i.e. Figure 3 of Chen et al., that suggests that at least some aspects of context A are coming back “online,” which dovetails which previous data from a 2013 false memory paper we had as well. That said, I’m fully on board that we’re not at the stage where we can say with certainty what the animal’s mental imagery looks like, though a handful of papers from the Deisseroth lab recently have hinted that we can really force a mental “image” to come back online and force the animal to behave as those it’s experiencing that image. In our hands, we believe that stimulating these cells in the hippocampus has a sort of domino effect in which downstream circuits become activated and this ultimately leads to memory recall, and that the hippocampus is a key node involved in bringing the brain-wide networks involved in memory back online. So it’s not that the memory is located in the hippocampus, it’s more that the hippocampus contains a set of cells which, when activated, are sufficient to activate memory recall by engaging the rest of the systems in the brain involved in that discrete experience too.

Notice that he never addresses the basic point about problems with the Newman-Keuls test, right at the top. Notice also that his claims are far more vague and speculative than his published paper makes them sound.

I hope this helps and thank you again for the fantastic back and forth!

I was actually avoiding getting into a back and forth until I got the dataset. I waited three days, then wrote this:

Just following up on the dataset request.

Thanks for your responses; I think it would be useful if I incorporated them into my PubPeer comments.

I waited several more days in case he wanted to object to the use of his comments in the PubPeer thread, but received no further response. I went ahead and posted his replies on PubPeer. I was truly amazed by them – not because I didn’t already know the score when it comes to contemporary neuroscience practice, but because I couldn’t believe how casually he admitted what to me was obvious malpractice. He apparently took me for an insider with (compromised) skin in the game, and dropped his guard. I began tweeting right and left, hoping to raise just a tiny bit of the concern and indignation I feel over this state of affairs. PubPeer got wind of my post and duly removed both my original comments and Steve’s cheerful, appreciative responses. I suspect Steve or allies got in touch with them and, despite his private candour, made sure PubPeer readers, and outsiders in general, wouldn’t learn about neuroscience’s coin-flipping approach to science.

As seen in a previous post, the ability to tolerate not knowing what you’re doing, or privileging career over good practice, or playing casino statistics, or adopting absurd, though “traditional,” assumptions, etc., seems to be a prerequisite for contemporary neuroscience practitioners.

Bad for science? Why did PubPeer reject these comments?

After having posted hundreds of comments on vision science and visual neuroscience without a problem, I was suddenly grey-listed by PubPeer without explanation and without apparent reason. Requests for clarification of moderation criteria go unanswered. It’s now a virtual sure thing that my comments won’t post. Below is an ongoing list of comments, as they’re rejected, with the most recent additions added first.

Update: PubPeer has added to its FAQs, without revising the earlier ones, which the additions flatly contradict. For example, “PP does not review comments scientifically (see above), so factual comments conforming to our guidelines may still be wrong, misguided or unconvincing. For this reason we insist that readers… make up their own minds about comment content” vs. new stipulations giving moderators license to bar comments deemed “misguided,” “erroneous,” “unclear,” “potentially malicious,” or “disinformation,” without explanation, and admitting that decisions may be “arbitrary” due to lack of moderator expertise.

Update 2: Two days after the contradiction was pointed out on Twitter, PubPeer removed its open commenting stipulations. In practice, they now reject even publicly verifiable and relevant information (see below) without specifying what their objection is.

Geleris et al (2020) New England Journal of Medicine

Attempted comment on PubPeer “Bug Report” thread May 8/2020

Guest & Martin (2020)/PsyArXiv

Musall, Kaufmann, Juavinett, Gluf, Churchland (2019)/Nature Neuroscience

“We characterized movements…and measured neural activity…Cortex-wide activity was dominated by movements, especially uninstructed movements not required for the task. Some uninstructed movements were aligned to trial events. Accounting for them revealed that neurons with similar trial-averaged activity reflected utterly different combinations of cognitive and movement variables.”

In the first bolded section above, the authors seem to be making two huge claims. First, that activity across the cortex of their subject animals was “dominated” – meaning, presumably, was responsible for, or caused – the movements observed; and second, that they are in a position to interpret that neural activity as subserving movement rather than mental events, emotions, perceptual events, etc.

Similarly, when they state, in the second bolded section, that activity reflected different combinations of cognitive and movement “variables,” one has to ask how they are in a position to parse neural activity in this way. Their method is based purely on observing behavior; they’re not in a position to observe animals thinking.

John Greenwood & Michael Parsons (2020)/PNAS

“The visible contours in these elements enabled the perception of motion with minimal ambiguity given their orientation variance (i.e. avoiding the aperture problem; ref 24 [Adelson & Movshon (1982)]).”

I would ask the authors to clarify in what way A & M (1982) supports the claim of “minimal ambiguity” in their stimuli.

In the stimuli used by Adelson and Movshon (stripes in circles), ambiguity is maximal; a set of stripes seen through a round aperture will be seen as moving perpendicularly to their long edges regardless of the true direction of motion. Moreover, their stimuli also have “visible contours” – the long edges of the stripes, which appear incomplete, cut off by the edge of the aperture. Looking at G & P’s illustrations, it is clear that many of their stimuli similarly consist of figures with visible long contours that are perceptually incomplete, as if cut off by the edges of the aperture. In other words, their stimuli contain wonky stripes instead of simple, straight-edged stripes – and the perceptual behavior of the former may well be more complex than that of the latter. So, again, I’m not sure where the claim of “minimal ambiguity” is grounded.

George Sperling, Peng Sun, Dantian Liu, Ling Lin (2020) Psychological Review

[Attempt #1] In their “Theory of the perceived motion direction of equal-spatial-frequency-plaid-stimuli” the authors have forgotten to add one more important qualifier. The correct title should be “Theory of the perceived motion direction of equal-spatial-frequency-plaid-stimuli viewed through a circular aperture.” The authors are treating the perceptual effect of viewing this type of figure through a round aperture as the general case, when, as has been known at least since Wallach (1935), the shape of the aperture has a dispositive effect on the direction of perceived motion. The same set of bars, for example, that, seen through a round aperture, appear to be moving perpendicularly to their length will, when seen through a rectangular one, appear to be moving along the edge of the aperture (as in the barberpole illusion).

[Attempt #2] The authors need to qualify their claims, which are tailored to apply only to circular apertures. It is well-known that perceived motion direction varies substantially with aperture shape.

[Attempt #3] In the first sentence of their abstract, the authors state that: “At an early stage, 3 different systems independently extract visual motion information from visual inputs. At later stages, these systems combine their outputs.”

It sounds as though they’re relating matters of fact, or, at worst, matters of accepted, well-corroborated theory. But this is not the case. This seemingly solid base is actually a reference to an old conjecture, for which only one rather old reference is provided:

“Three motion systems have been proposed for human vision, (e.g. Lu & Sperling, 1995a).”

The term “proposed” constitutes a clear signal from the authors of the still highly-conjectural status of the claim, while no further citations are offered vis a vis subsequent experimental corroboration or replication of the initial proposals and results. Clearly, the single experimental Lu and Sperling paper isn’t enough to establish such an ambitious claim. Thus, I would caution that the statements quoted above are misleading.

[Attempt #4]

“There is intrinsic ambiguity in determining the motion direction of a one-dimensional stimulus, such as a sine-wave grating. Consider a snapshot of a sine-wave grating displayed on a piece of paper, and the paper is set into motion. Observing through a circular window, the motion is perceived as being perpendicular to the orientation of the grating no matter what arbitrary direction the piece of paper may be physically moving in. Indeed, all directions of motion of the paper that happen to have the same motion component perpendicular to the stripes of the grating produce precisely the same image inside the aperture…This is the “aperture problem.”

Sperling et al’s description of the “aperture problem” is incomplete and possibly misleading, as it refers only to a “circular window.” When the window is not circular, then what is perceived may well not be “perpendicular to the stripes of the grating,” but will vary widely depending on the shape of the aperture (Wallach, 1935).

[I would add that “equal-spatial-frequency-plaid-stimuli” have precisely zero special status in visual perception. They are a zombie historical artefact kept alive because it makes the math easy (as does limiting discussion to circular apertures). The idea that there are “plaid processing” systems is correspondingly artificial. “Theories” like these don’t generalize even a little bit to ordinary conditions, but they have been the norm for many decades now.]

Barlow (1972)

“In his influential article, Barlow employs what Romain Brette (2019) calls the “coding metaphor” of neural function. Brette is highly critical of this metaphor, and offers cogent logical objections.”

[I believe Barlow’s paper is considered “seminal”; it’s still being cited, and Brette’s critique shows how its basic premise is nonsense. He wasn’t the first – Teller also criticized it. PubPeer is evidently protecting a “sacred beast” at the expense of contemporary theoretical critics.]

Weiss, Simoncelli & Adelson (2002) Nature Neuroscience

[This next comment is interesting because the moderator included a note explaining why the comment was censored. The moderator’s comment was: “claims an important assumption is invalidated, but doesn’t explain what the assumption is or why it is important for the analysis or what the analysis is.” I’ll leave it to readers to judge whether I “explain what the assumption is.” As with the Nakayama comment below, the substance is identical to comments I have made on a number of papers repeating the same erroneous “fact.” My question is, why is a PubPeer moderator treating a comment on the site like a reviewer of a letter to the editor (and without expertise in the particular topic, to boot)? The comment clearly isn’t frivolous; and it is explicitly not the role of PubPeer to micromanage content. I subsequently attempted to make the comment more explicit, in case this helped, but the moderator is determined that the same comment I posted for the Adelson & Movshon (1982) “classic” (blogged here) won’t be allowed to post on Weiss et al.]

“The integration process is essential because the initial local motion measurements are ambiguous. For example, in the vicinity of a contour, only the motion component perpendicular to the contour can be determined (a phenomenon referred to as the ‘aperture problem’)2,4–7.”

The quote above, in particular the bolded section, mirrors a common misunderstanding about aperture motion. Among the citations provided, two – 4 and 5 (Wuerger, S., Shapley, R. & Rubin, N. On the visually perceived direction of motion by Hans Wallach: 60 years later. Perception 25, 1317–1367 (1996); Wallach, H. Ueber visuell wahrgenommene Bewegungsrichtung. Psychol. Forsch. 20, 325–380 (1935)) – contradict it, in the sense that they clarify that it is not valid as a general statement, applying to circular apertures but not, for example, to oblong ones (as we may easily confirm in the case of the Barberpole Illusion). As such, it lacks the generality that the authors imply it to possess.

Similarly:

“When viewing a single drifting grating in isolation, subjects typically perceive it as translating in a direction normal to its contours (Fig.1b).”

Again, the authors fail to mention the shape-contingency of the perceived direction of motion, making what appears to be an invalid general statement. Their analysis and proposals seem intended to be general – they are offering “a model of visual motion perception.” Given that a fundamental assumption on which the analysis rests is based on a claim about aperture motion that is easily falsified (given differently-shaped apertures), the proposals offered here, whether or not they may be said to apply to the special case, will likely not apply to the general case.

[Another important thing to note here is that, as in the case of Adelson & Movshon (1982), Weiss et al adopt a false fact because it makes the math easier. If they were to attempt to address the true nature – as illustrated by known fact – of the phenomena they’re purporting to model, the tools – concepts and techniques – they currently have at their disposal would be next to useless. The false fact was inherited by the pro-“Bayesian” contingent for the same reasons it was adopted originally. All of “computational neuroscience” functions this way. As was made explicit by Carandini et al (2005), blogged here, choices of assumptions are made less based on their validity and more on whether they render the math “tractable.”]

Nakayama (1985) Vision Research

[The content of this comment is identical to that of several comments I have succeeded in posting on PubPeer. I have to wonder whether the status of the author made it too sensitive this time? The issue is very straightforward.]

“This is one of many papers which refer to a special case of aperture motion perception as though it were the general case:

[Nakayama says] “This is essentially equivalent to viewing a moving object through an aperture. Suppose an extended line moves through this aperture [see Fig. 10(A)]. The velocity in the aperture can be described by VL, the local velocity orthogonal to the orientation of the line.”

What Nakayama is referring to as the “local velocity” is the apparent velocity of a line moving within a circular aperture. The text doesn’t specify that the statement is restricted to circular apertures – on the contrary, the discussion is framed in terms of general principles of perceived motion. The distinction is important, since if we were talking about an oblong aperture, the “local motion” would not be “orthogonal to the orientation of the line,” and the equations and inferences offered would not hold. As explained by Wallach (1935), a paper Nakayama cites, the direction of local (i.e. perceived) motion is highly contingent on aperture shape.”

Chen et al (2019) Current Biology

This latest case is perhaps the most interesting and revealing, and I’ve written a separate blog post about it – covering my original comment, which went up briefly, as well as the senior author’s extended emailed responses. I tried again with the comment below, which was censored:

“McNish et al (1997) Journal of Neuroscience: “Post-training lesions of the dorsal hippocampus attenuated contextual freezing, consistent with previous reports in the literature; however, these same lesions had no effect on fear-potentiated startle, suggesting preserved contextual fear. These results suggest that lesions of the hippocampus disrupt the freezing response but not contextual fear itself.”

Chen et al have not discussed the distinction being made above, in the cited article and elsewhere, but seem to be using freezing as an operational definition of contextual fear.”

I tried yet again, isolating a simple comment about the statistical tests used – a comment the senior author admitted via email was legitimate:

Jun, Ruff, Tokdar, Cohen & Groh (2019) biorxiv

“In both experimental datasets, drifting gratings were presented at locations that overlapped with the receptive fields of the recorded V1 neurons.”

I can’t find where the authors describe how they determined the recorded neurons’ “receptive fields.”

Because this is a post hoc re-analysis of two older papers (Ruff and Cohen (2016) and Ruff, Alberts and Cohen (2016)), referenced in the Results section of this paper, I checked those, too, for information. But as in Jun et al, no methodological descriptions and no citations are provided to clarify either technique or rationale.

Pack, Livingstone, Duffy & Born (2003) Neuron

“Humans and other primates are able to perceive the speed and direction of moving objects with great precision. This is remarkable in light of the fact that, at the earliest levels of the visual system, local measurements of velocity are often confounded with the object’s shape. The problem is based on simple geometry: if the middle of a moving contour is viewed through a small aperture, only the component of velocity perpendicular to the contour can be recovered Wallach 1935, Wuerger et al. 1996.”

The claim that, as a general principle, “only the component of velocity perpendicular to the contour can be recovered” when seen through a “small aperture” is false, and it is not made by Wallach (1935) (or the intro to the English translation in Wuerger et al (1996)), who emphasizes the importance of the shape of the aperture, not its size.

The claim is applicable to circular apertures; as we can see from figure 1, the aperture characteristics of small size and circular shape are confounded in this analysis. The relevant distinctions need to be made if the theoretical discussion is to be well-founded.

This paper is cited uncritically by the one commented on below, as well as over 100 others.

Hataji, Kuroshima & Fujita (2019) Scientific Reports

Musall, Kaufmann, Juavinett, Gluf & Churchland (2019) Nature Neuroscience

“We characterized movements using video and other sensors and measured neural activity using widefield and two-photon imaging. Cortex-wide activity was dominated by movements, especially uninstructed movements not required for the task. Some uninstructed movements were aligned to trial events. Accounting for them revealed that neurons with similar trial-averaged activity often reflected utterly different combinations of cognitive and movement variables.”

In the first bolded section from the part of the abstract quoted above, the authors seem to be making two huge claims. First, that activity across the cortex of their subject animals was “dominated” – meaning, presumably, was responsible for, or caused – the movements observed; and second, that they are in a position, as investigators, to interpret that neural activity as subserving movement rather than mental events, emotions, perceptual events, etc.

Similarly, when they state in the second bolded section, that activity reflected different combinations of cognitive and movement variables, one has to ask how they are in a position to parse neural activity in this way. Their method is based purely on observing behavior; they’re not in a position to observe animals thinking.

  Van Zyl (2018) New Ideas in Psychology 

It is my understanding from reading various texts that the chief distinction made by “Bayesians” between their practices and those of “frequentists” is that frequentists are referring to the actual probabilities or relative probabilities of events resulting from a test of a predictive hypothesis, while “Bayesians” are referring to “subjective” probabilities encompassing a much broader framework (extending beyond the experimental test).

While there have been attempts to carve out a distinction between “objective” and “subjective” “Bayesian analysis,” these efforts seem, so far, unconvincing. I quote below from the Wikipedia entry on “Bayesian probability”:

“Broadly speaking, there are two interpretations on Bayesian probability. For objectivists, interpreting probability as extension of logic, probability quantifies the reasonable expectation everyone (even a “robot”) sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox’s theorem.[2][8] For subjectivists, probability corresponds to a personal belief.[3] Rationality and coherence allow for substantial variation within the constraints they pose; the constraints are justified by the Dutch book argument or by the decision theory and de Finetti’s theorem.[3] The objective and subjective variants of Bayesian probability differ mainly in their interpretation and construction of the prior probability.”

Thus, we see that even the “objective” version of “Bayesianism” entails subjective cognitive evaluations – “reasonable expectations,” “rationality constraints,” “interpretation of the prior probability.”

Continuing in the Wikipedia entry, we see that, indeed, the “objective” category is the subject of much (subjective) disagreement:

“[Many theorists] have suggested several methods for constructing “objective” priors. (Unfortunately, it is not clear how to assess the relative “objectivity” of the priors proposed under these methods)…The quest for “the universal method for constructing priors” continues to attract statistical theorists.”

Given all the above – in particular the absence of a “universal method for constructing [objective] priors” – I would say that a strong case for “Bayesian probability,” as regards its use in objective science, remains incomplete.

 Wei Wei (2018) Annual Review of Vision Science

I am puzzled by the author’s unqualified use of the term “receptive field,” a term whose meaning appears to be in flux. The simple notion that there is a circumscribed part of the retina, and a corresponding circumscribed part of perceived space, events in which affect particular neurons, has long been debunked and labelled the “classical receptive field.” The understanding that the early concept was problematic came almost from the start, as indicated by the excerpt below from Spillmann’s (2015/JOV) article, “Beyond the classical receptive field” (the two paragraphs are consecutive in the original text):

“Our perception relies on the interaction between proximal and distant points in visual space, requiring short- and long-range neural connections among neurons responding to different regions within the retinotopic map. Evidently, the classical center-surround RF can only accommodate short-range interactions; for long-range interactions, more powerful mechanisms are needed. Accordingly, the hitherto established local RF properties had to be extended to take distant global inputs into account.”

“The idea of an extended (called nonclassical or extraclassical today) RF was not new. Kuffler (1953, p. 45) already wrote, “… not only the areas from which responses can actually be set up by retinal illumination may be included in a definition of the receptive field but also all areas which show a functional connection, by an inhibitory or excitatory effect on a ganglion cell. This may well involve areas which are somewhat remote from a ganglion cell and by themselves do not set up discharges.””

The language of the above text is a bit misleading, implying as it does that the “hitherto established” local RF remained in place and merely needed elaboration. It should be clear that if the firing rate of a neuron “x” can be altered by the conditions of stimulation applying to the whole retina, then it is not possible to experimentally define a local area as in any way special based on the local conditions of stimulation. Or rather, it is artificial, privileging one set of global conditions over an infinite number of alternatives in producing a definition (which even then has not been proven to replicate). Even the verbal expansion of the term to include “non-classical receptive fields” does not rescue the concept from this problem.

The extreme confusion that the concept has produced as researchers have attempted to specify its elusive properties may be appreciated in reading Carandini et al’s (2005) “Do we know what the early visual system does?” The discussion includes reference to a black box “saving device.”

The concept of “direction-selectivity” is closely tied to the receptive field concept. It is difficult for me to understand how Wei can avoid addressing these theoretical problems.

Unqualified use of the term “receptive field” and associated concept is quite common; I’ve highlighted it in several PubPeer comments, including on El Boustani et al (2018)/Science; on Beltramo & Scanziani (2019)/Science; and on Olshausen & Field (1996)/Nature.

A second stab at Wei Wei also failed:

As noted in the PubPeer comment on Olshausen & Field (1996) as well as other PubPeer comments linked therein, the concept of “receptive field” is currently missing a theoretical definition. Various researchers employ different de facto definitions of the term, strictly tied to the procedures they happen to use. The use of the term by Wei in this review, without qualification or clarification, renders the discussion incomplete.

Bakkour, Palombo, Zylberberg, Kang, Reid, Verfaellie, Shadlen, Shohamy (2019) eLife “The hippocampus supports deliberation during value-based decisions.”

(In addition to the comment submitted to PubPeer, I note here that the authors’ use of the term “supports” is an example of Neuroscience Newspeak.)

“Bakkour et al state:

We fit a one-dimensional drift diffusion model to the choice and RT on each decision. The model assumes that choice and RT are linked by a common mechanism of evidence accumulation, which stops when a decision variable reaches one of two bounds.

I’m confused about what the authors are claiming. Experiments are based on two-alternative forced choices and structured so that the data produced may be “modelled” based on the “drift diffusion model.” The fitting procedures allow modellers quite a bit of leeway in adjusting free parameters, and many quantitative choices are unconstrained by theory. The above-stated assumptions of the “drift diffusion model”, i.e. that “choice and RT are linked by a common mechanism of evidence accumulation” are vague; no concrete description (even a vague one) in terms of neural function has ever been proposed. The drift diffusion model is an extension of “signal detection theory;” and the assumptions of this “theory” seem to lack face validity. SDT curves tend to be specific to particular experiments and not to generalize.

In short, under the circumstances I’m not sure that fitting the data acquired to the model under consideration is enough to license inferences about brain function.”
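For readers unfamiliar with the model class being invoked, here is a minimal, generic simulation of a one-dimensional drift diffusion process – noisy evidence accumulating until it hits one of two bounds, which jointly determines choice and reaction time. The parameter values are arbitrary and mine; this is not Bakkour et al’s fitted model.

```python
# Minimal, generic drift-diffusion simulation (arbitrary parameters, for
# illustration only -- not Bakkour et al's fitted model).
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(drift=0.3, noise=1.0, bound=1.0, dt=0.001, max_t=5.0):
    """Accumulate noisy evidence until one of two bounds is reached.

    Returns (choice, reaction_time); choice is +1 or -1 for the upper or
    lower bound, or 0 if no bound is reached within max_t seconds.
    """
    x, t = 0.0, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
    return 0, t

trials = [simulate_trial() for _ in range(2000)]
choices = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
print(f"upper-bound choices: {(choices == 1).mean():.2f}, mean RT: {rts.mean():.2f} s")
# Fitting runs in the opposite direction: drift, bound, etc. are adjusted until
# the simulated choice/RT distributions resemble the observed ones.
```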

 Mueller & Weidemann (2008) Psychonomic Bulletin and Review

“SDT assumes that percepts are noisy.”

The term “percept” refers to what is consciously experienced, and generally to what is experienced visually. What we experience visually is not noisy, and does not necessitate any conscious decision-making on the viewer’s part. Conscious decision-making is, both implicitly and explicitly, what we are talking about here. Implicitly, because if the conscious perceptual experience (the percept), is noisy, then the viewer must be called on to make a conscious decision as to how to interpret it. Explicitly, because the associated experiments refer to participants’ decisions, usually binary forced-choice decisions often requiring guesses.

Given all of this, the statement that “SDT assumes percepts are noisy” is hard to interpret. The assumption seems to lack face validity, and no explanation or references, or proposals of how to test it, are offered. On what basis is the assumption considered valid?

Oberauer & Lewandowsky (2019) Psychonomic Bulletin and Review.

[It seems particularly unkind of PubPeer moderators to censor my reply to another commenter’s reply to my initial comment, which did post.]

The text you cite seems very confused and waffling to me. Which of Mayo’s arguments have you found compelling with respect to making the fruits of post hoc correlation-fishing reliable – something that, as mentioned, is virtually never the case? Has she proposed and tested a method of post hoc statistical inference that produces replicable outcomes?

Chen, Yeh & Tyler (2019) Journal of Vision

I’d like to discuss the authors’ dichotomization of the images used in their procedures into “target” and “noise.” It seems to me that this dichotomy is not a valid one, for fairly obvious reasons.

In this comment, I’ll be using the term “image” to refer to a surface reflecting light into the eyes of a seeing human.

By target, the authors are referring to certain more-or-less smoothly-changing bands of light and dark which they call “Gabors.” These patterns are typically perceived either as alternating bars or, if the transitions are fairly gradual, as partly-shaded cylinders. By “noise,” they mean a different type of pattern, consisting of variously-arranged dots.

Collections of dots always tend to be grouped spontaneously by both human and non-human viewers to produce various perceived shapes or patterns, among the simplest examples being the “rows” or “columns” of dots typically used to demonstrate principles of organization. We may see the same tendency in the grouping of stars into constellations, and in the perception of objects in clouds.

Here, we have two types of patterns; one rather orderly both physically and perceptually, the other less so – but both requiring and eliciting perceptual/neural organizing processes in order for perceived structures, stable or unstable, to arise in consciousness.

When two patterns are super-imposed in an image, the structures that will emerge in consciousness are not necessarily the sum of these two. They may be – it depends on the combined pattern, and how it is interpreted by the perceptual organizing processes. The combination of the two may destroy the structural coherence of one or the other or both; and new structures may be perceived. A classic series of experiments by Gottschaldt demonstrated, many decades ago, that “targets” may not be perceived in certain contexts, even when observers expect and are actively watching for them.

It seems to me the above facts are relevant and should be addressed in the authors’ theoretical discussion.

This comment was just rejected by PubPeer. Why?

 

Image source: https://www.google.com/search?q=censorship&client=firefox-b-d&source=lnms&tbm=isch&sa=X&ved=0ahUKEwjFopHXhZzkAhWDzKQKHbzwAiEQ_AUIESgB&biw=1297&bih=507#imgrc=vG8ssZ1qZ_JrRM:

Is this April 2019 Science article an example of fake visual neuroscience?


In the past two posts I’ve tried to explain why prevailing methods in visual neuroscience amount to a fake science even less demanding than astrology. A recent Science article by Stringer, Pachitariu, Steinmetz, Reddy, Carandini & Harris (2019) seems a perfect example of such methods. The article is titled “Spontaneous behaviors drive multidimensional, brainwide activity.”

Stringer et al (2019) wave some objects around in the lab while recording from around 10,000 neurons, then mine their data for coincidences between (their partial descriptions of) those external events and the electrical activity they have recorded.

As is well-known, even if those external events had been random numbers from a random number generator, correlations would be found.

In other words, there’s no necessary, rational link between Stringer et al’s experimental conditions and the “data” they collect. The same methodological principle could be used to support any thesis whatsoever, e.g. to identify supposed psychics in our midst.

Big data doesn’t help, either; it just makes things worse, as Calude and Longo (2016) recently showed in a paper titled The Deluge of Spurious Correlations in Big Data.

The hallmark of scientific practice is, of course, an investigator’s ability to show a tight, necessary link between theory and experimental conditions, and experimental conditions and results. Again, that crucial connection here is completely lacking. Their method, in other words, does not allow the authors to distinguish between chance and necessity.

Relatedly: As Gary Smith explains in The AI Delusion, the principal component analysis (PCA) technique used by Stringer et al is a tool for data reduction whose output – the “components” – need have no predictive value:

“A goal of summarizing data is very different from a goal of using the data to make predictions….the principal components are chosen based on the statistical relationships [in the sample] among the explanatory variables, with no consideration whatsoever of what these components will be used to predict. For example, a person’s birth month or favorite candy might end up being included among the principal components used to predict whether someone will be involved in a car accident. Moreover, if the principal components are based on a data-mining exploration of hundreds or thousands of variables, it is virtually certain that the components will include nonsense. Once again, Big Data is the problem, and principal components is not the solution.”

To avoid confusion, it should be noted that Smith is using the word “predict” in the normal, forward-looking sense, not in the neuroscience newspeak, post hoc manner of Stringer et al. (2019) (see below).
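Smith’s point is easy to demonstrate with a toy example (the data below are invented and have nothing to do with Stringer et al’s recordings): if the variable that actually predicts the outcome happens to have low variance, PCA’s leading components – chosen purely for variance – will be dominated by uninformative variables.

```python
# Toy illustration of Smith's point (invented data, not Stringer et al's):
# PCA picks components by variance, with no regard for what we want to predict.
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Ten high-variance nuisance variables plus one low-variance variable that
# actually determines the outcome.
nuisance = rng.standard_normal((n, 10)) * 10.0
useful = rng.standard_normal((n, 1)) * 0.1
X = np.hstack([nuisance, useful])
y = useful[:, 0] * 5.0 + rng.standard_normal(n) * 0.01

# Plain PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]            # scores on the first principal component

print("correlation of PC1 with the outcome:",
      round(float(np.corrcoef(pc1, y)[0, 1]), 3))
print("correlation of the low-variance variable with the outcome:",
      round(float(np.corrcoef(X[:, 10], y)[0, 1]), 3))
# PC1 captures the most variance yet says almost nothing about y; the
# informative variable is relegated to the trailing components.
```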

The “thousands of variables” here correspond to the 10,000-plus putative neurons being recorded from. They constitute only a small subset of a highly integrated system involving billions or trillions of synapses. The idea that meaningful inferences about such a complex system – whose basic functional principles are as yet unknown – may be drawn via random correlation-fishing beggars belief.

Correlation-fishing is also, naturally, the basis of the literature Stringer et al inappropriately cite.

They state, for example, that “The firing of sensory cortical [] neurons correlates with behavioral variables such as locomotion…,” citing DiPoppa et al (2018). But the claims of DiPoppa et al were arrived at via straightforward p-hacking.

The discovery of such correlations in a sample of data is, again, no basis for making causal neuroscientific claims (as pointed out recently by Mehler and Kording (2018)), due to the obvious problem of massive confounding. (One of Mehler and Kording’s main points was the impropriety of employing causal language – like the term “drive” used in the title of the present paper – to describe correlation-fished neuron-stimulus associations as though they implied a causal relationship). And such associations are known not to replicate.

Stringer et al also tell us that:

“[N]eurons’ responses to classical grating stimuli revealed robust orientation tuning as expected in visual cortex (fig. S1).”

To someone who has studied this literature closely, this statement reads like a lie. Claims of orientation-tuning have always been correlation-fished, exactly in the way we could identify psychics based on a series of lucky guesses of the results of dice rolls. If we go to Stringer et al’s figure S1, the situation becomes quite clear:

“Orientation tuning curves of the 400 most tuned neurons in each experiment (as assessed by orientation selectivity index)…” As in DiPoppa et al, neurons that happen to be firing at highish rates (according to some arbitrary criterion) coincidentally with the presentation of the “stimulus” are defined as tuned, and their firing is causally attributed to the “stimulus.” Practitioners of such methods seem to be totally unaware of the massive confounds involved.
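The selection step alone is enough to manufacture “tuning.” Here is a sketch assuming purely random, orientation-independent firing (made-up numbers, no connection to the actual dataset): rank thousands of noise-driven units by a simple selectivity index, keep the top 400, align each on its “preferred” (i.e. luckiest) orientation, and average. A tidy-looking tuning curve appears.

```python
# Sketch: post hoc selection manufactures "tuning" even from pure noise
# (made-up numbers, no connection to any actual dataset).
import numpy as np

rng = np.random.default_rng(3)
n_neurons = 10000
n_orientations = 12

# Random, orientation-independent "responses" for every unit.
responses = rng.poisson(lam=5.0, size=(n_neurons, n_orientations)).astype(float)

# A simple selectivity index: (max - min) / (max + min).
osi = (responses.max(axis=1) - responses.min(axis=1)) / \
      (responses.max(axis=1) + responses.min(axis=1))

top = np.argsort(osi)[-400:]   # the "400 most tuned" units

# Align each selected unit on its peak ("preferred") orientation and average.
aligned = np.array([np.roll(responses[i], -int(np.argmax(responses[i])))
                    for i in top])
print("average 'tuning curve' of the selected units:")
print(np.round(aligned.mean(axis=0), 2))
# The peak at position 0 is an artifact of selecting and aligning on the
# maximum, not evidence of any orientation preference.
```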

Finally, I have to note the reference to “classical grating stimuli.” The only meaning of “classical” here is to indicate stimuli that have been used continuously for at least fifty years, so that the correlation-fishing nature of the neuron-stimulus correlations will not be obvious. The method could just as (in)validly be used to identify “kitty-tuned” neurons. Even more plausibly, perhaps, given the utter absurdity of the rationale underlying the use of gratings.

Why It’s Easier To Be A Neuroscientist Today Than An Astrologer


Believe it or not, by today’s standards, the demands made on astrologers are more stringent than those made on “neuroscientists.”

Sure, neuroscientists today use lots of high-tech equipment and fussy, complicated techniques; but they’ve arranged things so that their studies will always (seem to) turn out the way they want; so that their experiments can never prove their most basic assumptions wrong, even if they are.

If you’re an astrologer, you believe that people’s personality traits are determined by the stars. If you’re a Gemini, you’re a certain way, a Taurus, another, and so on. Lots of times, these “predictions” come true – no wonder Joe has a hot temper, he’s an Aries! But they can also be challenged; Shannon is hot-tempered, too, which is strange, because she was born under a milder star. Of course, astrologers can always resort to more detailed astrological analyses to rationalize apparent discrepancies; but at each step, their “predictions” may be falsified or challenged. The lack of reliability of astrologers’ predictions is one reason we don’t let them publish in scholarly journals.

Now, imagine this:

All records of birth dates disappear, as well as memories of them. The astrologers step in; they can fix it! They ask everyone for a detailed bio, and perform analyses based on their astrological assumptions about the connection between the stars and personality. Approximate dates of birth are then assigned based on the results of the test; if the test says you’re a Pisces, then we’ll presume you were born in February or March, and so on. Note the difference between this scenario and the previous one. In the previous case, astrologers’ assumptions could be shown to be wrong, based on failure to make accurate predictions. In this one, the assumptions are taken as true a priori, and analyses simply lie on top. The assumption, in other words, that having a particular personality/behavior is caused by a particular alignment of the stars at a certain moment (of your birth) is used to label/define you as an Aquarian, etc. What if your future behavior doesn’t align with the label assigned? Well, in that case the astrologers, as mentioned above, are allowed to keep the label, but argue that the discrepancy is due to other, complicating factors.

As I discussed in the previous post, neuroscientists do pretty much the same thing: They assume that a particular neuron’s (part of a network of billions or trillions of connections) “personality” falls into a small number of simple categories of “preference,” and that an instance of a coincidence in time between a neuron’s high activity and the presence of an exemplar of that “preferred” category licenses them to label the neuron (post hoc) as, e.g. an “orientation detector” with a particular “tuning curve.” The fact that such findings do not replicate (the coincidences don’t repeat) is treated as “variability” in neural activity due to complicating factors. The “orientation preference” assumption, in other words, is carved in stone, and violations are explained away.

That this technique may be used to support any assumptions, even the most untenable, is evidenced dramatically by the continued claims of the existence of “spatial filters.” In general, the small set of claimed “preferences” of “visual” neurons are historical artifacts dating back many decades. They survive because they are never challenged.

Going back to our astrologers: Let’s imagine that, having been given license to treat their assumptions about birth and stars as true, astrologers then decided to expand their research program. They could, for example, ask questions about the role of star sign in determining success in various professions. They could collect data on professional success, employing various parameters, and perform linear regressions to find whether it’s better to be a Virgo or a Libra if you want to be a neuroscientist. Or they could dig deeper to see whether there is an interaction between the sign of a student and that of their PI. Naturally, they would couch their results in probability terms, employing Bayesian “default priors” to fit in with the current zeitgeist.

Note, again, the astrologers would be taking no risks here; again, their underlying assumptions are not on the line. They aren’t required to test them by making any predictions about the results of their investigation; they simply describe certain arbitrary parameters of their sample, with whatever mathematical techniques and assumptions they choose to assess them. To data-mine/p-hack their sample, in other words. This is what neuroscientists are doing when they collect “data” and then mine it for correlations with “behaviors” on certain “tasks,” etc.

Will such correlations found post hoc apply in general? As a rule, they don’t.

This is the case with the analogous practices in neuroscience, as many have acknowledged. They include Konrad Kording of UPenn, speaking at Waterloo Brain Day 2017. Addressing the issue of why generalization studies of correlation-fished results are “never, or almost never” performed, he replied:

“I’ll tell you why…All my generalization studies fail, almost all of them, both in psychophysics and in data analysis.”

If our astrologers were, as a result of their inability to achieve reliable results via correlation-fishing built on arbitrary assumptions, to engage in years of earnest discussions about their field experiencing “replication” or “reproducibility” crises, and found (and fund) a “Center for Reproducible Astrology,” and still continue on with business as usual…they would be acting like neuroscientists in good standing.

FURTHER READING FROM THE BLOG

Contemporary Neuroscience Depends on Outright p-hacking

Bondy, Haefner & Cumming Base Their Post Hoc Correlational Study on Correlations They Say (Correctly) Don’t Exist

It Is Bullshit: None of it Replicates

Neuroscience Newspeak, Or How to Publish Meaningless Facts

The Miracle of Spatial Filters

Why Correlational Studies Are Fake Science

Nature Neuroscience Starts Year Strong With Correlation-Fishing from Yale, Mount Sinai

 

Why “Correlational Studies” Are Fake Science

The brain as “neuroscientists’” crystal ball; they see what they want to see.

It seems that the dominant practice in “visual neuroscience” today is to take some “stimulus,” wave it in front of a human or animal subject, and record brain activity. Correlations in time between this activity (as defined by some arbitrary metric, e.g. averaging over arbitrary time intervals) and exposure to the “stimulus” are then described as “responses” to the “stimulus.”

The metrics are ad hoc and flexible. In Lau et al (2019), for example, we have this:

Responses that fell above the top 2.5%, or below the bottom 2.5%, of this distribution were considered significantly excitatory or inhibitory respectively.

Even the neurons to which the “data” are supposed to correspond are “putative”:

…neurons with waveforms that had an interval of 0.5 ms or less and a trough/peak amplitude ratio of >0.5 were designated as putative PV neurons.

Do you see what’s going on here? The expectation that there exist certain neurons that “respond” to whatever investigators imagine their “preference” to be is, in the circumstances, a sure-fire prediction. There will always be more or less electrical activity at any given brain location at any given moment. We could “link” highish or lowish points of activity with any external event we like – it’s a low-to-no-risk operation. The method doesn’t punish you for being wrong, for not understanding anything about your system. If you want to rack up a relatively higher number of coincidental correlations, you simply use a lax significance criterion, such as the p < .05 threshold (understandably) still very popular in the “neuroscience” literature.
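The arithmetic is worth spelling out: a criterion like the 2.5%/97.5% cutoff quoted from Lau et al above flags roughly 5% of units as “significant” no matter what the units are doing. A sketch with made-up numbers (not Lau et al’s data):

```python
# Sketch with made-up numbers (not Lau et al's data): a 2.5% / 97.5% cutoff
# applied against a null distribution flags ~5% of pure-noise units by design.
import numpy as np

rng = np.random.default_rng(4)
n_neurons = 2000

scores = rng.standard_normal(n_neurons)   # pure-noise "response" scores
null = rng.standard_normal(100000)        # noise-derived null distribution

lo, hi = np.percentile(null, [2.5, 97.5])
excitatory = int((scores > hi).sum())
inhibitory = int((scores < lo).sum())

print(f"'significantly excitatory': {excitatory}, "
      f"'significantly inhibitory': {inhibitory}, "
      f"total flagged: {100 * (excitatory + inhibitory) / n_neurons:.1f}% of pure noise")
```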

And voila – Nature paper, Science paper, Neuron paper, Current Biology paper, etc.

The procedure is exactly like trying to discover psychics among a group of people. First, you assume that some people are psychic. Then, you choose a decision criterion – on what basis will you classify certain people as psychic? You could, for example, ask them to guess at the number on a playing card without looking, and classify the ones that got a certain proportion correct as psychic. The idea that some people are psychic wouldn’t need to be true for us to be able to classify some of our subjects as such. It wouldn’t matter that the idea violates the known laws of physics.

Similarly, it doesn’t matter that the idea of neurons as “detectors” “signalling” things (the notion is implicitly homuncular) via highish firing rates violates basic logic and known facts; post hoc correlation-fishing doesn’t care about fact, doesn’t care about logic, doesn’t care about truth. It’s a racket.

 

Image credit: Screenshot taken from video by shaihulud.