After having been told by PubPeer that I “do a very good job” of following their guidelines, I was grey-listed on the basis of new, “evolving” guidelines which have yet to appear on their site. It seems to me they are using a very heavy hand, at odds with the free discussion implied by their stated role as an “online journal club.” Below is a comment that was just rejected, on an article by Kim, Rouault, Druckmann & Jayaraman (2017, “Ring attractor dynamics in the Drosophila central brain”) in Science. (A slightly edited version of the section in italics was submitted independently, and was also rejected.)
This paper came up in a Twitter discussion with a PubPeer co-founder, so I had another look at both my [original] comment and the paper.
The issue seems to be chiefly my objection to the use of arbitrary “priors,” and specifically whether it would have made a difference to the outcome.
Shouldn’t the choice of values that we plug into equations have an effect on outcomes? I would think it would.
Then again, given that the conclusions rest on post hoc correlation-fishing with many researcher degrees of freedom, the researchers could perhaps have organized their model to achieve what they would describe as “consistency” even across a wider range of “prior” values. That wouldn’t make things any better.
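To be clear about why the choice of “prior” should matter in the first place: when data are limited, the prior can shift the estimate substantially. Here is a minimal sketch – with entirely hypothetical numbers, and no connection to this paper’s actual model – using the standard Beta-binomial update:

```python
# Hypothetical illustration (not the paper's model): estimating a rate
# from 7 successes in 10 trials under a Beta(a, b) prior.
# The posterior mean is (a + successes) / (a + b + trials).

def posterior_mean(a, b, successes=7, trials=10):
    """Posterior mean of a Beta(a, b) prior updated with binomial data."""
    return (a + successes) / (a + b + trials)

flat = posterior_mean(1, 1)        # uniform prior
skeptical = posterior_mean(2, 18)  # prior concentrated near 0.1

print(round(flat, 3), round(skeptical, 3))  # prints "0.667 0.3"
```

With only ten observations, the two priors give estimates of 0.67 and 0.30 for the very same data – which is why “the choice of values we plug into equations” is not a detail one can wave away.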
On looking at it again, I noticed something about the way this paper is written, which in a way reinforces what I have just said. The problem becomes visible in the very first sentence of the abstract:
“Ring attractor dynamics are a class of recurrent networks hypothesized to underlie the representation of heading direction.”
Reading this sentence, do we conclude that “ring attractor dynamics” are, in general, a real phenomenon, or a hypothetical one?
“Ring attractor dynamics are…” predisposes us to treat the concept as fact.
“Ring attractor dynamics are hypothesized…” reverses that – they are hypothetical.
“Ring attractor dynamics are hypothesized to underlie the representation of heading direction” leaves us thinking that “ring attractor dynamics” are a real thing whose role in “representing heading direction” is the thing that’s in question.
Reading further, the answer seems to be that “ring attractor dynamics” are, in fact, and in general, purely hypothetical constructs, as indicated by the two sentences quoted below:
“Ring attractor dynamics have long been invoked in theoretical work; our study provides physiological evidence of their existence and functional architecture.”
“Theoretically, this can be accomplished by ring attractor networks… However, whether the brain uses these hypothesized networks is still unknown.”
One wishes that authors of scientific papers would strive for greater clarity in their writing, to avoid confusion of the type produced here.
The title also seems inappropriately confident, treating “ring attractor dynamics” as an established principle. But a single experiment – let alone one based on post hoc modelling and long-distance interpretation via many intermediaries of unknown credibility – isn’t sufficient grounds for such a major claim.
The claims regarding “heading direction representation” also seem problematic. We are told, first, that there is a correlation between “heading direction” and “bump”-like neuron activity in “E-PG neurons” (a “population of neurons” – but we’re not told how “population” is defined here).
How are the correlations evaluated? As far as I can see, this information is not in the body of the text. In the Supplementary material, under “Correlation analysis,” the sum total of the information provided is:
“‘Unwrapped’ time series were first computed as a cumulative sum of all angular displacements (Fig. 1K). Pearson’s correlation coefficients were then computed between two entire ‘unwrapped’ time series.”
Am I missing something, or are we in the dark as to the significance criterion used here? Should this information have been provided?
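The significance question is not idle here. A cumulative sum of displacements behaves like a random walk, and Pearson correlations between two independent random walks are notoriously inflated – “spurious correlation” in the statistical sense. Here is a small self-contained simulation of my own (assuming nothing about the authors’ actual data) showing how large the correlation between two entirely unrelated “unwrapped” series tends to be:

```python
# My own illustration, not the authors' data: Pearson correlation between
# two INDEPENDENT cumulative-sum ("unwrapped") series is typically far
# from zero, which is why the significance criterion matters.
import random
import statistics

def pearson(x, y):
    """Plain Pearson correlation coefficient of two equal-length series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

random.seed(0)
rs = []
for _ in range(200):
    # two independent sequences of "angular displacements"
    dx = [random.gauss(0, 1) for _ in range(500)]
    dy = [random.gauss(0, 1) for _ in range(500)]
    # "unwrap" each as a cumulative sum, as in the quoted methods section
    x, y, cx, cy = [], [], 0.0, 0.0
    for a, b in zip(dx, dy):
        cx += a
        cy += b
        x.append(cx)
        y.append(cy)
    rs.append(abs(pearson(x, y)))

# Median |r| across runs is typically well above zero despite independence.
print(round(statistics.median(rs), 2))
```

If unrelated random walks routinely produce sizeable correlations, then a raw Pearson coefficient between two “unwrapped” time series, reported without any significance criterion, tells us very little on its own.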
Treating correlations as causal in the way Kim et al. do seems to me to open them up to the criticism levelled at much of neuroscience by Mehler & Kording (2018), who point out that many neuroscience papers inappropriately describe highly confounded correlations between an experimental condition and some neural activity as causal.
This is all the more true given that the term “heading direction” is rather vague.
Sure, we can define it in the context of a specific experimental setup, but even there it is an abstraction: it can only refer to the relationship between the fly’s head and the structure of the setting as perceived by the fly. In the real world of a moving fly, this setting is constantly changing. What are supposed to be the constants in relation to which this “heading direction” is defined? The concept “location of high firing activity within a set of neurons” is many orders of magnitude simpler than “location and movement in a real, changing environment.” How can the one represent the other? In other words, is the authors’ conceptual or verbal description (a) valid and/or (b) concrete enough to support informative experimentation?
Data can be very accommodating when hypotheses are vague and analysis is permissive. To return to my original point about how a vague hypothesis and a permissive analysis can enable claims of “consistency” with a favored view, I offer this from the paper’s concluding paragraph:
“We found that the effective network connectivity profile was consistent with ring attractor models characterized by narrow local excitation and flat long-range inhibition.”
The “narrow local excitation and flat long-range inhibition” features were not a prediction of the “ring attractor” view; they are the product of trying to make the favored model fit the available data – an effort built, again, on countless ad hoc assumptions and correlations of uncertain strength. There is no real evidence that any of this is true.