After having posted hundreds of comments on vision science and visual neuroscience without a problem, I was suddenly grey-listed by PubPeer without explanation or apparent reason. Requests for clarification of the moderation criteria go unanswered. It’s now a virtual sure thing that my comments won’t post. Below is an ongoing list of comments as they’re rejected, with the most recent additions first.
Update: PubPeer has added to its FAQs, without revising the earlier ones, which they flatly contradict. For example, “PP does not review comments scientifically (see above), so factual comments conforming to our guidelines may still be wrong, misguided or unconvincing. For this reason we insist that readers… make up their own minds about comment content” vs. new stipulations giving moderators license to bar comments deemed “misguided,” “erroneous,” “unclear,” “potentially malicious,” or “disinformation,” without explanation, and admitting decisions may be “arbitrary” due to lack of moderator expertise.
Update 2: Two days after contradiction was pointed out on Twitter, PubPeer removed its open commenting stipulations. In practice, they now reject even publicly verifiable and relevant information (see below) without specifying what their objection is.
Geleris et al (2020) New England Journal of Medicine
“We conducted a secondary analysis that used propensity-score matching…”
In a Twitter discussion on this article, Judea Pearl offered the following comment:
“The bias reducing power of PS matching is equal precisely to the bias reducing power of adjusting for the covariates that PS deploys. It is proven here https://ucla.in/2NnfGPQ#page=18”
According to M.M. Garrido commenting in JAMA (2016):
“covariate adjustment is not considered a best practice in propensity score methods. Researchers should interpret results of analyses done in this manner with extreme caution.”
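Pearl’s claim can be illustrated with a toy simulation: in data where treatment assignment depends on a single confounder, matching on the propensity score and adjusting for the covariate recover the same effect, while the naive comparison is biased. This is a sketch with invented data, not the Geleris et al analysis; to keep the code short, the true propensity score is used for matching rather than an estimated one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
tau = 2.0                                     # true treatment effect
x = rng.normal(size=n)                        # confounder
ps = 1 / (1 + np.exp(-1.5 * x))               # propensity score (known by construction)
t = rng.binomial(1, ps)                       # treatment depends on x
y = tau * t + 3.0 * x + rng.normal(size=n)    # outcome depends on t and x

# Naive group comparison is confounded by x
naive = y[t == 1].mean() - y[t == 0].mean()

# 1) Adjust for the covariate directly (OLS on [1, t, x])
X = np.column_stack([np.ones(n), t, x])
adj = np.linalg.lstsq(X, y, rcond=None)[0][1]

# 2) Match each treated unit to the nearest control on the propensity score
treated = np.where(t == 1)[0]
control = np.where(t == 0)[0]
order = control[np.argsort(ps[control])]
idx = np.searchsorted(ps[order], ps[treated]).clip(1, len(order) - 1)
left, right = order[idx - 1], order[idx]
nn = np.where(np.abs(ps[left] - ps[treated]) <= np.abs(ps[right] - ps[treated]),
              left, right)
matched = y[treated].mean() - y[nn].mean()

print(f"naive {naive:.2f}, covariate-adjusted {adj:.2f}, PS-matched {matched:.2f}")
```

Both adjustment strategies land near the true effect of 2.0 while the naive estimate does not, which is Pearl’s point: the matching buys nothing beyond what adjusting for the same covariates buys.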
Attempted comment on PubPeer “Bug Report” thread, May 8, 2020
Please be aware of the “current” PubPeer guidelines as expressed in the FAQ section “How to write a high-quality comment? Specificity, analysis, support and context.” They are pretty counter-intuitive and seem to afford moderators a latitude at odds with the website’s original purpose. The last one, for example, reads:
“Conspiracy theories and disinformation will be blocked, including links to external sources containing such content.”
What is missing is any explanation of what criteria the moderators are using to evaluate what content is to be classed as a conspiracy theory/disinformation.
Guest & Martin (2020)/PsyArXiv
“Computational modeling is able to demonstrate how one cannot always trust one’s gut. To start, one must create: a) a verbal description, a conceptual analysis, and/or a theory; b) a formal(isable) description, i.e., a specification using mathematics, pseudocode, flow-charts, etc.; and c) an executable implementation written in programming code. This process is the cornerstone of computational modeling and by extension of modern scientific thought, enabling us to refine our gut instincts through experience. Experience is seeing our ideas being executed by a computer, giving us the chance to “debug” scientific thinking in a very direct way. If we do not make explicit our thinking through formal modeling, and if we do not bother to execute, i.e., implement and run our specification, we can have massive inconsistencies in our understanding of our own model(s).”
Guest and Martin’s description of the scientific method, in which they equate it with “computational modelling” ( “This process is the cornerstone of computational modeling and by extension of modern scientific thought”) is odd. Traditionally (that is, the way science has worked thus far), when we refer to “experience,” we are referring to experiment. Feynman, for example, describes the scientific method in these terms:
“First, we guess. Then we compute the consequences of our guess, to see what it would imply; then we compare those computation results to nature, or experiment or experience…”
In other words, the “experience” that allows us to “debug” our scientific thinking (our theories) in a direct way has always been, and can only be, experiment, not “seeing our ideas executed on a computer.” No novel scientific finding could be made this way, unless our computer somehow contained the world and all its principles, known and unknown.
Because it does not, modellers are obliged to make ad hoc assumptions, often assumptions of convenience, i.e. assumptions that are amenable to their preferred method. Here, for example, Guest and Martin make the claim that humans base their choice on a proposed “pairwise decision rule…the [mathematical] model that everybody would have claimed to be running in their heads…”
Is their proposed model really the one that “everybody would have claimed to be running in their heads?” Or just an ad hoc assumption by the authors? My mental model would be much more visual, and wouldn’t involve actual geometric equations. As Gigerenzer has pointed out in his book, Rationality for Mortals, psychologists have tended to assume unconscious mental math computations where in fact people often employ rules of thumb (e.g. in catching a fly ball – can Guest and Martin model that!?). Even if the modeller’s assumptions were based on the best rationale possible given available info, we would still be trapped inside an untested theoretical bubble, bugs and all, until our assumptions had been tested by…experiment, not by seeing them executed on a computer.
To my knowledge, no novel (and reliable) scientific finding has ever depended on an “executable implementation written in programming code” for its achievement. I’d be interested if the authors could provide an example of such a finding.
Ultimately, all Guest and Martin have done is to show that math may be useful in correcting a (supposedly common) false assumption (common, at least, based on some responses to a tweet). No question math has always been useful in science, allowing better tests of predictions and thus “debugging” of theory. But again, these tests take place in the world, not in a computer. Also, their proposed method of addressing their “cartoon” example makes a relatively simple problem much more complicated than it needs to be (converting real-world situations and simple math problems into code). I wonder if there is any situation where computer models would actually make a problem easier rather than more convoluted and clunky.
Musall, Kaufmann, Juavinett, Gluf, Churchland (2019)/Nature Neuroscience
“We characterized movements using video and other sensors and measured neural activity using widefield and two-photon imaging. Cortex-wide activity was dominated by movements, especially uninstructed movements not required for the task. Some uninstructed movements were aligned to trial events. Accounting for them revealed that neurons with similar trial-averaged activity often reflected utterly different combinations of cognitive and movement variables.”
In the first bolded section above, the authors seem to be making two huge claims. First, that activity across the cortex of their subject animals was “dominated” – meaning, presumably, was responsible for, or caused – the movements observed; and second, that they are in a position to interpret that neural activity as subserving movement rather than mental events, emotions, perceptual events, etc.
Similarly, when they state, in the second bolded section, that activity reflected different combinations of cognitive and movement “variables,” one has to ask how they are in a position to parse neural activity in this way. Their method is based purely on observing behavior; they’re not in a position to observe animals thinking.
John Greenwood & Michael Parsons (2020)/PNAS
“The visible contours in these elements enabled the perception of motion with minimal ambiguity given their orientation variance (i.e. avoiding the aperture problem; ref 24 [Adelson & Movshon (1982)])”
I would ask the authors to clarify in what way A & M (1982) supports the claim of “minimal ambiguity” in their stimuli.
In the stimuli used by Adelson and Movshon (stripes in circles), ambiguity is maximal; a set of stripes seen through a round aperture will be seen as moving perpendicularly to their long edges regardless of the true direction of motion. Moreover, their stimuli also have “visible contours” – the long edges of the stripes, which appear incomplete, cut off by the edge of the aperture. Looking at G & P’s illustrations, it is clear that many of their stimuli similarly consist of figures with visible long contours that are perceptually incomplete, as if cut off by the edges of the aperture. In other words, their stimuli contain wonky stripes instead of simple, straight-edged stripes (and the perceptual behavior of the former may well be more complex than that of the latter). So, again, I’m not sure where the claim of “minimal ambiguity” is grounded.
George Sperling, Peng Sun, Dantian Liu, Ling Lin (2020) Psychological Review
[Attempt #1] In their “Theory of the perceived motion direction of equal-spatial-frequency-plaid-stimuli” the authors have forgotten to add one more important qualifier. The correct title should be “Theory of the perceived motion direction of equal-spatial-frequency-plaid-stimuli viewed through a circular aperture.” The authors are treating the perceptual effect of viewing this type of figure through a round aperture as the general case, when, as has been known at least since Wallach (1935), the shape of the aperture has a dispositive effect on the direction of perceived motion. The same set of bars, for example, that, seen through a round aperture, appear to be moving perpendicularly to their length will, when seen through a rectangular one, appear to be moving along the edge of the aperture (as in the barberpole illusion).
[Attempt #2] The authors need to qualify their claims, which are tailored to apply only to circular apertures. It is well-known that perceived motion direction varies substantially with aperture shape.
[Attempt #3] In the first sentence of their abstract, the authors state that: “At an early stage, 3 different systems independently extract visual motion information from visual inputs. At later stages, these systems combine their outputs.”
It sounds as though they’re relating matters of fact, or, at worst, matters of accepted, well-corroborated theory. But this is not the case. This seemingly solid base is actually a reference to an old conjecture, for which only one rather old reference is provided:
“Three motion systems have been proposed for human vision, (e.g. Lu & Sperling, 1995a).”
The term “proposed” constitutes a clear signal from the authors of the still highly conjectural status of the claim, while no further citations are offered vis-à-vis subsequent experimental corroboration or replication of the initial proposals and results. Clearly, the single experimental Lu and Sperling paper isn’t enough to establish such an ambitious claim. Thus, I would caution that the statements quoted above are misleading.
“There is intrinsic ambiguity in determining the motion direction of a one-dimensional stimulus, such as a sine-wave grating. Consider a snapshot of a sine-wave grating displayed on a piece of paper, and the paper is set into motion. Observing through a circular window, the motion is perceived as being perpendicular to the orientation of the grating no matter what arbitrary direction the piece of paper may be physically moving in. Indeed, all directions of motion of the paper that happen to have the same motion component perpendicular to the stripes of the grating produce precisely the same image inside the aperture…This is the “aperture problem.”
Sperling et al’s description of the “aperture problem” is incomplete and possibly misleading, as it refers only to a “circular window.” When the window is not circular, then what is perceived may well not be “perpendicular to the stripes of the grating,” but will vary widely depending on the shape of the aperture (Wallach, 1935).
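The physical invariance described in the quoted passage is easy to verify numerically: for a one-dimensional grating, any two velocities sharing the same component perpendicular to the stripes produce pixel-identical images inside the window. A minimal numpy sketch (the grating wavevector, window size, and velocities are arbitrary choices of mine):

```python
import numpy as np

# Grating image: I(r, t) = cos(k·r - (k·v)t); the wavevector k is perpendicular
# to the stripes, so k·v is the motion component perpendicular to them.
k = np.array([2.0, 0.0])                      # vertical stripes
xx, yy = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
aperture = xx**2 + yy**2 < 0.5**2             # circular window

def frame(v, t):
    """Image of the grating, drifting at velocity v, masked by the aperture."""
    phase = k[0] * xx + k[1] * yy - np.dot(k, v) * t
    return np.where(aperture, np.cos(phase), 0.0)

v1 = np.array([1.0, 0.0])   # pure horizontal motion
v2 = np.array([1.0, 5.0])   # same horizontal component, large vertical drift

# The two image sequences are pixel-identical inside the aperture:
for t in (0.0, 0.3, 0.7):
    assert np.allclose(frame(v1, t), frame(v2, t))
```

Note that the identity holds for the image everywhere, regardless of aperture shape; the substantive issue raised above is that the *perceived* direction nonetheless varies with the shape of the window (Wallach, 1935), which is what a circles-only treatment leaves out.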
[I would add that “equal-spatial-frequency-plaid-stimuli” have precisely zero special status in visual perception. They are a zombie historical artefact kept alive because it makes the math easy (as does limiting discussion to circular apertures). The idea that there are “plaid processing” systems is correspondingly artificial. “Theories” like these don’t generalize even a little bit to ordinary conditions, but they have been the norm for many decades now.]
“In his influential article, Barlow employs what Romain Brette (2019) calls the “coding metaphor” of neural function. Brette is highly critical of this metaphor, and offers cogent logical objections.”
[I believe Barlow’s paper is considered “seminal,” it’s still being cited, and Brette’s critique shows how its basic premise is nonsense. He wasn’t the first – Teller also criticized it. PubPeer is evidently protecting a “sacred beast” at the expense of contemporary theoretical critics.]
Weiss, Simoncelli & Adelson (2002) Nature Neuroscience
[This next comment is interesting because the moderator included a note explaining why the comment was censored. The moderator’s comment was: “claims an important assumption is invalidated, but doesn’t explain what the assumption is or why it is important for the analysis or what the analysis is.” I’ll leave it to readers to judge whether I “explain what the assumption is.” As with the Nakayama comment below, the substance is identical to comments I have made on a number of papers repeating the same erroneous “fact.” My question is, why is a PubPeer moderator treating a comment on the site like a reviewer of a letter to the editor (and without expertise in the particular topic, to boot)? The comment clearly isn’t frivolous; and it is explicitly not the role of PubPeer to micromanage content. I subsequently attempted to make the comment more explicit, in case this helped, but the moderator is determined that the same comment I posted for the Adelson & Movshon (1982) “classic” (blogged here) won’t be allowed to post on Weiss et al.]
“The integration process is essential because the initial local motion measurements are ambiguous. For example, in the vicinity of a contour, only the motion component perpendicular to the contour can be determined (a phenomenon referred to as the ‘aperture problem’)2,4–7.”
The quote above, in particular the bolded section, mirrors a common misunderstanding about aperture motion. Among the citations provided, two – 4 and 5 (Wuerger, S., Shapley, R. & Rubin, N. On the visually perceived direction of motion by Hans Wallach: 60 years later. Perception 25, 1317–1367 (1996); Wallach, H. Ueber visuell wahrgenommene Bewegungsrichtung. Psychol. Forsch. 20, 325–380 (1935)) – contradict it, in the sense that they clarify that it is not valid as a general statement, applying to circular apertures but not, for example, to oblong ones (as we may easily confirm in the case of the Barberpole Illusion). As such, it lacks the generality that the authors imply it to possess.
“When viewing a single drifting grating in isolation, subjects typically perceive it as translating in a direction normal to its contours (Fig.1b).”
Again, the authors fail to mention the shape-contingency of the perceived direction of motion, making what appears to be an invalid general statement. Their analysis and proposals seem intended to be general – they are offering “a model of visual motion perception.” Given that a fundamental assumption on which the analysis rests is based on a claim about aperture motion that is easily falsified (given differently-shaped apertures), the proposals offered here, whether or not they may be said to apply to the special case, will likely not apply to the general case.
[Another important thing to note here is that, as in the case of Adelson & Movshon (1982), Weiss et al adopt a false fact because it makes the math easier. If they were to attempt to address the true nature – as illustrated by known fact – of the phenomena they’re purporting to model, the tools – concepts and techniques – they currently have at their disposal would be next to useless. The false fact was inherited by the pro “bayesian” contingent for the same reasons it was adopted originally. All of “computational neuroscience” functions this way. As was made explicit by Carandini et al (2005), blogged here, choices of assumptions are made less based on their validity and more on whether they render the math “tractable.”]
Nakayama (1985) Vision Research
[The content of this comment is identical to that of several comments I have succeeded in posting on PubPeer. I have to wonder whether the status of the author made it too sensitive this time? The issue is very straightforward.]
“This is one of many papers which refer to a special case of aperture motion perception as though it were the general case:
[Nakayama says] “This is essentially equivalent to viewing a moving object through an aperture. Suppose an extended line moves through this aperture [see Fig. 10(A)]. The velocity in the aperture can be described by VL, the local velocity orthogonal to the orientation of the line.”
What Nakayama is referring to as the “local velocity” is the apparent velocity of a line moving within a circular aperture. The text doesn’t specify that the statement is restricted to circular apertures – on the contrary, the discussion is framed in terms of general principles of perceived motion. The distinction is important, since if we were talking about an oblong aperture, the “local motion” would not be “orthogonal to the orientation of the line,” and the equations and inferences offered would not hold. As explained by Wallach (1935), a paper Nakayama cites, the direction of local (i.e. perceived) motion is highly contingent on aperture shape.”
Chen et al (2019) Current Biology
This latest case is perhaps the most interesting and revealing, and I’ve written a separate blog post about my original post – which went up briefly – as well as the senior author’s extended emailed responses. I tried again with the comment below, which was censored:
“McNish et al (1997) Journal of Neuroscience: “Post-training lesions of the dorsal hippocampus attenuated contextual freezing, consistent with previous reports in the literature; however, these same lesions had no effect on fear-potentiated startle, suggesting preserved contextual fear. These results suggest that lesions of the hippocampus disrupt the freezing response but not contextual fear itself.”
Chen et al have not discussed the distinction being made above, in the cited article and elsewhere, but seem to be using freezing as an operational definition of contextual fear.”
I tried yet again, isolating a simple comment about the statistical tests used – a comment the senior author admitted via email was legitimate:
“Exploration of the context while off Dox increased eYFP-expressing (eYFP+) cells in both the dorsal and ventral DG relative to on-Dox controls (Figures 1G, 1H, 1J, and 1K). The following day, mice that explored the same context showed a significant increase in the number of overlapping eYFP+ (i.e., cells labeled by the 1st exposure) and c-Fos+ cells (i.e., cells labeled by the 2nd exposure) in the dorsal but not ventral DG (relative to chance overlap)…”
“How is “chance” ascertained here? The brain is highly-condition-sensitive, to both external and internal events, in ways we don’t understand. There’s really no possible “no context” condition; and there was no “explored a different context” control. When the mice are returned to their normal cages, they are also returning to familiar territory; when they are being moved from one place to another, these may also be familiar experiences. All of this affects the brain. Given our level of ignorance about the brain and the countless confounds, I don’t see what the authors could validly be using as their “chance” baseline.”
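For reference, the usual convention in engram-overlap studies – not stated in the quoted passage, so the numbers below are my own invented illustration – is to take “chance” as the overlap expected if the two labels were assigned to cells independently:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 10000
p_eyfp, p_cfos = 0.12, 0.09   # hypothetical labeling rates for the two exposures

# Independence ("chance") predicts overlap = p_eyfp * p_cfos * n_cells
expected_overlap = p_eyfp * p_cfos * n_cells

# Simulate independent labeling and count double-labeled cells
eyfp = rng.random(n_cells) < p_eyfp
cfos = rng.random(n_cells) < p_cfos
observed = np.sum(eyfp & cfos)

print(f"chance prediction {expected_overlap:.0f}, simulated overlap {observed}")
```

The objection above is precisely that this independence assumption is the thing lacking justification: if intervening experience also drives both labels, the “chance” baseline is not a neutral quantity.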
Jun, Ruff, Tokdar, Cohen & Groh (2019) bioRxiv
“In both experimental datasets, drifting gratings were presented at locations that overlapped with the receptive fields of the recorded V1 neurons.”
I can’t find where the authors describe how they determined the recorded neurons’ “receptive fields.”
Because this is a post hoc re-analysis of two older papers (Ruff and Cohen (2016) and Ruff, Alberts and Cohen (2016)), referenced in the Results section of this paper, I checked those, too, for information. But as in Jun et al, no methodological description and no citation is provided to clarify either technique or rationale.
Pack, Livingstone, Duffy & Born (2003) Neuron
“Humans and other primates are able to perceive the speed and direction of moving objects with great precision. This is remarkable in light of the fact that, at the earliest levels of the visual system, local measurements of velocity are often confounded with the object’s shape. The problem is based on simple geometry: if the middle of a moving contour is viewed through a small aperture, only the component of velocity perpendicular to the contour can be recovered Wallach 1935, Wuerger et al. 1996.”
The claim that, as a general principle, “only the component of velocity perpendicular to the contour can be recovered” when seen through a “small aperture” is false, and it is not made by Wallach (1935) (or the intro to the English translation in Wuerger et al (1996)), who emphasizes the importance of the shape of the aperture, not its size.
The claim is applicable to circular apertures; as we can see from figure 1, the aperture characteristics of small size and circular shape are confounded in this analysis. The relevant distinctions need to be made if the theoretical discussion is to be well-founded.
This paper is cited uncritically by the one commented on below, as well as over 100 others.
Hataji, Kuroshima & Fujita (2019) Scientific Reports
“Although the VA solution is dominant for humans in situations such as peripheral viewing or brief presentation of stimuli3, humans typically perceive plaid motion in the IOC direction (Adelson & Movshon, 1982).”
As I’ve discussed on PubPeer and in a blog post, Adelson & Movshon’s claims about the perceived direction of motion are factually incorrect. Specifically, they inappropriately describe an effect applying in a special case as a general principle. In not qualifying their “IOC direction of motion” claim accordingly, Hataji et al are repeating Adelson & Movshon’s false claim.
Oddly enough, in the text the authors refer to one of the many possible violations of the Adelson & Movshon claim – the barberpole illusion, which they refer to as though it were a distinct kind of phenomenon requiring a distinct theoretical solution. Specifically, the barberpole illusion (for which the aperture is described as elliptical) is supposed to combine foveal and peripheral vision, while the Adelson & Movshon solution (which was based on effects observed using circular apertures) is supposed to be based on foveal processes. This dichotomy, though, makes no sense, since the effects in question don’t depend on the size of the aperture, and since eye movements, in any case, constantly shift the areas of the field projecting to the small foveal area.
Eskiskand, Kameneva, Burkitt, Grayden, & Ibbotson (2019)/Frontiers in Neural Circuits
I’d like to call attention to the first sentence of the abstract of this paper:
“Based on stimulation with plaid patterns, neurons in the Middle Temporal (MT) area of primate visual cortex are divided into two types: pattern and component cells.”
The plaid patterns being referred to are orthogonal superpositions (plaids) of stimuli consisting of fuzzy bars (components).
There are a number of problems here, but the one that jumps out immediately is the idea that functional descriptions of neurons are being made “based on” the putative effects of a very narrow set of stimuli. No argument is provided (in this or any publication) to explain this choice. Plaid and sine wave patterns of light and dark are rarely found in nature; why then, should we find this type of neural specialization?
But, one might ask, what if experimenters – serendipitously, perhaps – have stumbled onto just this functional dichotomy? Aren’t claims like this based on the results of experiments examining the response of individual neurons to plaids and fuzzy bars?
It is a widely-acknowledged fact that experiments of this type do not replicate. So on this basis alone, claims like this one are orphaned of evidence.
The same applies to the statement below:
“When a bar or grating is moved through the receptive field (RF) of an MT neuron, it responds only to a restricted range of directions orthogonal to the grating’s orientation, making the cell direction selective.”
To the extent that their premises – however conventional – are unsupported by evidence, the authors’ computational analysis can be of little interest to vision science.
Van Zyl (2018) New Ideas in Psychology
It is my understanding from reading various texts that the chief distinction made by “Bayesians” between their practices and those of “frequentists” is that frequentists are referring to the actual probabilities or relative probabilities of events resulting from a test of a predictive hypothesis, while “Bayesians” are referring to “subjective” probabilities encompassing a much broader framework (extending beyond the experimental test).
While there have been attempts to carve out a distinction between “objective” and “subjective” “Bayesian analysis,” these efforts seem, so far, unconvincing. I quote below from the Wikipedia entry on “Bayesian probability”:
“Broadly speaking, there are two interpretations on Bayesian probability. For objectivists, interpreting probability as extension of logic, probability quantifies the reasonable expectation everyone (even a “robot”) sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by Cox’s theorem. For subjectivists, probability corresponds to a personal belief. Rationality and coherence allow for substantial variation within the constraints they pose; the constraints are justified by the Dutch book argument or by the decision theory and de Finetti’s theorem. The objective and subjective variants of Bayesian probability differ mainly in their interpretation and construction of the prior probability.”
Thus, we see that even the “objective” version of “Bayesianism” entails subjective cognitive evaluations – “reasonable expectations,” “rationality constraints,” “interpretation of the prior probability.”
Continuing in the Wikipedia entry, we see that, indeed, the “objective” category is the subject of much (subjective) disagreement:
“[Many theorists] have suggested several methods for constructing “objective” priors. (Unfortunately, it is not clear how to assess the relative “objectivity” of the priors proposed under these methods)…The quest for “the universal method for constructing priors” continues to attract statistical theorists.”
Given all the above – in particular the absence of a “universal method for constructing [objective] priors” – I would say that a strong case for “Bayesian probability,” as regards its use in objective science, is still incomplete.
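The practical force of that incompleteness is easy to demonstrate: different candidate “objective” priors give different answers from the same data. A minimal Beta-Binomial sketch (the data are invented for illustration):

```python
# Conjugate updating: a Beta(a, b) prior plus k successes in n trials
# yields a Beta(a + k, b + n - k) posterior, with mean (a + k) / (a + b + n).
k, n = 7, 10  # invented data: 7 successes in 10 trials

priors = {
    "uniform (Bayes-Laplace)":  (1.0, 1.0),
    "Jeffreys":                 (0.5, 0.5),
    "Haldane (improper limit)": (1e-6, 1e-6),
}
means = {name: (a + k) / (a + b + n) for name, (a, b) in priors.items()}
for name, m in means.items():
    print(f"{name}: posterior mean {m:.3f}")
```

All three priors have been proposed as “objective,” yet they disagree about the posterior mean (roughly 0.667, 0.682, and 0.700 here), and nothing in the data adjudicates between them.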
Wei Wei (2018) Annual Review of Vision Science
I am puzzled by the author’s unqualified use of the term “receptive field,” a term whose meaning appears to be in flux. The simple notion that there is a circumscribed part of the retina, and a corresponding circumscribed part of perceived space, events in which affect particular neurons, has long been debunked and labelled the “classical receptive field.” The understanding that the early concept was problematic came almost from the start, as indicated by this excerpt from Spillmann’s (2015/JOV) article, “Beyond the classical receptive field” (the two paragraphs are consecutive in the original text):
“Our perception relies on the interaction between proximal and distant points in visual space, requiring short- and long-range neural connections among neurons responding to different regions within the retinotopic map. Evidently, the classical center-surround RF can only accommodate short-range interactions; for long-range interactions, more powerful mechanisms are needed. Accordingly, the hitherto established local RF properties had to be extended to take distant global inputs into account.”
“The idea of an extended (called nonclassical or extraclassical today) RF was not new. Kuffler (1953, p. 45) already wrote, “… not only the areas from which responses can actually be set up by retinal illumination may be included in a definition of the receptive field but also all areas which show a functional connection, by an inhibitory or excitatory effect on a ganglion cell. This may well involve areas which are somewhat remote from a ganglion cell and by themselves do not set up discharges.””
The language of the above text is a bit misleading, implying as it does that the “hitherto established” local RF remained in place and merely needed elaboration. It should be clear that if the firing rate of a neuron “x” can be altered by the conditions of stimulation applying to the whole retina, then it is not possible to experimentally define a local area as in any way special based on the local conditions of stimulation. Or rather, it is artificial, privileging one set of global conditions over an infinite number of alternatives in producing a definition (which even then has not been proven to replicate). Even the verbal expansion of the term to include “non-classical receptive fields” does not rescue the concept from this problem.
The extreme confusion that the concept has produced as researchers have attempted to specify its elusive properties may be appreciated in reading Carandini et al’s (2005) “Do we know what the early visual system does?” The discussion includes reference to a black box “saving device.”
The concept of “direction-selectivity” is closely tied to the receptive field concept. It is difficult for me to understand how Wei can avoid addressing these theoretical problems.
Unqualified use of the term “receptive field” and the associated concept is quite common; I’ve highlighted it in several PubPeer comments, including on El Boustani et al (2018)/Science; on Beltramo & Scanziani (2019)/Science; and on Olshausen & Field (1996)/Nature.
A second stab at Wei Wei also failed:
As noted in the PubPeer comment on Olshausen & Field (1996) as well as other PubPeer comments linked therein, the concept of “receptive field” is currently missing a theoretical definition. Various researchers employ different de facto definitions of the term, strictly tied to the procedures they happen to use. The use of the term by Wei in this review, without qualification or clarification, renders the discussion incomplete.
Bakkour, Palombo, Zylberberg, Kang, Reid, Verfaellie, Shadlen & Shohamy (2019) eLife “The hippocampus supports deliberation during value-based decisions.”
(In addition to the comment submitted to PubPeer, I note here that the authors’ use of the term “supports” is an example of Neuroscience Newspeak.)
“Bakkour et al state:
We fit a one-dimensional drift diffusion model to the choice and RT on each decision. The model assumes that choice and RT are linked by a common mechanism of evidence accumulation, which stops when a decision variable reaches one of two bounds.
I’m confused about what the authors are claiming. Experiments are based on two-alternative forced choices and structured so that the data produced may be “modelled” based on the “drift diffusion model.” The fitting procedures allow modellers quite a bit of leeway in adjusting free parameters, and many quantitative choices are unconstrained by theory. The above-stated assumptions of the “drift diffusion model,” i.e. that “choice and RT are linked by a common mechanism of evidence accumulation,” are vague; no concrete description (even a vague one) in terms of neural function has ever been proposed. The drift diffusion model is an extension of “signal detection theory,” and the assumptions of this “theory” seem to lack face validity. SDT curves tend to be specific to particular experiments and not to generalize.
In short, under the circumstances I’m not sure that fitting the data acquired to the model under consideration is enough to license inferences about brain function.”
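To make concrete how much is left to the modeller, here is a minimal, hypothetical sketch of a one-dimensional drift diffusion simulation (not the authors’ fitting code; all parameter values are arbitrary illustrations). Drift rate, bound height, noise level, starting point and non-decision time are all free parameters that a fitting procedure must somehow fix.

```python
import random

def simulate_ddm(drift, bound, noise, dt=0.001, t_nd=0.3, start=0.0, seed=0):
    """Simulate one trial of a one-dimensional drift diffusion process.

    Evidence x accumulates in small time steps until it hits +bound or
    -bound; a non-decision time t_nd is then added to the reaction time.
    Every argument here is a free parameter in typical fits.
    """
    rng = random.Random(seed)
    x, t = start, 0.0
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0, noise * dt ** 0.5)
        t += dt
    return (1 if x > 0 else 0), t + t_nd

# Illustrative parameter values only; each run yields a (choice, RT) pair.
trials = [simulate_ddm(drift=0.8, bound=1.0, noise=1.0, seed=s) for s in range(200)]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

Note that nothing in this sketch says anything about neurons; the “mechanism of evidence accumulation” is a mathematical abstraction whose parameters are chosen to fit the behavioural data.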
Mueller & Weidemann (2008) Psychonomic Bulletin and Review
“SDT assumes that percepts are noisy.”
The term “percept” refers to what is consciously experienced, and generally to what is experienced visually. What we experience visually is not noisy, and does not necessitate any conscious decision-making on the viewer’s part. Conscious decision-making is, both implicitly and explicitly, what we are talking about here. Implicitly, because if the conscious perceptual experience (the percept), is noisy, then the viewer must be called on to make a conscious decision as to how to interpret it. Explicitly, because the associated experiments refer to participants’ decisions, usually binary forced-choice decisions often requiring guesses.
Given all of this, the statement that “SDT assumes percepts are noisy” is hard to interpret. The assumption seems to lack face validity, and no explanation or references, or proposals of how to test it, are offered. On what basis is the assumption considered valid?
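For readers unfamiliar with the formalism: the “noisy percept” assumption enters SDT as Gaussian internal-response distributions, and the standard sensitivity index d′ is computed directly from that assumption. A minimal sketch (the hit and false-alarm rates below are hypothetical, not data from any cited study):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Classic SDT sensitivity index.

    Assumes equal-variance Gaussian "internal responses" to signal and
    noise -- precisely the contested "noisy percept" assumption -- and
    converts hit and false-alarm rates to z-scores.
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates from a yes/no detection task.
sensitivity = d_prime(0.84, 0.16)
```

The Gaussian-noise assumption is baked into the inverse-CDF step; the computation offers no independent test of whether percepts are in fact noisy.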
Oberauer & Lewandowsky (2019) Psychonomic Bulletin and Review.
[It seems particularly unkind of PubPeer moderators to censor my reply to another commenter’s reply to my initial comment, which did post.]
The text you cite seems very confused and waffling to me. Which of Mayo’s arguments have you found compelling with respect to making the fruits of post hoc correlation-fishing reliable – something that, as mentioned, is virtually never the case? Has she proposed and tested a method of post hoc statistical inference that produces replicable outcomes?
Chen, Yeh & Tyler (2019) Journal of Vision
I’d like to discuss the authors’ dichotomization of the images used in their procedures into “target” and “noise.” It seems to me that this dichotomy is not a valid one, for fairly obvious reasons.
In this comment, I’ll be using the term “image” to refer to a surface reflecting light into the eyes of a seeing human.
By target, the authors are referring to certain more-or-less smoothly-changing bands of light and dark which they call “Gabors.” These patterns are typically perceived either as alternating bars or, if the transitions are fairly gradual, as partly-shaded cylinders. By “noise,” they mean a different type of pattern, consisting of variously-arranged dots.
Collections of dots tend to be grouped spontaneously by both human and non-human viewers to produce various perceived shapes or patterns, among the simplest examples being the “rows” or “columns” of dots typically used to demonstrate principles of perceptual organization. We may see the same tendency in the grouping of stars into constellations, and in the perception of objects in clouds.
Here, we have two types of patterns; one rather orderly both physically and perceptually, the other less so – but both requiring and eliciting perceptual/neural organizing processes in order for perceived structures, stable or unstable, to arise in consciousness.
When two patterns are superimposed in an image, the structures that emerge in consciousness are not necessarily the sum of those two. They may be – it depends on the combined pattern, and how it is interpreted by the perceptual organizing processes. The combination of the two may destroy the structural coherence of one or the other or both, and new structures may be perceived. A classic series of experiments by Gottschaldt demonstrated, many decades ago, that “targets” may not be perceived in certain contexts, even when observers expect and are actively watching for them.
It seems to me the above facts are relevant and should be addressed in the authors’ theoretical discussion.
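The physical combination of “target” and “noise” is simple pixel-wise addition; the point at issue is that perceived structure does not combine so simply. A rough sketch of such a stimulus (illustrative parameters only, not the authors’ actual stimuli):

```python
import math
import random

def gabor(x, y, freq=0.1, sigma=8.0, theta=0.0):
    """Luminance of a Gabor patch at (x, y): a sinusoidal grating
    under a circular Gaussian envelope. Parameters are illustrative."""
    xr = x * math.cos(theta) + y * math.sin(theta)
    env = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
    return env * math.cos(2 * math.pi * freq * xr)

def dot_noise(size, n_dots, seed=0):
    """Scatter bright dots at random positions on a dark field."""
    rng = random.Random(seed)
    img = [[0.0] * size for _ in range(size)]
    for _ in range(n_dots):
        img[rng.randrange(size)][rng.randrange(size)] = 1.0
    return img

size = 32
target = [[gabor(x - size // 2, y - size // 2) for x in range(size)]
          for y in range(size)]
noise = dot_noise(size, n_dots=100)

# The physical combination is additive; whether the *perceived* structure
# of the sum is the sum of the perceived structures is exactly what the
# comment above disputes.
combined = [[target[y][x] + noise[y][x] for x in range(size)]
            for y in range(size)]
```

Both component patterns elicit perceptual organization on their own; the summed array is a new pattern, and what is seen in it is an empirical question, not an arithmetic one.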