
Commit

2nd resub
jona-sassenhagen committed Jul 26, 2016
1 parent 890d4be commit 175efd2
Showing 4 changed files with 14 additions and 3 deletions.
7 changes: 7 additions & 0 deletions coverletter_2.md
@@ -0,0 +1,7 @@
# To the editors

We hereby submit for your consideration a second revision of our manuscript, "A common misapplication of statistical inference: nuisance control with null-hypothesis significance tests". We have received excellent and timely reviews, for which we are grateful and which have allowed us to further refine the manuscript into a very concise form. We have also provided a detailed response letter laying out how we have dealt with the reviewers' comments.

We wish to express our gratitude for the quality and speed of the review and editorial process, and we hope that *Brain & Language* will be able to help us share our work with the community.

Jona Sassenhagen
10 changes: 7 additions & 3 deletions response2.md
@@ -24,14 +24,18 @@ We have reformulated and streamlined this section. It is now reduced to the refe

# Reviewer 1 and 2 on the simulation

-Written originally because we agreed with Reviewer #2 that a quantification of the likely impact of the described procedures would be very helpful, both reviewers agree that the simulation -- as-is -- is not exposed optimally. Reviewer #1 questions the general benefit of including a simulation analysis. Reviewer #2 discusses how in its current form, it is inadequately presented.
+The simulation was originally included because we agreed with Reviewer #2 that a quantification of the likely impact of the described procedures would be very helpful. Both reviewers now agree that the simulation -- as presented in the previous version of the manuscript -- was not presented optimally. Reviewer #1 questions the general benefit of including a simulation analysis at all; Reviewer #2 notes that, in its current form, it is inadequately presented.

>Reviewer 1
>pgs 5-6, Section 3 - I have two concerns about this section. First, I'm not convinced that it adds very much to the story you are telling. Thus it may not be necessary at all. Second, if there is to be a section on simulation, then the simulation methods need to be described much more clearly. I don't know what is meant by the "measured size of the confounding factor" and how you are addressing the correlation. Then I'm not clear on what your simulation procedures is doing. Help!!
>Reviewer 2
>I do have an idea for an alternative, simpler way of conveying the results. It seems to me that the two most useful things to know from the simulation are (copy/pasted from my initial review): “What exactly is the statistical power to detect differences in confounding variables with different stimulus sample sizes and confounder effect sizes? And given this low degree of power, if one does rely on NHST for deciding whether to control for confounders, then what is the expected Type 1 error rate for rejecting the null of no difference on the focal/treatment variable when in fact the difference is entirely due to differences in the confounding variable?” So one idea to make the simulation results more clear and comprehensible is to – at least in the paper, although not necessarily on the app page – remove all of the other results and info, and instead only present the results for those two things as a function of the parameters varied in the simulation. If the authors really feel that all the additional info should be presented in the paper itself, then I wouldn’t fight them on it, just as long as those results can be clarified a bit.
-In response to this, we have decided to reduce the simulation aspect to its bare essentials, while providing a link to the full results, and the online application where the full simulation can be assessed.
+In response, we have reduced the simulation aspect of the manuscript to its bare essentials, while referring readers, via a web link, to the full results and to the online application where the full simulation can be inspected and manipulated. The app documents the simulation in detail, including the code, the precise simulation procedure, and a demonstration of outcomes. We also note that the app is more thorough and more clearly documented than the code that originally generated our simulation results; it addresses the question of interest more directly and can be used to explore a broad spectrum of configurations.

-We thank the reviewers for their further encouragements, comments and criticism, again helping us in sharpening the focus of the manuscript. We hope the reviewers agree that further downsizing the manuscript is the appropriate way of dealing with their concerns.
+We think that a detailed discussion of the simulation is beyond the scope of the manuscript and that an interactive online presentation is much better suited to it. We do, however, think that the online app, and a reference to it, can help readers understand the nature of the problem; for readers who are not interested, the reference takes up little space.

+Regarding a specific point, Reviewer #2 has indicated that it might be helpful to present the power of stimulus confound inference testing. We have considered this, but ultimately decided not to report it in the manuscript (although it can be readily simulated with the app), because it might confuse some readers about which hypothesis test's power is being estimated: it is the power of a test that, so we argue, bears only an illusory relation to what researchers are truly concerned with. It is not the power to detect a real confound. Our primary argument is not that the test has low power to detect real confounds, but that none of its error rates refer to the question the researcher actually cares about; they refer to inference about a population the researcher is *not* interested in (the untested stimuli). We think it is best to avoid this potential source of confusion. However, the rate of rejected stimulus sets for various stimulus set sizes and confound differences can be readily simulated with our app. Similarly, the rate of "false positives" on the contrast of interest that are due to undetected stimulus confounds (as also requested by Reviewer #2) can be rapidly visualized with the app.
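To make concrete which quantities the app lets readers explore, here is a minimal, hypothetical sketch of such a simulation in Python. It is not the app's code; the parameter values (stimulus set size, confound difference, subject count, alpha) and all names are placeholders chosen only for illustration.

```python
# Hypothetical sketch (not the code behind the app): under assumed settings,
# estimate (a) how often an NHST confound check rejects genuinely confounded
# stimulus sets, and (b) how often the focal test comes out "significant"
# even though its effect is carried entirely by an undetected confound.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim, n_items, n_subjects, d, alpha = 5000, 20, 30, 0.5, 0.05

check_rejects = np.zeros(n_sim, dtype=bool)   # confound check flags the sets
focal_rejects = np.zeros(n_sim, dtype=bool)   # focal test is "significant"

for i in range(n_sim):
    # Two stimulus sets whose confounding variable differs by d on average.
    conf_a = rng.normal(0.0, 1.0, n_items)
    conf_b = rng.normal(d, 1.0, n_items)

    # NHST-based "nuisance control": reject the stimulus sets if the
    # confound differs significantly between them.
    _, p_check = stats.ttest_ind(conf_a, conf_b)
    check_rejects[i] = p_check < alpha

    # Focal comparison in which the condition effect is *entirely* due to the
    # confound: each subject's mean response tracks the set's mean confound.
    resp_a = conf_a.mean() + rng.normal(0.0, 1.0, n_subjects)
    resp_b = conf_b.mean() + rng.normal(0.0, 1.0, n_subjects)
    _, p_focal = stats.ttest_ind(resp_a, resp_b)
    focal_rejects[i] = p_focal < alpha

passed = ~check_rejects
print(f"stimulus sets rejected by the confound check: {check_rejects.mean():.2f}")
print(f"spurious focal effects among sets that passed the check: "
      f"{focal_rejects[passed].mean():.2f}")
```

With settings like these, the first number illustrates the confound check's rejection rate and the second the rate of spurious focal effects among stimulus sets that passed the check.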

+We thank the reviewers for their further encouragement, comments, and criticism, which have again helped us sharpen the focus of the manuscript. We hope the reviewers agree that further downsizing the manuscript was the appropriate way of addressing their concerns, as it more clearly highlights the – uncontroversial – main questions.
Binary file modified response2.pdf
Binary file modified statement_of_significance.pdf
