Angela
Actually, this thread should probably be titled "how to evaluate scientific papers," or "how to evaluate statistical analysis."
The major takeaway seems to be: skepticism is in order.
This is just one example; see: http://nymag.com/scienceofus/2017/0...or-problem.html?mid=twitter-share-scienceofus
Given the practices that the director unwittingly revealed, I personally now have no faith in any paper from that lab. Some of them may be legitimate, but without access to any of the runs, I can't evaluate them.
This provides some guidance on how to evaluate results:
"Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so—and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting."
https://link.springer.com/article/10.1007/s10654-016-0149-
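The abstract's point about "selecting analyses for presentation based on the P values they produce" can be made concrete with a small simulation. The sketch below (my own illustration, not from the paper) generates data under a true null hypothesis and compares reporting one pre-specified test against reporting the best of five equally valid-looking analyses; the function and variable names are hypothetical.

```python
import math
import random

def p_value(sample, mu0=0.0):
    """Two-sided one-sample z-test p-value, assuming a known sd of 1."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n)
    # Normal-CDF tail probability via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
trials, n, looks = 2000, 30, 5
honest = cherry = 0
for _ in range(trials):
    # The null hypothesis is TRUE: every dataset really is N(0, 1).
    samples = [[random.gauss(0, 1) for _ in range(n)] for _ in range(looks)]
    # Honest practice: report the single pre-specified analysis.
    if p_value(samples[0]) < 0.05:
        honest += 1
    # Selective practice: run five analyses, report the smallest p-value.
    if min(p_value(s) for s in samples) < 0.05:
        cherry += 1

print(f"pre-specified false-positive rate: {honest / trials:.3f}")  # near 0.05
print(f"best-of-five false-positive rate:  {cherry / trials:.3f}")  # far above 0.05
```

With five independent looks, the chance of at least one p < 0.05 under a true null is roughly 1 - 0.95^5, about 23 percent, which is exactly the inflation the abstract is warning about.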
See also the following for the data that scientists would have to retain and make available to anyone wishing to examine the findings critically:
"#SLAS2017 Bioinformatics at @bmsnews adopted the @PLOS editorial's '10 simple rules for reproducibility': http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003285"