Basics of Research or: How I Learned to Stop Just Reading and Start Questioning
Brian Wells
Why Talk About This?
The lessons herein are useful for QI and research
It is useful to know the criticisms of the literature, what makes a good study vs. a bad study, and that publications vary in quality
Thinking about the process will help spur action and ideas for future projects
What is Reliable Evidence?
“Truth in science can be defined as the working hypothesis best suited to open the way to the next better one.” – Konrad Lorenz, Austria
Reliable evidence generally comes from several studies by several teams of researchers; what matters is the totality of the evidence.
The PLoS Medicine Editors. Minimizing Mistakes and Embracing Uncertainty. PLoS Med. 2005 Aug; 2(8): e272.
Why Current Publication Practices May Distort Science
Scientific information is an economic commodity, and scientific journals are a medium for its dissemination and exchange
The system's goal is to transfer the commodity (knowledge) from its producers (scientists) to its consumers (other scientists, administrators, physicians, patients, and funding agencies)
How this system functions has major consequences. Idealists may be offended that research is compared to widgets, but realists will acknowledge that journals generate revenue; publications are critical in drug development, in marketing, and in attracting venture capital; and publishing defines successful scientific careers
Economic modelling of science may yield important insights
Neal S Young, John P. A Ioannidis, and Omar Al-Ubaydli. Why Current Publication Practices May Distort Science. PLoS Med. 2008 Oct; 5(10): e201.
Terms and Analogies
“There are lies, damned lies, and statistics” -- Disraeli
Manipulating Numbers
Examples
“Fact”: "If you look at the results of Obamacare, what you see is emergency room visits are up over 50 percent.“ – Carly Fiorina on Sunday, August 9th, 2015 in an interview on CNN's "State of the Union”Reality: Actually, surveys did not count the number of visits but rather the numbers of doctors saying visits are up. And indeed, over 50% of ER docs say they feel they’re seeing more patients. But this does not necessarily mean visits themselves are up over 50%, a number which was not counted.
Examples
“Fact”: "97 percent of the work that Planned Parenthood does is about mammograms and preventative health." - Martin O'Malley on Sunday, July 26th, 2015 in a New Hampshire interviewReality: 3% of Planned Parenthood’s activity are abortions, leaving 97% non-abortion. However, PP does not perform mammograms, and the 97% leaves out other services. So while this statistic sounds true, it actually leaves out a number of things such as STI/STD testing and treatment and adoption referrals.
But politicians always lie… Science is clean, right? Unfortunately, no.
Dr. John Darsee, the author of more than 100 publications, fabricated research for over 14 years before he was caught. Because he was a Harvard researcher, it was initially thought to be a simple mistake by a young researcher.
Dr. Darsee was able to slip fabricated research past editors using misleading statistics and “big names” on papers, such as Eugene Braunwald.
He evaded the “triple safety net” that guards against chicanery: peer review, the referee system (in which scientific journals send a manuscript out for review to judge whether it merits publication), and replication.
It was ''the extraordinary difficulty of detecting fabrication by a clever individual.'' – Eugene Braunwald
Biases
Examples of Decision-making, Belief, and Behavioral Biases
Ambiguity effect (avoiding options with missing information)
Backfire effect (disconfirming evidence strengthens belief)
Empathy gap (underestimating the influence or strength of feelings)
Focusing effect (placing too much importance on one aspect)
IKEA effect (placing disproportionately high value on something because you worked on it)
Ostrich effect (ignoring a situation)
Reactive devaluation (ignoring viewpoints because they originated from an adversary)
Semmelweis reflex (rejecting evidence that contradicts a paradigm)
Biases
Examples of Social Biases
Halo effect (traits “spill over”)
Worse-than-average effect (believing ourselves to be worse than others at tasks)
Projection bias (assuming that others share one’s thoughts or values)
Examples of Memory Error Biases
Bizarreness effect (bizarre material is remembered better than common material)
False memory (a form of misattribution where imagination is mistaken for a memory)
Illusory correlation (inaccurately remembering a relationship between two events)
So What Do We Do?
Understand the Process
What goes into research?
How is it analyzed?
How is it published?
How Is It Analyzed?
Statistics. But why use statistics?
Anecdotal evidence is unreliable!
“Why does the phone ring when I’m in the shower?”
“Why does it rain after I wash my car?”
“Why do patients have issues when Dr. X comes on service?”
“He/she is a black/white cloud because X happens when he/she is on.”
Statistics provides us with the way to tell the difference between chance and real effects
Basic Data Terms
Mean – arithmetic average = Σ(x) / n
Median – the “halfway” point of the sorted data
Mode – the most common value
Range – the overall difference between the highest and lowest values
Variance – the average squared difference from the mean
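These five terms can be computed directly with Python's standard statistics module; the sample data below is invented for illustration:

```python
import statistics

data = [2, 3, 3, 5, 7]

mean = statistics.mean(data)          # Σ(x) / n = 20 / 5 = 4
median = statistics.median(data)      # middle value of the sorted data = 3
mode = statistics.mode(data)          # most common value = 3
rng = max(data) - min(data)           # highest minus lowest = 5
variance = statistics.variance(data)  # sample variance: Σ(x - mean)² / (n - 1) = 16 / 4 = 4

print(mean, median, mode, rng, variance)
```

Note that statistics.variance divides by n − 1 (the sample variance); statistics.pvariance divides by n when the data is the whole population.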
Signal and Noise
Data is always “signal and noise”
The signal is what we’re trying to measure
The noise is the error in our measurement
It is extremely important to choose the right test and measurement. The wrong one may be more likely to fail to find a difference when there is one, or to find a difference where none exists.
Distributions
The “Normal Curve”
Data Can Have A Lot of Looks
Bimodal Distributions
Skewed Distributions
Uniform Distributions
Lots of Ways to Analyze
“I want to know if there’s a difference.”
One sample
  Parametric: One-sample t-test
  Non-parametric: Wilcoxon signed-rank test
More than one sample
  Nominal: Chi-squared test
  Ordinal or above: One independent variable – are the data repeated, independent, or mixed?
… and so on
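The named tests need their full null distributions, but the idea behind a non-parametric one-sample test can be sketched with the sign test, a simpler cousin of the Wilcoxon signed-rank test that uses only the signs of the differences. Everything here, including the example data, is an illustrative sketch rather than the slide's own worked example:

```python
from math import comb

def sign_test(diffs):
    """Two-sided sign test: under the null hypothesis, each non-zero
    difference is equally likely to be positive or negative (binomial, p = 0.5)."""
    nonzero = [d for d in diffs if d != 0]
    n = len(nonzero)
    k = sum(d > 0 for d in nonzero)
    extreme = max(k, n - k)
    # P(X >= extreme) for X ~ Binomial(n, 0.5), doubled for the two-sided test
    tail = sum(comb(n, i) for i in range(extreme, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Invented before/after differences: 9 of 10 are positive
diffs = [1.2, 0.8, 2.1, 0.4, 1.9, 0.3, 1.1, 0.6, 1.4, -0.2]
print(sign_test(diffs))  # ≈ 0.021: unlikely if there were truly no difference
```

The Wilcoxon signed-rank test improves on this by also using the magnitudes (ranks) of the differences, which is why it appears in the decision tree above.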
Decision Table

                        Real effect                 No effect
Find difference         Correct decision            False positive (Type I)
Don’t find difference   False negative (Type II)    Correct decision
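The false-positive cell can be made concrete with a small simulation (all numbers invented): when there is truly no effect, a test run at α = 0.05 still “finds a difference” about 5% of the time. This sketch uses a simple two-sided z-test with known standard deviation:

```python
import random
from statistics import NormalDist, mean
from math import sqrt

random.seed(1)
alpha = 0.05
n, trials = 30, 2000
false_positives = 0

for _ in range(trials):
    # The null is true: every sample comes from a population with mean 0, sd 1
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = mean(sample) * sqrt(n)               # z-statistic (known sd = 1)
    p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value
    false_positives += p < alpha

print(false_positives / trials)  # close to 0.05 by construction
```

By the same logic, α is a dial: lowering it trades Type I errors for Type II errors, which is why the table has two distinct error cells.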
The Infamous P value
The P value is the probability of obtaining data at least as extreme as what was observed, assuming chance alone (the null hypothesis) is at work
Statistical significance is typically set at the α = 0.05 level, but this cutoff is entirely arbitrary. It could just as easily be α = 0.10, α = 0.01, or α = 0.000000001.
However, in nearly every research project, a less-than-5% probability that the observed result arose by chance alone is considered convincing
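What “by chance alone” means can be shown with a Monte Carlo sketch on an invented example: 60 heads in 100 flips of a supposedly fair coin. The p-value is estimated as the fraction of simulated fair-coin experiments that come out at least as extreme as the observed result:

```python
import random

random.seed(0)
observed_heads = 60   # invented observation: 60 heads in 100 flips
flips, sims = 100, 10_000

# Simulate the null hypothesis: a fair coin flipped 100 times, many times over
as_extreme = 0
for _ in range(sims):
    heads = sum(random.random() < 0.5 for _ in range(flips))
    as_extreme += heads >= observed_heads

p_value = as_extreme / sims
print(p_value)  # one-sided p ≈ 0.03, below the (arbitrary) α = 0.05 cutoff
```

The same data judged at α = 0.01 would not be “significant”, which is exactly the arbitrariness the slide is pointing at.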
Correlations
Positive Correlation (0 < R ≤ 1)
Negative Correlation (-1 ≤ R < 0)
No Correlation (R = 0)
Correlation Does Not Imply Causation!
The Importance of the Right Presentation (or How to Lie)
Exaggerated presentations of accurate data
[Two charts of the same data – percent of US income going to federal taxes – one drawn to exaggerate the change, one drawn to scale]
Conclusions
The quality of the project or publication depends on honest and accurate representation of data
Analysis with statistics is useful for finding associations likely not due to chance
Numbers are not absolute, and they can be easily manipulated
Always scrutinize data carefully, and draw your own conclusions.
Questions?