Monday, January 20, 2014

Is Irreproducibility an Issue of Academic Negligence?

The Economist article “Unreliable Research: Trouble at the Lab” criticized the scientific community for the lack of reproducibility of published data, blaming it on poor statistical understanding, confirmation bias, incompetence, and fraud.  The question that arose during in-class discussion was whether this irreproducibility, be it the product of honest error or fraud, is the result of a culture that encourages poor conduct, starting at the undergraduate level.  There appears to be a problem with data manipulation in introductory-level laboratory courses; are these potential future doctors, scientists, engineers, and educators learning to obtain “good” data at the expense of honesty?
There are two contributors to this fiasco: the laboratory instructor and the students.  Given an experiment to conduct, usually one whose ideal results are easily predicted, most students’ goal is to get the “answer” and leave.  With the experimental results rarely (if ever) matching the ideal, some students take it upon themselves to “correct” the mistake by changing the data rather than risk being told to repeat the experiment; they are more concerned with the red number at the top corner of the page than with the numbers in the Excel spreadsheet cells.
In a perfect world, engines would be 100% efficient, chemical reactions would go to completion and produce pure products, and experiments would be perfectly reproducible, matching the preconceived results as closely as humanly possible.  This fantasy is perpetuated by the neat diagrams in textbooks that show experiments and their ideal results.  For the sake of conceptual understanding these diagrams work, but it is up to the laboratory instructor to teach students about the real-world factors that will ultimately cause their experiments to deviate from the ideal.
While obtaining the “correct” data is a fine feat, understanding experimental errors and deviations is far more useful, because it is far more realistic.  For those whose laboratory experience ends at that point, the latter lesson is never learned, and they risk carrying that misunderstanding through their academic and professional careers every time they face pressure and difficulty.  When all is said and done, dishonesty falls on the individual committing the fraud, but the educator bears a responsibility to educate; after all, that is what tuition is paying for.
For those of us fortunate enough to move forward in our academic endeavors and experience more realistic laboratory environments, we begin to understand that such data manipulation is unacceptable.  Now in my third year as a chemistry major, I have yet to see a reaction give 100% yield or a curve that is perfectly linear.  However, instead of being graded on numerical results (given that the procedure was understood and followed), the focus is on understanding the outcome of the experiment: if the result is reasonable, we explain why, and if not, we analyze the real-world factors that may have contributed to experimental error.
While data manipulation and misinformation in some academic laboratory courses is a problem that should not be overlooked, my more recent experiences lead me to believe that irreproducibility is an issue beyond the scope of these freshman classes.  The Economist article describes an initiative by Nature to introduce an 18-point checklist that aims to “ensure that all technical and statistical information that is crucial to an experiment’s reproducibility or that might introduce bias is published,” and with today’s advances in data storage this is now feasible.
Irreproducibility also hinders advancement and innovation.  Over the summer I worked in a laboratory where we attempted to build on a published method; however, we soon learned that reproducing it was a challenge of its own.  Specifics, such as temperatures, acidity, and time intervals, were omitted; the time spent reading between the lines to work out these missing details was time taken away from innovation and the possibility of achieving novel results.  Even after getting as close as we could to what we believed were the original conditions, we still did not achieve yields remotely close to those stated in the publication.
The Economist article points out that “performing an experiment always entails what sociologists call ‘tacit knowledge’ - craft skill and extemporisations that their possessors take for granted but can pass on only through example.”  In my experience this is indeed an issue, particularly for the parts of experiments that must be done by hand rather than run on an apparatus or machine.  Instructions as simple as “shake the solution aggressively” or “gently add pressure” are subjective and leave room for variation from individual to individual.
The flaws in scientific research that result in fraudulent data stem from what cognitive psychologist Scott Barry Kaufman, in his Scientific American blog post “From Evaluation to Inspiration,” calls “a culture saturated with evaluation.”  We have developed an “obsess[ion] with measuring talent, ability, and potential” by judging every individual on the outcomes of exams, whether college entrance, graduate entrance, or occupational.  We have drained the inspiration from our work and made everything a competition.  The pressure created by this fixation on evaluation drives scientists toward one goal: to produce and publish anything and everything just to stay in the public sphere, that is, to keep their jobs and positions.  Nature’s checklist is a push in the right direction, but a cultural shift away from this uninformative mode of evaluation will be the ultimate solution, and that shift starts at the academic level.  Consider Harvard Law School professor Lani Guinier, who said, “what the test (the LSAT) actually judges is quick strategic guessing with less than perfect information.”  That claim applies to any standardized test, and it resonates remarkably well with the state of scientific research.

http://blogs.scientificamerican.com/beautiful-minds/2014/01/08/from-evaluation-to-inspiration/
http://www.pbs.org/wgbh/pages/frontline/shows/sats/interviews/guinier.html
