It has become apparent that there is something of an
epidemic going on in the fields of science and technology. It is an epidemic of
blatant indifference to the accuracy of statistical data. Scientists are making
errors that lead them to incorrect conclusions. Many of these conclusions are published
only to be retracted soon after. There are several causes of this widespread
problem, but the most obvious seems to be that the current culture of science puts
a premium on things other than statistical accuracy. This type of mindset pervades
the fields of science and technology. It is in so many of the classrooms
where the future scientists of the world begin to master their disciplines as
well as the labs where these same scientists will eventually perform
experiments for research. This disrespect for statistics is even apparent in the offices
that offer researchers grants to perform their work. That, at least
to me, is incredibly ironic. How is it possible that, in fields that draw conclusions
from numbers and percentages, the focus is not on making sure the acquired data
are correct? Who is to blame for this? Is it the scientists themselves? Their
educators? Their bosses? Could it be that the blame can be placed on all three?
The only thing that can be said for certain is that it will take some kind of
culture shift to cure this epidemic.
A case can be made that this kind of statistical neglect stems from habits
that scientists have been forming since they were college students. Even as an
undergraduate at a technology-oriented
school, I have seen numerous cases of this type of behavior. For example, in
several of the classes where students are expected to perform experiments,
the data is not necessarily as important as the report itself. I have seen students
essentially create data to fit into a range given by the professor regardless
of what their data actually is. Change a number here, move a decimal point
there, and voilà, you have desirable data. The motivation for these kinds of
actions varies. The most obvious is the mentality that GPA is, without a
doubt, the most important thing in college. Students with this kind of mindset will
do just about anything to make sure that the letter grade they receive for the class
is as high as possible. Students will change data in class to whatever the
teacher expects the experiment to yield in hope that this will bring them that
A. I have also seen data skewed out of pure laziness. For example, toward the
end of a three-hour class, students break up into small groups and work on an
assignment. The professor gives us the typical range our data should
fall into, and if a group’s data fits that range, they have completed the day’s
work and can leave. This fosters the “Screw it, I just want to go home”
attitude. A group changes a few numbers and they get rewarded with leaving the
class early. Now you may be thinking, “Okay. I see how that applies in the classroom,
but laziness in the workplace is never rewarded.” I would be lying if I said I
haven’t seen it in the workplace as well. About two weeks ago, I completed a
four-and-a-half-month co-op assignment at Avon Products, a company that produces a
myriad of beauty supplies for both men and women. During my time there, one
engineer in particular who I did work for would occasionally “sweep things
under the rug” to avoid failing a product. The other co-op students and I would
show him problems with projects he oversaw, issues that would typically be flagged
in work done for the other engineers, and he would brush them off or downplay
their severity. The only logical explanation that the other co-op students,
the full-time employees, and I could come up with was that he didn’t like
failing things because that meant they would have to be retested at a later
date and he would have to do more work. Just like the students that want to
leave class early, this engineer would alter data just so he did not have to
put more work on himself. His laziness was rewarded, in a sense. Is this a case
of him performing old habits developed in college? I obviously can’t say for
certain, but it is entirely possible. It is also not definite that students
who behave this way will continue down this path as professionals,
but these kinds of actions have to come from somewhere. They may be coming from their professors, which
is also something I have seen in my 2+ semesters at an engineering-focused
university.
The professors of the past and future scientists of
the world may be as much at fault as their students. By not making it a
priority to correct statistical errors and by pushing aside the importance of
effective data collection and analysis, they pass these ideologies on to their
students, many of whom eventually put them into practice. I have seen numerous
cases where students, after struggling with an experiment for whatever reason,
would simply be given a set of data points by their professor so they could
move on with what had been planned for the class. By doing this, the professor
implies that the data is not necessarily as important as the final result. When
most of these students enter the work force and begin submitting papers for
peer review, this type of behavior in the classroom brings about the mindset
that the data, which should really be proving their case, doesn’t actually
matter as long as they get the result that they want. This is the most logical
connection between my experiences and the Economist’s account of the statistical
disaster currently going on in science. According to the Economist article,
titled “Unreliable Research: Trouble at the Lab,” “28% of respondents [respondents
came from a study of 21 different surveys between 1987 and 2008] claimed to
know of colleagues who engaged in questionable research practices.” So almost
a third of researchers know of colleagues who treat the integrity of their data
as so unimportant that they will manipulate an experiment just to get a certain result. It is
certainly likely that these practices were developed while in college by
professors like the ones that I have had during my time here. This idea that
the conclusion is more important than the means by which that conclusion was
reached is a mentality that many professors may be indirectly passing on to
their students.
As the Economist explained in October 2013, there is a
disturbing pattern across scientific research. The quality of the data acquired
continues to decline, making the large influx of papers submitted for peer review
more and more unreliable. Even as an engineering undergraduate, I can see where
this lack of statistical focus comes from. Actions by both the students and
their professors lead me to believe the classroom in its entirety needs to
shift its focus if science researchers want to remain credible sources of
information. There is another explanation the Economist offers, but I won’t delve
into it at length because I can’t speak to it from personal experience. The Economist
implies that because the people who pay these researchers are so concerned
with the number of papers their scientists publish, the scientists have
little time for anything other than the conclusion being published. As
a result, the accuracy of the data and the care taken in acquiring it
tend to fall by the wayside. In the article, Brian Nosek, a psychologist at the
University of Virginia, is quoted as saying, “There is no cost to getting
things wrong. The cost is not getting them published.” His point is essentially
that researchers see time spent on the data as wasted. They are focused on
producing as many papers as physically possible,
and they can get away with this approach to research because as Dr. Nosek
states, there is no penalty for being incorrect. They simply retract their
paper and move on to the next one. This urgency to produce research in excess
comes from the competitive nature of the discipline. “Professional pressure,
competition and ambition push scientists to publish more quickly than would be
wise. A career structure which lays great stress on publishing copious papers
exacerbates all these problems.” The reason behind this extreme flow of papers
seeking approval is more of a need than a desire. The funding agencies that
give out grants base their decisions on how many papers a
researcher has published, not on how well each paper is done. Based on what I’ve seen from my own
experiences, both in the workplace as well as in the classroom, the areas of
science and technology need a significant reform to be able to consistently
submit statistically accurate papers. That reform must be wide-reaching, starting
in the classroom with tomorrow’s scientists and their professors, and extending
to working researchers and the funding agencies they depend on for grants.
Such a reform would end the epidemic of statistical
ineptitude that is currently sweeping across the fields of science and technology.