[EAS] How Do We Know What We Know?
Peter J. Kindlmann
pjk at design.eng.yale.edu
Tue Oct 21 02:06:43 EDT 2008
Dear Colleagues -
Many years ago I read an intriguing book, "What
Engineers Know and How They Know It" by Walter
Vincenti (Johns Hopkins Press, 1990). Although it
dealt mainly with the history of aeronautical
engineering, interesting parallels and contrasts
with electrical engineering came readily to mind.
Before major computing power, the direct
connection between engineering understanding and
underlying physics, most notably for the
transistor, was a major distinction in the "How"
of electrical engineering compared to
aeronautical engineering. It was my first
exposure to the intellectual history of a
domain of engineering.
The total, or near total, absence of such
material in most science and engineering majors
deprives students of the important "process"
perspective on how data matures into
knowledge. Understanding this goes beyond
laboratory courses, themselves already threatened
as arduous and unpopular. It is not surprising
then that most students learn to recite
scientific knowledge as "facts from books," with
no understanding of their evolutionary
underpinnings, that rich history of how opinion
and data are distilled through reason into
evidence.
So I was delighted to find out about the
Exploratorium's new Evidence Project. I quote
from the latest issue of the Scout Report
<http://scout.wisc.edu/Reports/ScoutReport/2008/scout-081017.php>,
still one of my favorite resource identifiers:
>Exploratorium: How Do We Know What We Know? [Macromedia Flash Player]
>http://www.exploratorium.edu/evidence/index.html
>
>To some, science may seem neat and tidy. Of
>course, scientists know better, and taken as a
>whole, the process of doing science is often
>quite messy. This fascinating interactive
>website created by the Exploratorium museum in
>San Francisco takes on the task of observation
>and investigation into human origins. The
>interactive exhibit and feature contains five
>primary sections, including "Observing
>Behavior", "Collecting Clues", and "Finding
>Patterns". Each section begins with an
>introductory essay and a selection of video
>clips featuring interviews with various
>scientists discussing their research and work.
>The subjects covered here are quite broad and
>they include fossil analysis, the evolution of
>primate behavior, and using new technologies to
>learn more about humans from their teeth.
>Finally, visitors will want to listen to a few
>of their podcasts. It's worth noting that the
>site is also available in Spanish.
I wish more material in science and engineering
curricula were approached from this viewpoint. It
would provide lasting lessons in methodology that
can long outlive the latest hot technological
topic.
A reminder about a related aspect, that of
thoroughness in the face of competitive research
pressures, comes from the Science and Technology
section of a recent Economist, which reports on
research by John Ioannidis
<http://www.economist.com/science/PrinterFriendly.cfm?story_id=12376658>.
(Text follows, in case the URL has gone stale.)
One quote to stir your interest:
>Dr Ioannidis based his earlier argument about
>incorrect research partly on a study of 49
>papers in leading journals that had been cited
>by more than 1,000 other scientists. They were,
>in other words, well-regarded research. But he
>found that, within only a few years, almost a
>third of the papers had been refuted by other
>studies.
This too would be a valuable area for inclusion
in curricula, presumably at the graduate level.
--PJK
-------------------------------------------
Publish and be wrong
Oct 9th 2008
From The Economist print edition
One group of researchers thinks headline-grabbing
scientific reports are the most likely to turn
out to be wrong
In economic theory the winner's curse refers to
the idea that someone who places the winning bid
in an auction may have paid too much. Consider,
for example, bids to develop an oil field. Most
of the offers are likely to cluster around the
true value of the resource, so the highest bidder
probably paid too much.
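A toy simulation makes the point concrete. In
the Python sketch below (the field value, noise
level and bidder count are all assumptions
chosen for illustration, not from the article),
every bidder estimates the true value without
bias, yet the winning bid still overshoots on
average:

    # Toy winner's-curse simulation; all numbers are
    # illustrative assumptions.
    import random

    random.seed(1)
    TRUE_VALUE = 100.0   # what the oil field is really worth
    NOISE_SD = 15.0      # spread of the bidders' honest estimates
    N_BIDDERS = 10
    N_AUCTIONS = 10_000

    total_overshoot = 0.0
    for _ in range(N_AUCTIONS):
        # Each bid is an unbiased estimate of the true value...
        bids = [random.gauss(TRUE_VALUE, NOISE_SD)
                for _ in range(N_BIDDERS)]
        # ...but the auction selects the largest estimation error.
        total_overshoot += max(bids) - TRUE_VALUE

    print("average winner's overshoot:",
          round(total_overshoot / N_AUCTIONS, 1))

The more bidders there are, the larger the
expected overshoot, since the auction picks the
most optimistic of a larger set of errors.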
The same thing may be happening in scientific
publishing, according to a new analysis. With so
many scientific papers chasing so few pages in
the most prestigious journals, the winners could
be the ones most likely to oversell themselves:
to trumpet dramatic or important results that
later turn out to be false. This would produce a
distorted picture of scientific knowledge, with
less dramatic (but more accurate) results either
relegated to obscure journals or left unpublished.
In Public Library of Science (PLoS) Medicine, an
online journal, John Ioannidis, an epidemiologist
at the University of Ioannina School of Medicine
in Greece, and his colleagues suggest that a
variety of economic
conditions, such as oligopolies, artificial
scarcities and the winner's curse, may have
analogies in scientific publishing.
Dr Ioannidis made a splash three years ago by
arguing, quite convincingly, that most published
scientific research is wrong. Now, along with
Neal Young of the National Institutes of Health
in Maryland and Omar Al-Ubaydli, an economist at
George Mason University in Fairfax, Virginia, he
suggests why.
It starts with the nuts and bolts of scientific
publishing. Hundreds of thousands of scientific
researchers are hired, promoted and funded
according not only to how much work they produce,
but also to where it gets published. For many,
the ultimate accolade is to appear in a journal
like Nature or Science. Such publications boast
that they are very selective, turning down the
vast majority of papers that are submitted to
them.
Picking winners
The assumption is that, as a result, such
journals publish only the best scientific work.
But Dr Ioannidis and his colleagues argue that
the reputations of the journals are pumped up by
an artificial scarcity of the kind that keeps
diamonds expensive. And such a scarcity, they
suggest, can make it more likely that the leading
journals will publish dramatic research that may
ultimately turn out to be incorrect.
Dr Ioannidis based his earlier argument about
incorrect research partly on a study of 49 papers
in leading journals that had been cited by more
than 1,000 other scientists. They were, in other
words, well-regarded research. But he found that,
within only a few years, almost a third of the
papers had been refuted by other studies. For the
idea of the winner's curse to hold, papers
published in less-well-known journals should be
more reliable; but that has not yet been
established.
The group's more general argument is that
scientific research is so difficult (the sample
sizes must be big and the analysis rigorous) that
most research may end up being wrong. And the
"hotter" the field, the greater the competition
is and the more likely it is that published
research in top journals could be wrong.
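The arithmetic behind that claim can be sketched
with the standard pre-study-odds argument of the
kind Dr Ioannidis used in his 2005 paper: if
alpha is a test's false-positive rate, power its
chance of detecting a real effect, and R the
pre-study odds that a tested hypothesis is true,
then the share of claimed positive findings that
are actually true is power*R / (power*R + alpha).
A minimal Python sketch, with R, alpha and power
chosen purely for illustration:

    # Pre-study-odds sketch; the values of R, alpha
    # and power are illustrative assumptions.
    def ppv(R, alpha=0.05, power=0.8):
        """Share of claimed positive findings that are
        actually true, given pre-study odds R."""
        return (power * R) / (power * R + alpha)

    # "Hotter" fields test more speculative hypotheses,
    # so the pre-study odds R are lower.
    for R in (1.0, 0.25, 0.1):
        print(f"pre-study odds {R:4}: "
              f"{ppv(R):.0%} of positive results true")

Even with a well-powered test at the usual 5%
significance level, long-shot hypotheses (R =
0.1) yield positive results that are true only
about 62% of the time, before any bias is added.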
There also seems to be a bias towards publishing
positive results. For instance, a study earlier
this year found that among the studies submitted
to America's Food and Drug Administration about
the effectiveness of antidepressants, almost all
of those with positive results were published,
whereas very few of those with negative results
were. But negative results are potentially just
as informative as positive results, if not as
exciting.
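A toy simulation shows how this selection
distorts the record. In the sketch below (the
effect size, noise level and publication cutoff
are all illustrative assumptions), every study
honestly estimates a small true effect, but only
estimates that clear a "positive result" bar get
published, so the published average badly
overstates the truth:

    # Toy publication-bias simulation; all numbers
    # are illustrative assumptions.
    import random

    random.seed(2)
    TRUE_EFFECT = 0.2
    NOISE_SD = 1.0

    estimates = [random.gauss(TRUE_EFFECT, NOISE_SD)
                 for _ in range(100_000)]
    # Crude stand-in for "significant and positive".
    published = [e for e in estimates if e > 1.0]

    print("true effect:          ", TRUE_EFFECT)
    print("mean over all studies:",
          round(sum(estimates) / len(estimates), 2))
    print("mean of published:    ",
          round(sum(published) / len(published), 2))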
The researchers are not suggesting fraud, just
that the way scientific publishing works makes it
more likely that incorrect findings end up in
print. They suggest that, as the marginal cost of
publishing a lot more material is minimal on the
internet, all research that meets a certain
quality threshold should be published online.
Preference might even be given to studies that
show negative results or those with the highest
quality of study methods and interpretation,
regardless of the results.
It seems likely that the danger of a winner's
curse does exist in scientific publishing. Yet it
may also be that editors and referees are aware
of this risk, and succeed in counteracting it.
Even if they do not, with a world awash in new
science the prestigious journals provide an
informed filter. The question for Dr Ioannidis
is: now that his latest work has been accepted by
a journal, is that reason to doubt it?
Copyright © 2008 The Economist Newspaper and The Economist Group.