Articles
5 May 2009

Press Releases by Academic Medical Centers: Not So Academic?

Publication: Annals of Internal Medicine
Volume 150, Number 9

Abstract

Background:

The news media are often criticized for exaggerated coverage of weak science. Press releases, a source of information for many journalists, might be a source of those exaggerations.

Objective:

To characterize research press releases from academic medical centers.

Design:

Content analysis.

Setting:

Press releases from 10 medical centers at each extreme of U.S. News & World Report's rankings for medical research.

Measurements:

Press release quality.

Results:

Academic medical centers issued a mean of 49 press releases annually. Among 200 randomly selected releases analyzed in detail, 87 (44%) promoted animal or laboratory research, of which 64 (74%) explicitly claimed relevance to human health. Among 95 releases about primary human research, 22 (23%) omitted study size and 32 (34%) failed to quantify results. Among all 113 releases about human research, few (17%) promoted studies with the strongest designs (randomized trials or meta-analyses). Forty percent reported on the most limited human studies—those with uncontrolled interventions, small samples (<30 participants), surrogate primary outcomes, or unpublished data—yet 58% lacked the relevant cautions.

Limitation:

The effects of press release quality on media coverage were not directly assessed.

Conclusion:

Press releases from academic medical centers often promote research that has uncertain relevance to human health and do not provide key facts or acknowledge important limitations.

Primary Funding Source:

National Cancer Institute.

Context

News reports often exaggerate the importance of medical research.

Contribution

The researchers reviewed press releases issued by academic medical centers. They found that many press releases overstated the importance of study findings while underemphasizing cautions that limited the findings' clinical relevance.

Caution

The researchers did not attempt to see how the press releases influenced actual news stories.

Implication

Academic center press releases often promote research with uncertain clinical relevance without emphasizing important cautions or limitations.
—The Editors
Medical journalism is often criticized for what reporters cover (for example, preliminary work) and how they cover it (for example, turning modest findings into miracles) (1–4). Critics often place blame squarely on the media, pointing out that few journalists are trained to critically read medical research or suggesting that sensationalism is deliberate: Whereas scientists want to promote the truth, the media just want to sell newspapers.
But exaggeration may begin with the journalists' sources. Researchers and their funders, and even medical journals, often court media attention through press releases. The strategy works: Press releases increase the chance of getting media coverage (5, 6) and shape subsequent reporting (7). An independent medical news rating organization found that more than one third of U.S. health news stories seemed to rely solely or largely on press releases (1).
Academic medical centers produce large volumes of research and attract press coverage through press releases. Because these centers set the standard for research and education in U.S. medicine, one might assume that their press releases are measured and unexaggerated. To test this assumption, we examined press releases from academic medical centers in a systematic manner.

Methods

Among the academic medical centers covered in U.S. News & World Report's medical school research rankings (8), we selected the 10 highest-ranked and the 10 lowest-ranked centers that issued at least 10 press releases in 2005. In addition, we identified each medical school's affiliates by using an Association of American Medical Colleges database. The Appendix Table lists the centers and their affiliated press offices. We initially intended to compare press releases by research ranking, but because we found few differences, we report findings across the entire study sample, highlighting the few differences by rank where they exist.
Appendix Table. Highest- and Lower-Ranked Medical Schools (and Their Affiliated Press Offices) for Research That Issued at Least 10 Press Releases in 2005

Press Release Process

During 2006, a former medical school press officer conducted semistructured (15-minute) telephone interviews with “the person in charge” of media relations at each of the 20 centers. The interview script (Appendix) covered release policy (how is research chosen?), production (writing, review, and the researcher's role), and an overall assessment (perceived pressure to generate media coverage, praise, or backlash).

Press Release Content

We searched EurekAlert! (a press release database) for all “medical and health” releases issued by the 20 centers and their affiliates in 2005. The Figure summarizes the search results.
Figure. Study flow diagram. *Of the medical schools that issued at least 10 press releases in 2005.

Science Promoted

After excluding duplicate or nonresearch releases (such as those announcing grants), we determined study focus (animal or human) and publication status; if the study was published, we characterized the journal's academic prominence by using the Thomson Scientific Journal Citation Reports “impact factor.”
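For context (a standard definition, not part of our coding scheme), a journal's impact factor for a given year is the mean number of citations received that year by its recent articles. For 2005, for example:

    impact factor (2005) = (citations received in 2005 by items published in 2003–2004) / (citable items published in 2003–2004)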

Content Analysis

We randomly selected 200 press releases (10 per center) and assessed the presentation of study facts and cautions and the presence of exaggeration by using separate coding schemes for human and nonhuman studies (the Appendix includes both schemes). The schemes included 32 unique items (10 for human studies only, 4 for nonhuman studies only, and 18 common to both). Sixteen items involved simply extracting facts from the release (for example, study size); the other 16 required subjective judgments (for example, were there cautions about confounding for observational studies?). To confirm key study details (such as population, design, and size), we obtained the research reports (journal article or meeting abstract) referenced in the releases.
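As a minimal sketch of the sampling step, the stratified draw (10 releases from each of the 20 centers) could be reproduced as follows; the center names, release identifiers, and seed are hypothetical placeholders, not our actual data or code:

    import random

    # Hypothetical stand-ins for the EurekAlert! records retrieved per center.
    releases_by_center = {
        f"center_{i:02d}": [f"center_{i:02d}_release_{j:03d}" for j in range(40)]
        for i in range(20)
    }

    random.seed(2005)  # any fixed seed makes the draw reproducible
    sample = []
    for center in sorted(releases_by_center):
        # Stratified sampling: 10 releases per center x 20 centers = 200 total.
        sample.extend(random.sample(releases_by_center[center], k=10))

    assert len(sample) == 200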

Coding Reliability and Analysis

Two research assistants who were blinded to the study's purpose independently coded the releases. To measure reliability, the coders and investigators reviewed each code's definition, and the coders then reread each release to confirm (or change) their codes. Errors due to definition or data entry problems were corrected before agreement was calculated. Intercoder agreement was “nearly perfect” (9) for both sets of items: for factual items, κ was 1.0 (range, 0.98 to 1.0), and for subjective items, κ was 0.97 (range, 0.79 to 1.0). Disagreements were resolved by 4 of the investigators. We used STATA, version 10 (StataCorp, College Station, Texas), for all analyses.
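For readers unfamiliar with the statistic, Cohen's κ corrects raw agreement for the agreement expected by chance:

    κ = (p_o − p_e) / (1 − p_e)

where p_o is the observed proportion of agreement and p_e is the proportion expected by chance. With hypothetical values (not taken from our data) of p_o = 0.98 and p_e = 0.60, κ = (0.98 − 0.60)/(1 − 0.60) = 0.95, which falls in the 0.81 to 1.00 range that Landis and Koch (9) label “almost perfect.”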

Role of the Funding Source

The project was funded by the National Cancer Institute and the Robert Wood Johnson Generalist Faculty Scholars Program. Neither source had any role in study design, conduct, or analysis or in the decision to seek publication.

Results

Press Release Process

All centers said that investigators routinely request press releases and are regularly involved in editing and approving them (Table 1). Only 2 centers routinely involve independent reviewers. On average, centers employed 5 press release writers (the highest-ranked centers had more writers than the lower-ranked centers [mean, 6.6 vs. 3.7]). Three centers said that they trained writers in research methods and results presentation, but most expected writers to already have these skills and to hone them on the job. All 20 centers said that media coverage is an important measure of their success, and most report to their administration the number of “media hits” garnered.
Table 1. Press Release Process and Press Releases Issued by the 20 Academic Medical Centers

Press Releases Issued

Table 1 shows that the centers issued 989 medical research-related releases in 2005. The centers averaged 49 releases per year; the range was 13 (Brown Medical School) to 186 (Johns Hopkins University School of Medicine). Twelve percent of the releases promoted unpublished research from scientific meetings. Higher-ranked centers issued more releases than lower-ranked centers (743 vs. 246) and were less likely to promote unpublished research (9% vs. 20%).

Press Release Quality

Table 2 summarizes the measures of press release quality.
Table 2. Type of Research Promoted in and Quality of the 200 Press Releases Analyzed in Detail

Study Details and Cautions

Of the 95 releases about primary human research (excluding unstructured reviews and decision models), 77% provided study size and most (66%) quantified the main finding in some way; 47% used at least 1 absolute number, the most transparent way to represent results (10, 11). Few releases (12%) provided access to the full scientific report.
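To illustrate why absolute numbers are the most transparent framing (a hypothetical example, not drawn from any release we coded): suppose a treatment lowers 1-year mortality from 2% to 1%. The same result can be reported as a relative risk reduction, (2% − 1%)/2% = 50% (“cuts deaths in half”), or as an absolute risk reduction, 2% − 1% = 1 percentage point (1 fewer death per 100 patients treated). The relative framing sounds far more dramatic, which is why results reported only in relative terms invite overinterpretation (10, 11).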
Two thirds of the 200 randomly selected releases reported study funding sources; only 4% addressed conflicts of interest (stating either that none existed [3 releases] or that some existed [4 releases]).
Of all 113 releases about human studies, 17% promoted published studies with the strongest designs (randomized trials or meta-analyses). Forty percent reported on inherently limited studies (for example, sample size <30, uncontrolled interventions, primary surrogate outcomes, or unpublished meeting reports), and fewer than half of these (42%) provided any relevant caveats. For example, a release titled “Lung-sparing treatment for cancer proving effective” (which concluded that treatment was “a safe and effective way to treat early stage lung cancer in medically inoperable patients”) offered no cautions about this uncontrolled study of 70 patients.
Among the 87 releases about animal or laboratory studies, most (64 of 87) explicitly claimed relevance to human health, yet 90% lacked caveats about extrapolating results to people. For example, a release about a study of ultrasonography reducing tumors in mice, titled “Researchers study the use of ultrasound for treatment of cancer,” claimed (without caveats) that “in the future, treatments with ultrasound either alone or with chemotherapeutic and antivascular agents could be used to treat cancers.”

Exaggeration

Twenty-nine percent of releases (58 of 200) were rated as exaggerating the finding's importance. Exaggeration was found more often in releases about animal studies than human studies (41% vs. 18%).
Almost all releases (195 of 200) included investigator quotes, 26% of which were judged to overstate research importance. For example, a release for a study of mice with skin cancer, titled “Scientists inhibit cancer gene. Potential therapy for up to 30 percent of human tumors,” quoted the investigator as saying that “the implication is that a drug therapy could be developed to reduce tumors caused by Ras without significant side effects.” Coders thought that the “implication” exaggerated the study findings, because neither treatment efficacy nor tolerability in humans was assessed.
Although 24% (47 of 200) of releases used the word “significant,” only 1 clearly distinguished statistical from clinical significance. All other uses were ambiguous, creating an opportunity for overinterpretation: for example, “Not-for-profit hospitals consistently had significantly higher scores than for-profit hospitals.”
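To see how the two notions diverge (hypothetical numbers, for illustration only): in a study comparing mean scores between 2 large groups (standard deviation, 10; 10 000 observations per group), a difference of only 0.5 point on a 100-point scale yields a standard error of 10 × √(2/10 000) ≈ 0.14, so z ≈ 0.5/0.14 ≈ 3.5 and P < 0.001: statistically significant, yet arguably too small to matter clinically.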

Discussion

Press releases issued by 20 academic medical centers frequently promoted preliminary research or inherently limited human studies without providing basic details or cautions needed to judge the meaning, relevance, or validity of the science. Our findings are consistent with those of other analyses of pharmaceutical industry (12) and medical journal (13) press releases, which also revealed a tendency to overstate the importance and downplay (or ignore) the limitations of research.
Our study has several limitations. First, content analysis coding always involves subjectivity. The high level of agreement observed among coders, however, is reassuring. Second, our findings are based on 20 centers, which may raise concern about generalizability. Because only the top 52 (of 125) centers receive a U.S. News & World Report research ranking, we believe that our findings represent a best-case scenario. Third, our coding scheme was not exhaustive. We focused on the study details and cautions we thought were of highest priority because press releases are typically 1 page or shorter.
Most important, because we did not analyze subsequent news coverage of press-released research, we cannot directly link problems with press releases (such as lack of cautions or numbers, or exaggeration) to problems in news reports. Our findings would be stronger had we shown that exaggerated information from releases was carried into news stories. Nevertheless, press releases matter: They attract journalists' attention, and although the practice is generally discouraged, many health news stories—perhaps as many as one third—seem to rely largely or solely on the press release (1). Journalists worry that such reliance will increase, given newsroom cutbacks and greater demand for online news (7). The problems that we document are very similar to those seen in analyses of medical news: no quantification of main results (14–16) and no mention of intervention side effects (14–16), conflicts of interest (16), or study limitations (1–4). We believe that academic centers contribute to poor media coverage and are forgoing an opportunity to help journalists do better.
The quickest strategy for improvement would be for centers to issue fewer releases about preliminary research, especially unpublished scientific meeting presentations, because findings often change substantially—or fail to hold up—as studies mature (17). Forty percent of meeting abstracts and 25% of abstracts that garner media attention (18) are never subsequently published as full reports in medical journals (19). Similarly, centers should limit releases about animal or laboratory research. Although such research is important, institutions should not imply clinical benefit when it does not exist (and may not for years, if ever): Two thirds of even highly cited animal studies fail to translate into successful human treatments (20).
When press releases are issued, they should include basic study facts and explicit cautions. For example, press releases should remind journalists that strong inferences cannot be drawn from uncontrolled studies, or that surrogate outcomes do not always translate into clinical outcomes. Although good press releases will probably help, quality reporting also requires good critical evaluation skills. Fortunately, journalists have opportunities to acquire these skills, through such programs as the Association of Health Care Journalists seminars; the Knight Science Journalism Medical Evidence Boot Camp at MIT; and “Medicine in the Media: The Challenge of Reporting on Medical Research,” a workshop sponsored by the National Institutes of Health, the Dartmouth Institute for Health Policy and Clinical Practice, and the Department of Veterans Affairs.
Investigators can also do better. They could forgo requesting releases for studies with obvious limitations and review releases before dissemination, taking care to temper their tone (particularly their own quotes, which we often found overly enthusiastic).
By issuing fewer but better press releases, academic centers could help reduce the chance that journalists and the public are misled about the importance or implications of medical research. Centers might get less press coverage, but they would better serve their mission: to improve the health of their communities and the larger society in which they reside.

Supplemental Material

Supplement. Appendix

References

1. Schwitzer G. How do US journalists cover treatments, tests, products, and procedures? An evaluation of 500 stories. PLoS Med. 2008;5:e95. [PMID: 18507496]
2. Smith D, Wilson A, Henry D; Media Doctor Study Group. Monitoring the quality of medical news reporting: early experience with media doctor. Med J Aust. 2005;183:190-3. [PMID: 16097916]
3. Wilkes MS, Kravitz RL. Medical researchers and the media. Attitudes toward public dissemination of research. JAMA. 1992;268:999-1003. [PMID: 1501326]
4. Woloshin S, Schwartz LM. Media reporting on research presented at scientific meetings: more caution needed. Med J Aust. 2006;184:576-80. [PMID: 16768666]
5. de Semir V, Ribas C, Revuelta G. Press releases of science journal articles and subsequent newspaper stories on the same topic. JAMA. 1998;280:294-5. [PMID: 9676688]
6. Bartlett C, Sterne J, Egger M. What is newsworthy? Longitudinal study of the reporting of medical research in two British newspapers. BMJ. 2002;325:81-4. [PMID: 12114239]
7. Russell C. Science reporting by press release. Columbia Journalism Review. 2008. Accessed at www.cjr.org/the_observatory/science_reporting_by_press_rel.php on 3 March 2009.
8. America's Best Graduate Schools: Plus a Directory of Business, Education, Engineering, Law, and Medical Schools. Washington, DC: U.S. News and World Report; 2005.
9. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159-74. [PMID: 843571]
10. Naylor CD, Chen E, Strauss B. Measured enthusiasm: does the method of reporting trial results alter perceptions of therapeutic effectiveness? Ann Intern Med. 1992;117:916-21. [PMID: 1443954]
11. Schwartz LM, Woloshin S, Black WC, Welch HG. The role of numeracy in understanding the benefit of screening mammography. Ann Intern Med. 1997;127:966-72. [PMID: 9412301]
12. Kuriya B, Schneid EC, Bell CM. Quality of pharmaceutical industry press releases based on original research. PLoS ONE. 2008;3:e2828. [PMID: 18716675]
13. Woloshin S, Schwartz LM. Press releases: translating research into news. JAMA. 2002;287:2856-8. [PMID: 12038933]
14. Cassels A, Hughes MA, Cole C, Mintzes B, Lexchin J, McCormack JP. Drugs in the news: an analysis of Canadian newspaper coverage of new prescription drugs. CMAJ. 2003;168:1133-7. [PMID: 12719316]
15. Høye S, Hjortdahl P. [“New wonder pill!”—what do Norwegian newspapers write]. Tidsskr Nor Laegeforen. 2002;122:1671-6. [PMID: 12555610]
16. Moynihan R, Bero L, Ross-Degnan D, Henry D, Lee K, Watkins J, et al. Coverage by the news media of the benefits and risks of medications. N Engl J Med. 2000;342:1645-50. [PMID: 10833211]
17. Toma M, McAlister FA, Bialy L, Adams D, Vandermeer B, Armstrong PW. Transition from meeting abstract to full-length journal article for randomized controlled trials. JAMA. 2006;295:1281-7. [PMID: 16537738]
18. Schwartz LM, Woloshin S, Baczek L. Media coverage of scientific meetings: too much, too soon? JAMA. 2002;287:2859-63. [PMID: 12038934]
19. Scherer RW, Langenberg P, von Elm E. Full publication of results initially presented in abstracts. Cochrane Database Syst Rev. 2007:MR000005. [PMID: 17443628]
20. Hackam DG, Redelmeier DA. Translation of research evidence from animals to humans [Letter]. JAMA. 2006;296:1731-2. [PMID: 17032985]
