
Article Information

  • Title: What do grant reviewers really want, anyway?
  • Author: Porter, Robert
  • Journal: Journal of Research Administration
  • Print ISSN: 1539-1590
  • Year: 2005
  • Issue: April
  • Language: English
  • Publisher: Society of Research Administrators, Inc.
  • Abstract: It can be argued that most research administrators owe their jobs to a key power group in academe: grant reviewers. These folks are the gatekeepers who decide who will get money to fund research, and it is quite a bit of money, as universities now consume about $40 billion in R&D funds annually, much of it obtained competitively from government, industry and private sources (NSF, 2003).
  • Keywords: Universities and colleges

What do grant reviewers really want, anyway?


Porter, Robert


Background

It can be argued that most research administrators owe their jobs to a key power group in academe: grant reviewers. These folks are the gatekeepers who decide who will get money to fund research, and it is quite a bit of money, as universities now consume about $40 billion in R&D funds annually, much of it obtained competitively from government, industry and private sources (NSF, 2003).

Divvying up this diverse pool of funds is a massive undertaking, and it takes a lot of people to do it. In FY2003, the National Science Foundation alone utilized 54,000 individual grant reviewers, 8,000 of whom were engaged for the first time (NSF, 2004). To evaluate the 40,000+ proposals it receives annually, the National Institutes of Health uses 258 separate study sections and special emphasis panels, each with a roster ranging from 5 to 22 members (NIH, 2004). Review panelists work hard for little pay (usually travel expenses and modest daily honoraria when the panel is in session). In an NIH survey of study section members, reviewers reported spending an average of 49 hours reading proposals and writing reviews prior to meeting with the panel! That same study also found a high level of satisfaction with the peer review process: 94 per cent of all respondents reported they were either "satisfied" or "very satisfied" with their overall experience (NIH, 2001). Yet the system that lies at the very heart of science also has its share of critics who have attacked peer review for its perceived biases, questionable ethics, and scientific conservatism (Horrobin, 2001; Smith, 1997; Wessely, 1998).

To be funded, grant proposals must receive very high marks from reviewers. NSF reports that just half of proposals rated "Very Good to Excellent" by reviewers were funded in 2003 (NSF, 2004). At NIH, the "streamlining" procedure can eliminate up to half of the proposals submitted from full discussion by the panel; these are returned to the PIs with no score. For the rest, the numerical panel scores are ranked from top to bottom, often with very small differentials before the payline is reached and the money runs out (NIH, 2003). Overall success ratios range from 20 to 30 per cent at most agencies, but these figures include a significant percentage of resubmissions, and many grant programs fund as few as 10 to 15 per cent. With budgets in sponsor agencies flattening and universities ramping up their research goals, competition can only intensify, adding to the need for a better understanding of the people who serve on these vitally important bodies.

Much has been published about the review process, especially the established practices of major federal agencies such as NSF and NIH. Relatively little has been written about the experience of being a reviewer. An exception is biologist Pam Member, who has written a strong personal affirmation of the review process as a valuable learning experience that has particular impact on one's proposal writing skills (Member, 2003). Recently Molfese, Karp and Siegel (2002) recommended proposal writing strategies geared to reviewers' likes and dislikes.

Research questions

This paper arose from a desire to learn more of the personal perspectives of experienced grant reviewers: What were (and are) their motivations for serving? What drives their positive or negative recommendations for particular proposals? How do they view the strengths and weaknesses of the peer review system? What have been the most important lessons learned? How has the experience affected their own proposal writing?

These and related questions were asked in structured interviews conducted with 16 senior Virginia Tech faculty, 10 men and 6 women, in May and June of 2004. Twelve were full professors (two of whom were also associate deans for research), and four were associate professors. A wide range of science and engineering disciplines was represented, as were the social and behavioral sciences. This was an experienced group, having served on an average of 10 review panels each, most of them with federal agencies such as NSF, NIH and USDA. Not surprisingly, they were also successful proposal writers, winning an average of 8.3 awards each in the five-year period from 1999 to 2004. In dollars, their total awards averaged more than $2.2 million each over that same period.

Motivation

In most cases, the first invitation to join a review panel came soon after receiving a grant from that same agency. When asked why they chose to participate in such a time-consuming task, the answers centered around four basic themes:

1. Learning the ropes

They wanted to learn more about how review panels operate, in order to write better proposals and improve their chances for future funding. "To see how the game is being played," and "to pick up on what reviewers like and don't like" were typical comments.

2. Service to science

Reviewers felt a strong sense of obligation to serve the science community. "I benefited from this process and felt I had to give back," said one reviewer. "This was a way I could contribute to the high quality review process at NIH," said another.

3. Keeping current

They believed this would be a good way to keep up with their discipline and learn about future research directions.

4. Professional networking

They wanted to build a network of professional contacts with peers at other universities, as well as program managers within the sponsor agencies.

Preparation for the panel meeting

Reviewers reported receiving anywhere from 20 to 100 proposals prior to the panel meeting, and were assigned to be a primary or secondary reviewer on six to eight proposals. Such assignments often require the submission of written critiques prior to the panel meeting. Starting about two weeks ahead of the meeting, time spent reading and writing reviews was estimated to range from 15 to 60 hours, with 35 hours being the average. While most stated they were usually prepared for the panel meeting, they also observed it was not uncommon for other reviewers to keep on writing at the last minute. "We spend the first hour standing around drinking coffee while these folks are still pecking away at their keyboards," noted one reviewer.

Reviewer expectations at initial reading

As they started reading each proposal, reviewers emphasized their first wish was to learn very quickly what the project was about and whether it fit the program objectives. Additionally they were looking for: (a) writing that was clear and concise ("concise" being the word most often used); (b) interesting, innovative ideas that would contribute to the field; (c) solid data showing that the approach has promise; (d) a crisp, specific project description with a research plan that is well thought out; and (e) evidence that the PI is well qualified to do the research.

First impressions are critical. "The abstract must sell the grant," said one. "If I don't get interested by the first page, the proposal is lost," said another.

Characteristics of a good proposal

When asked to describe the qualities of good proposals, these characteristics were mentioned: (a) a document that is neat, well organized and easy to read; (b) responsiveness to the program announcement, with specific references showing how the proposed project will achieve program goals and objectives; (c) fresh insight into an important problem; (d) writing that communicates the enthusiasm and commitment of the researcher; (e) evidence that the PI knows the field; (f) convincing preliminary data; and (g) a feasible work plan that is supported by an appropriate budget.

Several stressed the importance of the proposal's speaking to the reviewer, stimulating a level of interest and enthusiasm to match the writer's. In the words of one reviewer: "You get the feeling 'This is really great, this study has to be done.' It's like a fire in the belly, or knocking your socks down, it makes you say to yourself, 'Darn, I wish I had thought of this!'" Another said that reading a good proposal was also a learning experience. "The best proposals teach," she observed. In this part of the interview, reviewers kept coming back to the core theme of clear, persuasive writing. One used this story to make the point:
 Imagine that you've submitted a proposal to NIH. Your reviewer is reading through the proposals, but she's left it to the last moment. It's 6 a.m. on the day she's flying to Washington. She's sitting at the bus stop, it's raining, she has the flu, and she's got your proposal in front of her. Your writing should be able to persuade her that this is a great proposal, even under those conditions. (B. Tyler, personal communication, 27 May 2004)


Common mistakes

Reviewers were emphatic in describing the common mistakes they encounter, and most began by critiquing poor writing styles. The most common mistake is writing that is vague and unfocused. "It takes me too long to figure out what it is that they want to do," was one description. Another stylistic error is prose that is too densely academic, or "written like a journal paper." What they dread most is the sheer boredom of wading through tedious material and the unnecessary verbosity of many writers who force small fonts and smaller margins on the weary reader. "It's as though the PI is desperate to pack in more and more, while the reviewer wants to read less and less," said one. Other common mistakes include (a) an incomplete response to the program announcement; (b) the writer does not understand the state of the art; (c) the project is too ambitious, too global in scope; (d) the research plan is vague, where the PI seems to be saying, "I know what I'm doing. Trust me"; and (e) the PI lacks proven competence to do the research.

When asked about qualities that particularly annoy or irritate them, a frequent complaint was sloppiness and lack of proofreading. Apparently, killer mistakes in spelling and grammar are encountered all too frequently. "This isn't freshman English," one reviewer stated flatly. Others cited instances where it was obvious that the document is a "cut and paste" job, with inconsistent formatting and writing styles. "If the PI can't take the time to do it right, why should I?" was a question posed by more than one reviewer. When asked why very bright people could commit such basic errors, reviewers guessed that PIs wait too long to get serious about writing their proposals and don't allow enough time to polish the document. "Maybe they don't realize how important this is," said one.

Learning from experience

Most reviewers had multiple years of experience, and most said they now perform their work more efficiently, taking less time than they did when they started. "I used to just plod through each proposal, focusing on all the details," said one. "Now I get to the gestalt, the big picture first. If I like it, then I'll go on to the details. If I don't, I'm done reading." Another referred to having attained higher standards over the years: "I'm much more confident in my own judgment now, and I'm more ready to strongly advocate or 'shoot down' individual proposals." A third mentioned the advantages of being able to look up citations on the internet. "I use the computer to check references cited in the proposal, and this helps me a great deal to get up to speed in areas where I don't have specific expertise." Some mentioned skimming or skipping over sections they deemed to be overwritten or irrelevant.

Objectivity of review panels

In the intensely competitive arena of proposal reviews, one could expect disgruntled PIs to challenge the objectivity of the panels, and they do. However, the participants in this study, all of whom have experienced disappointment as well as success with their own proposals, rate their panels' objectivity very highly. Several stated that in their experience, evidence of bias was "nil" or "virtually nonexistent." One described his panel as a "straight, straight arrow operation." Another stated that "perhaps the system isn't perfect, but it's the fairest one possible." In response to the perception that there is an "old boys' network" conspiring to steer a disproportionate amount of funds to its members, several reviewers disagreed. They described panel dynamics as a democratic, self-correcting system where it is hard for one person or faction to dominate. Here is a typical comment:
 Applicants have got to realize that the people doing these reviews are doing the best they can. They're providing the very best information and judgment they're capable of. There is very, very little cronyism in the system. There is some, but not very much. But there is clubbism, which is not cronyism. That is, if I'm sitting in an NIH study section, and I believe the real area of current interest in the field is neurotoxicology, I'm thinking if you're not doing neurotoxicology, you're not doing interesting science. So there is this possibility of egotistical impact on the process. But it's relatively minor, and unless you're a very powerful person, you won't get away with it. (N. Castagnoli, personal communication, 14 May 2004)


Some did acknowledge the occasional favoritism shown toward a senior PI based on his or her reputation rather than solely on the proposal itself: Where a PI has a strong record of scholarly output, panels will sometimes "fund it on the come," a gambler's phrase used by one reviewer.

Panel procedures

Though they served on many different panels in several agencies, these reviewers described working procedures that were remarkably similar. In a typical routine, the program manager at the agency starts the working session by reviewing program goals and laying out the ground rules for the actual review. Responsibility to moderate the discussion rests with the program chair, a peer who is a member of the committee, but doesn't vote. Primary and secondary reviewers read or summarize their written reviews, and panel members are polled for their scores or recommendations for funding. Discussion follows, after which panelists may change their ratings. The program chair checks for the panel's concurrence with the final rankings, and the session ends.

Recently the Center for Scientific Review at NIH posted an interesting video on the internet depicting a typical study section meeting (NIH, 2003). Although it's a simulated exercise (referred to as a "mock review panel"), it's an instructive introduction to the group dynamics of the review process.

Impact on grant writing

All participants reported that serving on review panels has dramatically improved their proposal writing. "You learn to put the reviewer's hat on," said one. "You know what the panel is looking for; you can hear their discussion in your head while you're writing." "You're exposed to the writing skills of successful PIs and you learn to imitate their best qualities," said another. A third noted, "I used to write to a peer; now I write to a committee. I write to reach both the specialist scholar in my particular field and the generalists, who make up the majority of the panel. And I make it easy to read, large font (never size 10!), and 1-1/2 line spacing." A typically enthusiastic response was this:
 It's been a tremendous influence on my own grant writing, all across the board--learning how to strengthen the qualities of a good proposal--coherence, theoretical background, feasibility, methodological nuances, need for a statistical consultant, the overall vision. How to write so you're not coming across as pompous, how to write so you'll be well received--almost every facet of my grant writing has been enhanced. It's just been a tremendous source of feedback. (T. Ollendick, personal communication, 13 May 2004)


Other improved skills were mentioned, including: (a) a simpler, livelier writing style aimed at capturing and holding the reviewers' attention; (b) key points laid out very early; (c) clear organization with frequent section headings; (d) more use of visual illustrations (graphs, charts, photos). One reviewer summed up her new perspective with the simple statement: "You have to be a critic reading a proposal in order to write a good one."

Lessons learned

Participants were asked to step back, take the long view of their experience as reviewers, and sum up the most important lessons they've learned. One reviewer went back to a strong restatement of the "clear writing" theme:
 The big lesson reviewers learn is how pitifully, poorly written a lot of proposals are. It's truly an eye opener for all of your life. You say to yourself: "Oh my gosh, we got 150 proposals and half to two-thirds of them are in the No Merit/Do Not Fund category, so about fifty are still in the game, and you're only going to fund 20 to 28 of those, so you're looking at a pretty small number." So the reviewers walk away clearly knowing that they have to write their own proposals so they wind up in that final quadrant. We never really sit down and say how we do it--we all do this independently--but two things make the big difference: One, it's just the power of the idea, and two, their writing conveys that idea very concisely and you can see right away how they're going to do something very specific with it. (S. Sumner, personal communication, 25 May 2004)


Another reviewer with a strong funding history stressed relationship building as the key to success:
 As a PI or co-PI you need to have a relationship with the program manager. Your job in writing the proposal is to help the program manager be successful. I really believe that. So if the program manager says, "Look, I want to develop the next XYZ," your job is to help him or her be successful by doing just that. That's the truth. Your job is to help that manager establish that research program. You do it by showing a 2 or 3 page white paper and asking, "How about this, does this fit your program?" It's very important to strike up a relationship with the program manager in a somewhat personal way. I mean go visit face-to-face first, you don't want to send a white paper out of the blue, you want to go up to DC and meet these people. (T. Long, personal communication, 20 May 2004)


Other basic lessons included: (a) "Study, study, study the program call"; (b) "Make your proposal easy to read"; (c) "Start much earlier than you think you have to"; (d) "Make sure you know what has already been done"; (e) "Write in an accessible way that can be understood by a diverse group"; and (f) "Get in the habit of resubmitting."

Luck of the draw

In discussing lessons learned, luck was often mentioned. Two dominant realities of the peer review process--the powerful influence of lead reviewers and the low probability of success--have led most reviewers to the ironic conclusion that, in spite of the inherent fairness of the system, luck has a great deal to do with the outcome. Despite the sponsor agencies' best efforts, the final decision contains an element of randomness, depending on who gets appointed to the panel and who are the primary and secondary reviewers. Their conclusion is that shrewd PIs start with a resolve not to be deterred and always keep resubmission in mind. "Remember the funding decision, positive or negative, can be dumb luck, due to factors beyond your control," said one. "Keep on writing and resubmitting; you'll always be faced with a low probability of success, so there's no shame in being rejected," said another. A third brought in his own gambling analogy:
 The big lesson is not to take rejection personally, because when you throw in the social dynamics of the panel, and the large number of proposals they've looked at in a short period of time, it's a crapshoot. Also, remember you're writing a document that most panelists are not going to read--they're going to look at parts of it, but they won't read it from start to finish--so you better put some eye-catching things in there to hold their attention. (D. Inman, personal communication, 13 May 2004)


Strengths of the peer review system

With few exceptions, participants in this study gave a ringing endorsement to the peer review system. In their view, its great strength is democratic self-determination, as researchers themselves chart the future direction and quality of their respective disciplines. "The research community decides its own fate by determining what good science is," said one. Another noted, "The people doing the work are the right people to decide where science is going." A second strong theme was the diversity of the panels, credited with assuring a good cross section of ideas to drive innovation. While admitting it's not perfect, the overall consensus was that peer review is the best means to preserve the scientific integrity of sponsored research.

Weaknesses of peer review

Despite their strong overall support, participants expressed a range of concerns about peer review. No one theme dominates, though several mentioned that panel discussion can be unduly influenced by a strongly opinionated member. A related concern was the "veto" effect, whereby less than enthusiastic comments by any one of the lead reviewers can doom the proposal. Most commented on the heavy workload, and the difficulty of giving a fair hearing to so many proposals in a single batch. Women are especially pressured to participate more often, a concern shared by both genders. A few mentioned that some panels do not have the breadth of expertise to adequately cover all the proposals. Finally some expressed a concern about "splitting hairs," as intense competition forces many panels to focus on relatively minor weaknesses, for example "this proposal lacks preliminary data." The funding decision is then based not on the merit of the basic idea, but on how much work has already been done. Some reviewers felt that this was at the root of ill feelings expressed toward peer review, usually by disappointed PIs. (An excellent example of PI outrage can be found in a letter published in Current Biology, entitled "Moron Peer Review" (Brenner, 1999).)

One reviewer expressed deep reservations about NSF's increasing emphasis on interactive panels contrasted with the old mail reviews:
 I think the panel review process is terrible. It is not the best way to review proposals. The best way in my mind is the old way, where the program manager sent the proposal out to two or three reviewers with expertise in the field, and asks for a written critique, collects the reviews, and then makes the decision. It was a mail review process very much like reviewing papers for a journal. Review panels are terrible for two reasons: One, you're forced to read 40 proposals at one time, as opposed to the old mail review where you read maybe ten proposals over a year. That way you got a higher quality, more serious written review, like the Canadians and the British do. Two, putting people in a room for discussion opens the process to a tremendous amount of subjectivity, and not because anybody wants to or tries to, it's just because of human nature. (D. Inman, personal communication, 13 May 2004)


Most participants were more forgiving, concluding that the system may have its flaws, but there is no better way. Some recalled Winston Churchill's famous dictum about democracy: "It's absolutely the worst form of government except for all the other forms that have been tried."

Innovation or incrementalism?

Critics have charged that review panels shy away from funding truly innovative work in favor of research that is within established boundaries (Horrobin, 1996). Participants in this study were almost evenly split on the issue: 9 agreed with the accusation while 7 disagreed. One who agreed gave this rationale:
 The proposals most likely to get funded are incremental, where the writer takes a very mature topic and kicks it up just one notch. The ones that have a hard time getting funded are the most creative ones, where the writer is taking a huge leap forward, so much so that there aren't a lot of references, and most people aren't comfortable with that. One of the tactics of successful grant writing is that you have to make people comfortable.


One who disagreed was adamant in placing the blame on the writer rather than the system:
 The real reason that a lot of ideas that are called "innovative" aren't funded is not because review panels are biased against them, but because they're not well-developed, scientific ideas. They're not well thought out or grounded in anything that's persuasive. You need to make your case, and if you're going outside established boundaries, the bigger the burden of proof to show that this is an interesting idea, and people just aren't meeting that higher burden of proof. (S. Ball, personal communication, 27 May 2004)


Recommendations to improve peer review

Not surprisingly, most recommendations to improve the peer review system centered on the workload and how to relieve some of its pressures. Suggestions to spread the load among more reviewers were tempered by the observation that expanding an already large army presents its own challenges, not to speak of the added costs. Several felt that allowing more time for panel meetings would help, especially when the number of proposals is high. There were some suggestions to allocate more money to exploratory, high risk work that does not require as much preliminary data. One reviewer recommended that phone conferences be eliminated entirely, as face-to-face discussions are immensely preferable. One interesting suggestion to help new reviewers who need mentoring was to set up a listserv so panel members can access an interactive bulletin board prior to the meeting.

Summing up

Consistent with the 2001 NIH study, these reviewers were generally well satisfied with peer review, both with the system and with its overall implementation. Some saw impressive value in peer review above and beyond its functional role in allocating research funds. A particularly cogent expression of this view is the following:
 Participating in these panels is part of doing science in this country. It's not an option. You owe it to the system if you expect to get funding. At the same time, it's an integral part of your own intellectual development, your ability to stay in touch with things. It's much more than just deciding who's going to get money. It's like going to a conference, except it's even broader and more intense intellectually. It affects my teaching, it affects my research, it affects what I think about my university in terms of where things are going and how priorities are set. It's just a huge thing with me, and part of that is because I'm successful with it, I'm one of the success stories. I'm very, very fortunate and I'm very grateful. (B. Winkel, personal communication, 18 May 2004)


For a minority view, consider this blunt assessment of the massive time and effort it takes to administer the enterprise:
 If I were science advisor to the president, I would look at the peer review system and ask: "Are we using our best scientific and engineering minds in the best way?" And I would say there has to be a better way, because we spend way too much time writing proposals and way too much time evaluating proposals and way too little time actually doing the work. The British, Canadian and Australian systems are better because they're much less voluminous, with much less time spent writing and much less time evaluating. Overall, when I look at my life, if I didn't have to spend so much time chasing money, or evaluating other people who are chasing money, I'd be a heck of a lot more productive. (D. Inman, personal communication, 13 May 2004)


The author leaves the last words to the reviewers.

Special thanks to Virginia Tech faculty who were interviewed for this study:

Sheryl Ball, Economics
Frank Chen, Industrial & Systems Engineering
Neal Castagnoli, Chemistry
Felicia Etzkorn, Chemistry
Daniel Inman, Mechanical Engineering
Thomas Inzana, Biomedical Sciences & Pathobiology
Yilu Liu, Electrical & Computer Engineering
Gwen Lloyd, Mathematics
Timothy Long, Chemistry
Scott Midkiff, Electrical & Computer Engineering
John Novak, Civil & Environmental Engineering
Craig Nessler, Plant Pathology, Physiology & Weed Science
Thomas Ollendick, Psychology
Susan Sumner, Food Science
Brett Tyler, Virginia Bioinformatics Institute
Brenda Winkel, Biology

Author's Note: Submitted as a contributed paper for the SRA Annual Meeting, 2004, Salt Lake City. Contact Robert Porter, Ph.D., Program Development Manager, Research Division, Virginia Tech, 340 Burruss Hall, Blacksburg VA 24061, USA. Ph: 540-231-6747. E-mail: reporter@vt.edu.

References

Brenner, S. (1999). Moron peer review. Current Biology, 9(20), R755.

Horrobin, D. (1996). Peer review of grant applications: A harbinger for mediocrity in clinical research? The Lancet, 348(9037), 1293-1295.

Horrobin, D. (2001). Something rotten at the core of science? Trends in Pharmacological Sciences, 22(2), 51-52.

Member, P. (2003). NSF grant reviewer tells all. Next Wave (online publication of Science magazine), 4 April 2003. Retrieved 5 May 2004 from http://nextwave.sciencemag.org/cgi/content/full/2003/04/10/2

Molfese, V. J., Karp, K. S., & Siegel, L. S. (2002). Recommendations for writing successful proposals from a reviewer's perspective. The Journal of Research Administration, 33(3), 21-24.

National Institutes of Health, Center for Scientific Review. (2001). Study section member satisfaction survey final report: Executive summary. Retrieved 5 May 2004 from http://www.csr.nih.gov/events/ExecSumm.pdf

National Institutes of Health, Center for Scientific Review. (2003). Inside the grant review process (video). Retrieved 4 May 2004 from http://www.csr.nih.gov/video/video.asp

National Institutes of Health, Center for Scientific Review. (September, 2003). Review procedures for scientific review group meetings. Retrieved 5 May 2004 from http://www.csr.nih.gov/guidelines/proc.htm

National Science Foundation. (2003). National patterns of R&D resources: 2002. Retrieved 5 May 2004 from http://www.nsf.gov/sbe/srs/nsf03313/start.htm

National Science Foundation. (2004). NSB-04-43: Report to the National Science Board on the National Science Foundation's merit review process, Fiscal Year 2003. Retrieved 7 July 2004 from http://www.nsf.gov/nsb/documents/2004/MRreport_2003_final.pdf

Smith, R. (1997). Peer review: Reform or revolution? Time to open up the black box of peer review. British Medical Journal, 315, 759-760.

Wessely, S. (1998). Peer review of grant applications: What do we know? The Lancet, 352(9124), 301-305.

Robert Porter

Virginia Tech