
Article Information

  • Title: Reading and reviewing research: Tips for the informed consumer
  • Author: Fischer, Robert L
  • Journal: Families in Society
  • Print ISSN: 1044-3894
  • Online ISSN: 1945-1350
  • Year: 2000
  • Issue: Mar/Apr 2000
  • Publisher: Alliance for Children and Families

Reading and reviewing research: Tips for the informed consumer

Fischer, Robert L

Abstract

Those who work in the field of human services are bombarded with research of all types and levels of quality. The task of reviewing research for publication can lend a number of helpful concepts to those who are consumers of such research. This brief piece suggests a number of tactics for becoming a more effective consumer of research reports and, by extension, a better producer of human services research.

The section allows readers to "meet" authors and to learn about the personal inspirations and influences that compel people to write for publication. We invite all readers to share their own literary experiences. In this issue, we hear from author Robert L. Fischer.

IN THE FIELD OF HUMAN SERVICES, whether you are in an academic or practice setting, you will need to read research to some extent. Such research could range from agency reports, to journal articles, to major studies. But all these works share one goal at their core: communicating a question and an answer within the confines of a written text. Simple, right? We all know from our own reading and writing that this is frequently not the case. In this brief presentation, I will discuss how all of us in the human services might go about becoming better consumers of written research, and ultimately better producers of research as well. These comments are from the perspective of someone who reviews research submitted for publication in several professional journals and who is a consumer of research.

The Art of the Review

In reviewing a piece submitted for publication, a reviewer is expected to cast a critical eye on the work at hand, identifying both the strengths and weaknesses of the submission. However, all of us who have received reviewers' comments know that sometimes it seems the reviewers get their jollies only from finding fault with the work. A few reviews fall into the BMW category; that is, it seems the reviewer can only "bitch, moan, and whine" about the problems with the research under review. But most reviewers do seek to provide even-handed critiques of the submissions they review.

You might ask, "Who reviews the reviewers' comments?" Well, certainly the editors of journals and books read the comments provided by their reviewers. Based on these reviews, editors work to find a majority opinion about what the disposition of the article should be (e.g., accept, revise and resubmit, reject). But in reality, reviewers can be surly and obtuse in their commentary, and generally no one will question their understanding or motive.

In a sense we are all reviewers of research, or at least we should be. As consumers of research we should be critical of the works we read, truly a caveat emptor ("let the buyer beware") model of research consumption. So the trick becomes being able to assess the substantive limitations of the work without becoming distracted by the more minor inconsistencies and errors of omission. Furthermore, as consumers of published results, we should be aware that just because the work has been formally published (peer reviewed or refereed), it does not follow that the content and presentation are without flaw. Thus, we all need to maintain a certain level of critique when approaching any written research.

The Introduction: "Explaining It to Your Mother"

I think most of us want desperately for others to understand us as quickly as possible. In the context of an article or report, this puts a lot of emphasis on the introduction to the work. Frequently, authors will attempt to make the introduction into a mini-version of the entire report. This may cause the reader to become overwhelmed or simply lost in the details. I am reminded of what my doctoral thesis advisor, David Cordray at Vanderbilt University, encouraged me to do after several drafts of my introductory chapter. He said with an uncharacteristic calmness, "Write it like you're explaining it to your mother." Now, mind you, my mother was a fairly sharp lady, a registered nurse by training, but the point was well taken. The next draft of the chapter was about half the length of the earlier versions and attempted more directly to draw the reader into wanting to read the remaining chapters. I would not claim that more than about six people read my dissertation, but the five members of my committee were apparently satisfied enough. This is all to say that the introduction should embody both a certain simplicity and the "hook," the angle that draws the reader into the remainder of the report. In schools of public policy, they frequently speak of answering the "so what?" question, the aspect of the topic that will make a difference in the real world. Reviewers should closely examine the introduction for some attention to this very important question.

Design as the Engine of the Report

The presentation of the design of the research is at the core of the reviewer's ability to understand the work. Readers should look for the methods section to clearly convey the key elements of the design: data source(s) and timing of collection, a discussion of the instruments or other data collection methods used, and the basis for judging effects (e.g., pre/post or comparison group design). A reviewer should be able to summarize the report in a basic football-type diagram (i.e., Xs and Os) for the purpose of getting the big picture, regardless of whether the methods are qualitative or quantitative in approach. If information about key aspects of the research is conspicuously absent from this summary, the reviewer might consider how this reflects on the remainder of the work; that is, does this speak to the looseness of the methods used or simply to the writer's style? I have known research psychologists who begin reading a paper with the methods section, because they tend to assess the work's overall quality based on the research method used (and obviously, the way it is written up). With the advent of research synthesis techniques (a.k.a. meta-analysis), some researchers spend many months reviewing all the completed research on a particular topic to report what the literature says about that issue. As a recovering meta-analyst myself, I can report that this type of method affords a great opportunity to hone one's skills at identifying and summarizing the research designs used in the studies under review. Although many low-quality studies can be found on a given topic, I can confidently report that I never met a meta-analysis I didn't like. Undoubtedly, I am not recovering as quickly as I thought. It suffices to say that if the reader cannot make sense of the basic design of the research, it does not bode well for the remainder of the report.

Saying Too Little and Saying Too Much

Every writer wants to conclude their work with a set of noteworthy, potentially stirring observations and findings. Frequently, writers go well beyond the parameters of the work they have completed and appear to "sneak in" findings that may have more to do with their general beliefs or overall research agenda than with the work in question. A reviewer should note whether the writer is able to tie back each finding to something described earlier in the paper, as in a data source or summary statistic. Simply put, each finding should relate to something the author actually "found." Reviewers cannot ignore writers who borrow a deus ex machina approach from the theater in tying up all the loose ends in their research project. The idea of lowering a deity from above via a mechanical contraption may be an effective way to resolve a multiplot theatrical production, but it falls flat in research articles.

Eyeball the Figure(s)

Tables and figures in research articles should serve as useful aids in conveying the content of the work. Unfortunately, some writers seem to include figures more often because they think they add an air of science to an inherently squishy enterprise or because the figures simply "look neat." If it takes more than three sentences to describe what the figure means, many readers are not going to be able to follow the figure to begin with. Tables, on the other hand, should help to condense a variety of data and comparisons for ease of reference. In graduate school, we were told to apply the "interocular trauma test" to our tables. That is, if when the tables were shown to a selected subject, their eyes began to bulge due to the smallness of the font or the complexity of the table, the item had failed the test. Reviewers can suggest that such tables and figures be rethought, condensed, enlarged, or simply ejected from the paper. As a reader or reviewer, if you feel your eyes begin to bulge, chances are you will not get much benefit from the table or figure, however neat it looks.

Bridge-Building in the Literature Review

Authors should make a concerted effort to connect their work to the existing research on the topic and provide a useful extension of it. If an author is simply applying a rigorous design and validated measures to an uninteresting question, the research may become such that the reader never makes it past the abstract. In an address by psychologist Lee Sechrest from the University of Arizona at a data integrity conference in 1992, he cited the comments of a reviewer who had said that a particular research effort had "filled a much needed gap in the research literature." It highlights the reality that not all research gaps need to be filled.

Caveats Are Your Friends

Reviewers look for authors to address the limitations of their work head-on. If a reader senses that the author is glossing over these problems without discussing the potential impacts on the findings, there should be cause for concern. For example, many writers talk about the response rate to a survey or the degree of attrition from the sample between baseline and follow-up but go no further. Reviewers should note the response rate because it is important, but there is much more to worry about. The reader should look for the author to compare the respondents and nonrespondents and to examine whether the groups differ on any measured variables. If they do, then the potential implications should be addressed. If they do not differ, then at least the findings have some limited generalizability, though it should be recognized that the groups might differ on a variety of other unmeasured and important characteristics (assuming a nonrandomized design). On the listserv of the American Evaluation Association (EVALTALK), there are periodic discussions about what is a "good enough" response rate. All researchers know that even with a moderately high response rate of 80%, while you can worry much less about selection bias, it is far from eliminated as a concern. If the 20% of the sample who did not respond differ systematically from the respondents, then there should be some concern about the generalizability of the findings. Response rates short of 100% are similar to money, in that while more is always better, it's usually hard to say how much is enough.
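The respondent-versus-nonrespondent check described above can be sketched in a few lines of code. This is a minimal illustration, not the author's method: the data values are invented, and Welch's t statistic is just one common way to compare two groups on a measured baseline variable (here, a hypothetical age measure).

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (ma - mb) / se

# Hypothetical baseline ages for survey respondents and nonrespondents;
# all values are illustrative only.
respondents    = [34, 41, 29, 38, 45, 33, 40, 36]
nonrespondents = [52, 48, 55, 47]

rate = len(respondents) / (len(respondents) + len(nonrespondents))
t = welch_t(respondents, nonrespondents)

print(f"response rate: {rate:.0%}")
print(f"Welch t statistic: {t:.2f}")
```

A large absolute t value (here, the nonrespondents are markedly older) is exactly the kind of systematic difference that should prompt an author to discuss the generalizability of the findings, even when the raw response rate looks acceptable.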

Conclusion

As this piece draws to a close, I hope that I have at least conveyed some thoughts on how readers and reviewers of research can improve their game. I suppose underlying all this is the basic assumption that the reviewer has a fundamental desire to understand the author and the research. Without that desire, even the best-written paper will become rote reading with no lasting value beyond the bounds of the pages of text. As you have read this epistle, I trust you have applied some of these ideas to the extent possible to what I have written. This is a definite departure for me as a writer, having relied heavily on extensive citation in my other writings and having espoused a philosophy that you must "cite to excite." This piece comes more from my own direct research practice and my gut than from the writings of others; I believe this is frequently the way knowledge is developed in the human services. I hope that these thoughts encourage us all to be better reviewers and consumers of research, and that those of us who write research will think hard about better serving the needs of our consumers.

Robert L. Fischer is director of program evaluation at Families First in Atlanta, Georgia.

Copyright Manticore Publishers Mar/Apr 2000
Provided by ProQuest Information and Learning Company. All rights reserved.
