Edifying editing
R. Preston McAfee (1)
Yahoo! and Caltech
I've spent a considerable amount of time as an editor.
I've rejected about 2,500 papers, and accepted 200. No one likes a
rejection and less than 1% consider it justified. Fortunately, there is
some duplication across authors, so I have only made around 1,800
enemies.
The purpose of this little paper is to answer in print the
questions I am frequently asked in person. These are my answers, but they
may not apply to you.
Who makes a good editor?
When Paul Milgrom recommended me to replace him as a co-editor of
the American Economic Review, a post I held over nine years, one of the
attributes he gave as a justification for the recommendation was that I
am opinionated. At the time, I considered "opinionated" to
mean 'holding opinions without regard to the facts,' and
indeed dictionary definitions suggest 'stubborn adherence to
preconceived notions.' But there is another side to being
opinionated, which means having a view. It is a management truism that
having a vision based on false hypotheses is better than a lack of
vision, and like all truisms it is probably false some of the time, but
the same feature holds true in editing: the editor's main job is to
decide what is published, and what is not. Having some basis for
deciding definitely dominates the absence of a basis. Even if I
don't like to think of myself as "obstinate, stubborn or
bigoted," it is valuable to have an opinion about everything.
Perhaps the most important attribute of an editor is obsessive
organization, processing work unrelentingly until it is done. The AER is
a fire-hose: in my first year I handled 275 manuscripts. In my first
year at Economic Inquiry I processed 225 manuscripts to completion. I
typically write referee reports the same day they are requested, so that
I keep my inbox clear. I did this even in the days before electronic
inboxes. This "clear the inbox" strategy may not be a good
strategy for success in life but it is a great characteristic in an
editor. Otherwise, upon returning from a couple of weeks of vacation,
there may be a mountain of manuscripts awaiting processing, visible on
satellite photos.
The third characteristic of successful editors is a lack of
personal agenda. If you think papers on, say, the economics of penguins
are extraordinarily important, you risk filling the journal with
second-rate penguin papers. A personal agenda is a bias, and when it
matters, it will lead to bad decisions. As everyone has biases, this is of
course relative; if your reaction is "but it isn't a bias,
I'm just right" you have a strong personal agenda.
The last attribute of a good editor is a very thick skin. One
well-known irate author, after a rejection, wrote me "Who are you
to reject my paper?" The answer, which I didn't send, is
"I'm the editor." There are authors who write over and
over, asking about their paper, complaining about decisions. If you lose
sleep over decisions and wring your hands in anguish, or take every
disagreement as a personal affront, it is probably best to decline the
offer to edit a journal. One author wrote me, with no evidence of a
sense of humor, that if I rejected his paper, he would be denied tenure
and his three children would go hungry. My response, which I didn't
send, was "Good luck in your next career." There are papers I
wish I had accepted, three of them to be exact. Not bad for 2,500
rejections.
How do I become an editor?
One of the surprises of being an AER co-editor was the number of
people who believe the journals are controlled by the top departments
for the benefit of the top departments, that is, who believe in the
conspiracy theory. This has been the prevailing theory of authors in
spite of the wide editorial net cast by the AER. Three different authors
(from departments without graduate programs) thanked me, after I accepted
their papers, for breaking the conspiracy of journal editors to favor the
top ten departments.
I'm confident that there is no conspiracy, for if there were,
I wouldn't have been chosen as a co-editor. The causality actually
runs the opposite direction--people who publish a lot wind up hired by
top departments. Papers and Proceedings is run, of course, for the
benefit of the AEA president who organizes it and thus represents a
conspiracy. (2)
Anyone can become an editor by being a super referee. Referees who
respond quickly with thoughtful reports are appointed as associate
editors after half a dozen years or so, and from there soon become
co-editors.
What editorial strategies and tricks can you share?
An acceptance from the Journal of Economic Theory in the past was a
list of possible decisions with a check mark next to "Accept."
While everyone prefers an acceptance to the alternatives, this is really
a pretty hideous notification method. Consequently, I decided to write
what my assistant called the gush letter, in which I explained to the
author why I was enthusiastic about publishing their paper, and why they
should be especially proud of the contribution. Authors like this a
lot--many have told me it was the only positive feedback they have
ever received from a journal--but it serves an additional role. If as an
editor you can't painlessly explain why you are excited to publish
a paper, you should probably reject it. If you can painlessly explain,
then do so for the good of humanity--it creates a lot of social value at
very low personal cost.
A great efficiency gain is to look at reviews as they arrive. About
half the time I feel comfortable rejecting on the basis of a single
negative review. Since the expected waiting time for the second review
is usually two or three months, this procedure cuts the waiting time
substantially.
When I am having trouble making a decision on a paper, one strategy
is to talk it over at lunch. I provide a description of the issue and
see where the conversation goes, peppering the discussion with the
author's contribution. Whether a group of economists finds the
results intriguing is useful data on whether the paper will be
well received.
In his study of refereeing, Dan Hamermesh (1994) discovered that,
conditional on not receiving a report within three months, the expected
waiting time was a year. Economists often make promises that they don't
deliver on, which is a grim fact of editing. One author wrote chastising me
for making him wait four months for a response to his submission; I
politely responded that I had been waiting over five months for a
referee's report from him! As a result, I often request more than
the standard two reports. At the AER, more than two-thirds agree to
review manuscripts, while at Economic Inquiry, that number is below
half. To get two, I now need to request four. Finding referees used to
be much more challenging and I would assiduously keep track of fields of
expertise of everyone I encountered at conferences (making me quite
unpopular), but SSRN makes finding reviewers much more straightforward
since it is now easy to identify people with recent working papers on
any topic.
I reject 10-15% of papers without refereeing, a so-called
"desk rejection." This prompts some complaints--"I paid
for those reviews with my submission fee"--but in fact when
appropriate a desk rejection is the kind thing to do. If, on reading a
paper, I find that there is no chance I am going to publish it, why
should I waste the referees' time and make the author wait? Not all
authors agree, of course, but in my view, we are in the business of
evaluating papers, not improving papers. If you want to improve your
paper, ask your colleagues for advice. When you know what you want to
say and how to say it, submit it to a journal.
As noted above, some authors are irate about desk rejections on the
principle that their submission fee pays for refereeing, or that they
deserve refereeing. But in fact the editor, not the referees, makes decisions,
and I generally spend a significant amount of time making a desk
rejection. I think of a desk rejection as a circumstance where the
editor doesn't feel refereeing advice is warranted.
There are authors who attempt to annoy the editor. I'm not
sure why they consider this to be a good strategy. I attempt to be
unfailingly professional in my journal dealings, as this is what I seek
in editors handling my work. Back when I had a journal assistant
(everything is electronic now), I asked her to impose a "24 hour
cooling off period" whenever I seemed to write something emotional
or unprofessional. I still write and delay sending even now, if I feel
at all peevish or irritated.
Authors, in their attempt to irritate the editor, will ask
"Have you even read my paper?" This is a more subtle question
than it first appears, for there is an elastic meaning of the word
'read.' The amount of time necessary to establish beyond a
reasonable doubt that a paper is not suitable for a journal ranges from
a few minutes--the paper's own summary of its findings is
incomprehensible or unambitious--to many hours. One of the effects of
experience as an editor is that the amount of time spent on the bottom
half of the papers goes to about zero (except for the desk rejections,
which get a bit more), and most of the time is devoted to those papers
that are close to the acceptable versus unacceptable line.
The article by Gans and Shepherd (1994) created among editors what
I think of as the fear of rejecting the "Market for Lemons,"
based on the fact that Akerlof's 1970 "Market for Lemons"
paper was rejected by three prominent journals, including the AER. No
one wants to go down in history as the editor who rejected a paper that
subsequently contributed greatly to a person's winning a Nobel
prize. However, I eventually came to the conclusion that the fear is
overblown. There are type 1 and type 2 errors and any procedure that
never rejects the "Market for Lemons" produces a low average
quality. One lesson, indeed, is to be open to the new and different. I
use a higher bar for 'booming' topics that generate a lot of
current excitement and hence may be a fad. (At the time of this writing,
behavioral economics is such a topic.) A second lesson from
Akerlof's experience is to be careful in crafting rejection
letters; the letters Akerlof received, with their smug acceptance of
general equilibrium as the end state of economics, look pathetic today.
Finally, Akerlof's experience was unusual in that his rejection
wasn't perpetrated by Lord Keynes. Absent Keynes, who I think
suffered mightily from the personal agenda problem discussed above,
there are not so many great rejected papers.
What are some common problems with manuscripts?
Around 25% of the submissions to the AER, in my experience, are
rejected due to poor execution. That is, the paper represents a good
start on an article-worthy topic but provides too little for the
audience.
Most of my experience is editing general interest journals, and as
a result my number one reason for rejection is that the paper is too
specialized for the audience. When the interest in the paper is limited
to a specific field, the paper belongs in a field journal, not in the
AER or even Economic Inquiry. I expect submissions to make the case that
the paper is of interest beyond the specific field and often ask
"Why should a labor or public finance economist want to read this
paper?" A good strategy is to identify the audience and then submit
to a journal that reaches that audience.
A surprising number of papers provide no meaningful conclusion. I
consider these papers to be fatally incomplete. I have seen one that had
a heading "Conclusion" with only one sentence: "See the
introduction." Opinions vary but I consider a serious conclusion
section to be essential. After going through the body of the
paper--usually very hard work--it is time to get a payoff, which is
delivered in the conclusion. The difference between an introduction--in
which one motivates a problem and summarizes the findings--and a
conclusion is that the reader has actually gone through the body of the
paper at the point where they encounter the conclusion. Thus, the kinds
of points you can make are different. If, after finishing the body of
the paper, you really have nothing more to say, it is not clear why
anyone wants to read the paper. The conclusion should be more than just
a summary of the paper.
Paul Milgrom is fond of saying that theory papers can be evaluated
based on generality and simplicity and it is important to remember that
both are goods. I think Milgrom's insight is similar to what is
sometimes known as the "bang for the buck" evaluation: how
much work and time must I invest for the insight I receive? Being clear
about the contribution and relating it
accurately to other papers makes the paper simpler to understand and
more likely to be accepted.
Do you have any amusing anecdotes to share with us?
There is a lot of heartbreak in journal editing since most of the
job is rejecting papers. If you are looking for amusing anecdotes,
subscribe to Reader's Digest.
The job of theory editor at the AER is unique in one way. There are
thousands of people who believe they have a Great Economic Idea that
economists desperately need to know. Let us agree to call these people
"kooks" for want of a better term. Pretty much 100% of kooks
are theorists; you won't meet, say, a physicist or physician with a
Great Economic Idea that involves running regressions or doing lab
experiments, although occasionally there is a table illustrating a
correlation between some variable, like the number of lawyers or
fluoridated water, and per capita GDP.
An illustration of the Great Economic Idea involves the value of time. A
paper was submitted pointing out that the order of consumption of goods
may matter; one may want to consume Alka-Seltzer after a large meal, not
before. The paper proceeds to compute the number of orders in which one
can consume a given number of goods. Why the number of orders is interesting
is not explained. It is an inessential and unsurprising detail that the
author has never heard of multinomials and manages to get the formula
slightly wrong. The important thing is that he submitted two papers, the
second identical to the first except that the term consumption has been
replaced with production. Neither paper has any references, but each includes a
helpful statement that the paper is so novel that there are no
appropriate references. I received these prior to instituting desk
rejections and sent both papers to one referee. To counter the
author's assertion that economists have never considered the timing
of consumption, the referee wrote a one sentence report:
"Arrow-Debreu commodities are time-dated." The referee also
provided two references and wrote in the letter to me that "the AER
refereeing fee is just enough to buy a bottle of scotch, which helps me
forget these miserable papers."
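For the record, the count he was presumably after is textbook combinatorics: if a consumer has $k_i$ units of good $i$, for $i = 1, \ldots, m$, with $n = k_1 + \cdots + k_m$ units in total, the number of distinct orders of consumption is the multinomial coefficient

\[
\binom{n}{k_1, \ldots, k_m} \;=\; \frac{n!}{k_1!\, k_2! \cdots k_m!},
\]

which reduces to $n!$ when every good is distinct.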
Another paper began with the memorable sentence "An economic
system is like an electric power plant." The paper proceeded to
analyze electric power generation in great detail. There were diagrams
of power plants and discussion of Kirchhoff's laws and other
essential ingredients of electrical engineering. What was not present,
however, was anything vaguely recognizable as economics, like prices,
demand or even cost. There was no attempt to explain in what way a power
plant was like an economic system. Not surprisingly, I rejected this
paper, which prompted a boundless series of irate complaints including a
claim that von Neumann worked on and was unable to solve the problem
that the author had solved. No reference was given to demonstrate von
Neumann's interest in the problem; the generous interpretation is
that von Neumann only published when he actually solved the problem.
After more than a dozen letters I eventually informed him that I would
no longer open his letters. They kept coming for months.
The essential mystery of editing is why the reports I receive as an
editor are so much better than the reports I receive as an author.
Reading thousands of referees' reports has changed my perspective
on reports. We may wait a long time for reports but they are generally
serious, thoughtful and insightful. Authors who complain about referees
usually focus on inessential details rather than the main substance of
the review. By and large, reviewers understand papers well enough to
evaluate them; when they don't, it is usually because the author
failed to communicate very well. Moreover, referees offer good advice
about how to improve the paper and take the research to the next level.
It is worth remembering that the referee's task is to give advice
to the editor, not to give advice to the author.
Many people write me saying that they have already refereed a
manuscript for another journal and would prefer not to referee it again,
to give the author a new chance. I see this response as wildly
inefficient. First, the referee has a very
good idea what the author has accomplished and can quickly review the
current draft. Second, if the author has ignored serious issues pointed
out previously, that is very important information about the quality of
scholarship and I really want to know about it. Third, the fact that
another editor selected the same person is a confirmation that we have
selected well; papers should pass muster with experts in the field. The
only circumstance where I don't want to hear from a repeat referee
is when the referee recommended rejection for personal, unprofessional
reasons, which is precisely the set of circumstances where they
won't tell me they reviewed the paper for another journal.
I overheard an author tell another economist at a conference what
an idiotic referee he had for an AER submission. He went into some
detail about all the stupid things the referee said and the economist
listening to the story commiserated and wholeheartedly agreed with the
author. You have probably already figured out that the commiserator was
the referee in question. This referee had actually written a very
thoughtful and serious report on a paper of a friend; as is sadly
common, the author didn't appreciate the insight available in the
report.
As a final anecdote, I received a report from a respected
economist, who said in the letter to me: "I have written a gentle
report, because the author is obviously inexperienced and very junior,
and I don't want to discourage him. But make no mistake: this paper
makes no contribution and you should not encourage a revision." The
author of that paper, which I rejected, had already won a Nobel prize in
economics.
What's up with Economic Inquiry?
I strongly recommend Ellison's 2002 paper on journal
publishing. This paper definitely changed my perspective on problems
with economics journal editing, so much so that I took action in 2007.
Ellison finds that the profession has slowed down, doubling the
"submission to print" time at major journals. What was
unexpected for me was the finding that most of the slowdown is due to the
number of revisions, not the "within-round cycle time." I
hadn't realized that the interminable wait for a response was
common twenty-five years ago. What has changed, Ellison shows, is that
we have about doubled the number of rounds. I had thought it was merely
deficiencies in my own papers that caused me to revise three, four, even
five times. But no, it is a profession-wide phenomenon.
Like most economists, I am personally obsessed with efficiency, and
wasted resources offend me in an irrational way. The way economists
operate journals is perhaps the most inefficient operation I encounter
on a regular basis. It is a fabulous irony that a profession obsessed
with efficiency operates its core business in such an inefficient
manner. How long do you spend refereeing a paper? Many hours are devoted
to reviewing papers. This would be socially efficient if the paper
improved in a way commensurate with the time spent, but in fact revising
papers using blind referees often makes papers worse. Referees offer
specific advice that pushes papers away from the author's intent. It
is one thing for a referee to say "I do not find this paper
compelling because of X" and another thing entirely to say that the
referee would rather see a different paper on the same general topic and
try to get the author to write it. The latter is all too common.
Gradually, like a lobster in a pot slowly warming to a boil, we have
transformed the business of refereeing from the evaluation of
contributions with a little grammatical help into an elaborate system of
glacier-paced anonymous co-authorship. This system, of course,
encourages authors to submit papers crafted not for publication but to
survive the revision process. Why fix an issue when referees are going
to force a rewrite of a paper anyway? (3) My sense is that the first
revision of papers generally improves them and it is downhill from
there.
The 'anonymous co-authorship' problem has an insidious
aspect: having encouraged a revision, referees often feel obliged to
recommend acceptance even if the paper has gotten worse. Referees become
psychologically tied to the outcome because they caused it. I once
directed an author to roll back a paper to an earlier state, because a
referee encouraged the author to make a mess of what had been a clean,
insightful analysis.
When I was asked to recommend an editor for Economic Inquiry, it
occurred to me that EI was ideally positioned for an experiment. It
isn't sensible to experiment with extremely successful journals
like the AER or Journal of Political Economy, because of the large
potential downside. It also isn't very useful to experiment with a
brand-new journal. New journals aren't on anyone's radar
screen and it is extremely challenging to attract high quality papers to
a new journal. As a result, successful new journals tend to be run in an
autocratic way by a committed and talented editor; policies play a small
role in the operation. Consequently, the ideal experiment is a journal
like EI, which has a decent, but not stellar, history.
I offered to serve as editor, provided I was given a free hand to
experiment with policies, including the "no revisions" option.
The no revisions option is a commitment by the journal to say "yes
or no" to a submission, hence preventing the endless rounds of
revision common at other journals and at EI itself. No revisions is an
option for the author, not a requirement. I implemented no revisions
when I assumed editorship in July 2007. About 35% of the papers are now
submitted under this option.
At the time I started, Steve Levitt mentioned no revisions in his
immensely popular Freakonomics blog and I was very surprised by the
comments he received. Most anonymous commentators were negative. They
(1) didn't think it necessary, (2) didn't think I could commit
to it, or (3) ignored the fact that it was optional and considered
whether it would be socially optimal for all journals to impose it.
No revisions is and should remain optional. Inexperienced authors
are ill-advised to choose it; perhaps more importantly, authors with a
very novel, difficult thesis will often need a conversation with
referees to convince them. No revisions works best with experienced
authors who know what they want to say and how to say it, and just want
a forum to broadcast that to the profession. The option removes the
journal from the business of rewriting papers and escalates the business
of evaluating them. Consequently, the entire discussion based on what
would happen if all journals forced all papers through the no revisions
process is misguided; it is like saying that Taco Bell should not exist
because it would be a bad thing if Taco Bell were the only restaurant.
Commentators who think EI can't commit aren't thinking
clearly. The argument is the "thin edge of the wedge," which
is to say, papers will be submitted that deserve revision but are too
flawed to publish as is. But this is not a problem at all unless the
journal is desperate for manuscripts--there are lots of other journals
to take the author's revised paper. There have been at least a
dozen manuscripts rejected that would have been clear revise and
resubmits absent no revisions. That is a risk the authors take when they
choose the option. There have also been half a dozen that would have
received revise and resubmits but instead were accepted.
Finally, is the no revisions policy socially useful? The beauty of
the option is that no one is required to use it; that about 35% of the
submissions come in this form suggests some authors think it is a useful
experiment. Only one journal has copied the policy to date, but the
sensible thing is to wait and see if Economic Inquiry improves.
No revisions does not prohibit an author from benefitting from
advice. In fact, at this time 100% of the authors who received
acceptances under no revisions actually revised their manuscript in
light of referees' comments. The difference is that these revisions
were voluntary, not coerced. That is, the referees and editor say
'this paper meets our standards as is, but would be even better
if...' and the author is then free to improve the paper.
I've spent a lot of time thinking about the co-editor process.
At the Journal of Economic Theory, associate editors are de facto
co-editors in the sense that they send papers to referees for review and
make recommended decisions which almost always stick. There are about 40
associate editors, which ensures there are always a couple of bad ones.
Bad co-editors pollute journals, preventing the journal from having
consistent standards and responses. The more co-editors, the more likely
the problem of conflicting standards and expectations arises. To be
specific, there were auction papers published by JET while I was an
associate editor that were not as good as papers I rejected, a very
frustrating event for an associate editor and more so for the rejected
author. However, employing few co-editors makes the job larger than most
would accept. So what is the right organizational form?
Empirically, the top journals run four to six co-editors. They are
distinguished by field. However, being an editor at this rarefied level
is strongly rewarded by the profession; at lesser journals, the
professional benefits are much smaller. Consequently it will be much
more difficult to find people willing to take a quarter of EI than, say,
a sixth of the AER, even though a sixth of the AER represents handling
more manuscripts per year. Moreover, the top journals require "jacks
of all trades" who can handle papers in a very diverse set of
areas. As an AER co-editor, I had to handle theory papers on trade,
finance and environmental economics, fields in which I had never read a
paper when I started. The "broad general co-editor" is very
hard to find, even for the top journals.
The strategy I have adopted is a hybrid scheme. Like the top
journals, EI has general co-editors for applied microeconomic theory,
empirical microeconomics, and macroeconomics. In addition, we have
specialized co-editors for two kinds of subfields. First, in subfields
where we receive a reasonable flow (more than ten per year), like
sports, defense, experimental, and health, we have specialized
co-editors who handle all the papers. Second, in fields where I would
like to send a signal of interest, like neuroeconomics or algorithmic
game theory, because I think the field is likely to boom in future
years, I also have specialized co-editors. Thus, unlike at JET,
responsibility among the specialized co-editors is pretty clear. This
hybrid scheme is an experiment, to see if it makes evaluating
manuscripts more efficient.
I want to call out one of these specialized co-editors: Yoram Bauman
(www.standupeconomist.com), who handles Miscellany. The JPE has a history of
publishing entertaining articles under the column of the same name, a
tradition that began to lapse with Stigler's death. As the
publisher of Leijonhufvud's classic 1973 humor article (which appeared
before EI changed its name from the more descriptive Western Economic
Journal; we remain a journal of the Western Economic Association), we also have a
venerable history in this area. I think the profession needs an outlet
for this kind of thing, and I am gratified to see that two of the
forthcoming papers for Miscellany are by Nobel laureates.
It is too early to tell whether these experiments have made the
journal sustainably better, but the rate of submissions has more than
doubled.
Do you have anything else to say or are you finally done?
There is a great deal of effort devoted to trying to scope out what
editors are interested in and to bend papers toward a specific editor's
interests. There is similar effort devoted to figuring out what topics
journals seek. I don't think journals really have favorites; patterns in
what gets published are more a consequence of the pattern of submissions. Editors
do have favorites--it is unavoidable--but the papers accepted are not
strong evidence of what the favorites are. When I accepted a paper for
the AER, I would usually raise the bar a bit for papers on the same
topic. I didn't want a single area to dominate the journal. I
didn't raise the bar a lot, but in a close decision it could
matter. So, for me, a topic's recent presence in the journal was actually
slightly negatively correlated with the likelihood of acceptance, although
the correlation was weak.
I use higher standards in my own research area than in other areas,
because it is harder to impress me. In areas with which I am unfamiliar,
a paper benefits from educating me about basic insights available in
other papers. This is also a small effect, since the benefit is felt only
in my reading and not by the referees, who have substantial expertise.
Nevertheless, in a close decision, it could make a
difference. Overall, I think submitting a paper where the editor has
deep expertise usually produces a higher bar but less variance in the
evaluation.
Being an editor hasn't made me a more effective author, or at
least far less so than I anticipated. It has made me much more critical
of my own work and much more effective at providing advice to
colleagues. I can reference a broader literature. Being an editor at a
major journal is a great way to keep abreast of new developments,
because even if a particular paper isn't submitted to the journal
one edits, it is usually discussed in some submission to the journal.
But overall, it probably isn't a good strategy to be an editor for
the sake of being a more effective author.
Mostly I've talked about the challenging aspects of being an
editor. But the great thing about editing a journal is reading terrific
manuscripts one wouldn't have otherwise encountered. This happens
just often enough to make me glad to serve, and keep me gushing.
References
Akerlof, George A. "The Market for 'Lemons': Quality Uncertainty and the
Market Mechanism." Quarterly Journal of Economics, 84(3), 1970, 488-500.
Bergstrom, Theodore C. "Free Labor for Costly Journals?" Journal of
Economic Perspectives, 15(3), 2001, 183-198.
Ellison, Glenn. "The Slowdown of the Economics Publishing Process."
Journal of Political Economy, 110(5), 2002, 947-993.
Gans, Joshua S., and George B. Shepherd. "How Are the Mighty Fallen:
Rejected Classic Articles by Leading Economists." Journal of Economic
Perspectives, 8(1), 1994, 165-179.
Hamermesh, Daniel S. "Facts and Myths About Refereeing." Journal of
Economic Perspectives, 8(1), 1994.
Laband, David N., and Michael J. Piette. "Favoritism versus Search for
Good Papers: Empirical Evidence Regarding the Behavior of Journal
Editors." Journal of Political Economy, 102, 1994, 194-203.
Leijonhufvud, Axel. "Life Among the Econ." Western Economic Journal, 11,
1973, 327-337.
Notes
(1.) I thank Kristin McAfee, Dan Hamermesh and Glenn Ellison for
very useful comments.
(2.) Laband and Piette (1994) argue that the journal conspiracies
are efficient.
(3.) I'm not going to comment here on two other major
inefficiencies. First, once we publish the paper, which was freely
provided, as a profession we lose general access to it because of
monopoly pricing by journals. Monopoly pricing of economics journals
is either an appalling state of affairs or a delicious irony,
depending on your perspective. See Bergstrom (2001). Second, there are a
huge number of papers being refereed many times, a dramatic cost of not
coordinating across journals.