Article Information

  • Title: The effects of publication lags on life-cycle research productivity in economics.
  • Authors: Conley, John P.; Crucini, Mario J.; Driskill, Robert A.
  • Journal: Economic Inquiry
  • Print ISSN: 0095-2583
  • Year: 2013
  • Issue: April
  • Language: English
  • Publisher: Western Economic Association International
  • Abstract: Ellison (2002) documents that the time an economics paper typically spends at a given journal between submission and publication has more than doubled over the last 30 or so years. As Ellison notes, this has important implications:
  • Keywords: Manuscript preparation; Manuscript preparation (Authorship)

The effects of publication lags on life-cycle research productivity in economics.


Conley, John P.; Crucini, Mario J.; Driskill, Robert A.


I. INTRODUCTION

Ellison (2002) documents that the time an economics paper typically spends at a given journal between submission and publication has more than doubled over the last 30 or so years. As Ellison notes, this has important implications:

The change in the publication process affects the economics profession in a number of ways: it affects the timeliness of journals, the readability and completeness of papers, the evaluation of junior faculty, and so forth. (Ellison 2002, 948)

While all these effects are important, the stakes may be highest when it comes to the evaluation of junior faculty. One would expect that, ceteris paribus, increased publication lags would make it more difficult for members of recent cohorts to produce a curriculum vitae (CV) in 6 years as strong as those produced by earlier cohorts (the point at which the formal tenure review is conducted in most economics departments). If institutions do not internalize the effect of the new publishing environment, then fewer junior faculty will receive tenure than in the past. At an individual level, the cost of not gaining tenure is obviously significant. However, the costs are also significant for the profession at large. Failure to promote qualified scholars leads to more frequent, costly searches by departments for new faculty and the discouragement and exit of qualified scholars who would otherwise enrich the stock of economic research.

This notwithstanding, gaining tenure is a powerful incentive. It certainly might be the case that more recent Ph.D.s respond to the new environment by working harder or smarter. While this would impose costs on junior faculty, the increased effort might partly or wholly offset longer publication lags. Thus, the question of whether or not there is an "Ellison effect" on CVs is ultimately empirical.

In this paper, we begin by demonstrating the plausibility and potential magnitude of the Ellison effect in a simple model of research production and publication calibrated with what we believe are reasonable parameter values. We find that increasing publication delay from 1 year to 2 years has substantial effects on the expected length of a vita at the end of 6 years. These effects are magnified if we also allow for lower acceptance rates and longer paper lengths, which also seem to be part of the new publishing environment.

Next, we explore the Ellison effect empirically using data from various sources to reconstruct the JEL-listed journal-publication record of almost every graduate of a U.S. or Canadian Ph.D.-granting economics department from 1986 to 2000. Our approach is to document research productivity (in this context measured by the quantity and quality of publications) among successive cohorts of new Ph.D.s using different controls to demonstrate the robustness of the findings.

First, we find that approximately half of the graduates never publish at all, at least in those journals found in EconLit. (1) Of those graduates who do publish, however, the proportion of journal publications produced by each productivity percentile of a cohort is remarkably stable: the publication "Lorenz curve" for each cohort is practically identical. Roughly speaking, among the publishing members of each cohort, 80% of the output is produced by the top 20% of the researchers while the top 1% of researchers produce approximately 14% of all publications.

Second, the institution from which students receive their Ph.D.s has a significant impact on the quality and quantity of their published research. Publishing graduates of top 30 departments produce more than three times as many AER-equivalent pages and papers as do their counterparts from non-top 30 departments. (2) Furthermore, for all cohorts, the average quality of each published paper and page is about three times better for graduates of the top programs compared to the non-top programs.

In light of this, we divide our data in three ways. First, we restrict attention to Ph.D.s who produce at least one published paper in a JEL-indexed journal within 6 years after graduation on the grounds that this "publishing population" is the only part of the sample containing useful information about the hypothesis of interest. (3) For this subsample, we make a distinction between graduates of "top 30" and "non-top 30" departments based on representative research rankings. One might think of this as an "ex ante" predictor of publication success. (4) We also make a distinction based on the publication productivity percentile of researchers as calculated by the length of the vita at the 6th year after graduation to test if there are differences between scholars with different "ex post" publication success.

Our major finding is that there is evidence of an Ellison effect. The strength of this evidence, however, depends on whether we use AER-equivalent pages or AER-equivalent publications as a measure of research productivity. It also depends on whether we look at graduates of top 30 or non-top 30 institutions, and whether we look at the ex post more-productive or less-productive scholars.

In particular, the longer "time to build" process documented by Ellison (2002) has a measurable, but not uniformly dramatic, effect on publication success of all cohorts in terms of AER-equivalent pages published by graduates of the top 30 programs. For these scholars, the productivity (i.e., publishing success) of older cohorts is on average higher than the productivity of the middle cohorts, and the productivity of the middle cohorts higher than that of the youngest cohorts. In contrast, there is no such pattern of declining productivity for the non-top 30 departments using AER-equivalent pages as a measure. However, all groups show a distinctive hump-shaped pattern of life-cycle research productivity that extends both across and within cohorts. In particular, annual productivity rises until about the 6th year after graduation and then falls fairly quickly to about 60% of the peak.

When we look at the number of AER-equivalent publications instead of pages published at the end of 6 years, we find large and statistically significant declines in productivity over time for graduates of both the top and non-top 30 departments. By this measure for graduates of the top 30 programs, the oldest cohort is 45% more productive than the middle cohorts and 65% more productive than the youngest. The middle cohorts in turn are 13% more productive than the youngest cohorts. For non-top 30 departments, the oldest cohort is 19% more productive than the middle and 58% more productive than the youngest, while the middle cohorts are 33% more productive than the youngest cohorts.

For the ex post measure of publication success using productivity percentiles, the effects are even more pronounced: as we compare higher quintile ranges across cohorts, the dominance of older cohorts over younger cohorts and the chronological ordering of cohorts in research productivity become more robust. This means that the top performers of each cohort have been hit most severely by the publication bottleneck.

This, in conjunction with the finding that pages published are similar across cohorts (at least for the top 30 departments), is consistent with Ellison's (2002) documentation of the increasing length and decreasing number of published papers, and suggests two things. First, it exposes a significant methodological question about the best way to measure productivity of departments, graduate programs, and individual scholars. Productivity patterns over time look different depending on whether papers or pages are chosen as a basis of comparison. We argue that what the profession values when granting tenure, giving raises, or making senior hires is the number of lines on a CV and the quality of the research papers on those lines. It is much harder to distill this into the number of AER-quality weighted pages, and we suspect that this is seldom attempted in practice. If this speculation is true, then it is important to look at AER-equivalent papers rather than AER-equivalent pages. Second, when AER-equivalent papers are used as the productivity metric, there is a significant drop-off in the weighted quality of the CVs of Ph.D. graduates over time. Thus, unless we believe that recent graduates are fundamentally of poorer quality, the extent to which equally talented assistant professors are able to produce evidence of their research productivity in peer-review outlets is falling over time. We will explore the robustness of this finding and its implications below.

Although our primary focus in this paper is to investigate the existence of the Ellison effect, these data allow us to investigate the relative performance of graduate programs in terms of the research output of their Ph.D.s. This allows us to construct a new type of departmental ranking system that can be compared with other, more traditional systems which focus on the publications of faculty members at a particular department. We find that Massachusetts Institute of Technology (MIT), Princeton, Harvard, and Rochester do best by this quality measure and more generally that the rankings of other departments do not entirely agree with more traditional measures that use faculty output.

In the remainder of the paper we first develop a simple model using specific, plausible parameter values and show how the change in "time to build" documented by Ellison (2002) affects the time profile of an individual's vita. We then describe our data and document within and across cohort research productivity patterns and changes. Finally, we investigate life-cycle effects and cohort effects on Ph.D.s' research productivity and discuss what they imply for the existence of the Ellison effect.

II. AN ILLUSTRATIVE MODEL

Our purpose here is not to develop a general model of lifetime production and consumption, but rather to focus on a simple partial-equilibrium model that highlights the effects of a change in the time between submission and acceptance of a manuscript. The focus is on the period of time between entry into the academic workforce and the time of decision on tenure, namely 6 years. For a more complete model of individual choice of labor and leisure over the life cycle, see Levin and Stephan (1991).

A. Model Parameters and Solutions

We construct a model in which there are five exogenous parameters:

* $s$: the length of a manuscript (in pages); (5)

* $P_0$: an individual's stock of unpublished papers at the time he/she receives a Ph.D. degree (thus $P_0$ is the number of manuscripts initially submitted to journals);

* $m$: the individual's production of manuscript pages (per year);

* $\Delta$: the "time to build" lag between when an individual's stock of unpublished manuscripts is submitted and when a decision on acceptance is received;

* $a_t$: the percentage of the stock of an individual's unpublished manuscripts, newly submitted at $t - \Delta$, that are accepted for publication at time $t$.

Of course, in a more complete model, these exogenous variables would be endogenous and would reflect optimal choices of individual producers and the supply of journal pages available.

An individual's number of newly submitted manuscripts at any time $t$ is denoted by $P_t$, and the number of newly submitted pages is denoted by $p_t$. Thus, given that manuscripts have $s$ pages, $p_t / s = P_t$.

To summarize, we assume that an individual arrives in the profession with a stock $P_0 = p_0 / s$ of manuscripts. Each year after that, the individual writes $M = m/s$ manuscripts, where $m$ and $s$ are exogenous constants. Each year, all individuals submit every one of their existing unpublished manuscripts that are not in the evaluation process. Then, after a specified period of time, a percentage $a$ of these newly submitted manuscripts are accepted.

To capture the change in the time between submission and publication emphasized by Ellison (2002), we consider two scenarios. In the first scenario, the exogenous percentage $a$ of newly submitted manuscripts is accepted for publication the year following submission. Thus, the time to build is 1 year ($\Delta = 1$). In the second scenario, the exogenous percentage $a$ of newly submitted manuscripts is accepted for publication two periods after submission. Thus, the time to build is two periods ($\Delta = 2$). To distinguish between these two cases, we denote parameters and variables associated with the 1-year lag between submissions and acceptances by putting a tilde over the symbol. That is, in the first scenario the number of newly submitted manuscripts at time $t$ is denoted $\tilde{P}_t$, and so forth.

With a 1-year submission-acceptance lag, the number of newly submitted manuscripts at time t evolves according to the following first-order difference equation:

(1) $\tilde{P}_t = \tilde{P}_{t-1} + M - a\tilde{P}_{t-1} = M + (1 - a)\tilde{P}_{t-1}$.

This says that the number of newly submitted papers at time $t$ equals the previous year's submissions, plus new additions $M$, minus accepted submissions from the previous year, $a\tilde{P}_{t-1}$. With the exogenously given original stock of unpublished papers $P_0$, the solution to this difference equation is found by well-known methods to be:

(2) $\tilde{P}_t = \frac{M}{a} + \left(P_0 - \frac{M}{a}\right)(1 - a)^t$.

The number of acceptances per year, denoted by $\tilde{A}_t$, is

(3) $\tilde{A}_t = a\tilde{P}_{t-1}$.

The length of an individual's vita at any time $t$, denoted by $\tilde{V}_t$, is thus given as

(4) $\tilde{V}_t = \tilde{V}_{t-1} + \tilde{A}_t$.
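To make these dynamics concrete, the following minimal Python sketch (our illustration, not the authors' code) iterates Equation (1), checks it against the closed form in Equation (2), and accumulates the vita via Equations (3) and (4). The parameter values are placeholders taken from the base calibration in Section II.B.

```python
# Scenario 1 (Delta = 1): submissions evolve by Equation (1),
# P~_t = M + (1 - a) * P~_{t-1}, starting from the initial stock P0.
a, M, P0 = 0.20, 1.0, 3.0  # acceptance rate, manuscripts/year, initial stock

def submissions_recursive(t):
    """P~_t by iterating Equation (1)."""
    P = P0
    for _ in range(t):
        P = M + (1 - a) * P
    return P

def submissions_closed_form(t):
    """P~_t by Equation (2): M/a + (P0 - M/a) * (1 - a)**t."""
    return M / a + (P0 - M / a) * (1 - a) ** t

# The iterated and closed-form solutions agree.
assert abs(submissions_recursive(6) - submissions_closed_form(6)) < 1e-12

# Acceptances A~_t = a * P~_{t-1} (Equation (3)) accumulate into the
# vita V~_t = V~_{t-1} + A~_t (Equation (4)).
V = 0.0
for t in range(1, 7):  # the six pre-tenure years
    V += a * submissions_closed_form(t - 1)
print(round(V, 3))  # about 4.52 accepted papers after 6 years
```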

Now consider our second scenario, in which the time between submission and acceptance is two periods. In this case, the number of newly submitted manuscripts evolves according to the following second-order difference equation:

(5) $P_t = M + P_{t-2} - aP_{t-2} = M + (1 - a)P_{t-2}$.

The key difference here from the first scenario is that newly submitted manuscripts have to wait two periods for a decision. As a result, manuscripts submitted at time $t$ will still be waiting for a decision at time $t + 1$; hence those manuscripts will be contained neither among new journal submissions at time $t + 1$, nor among accepted manuscripts at time $t + 1$, denoted by $A_{t+1}$.

There are two initial conditions that apply to this problem: $P_0$ is given and $P_1 = M$ (at $t = 1$, only the $M$ newly written manuscripts are submitted, because the initial stock is still under review). Thus, the solution to Equation (5) will be

(6) $P_t = \frac{M}{a} + B_1 \theta_1^t + B_2 \theta_2^t$,

where $B_1$ and $B_2$ satisfy the two boundary conditions

(7) $B_1 + B_2 + \frac{M}{a} = P_0, \qquad \frac{M}{a} + B_1 \theta_1 + B_2 \theta_2 = M$,

and

(8) $\theta_i = \pm\sqrt{1 - a}, \quad i \in \{1, 2\}$.

Alternatively, we can write the solution as

(9) $P_t = \sum_{i=0}^{t/2 - 1} M (1 - a)^i + (1 - a)^{t/2} P_0 = \frac{M}{a} + \left(P_0 - \frac{M}{a}\right)(1 - a)^{t/2}$

when $t$ is even, and

(10) $P_t = \sum_{i=0}^{(t-3)/2} M (1 - a)^i + (1 - a)^{(t-1)/2} P_1 = \frac{M}{a}\left(1 - (1 - a)^{(t+1)/2}\right)$

when $t$ is odd.

The number of acceptances per year, denoted in this scenario as $A_t$, is

(11) $A_t = a P_{t-2}$.

Thus, the length of an individual's vita in this scenario, denoted as $V_t$, is

(12) $V_t = V_{t-1} + A_t$.
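The two-period case can be checked the same way. The sketch below (again ours, with placeholder parameters) iterates the second-order recursion in Equation (5), verifies the even/odd closed forms in Equations (9) and (10), and accumulates the vita via Equations (11) and (12).

```python
a, M, P0 = 0.20, 1.0, 3.0  # placeholder parameters; P_1 = M by construction

def submissions(t):
    """P_t by iterating Equation (5): P_t = M + (1 - a) * P_{t-2}."""
    if t == 0:
        return P0
    prev2, prev1 = P0, M  # P_0 and P_1
    for _ in range(t - 1):
        prev2, prev1 = prev1, M + (1 - a) * prev2
    return prev1

def submissions_closed_form(t):
    """Equation (9) for even t, Equation (10) for odd t."""
    if t % 2 == 0:
        return M / a + (P0 - M / a) * (1 - a) ** (t // 2)
    return (M / a) * (1 - (1 - a) ** ((t + 1) // 2))

for t in range(10):
    assert abs(submissions(t) - submissions_closed_form(t)) < 1e-12

# Vita: A_t = a * P_{t-2} (Equation (11)), V_t = V_{t-1} + A_t (Equation (12)).
V = sum(a * submissions(t - 2) for t in range(2, 7))
print(round(V, 3))  # about 2.58 accepted papers after 6 years under a 2-year lag
```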

B. Calibrated Examples

To give some sense of the quantitative implications of the model, we calibrate the model based on plausible values of the parameters from the work of Ellison (2002). We explore three separate changes to the publishing environment that Ellison discusses: an increase in publication lags, a decrease in acceptance rates, and an increase in the length of manuscripts.

First, we consider a base case meant to represent the historical publishing environment. We take a benchmark of a 20% acceptance rate, one new paper per year as flow of production, an initial stock of three papers at graduation and a 1-year "time to build":

(1) $a = 0.20$, $M = 1$, $P_0 = 3$, and $\Delta = 1$.

Second, we consider the effects of increased delays. We change the time to build to 2 years, but keep the same set of parameter values otherwise:

(2) $a = 0.20$, $M = 1$, $P_0 = 3$, and $\Delta = 2$.

Third, we go back to a 1-year lag, but we consider the effect of increasing manuscript length by one-third so that both initial stock and flow of new manuscripts decrease to 75% of the two cases above: (6)

(3) $a = 0.20$, $M = 0.75$, $P_0 = 2.25$, and $\Delta = 1$.

Finally, we consider a 1-year lag and no increase in manuscript length, but decrease the acceptance rate to 12%: (7)

(4) $a = 0.12$, $M = 1$, $P_0 = 3$, and $\Delta = 1$.

The results are shown in Table 1, and they reveal that new Ph.D.s trying to publish in the historical regime (case 1) are significantly more productive than new Ph.D.s facing any one of the three changes considered above. Agents publishing under the historical regime were 75% more productive than agents facing a 2-year time to build, 33% more productive than agents who must publish longer manuscripts, and 42% more productive than agents facing lower acceptance rates.
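These comparisons are straightforward to reproduce. The sketch below (our code, using the calibration values listed above) simulates the 6-year vita in all four cases; it recovers the 75% and 33% gaps exactly and roughly a 44% gap for the low-acceptance case, close to the 42% quoted above, with the small difference presumably reflecting details of the authors' calculation.

```python
def vita_after(years, a, M, P0, delta):
    """Length of the vita after `years`, with a `delta`-year decision lag."""
    P = [P0] + [M] * (delta - 1)  # P_0 given; P_1 = M when delta = 2
    V = 0.0
    for t in range(1, years + 1):
        if t >= delta:
            V += a * P[t - delta]                  # acceptances this year
            P.append(M + (1 - a) * P[t - delta])   # new plus recycled submissions
    return V

cases = {
    "1: historical baseline": dict(a=0.20, M=1.00, P0=3.00, delta=1),
    "2: two-year lag":        dict(a=0.20, M=1.00, P0=3.00, delta=2),
    "3: longer manuscripts":  dict(a=0.20, M=0.75, P0=2.25, delta=1),
    "4: 12% acceptance rate": dict(a=0.12, M=1.00, P0=3.00, delta=1),
}
base = vita_after(6, **cases["1: historical baseline"])
for name, params in cases.items():
    v = vita_after(6, **params)
    print(f"{name}: V_6 = {v:.2f}  (baseline is {base / v:.2f}x as productive)")
```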

These drops are substantial, and they would be even larger if we subjected new Ph.D.s to all three changes at once, as the real world does. Of course, new Ph.D.s may be aware of the current publication environment and may be responding. For example, they may submit more papers while still in graduate school, or stay in graduate school longer, in order to have a better chance at tenure.

III. DATA

The panel dataset we construct consists of two parts: a census of Ph.D. recipients from academic institutions in the United States and Canada who received their economics Ph.D.s between 1986 and 2000 and a complete record of the journal publications of these individuals for the years 1985 to 2006 in EconLit-listed journals.

A. Economics Ph.D. Holders

The record of economics Ph.D. recipients was constructed from two sources. The definitive source is the list provided by the American Economic Association (AEA), based upon an annual survey of economic departments. We use the economics Ph.D. list of the AEA that records economics Ph.D. recipients beginning in 1985, and we supplement the data with information from the "2003-2004 Prentice Hall Guide to Economics Faculty by Hasselback" (hereafter, the Hasselback Directory). This directory contains information about current faculty members of all economics departments in the United States, and of well-known Canadian and European research universities. Using the information about faculty members' graduation year and Ph.D. granting institution found in the Hasselback Directory, a more comprehensive dataset of records of economics Ph.D. recipients from 1986 to 2000 was created. The number of Ph.D. recipients listed in both the AEA economics Ph.D. list, and in the Hasselback Directory is shown in Table A1 (in the Appendix). The total number of Ph.D. recipients in a given year is not simply the sum of the entries from each source as there is considerable overlap between them. From 1988 to 2000 the number of Ph.D.s granted is fairly stable at about 1,000 per year. The significant growth from 1986 to 1988 may be due to less comprehensive coverage of Ph.D.s early in the AEA survey. Pooling all years, the panel contains 14,271 economics Ph.D.s.

B. Journal Publications

The record of journal publications is obtained from two different sources. Our main source is the EconLit Publication Database. The number of publications for the years 1985 to 2006 recorded in EconLit is listed in Table A2. The number of papers has grown from about 9,500 in 1984 to almost 26,000 in 2005. Pooling all years, the panel of publications contains 368,672 peer-review papers. Of course, only a fraction of these are coauthored by Ph.D.s in our sample. (8)

The International Bibliography of the Social Sciences (IBSS) database was used to obtain additional information on journal publications that have more than three authors. The reason for this is that EconLit reports only the first author's name if a journal publication has more than three authors. There were 1,125 such occurrences in the EconLit journal publication database, and 558 of these were in the top 25 journals.

C. Supplemental Data

Raw counts of publications are imperfect measures of the research productivity of individual scholars because of the variation in the quality of those publications. The journal rankings and journal quality indexes from Kalaitzidakis, Mamuneas, and Stengos (2003) are used to account for this variation. We convert their journal quality indexes into American Economic Review (AER) equivalents, meaning that we express the quality of each journal as a fraction of AER quality and use these values to convert each Ph.D.'s publications (pages) into AER-equivalent publications (pages) for subsequent analysis. Ph.D.s in our database published in 992 different journals found in EconLit. The AER equivalence of the top 65 of these journals is reported in Table A3. We assign each of the remaining journals a weight of 0.012. Simply put, according to the impact-adjusted ranking of peer-reviewed journals in economics, a 12-page article published in the AER (a short paper) is equivalent to 1,000 pages in a journal ranked 66th or lower.

One might worry that our results are sensitive to the specific quality index employed here. As a robustness check we use a discrete ranking for journal quality provided by Combes and Linnemer (2010), where journals are not assigned a unique quality index but are grouped into bins. The top five journals form the top quality bin, denoted by AAA in the study by Combes and Linnemer (2010), and we assign these journals a conversion rate of 1 to the AER. The next 15 journals form the second quality bin (denoted by AA), and we assign these journals a conversion rate of 2/3 to the AER. The next 82 journals form the third quality bin (denoted by A), and we assign these journals a conversion rate of 1/3 to the AER. (9) Fortunately, the qualitative results obtained are similar regardless of which approach to ranking journals is used. (10)

By matching manuscripts by authors in our Ph.D. panel with the indices of journal quality, we calculate the number of AER-equivalent pages for article i in journal j as

(13) $\text{AER Pages}_{ij} = \frac{(\text{raw pages})_i \times (\text{journal index})_j}{(\text{authors})_i}$

where "raw pages" is the length of the manuscript, "journal index" is the quality weight converting the number of pages in journal j into an equivalent number of pages of the AER, and "authors" is the number of authors of the manuscript. Dividing by the number of authors assigns each author an equal share of credit for the research output. We also analyze how productivity measures change when we give each author full credit for the research output, and in this case we do not divide by the number of authors. Taking the sum of this index over all publications by an individual Ph.D. recipient in a specific year gives the publication quality index for this individual in that year.

Similarly, the number of AER-equivalent publications (11) for article i in journal j is calculated as

(14) $\text{AER Publications}_{ij} = \frac{(\text{journal index})_j}{(\text{authors})_i}$.
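As a sketch, Equations (13) and (14) translate directly into code. The journal index value in the example is a hypothetical stand-in, not a number from the Kalaitzidakis, Mamuneas, and Stengos table.

```python
def aer_pages(raw_pages, journal_index, n_authors):
    """Equation (13): AER-equivalent pages, credit split equally across authors."""
    return raw_pages * journal_index / n_authors

def aer_publications(journal_index, n_authors):
    """Equation (14): AER-equivalent publication count."""
    return journal_index / n_authors

# Hypothetical example: a 30-page, two-author paper in a journal
# whose quality index is 0.5 of the AER.
print(aer_pages(30, 0.5, 2))        # 7.5 AER-equivalent pages
print(aer_publications(0.5, 2))     # 0.25 AER-equivalent publications
# For the full-credit (no coauthor discount) variants, pass n_authors=1.
```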

The focus of our analysis is the impact of the ongoing slowdown in the publication process on the productivity of vintages of Ph.D.s indexed by their year of graduation. As the variation in productivity across individuals at a point in time is immense, we also examine subsets of the panel for evidence of a slowdown. This entails controlling for life-cycle patterns of productivity as well as conditioning on the ranking of the institution from which the Ph.D. received her/his degree.

IV. RESEARCH PRODUCTIVITY OF COHORTS BY FLOWS AND STOCKS OF PUBLICATIONS

The main goal of our empirical study is to evaluate the consequences of the slowdown in the publication process across cohorts of new Ph.D.s. As one might expect, there is considerable variation in the publication records of the approximately 1,000 individuals who receive Ph.D.s in the United States and Canada each year. For example, in most cohorts and in most departments, about 40% to 60% of graduates fail to publish a single paper in an EconLit-listed journal in their first 6 years after graduation. These graduates are eliminated from the sample on the grounds that we wish to focus attention on graduates with scholarly research ambitions. Thus, when we use the term "graduates" in what follows it should be taken to mean "graduates who have published at least one paper in an EconLit-listed journal within their first 6 years of graduation." Ph.D.s are organized into five cohorts, each pooling three consecutive years of Ph.D. graduates. For example, the 1987 cohort consists of individuals who had their Ph.D. conferred in either 1986, 1987, or 1988. (12)

A. Productivity Distribution within Cohorts

An interesting way to characterize within cohort heterogeneity is to use an "intellectual Lorenz curve" to quantify the inequality of contributions in each cohort to the aggregate flow of peer-review publications. Table 2 shows the cumulative distribution of all AER-equivalent pages and publications in our panel of peer-review publications as a function of the productivity ranking of Ph.D.s at their 6th year after graduation.

The top 1% of producers generate about 13% to 16% of all AER-equivalent pages. The top 10% produces more than 60% of all published pages and the top 20% produces more than 80% of the output. These proportions are robust across all cohorts and do not change significantly when AER-equivalent publications are used in place of AER-equivalent pages, especially when the top 1% and top 5% of Ph.D.s are considered. Thus, the distribution of research productivity across individuals within each cohort is very skewed, and this skewness is remarkably stable across cohorts.
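A minimal sketch of how such concentration shares are computed from individual 6-year output totals follows. The Pareto draw is synthetic stand-in data, chosen only because a heavy-tailed distribution mimics the skewness just described; it is not the paper's data.

```python
import numpy as np

def top_share(output, top_pct):
    """Share of total output produced by the top `top_pct` percent of producers."""
    x = np.sort(np.asarray(output, dtype=float))[::-1]  # largest first
    k = max(1, round(len(x) * top_pct / 100))
    return x[:k].sum() / x.sum()

rng = np.random.default_rng(0)
cohort = rng.pareto(1.2, size=1000)  # synthetic, heavy-tailed "cohort"
for pct in (1, 10, 20):
    print(f"top {pct}%: {top_share(cohort, pct):.0%} of output")
```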

B. Graduates from Top 30 and Non-Top 30 Institutions

As documented in the prior section, approximately half of Ph.D.s never publish, and even restricting attention to those Ph.D.s who do publish, we observe a very skewed productivity distribution. Thus, our data confirm the conventional wisdom that a very small, highly productive group of Ph.D.s creates a disproportionately large share of total peer-reviewed publications. Although there are always exceptions and outliers, the best performers are mostly graduates of top research universities. (13) We therefore separate all the Ph.D.s in our dataset into two groups: graduates of top 30 economics departments and graduates of non-top 30 economics departments. We use the economics department ranking of Coupe (2003) for our analysis. (14)

Which departments are "top 30" is open to question, of course, and could be calculated in different ways. Given the length of time covered by this study (14 years) it is doubtless the case that departments have moved in and out of this group. It is not our purpose to make any definitive judgment on which departments deserve this recognition. In our view, it would be better to view our division of departments into two groups as an effort to study how a representative set of "top departments" performs against "non-top" departments.

Table A4 provides two rankings of the top 30 departments in the United States and Canada. The left column of Table A4 reproduces the department ranking of Coupe (2003), in which departments are ranked by the productivity of their faculty members. Alternatively, one can argue that a department's true quality is measured by the productivity of its graduates. (15) Following this idea, the ranking presented in the right column of Table A4 is based on our own calculations. We rank departments according to their Ph.D.s' productivity, measured as the average number of AER-equivalent publications accumulated at the 6th year after graduation. Figure A1 shows the distribution of AER-equivalent publications per graduate of each of the top 30 departments aggregated across all cohorts.

[FIGURE 1 OMITTED]

It is interesting to see that the Coupe ranking that focuses on faculty research quality does not line up particularly well with this measure which instead focuses on the research quality of graduates of these programs. Potential graduate students and departments recruiting new Ph.D.s might do well to consider the implied ranking of departments given here when deciding where to apply to graduate school and where to scout young talent, respectively.

C. Life-Cycle Productivity Measured by Flows of Publications

We begin by exploring the annual productivity of graduates of the top 30 departments and non-top 30 departments by various measures. The first measure of productivity is the annual number of raw pages published by graduates of the top 30 economics departments and the remaining departments in our dataset. Young scholars at top and non-top 30 departments share a very similar career-cycle pattern of productivity. Annual productivity steadily rises from the year before the Ph.D. is granted to a peak at about the 5th year. Productivity then drops off at a decreasing rate for the remaining years in our sample to about 60% of the peak value. This qualitative pattern also holds for all cohorts in both top and non-top 30 departments. It is certainly possible that productivity rises while new Ph.D.s are "eating their yolk." As they exhaust the stock of research questions they studied in graduate school, they become less productive. However, it is at least a little suspicious that this capital stock happens to start declining exactly at the point that tenure decisions are made. Also note that the gradual decline from the 5th year on is consistent with the presence of an overhang of completed and submitted work carried out before the tenure decision that is only accepted in subsequent years. Thus, we find evidence that is at least broadly consistent with the remarkable hypothesis that incentives seem to work in the economics profession.

Figure 1 shows the life-cycle productivity of graduates of the top 30 departments in terms of number of published pages.

Graduates of the top 30 departments peak at about 11 raw pages published in their 5th year, and then slowly decline to about seven pages. Graduates of the non-top 30 departments, in contrast, peak at about 6-7 pages published per year and then decline to about four pages per year. There seems to be no obvious chronological ranking of cohorts by their productivity based on annual number of published pages.

Although published pages have been a standard way of measuring productivity, (16) our view is that they do not capture the true productivity incentives faced by new members of the profession. Our experience is that lines on a CV (as opposed to pages published) are more valued by the profession for tenure, promotion, raises, and so on. There are at least two reasons to use manuscripts as the unit of account. First, each paper contains a self-contained research idea, and some require more pages to fully articulate than others even when their inherent scientific value is quite similar. Second, there is significant variance in the length of research manuscripts across subfields of economics having to do with the content of the topic and the norms of exposition. This renders the page metric a somewhat dubious unit of account. We therefore consider the annual number of raw publications (i.e., papers) published by members of our sample.

When we measure productivity in terms of the number of published papers, we see a similar pattern to that above for all cohorts, with productivity peaking at about year five and then declining. Graduates of the top 30 departments peak at about 0.5-0.7 papers per year (see Figure 2), while non-top 30 graduates peak at about 0.3-0.5 papers per year. A chronological ordering of productivity across cohorts that was not apparent in published pages now becomes apparent, with the oldest cohorts publishing the most and the youngest the least.

These first two measures are still limited in that they take no account of where graduates publish their papers. We therefore produce comparable figures but weight publications according to journal quality to get AER-equivalent pages and papers published annually. Considering AER-equivalent pages, we continue to see the familiar productivity peak at about 5 years. Graduates of the top 30 departments peak at about 1.5-2 AER pages per year, while non-top 30 peak at about 0.4-0.5 pages. Thus, we see that top 30 graduates are more than three times as productive as non-top 30 graduates compared at their peak annual productivity levels.

[FIGURE 2 OMITTED]

If number of AER-equivalent pages is calculated using the discrete quality index provided by Combes and Linnemer (2010), we observe the following: graduates of the top 30 departments peak at about 4-4.5 AER pages per year (see Figure 3), while non-top 30 graduates peak at about 1.4-1.7 pages. Comparing peak levels of their annual productivities, we see that top 30 graduates are about three times more productive than non-top 30 graduates.

Moving to AER-equivalent publications (again using Kalaitzidakis, Mamuneas, and Stengos 2003 as this is our default quality index), graduates of the top 30 departments peak at about 0.08-0.13 AER publications per year compared to 0.02-0.03 for non-top 30 departments. Again we see that top 30 graduates are more than three times as productive as non-top 30 graduates compared at their peak annual productivity levels.

[FIGURE 3 OMITTED]

D. Life-Cycle Productivity and Coauthorship

It is typical to divide the credit evenly over the authors of a paper, as we have done in the data reported above. Thus, if there are two authors on a 10-page AER paper, they are each credited with producing five AER pages. It is debatable whether or not this is fair. Writing a coauthored paper surely takes each author more than half the time required for an equivalent single-authored paper (as all the authors of the current paper will surely attest). Again, our experience suggests that the profession tends to look at lines on a CV and apply only a small discount for coauthorship. Thus, one might wonder what would happen if we gave all authors full credit for a paper.

We find the same pattern of productivity peaking at 5 years and the same slight tendency of the older cohorts to be more productive than the younger cohorts when we do not discount for coauthorship. However, it seems that the drop-off after 5 years is not as steep. This might reflect increased coauthorship with graduate students or more opportunities for collaboration as a scholar becomes better known. The overall effect is to raise the peak productivity in terms of AER-equivalent pages to a range from 2.4 to 3.2 for top 30 graduates and to a range from 0.6 to 0.8 for non-top 30 graduates. This compares to 1.5-2 AER pages per year for top 30 graduates, and 0.4 to 0.5 for non-top 30 graduates with coauthor discounts.

A related question is whether there have been secular changes in the pattern of coauthorship over the years. Based on our extensive panel, coauthorship does appear to be on the rise. Presumably this is because of some combination of falling costs, such as the availability of the Internet and email, and rising benefits, such as more complementary skills or more complex and ambitious projects. We find that the average number of authors per publication in all EconLit journals rises from 1.35 authors in 1984 to 1.6 authors in 2001, while for the top 20 journals, it goes from about 1.5 to 1.8 (see Figure 4).

This secular trend of increased coauthorship is in line with the observation that younger cohorts tend to have more coauthored papers than older ones. We see a very interesting life-cycle pattern in coauthorship as well. Figure 5 shows the average number of authors of publications that are affiliated with at least one member of the respective cohort.

Note the striking U-shaped life cycle of coauthorship. This is similar to the observation presented by Rauber and Ursprung (2008) about German academic economists. Coauthorship tends to be much less frequent between the 1st and 4th years after graduation. This pattern is robust across all cohorts. As we move from older to younger cohorts, we observe that the U-shape is preserved and gradually shifted upwards. This indicates that although the coauthorship pattern for younger cohorts is similar to that for older cohorts, younger cohorts collaborate with larger numbers of coauthors over their life cycle. This might be a rational response to increased publication lags, because increased coauthorship would allow Ph.D.s to produce more manuscripts. Then again, more manuscripts may or may not lead to more publications (especially after controlling for quality), and we investigate this in the next section.

[FIGURE 4 OMITTED]

[FIGURE 5 OMITTED]

E. Life-Cycle Productivity Measured by Stocks of Publications

We now turn to cumulative productivity measures to investigate how different cohorts compare to each other at the same point in time, namely at the end of the 6th year after graduation. We focus on the 6th year for the cumulative productivity analysis because tenure and promotion decisions are mostly based on an evaluation of cumulative productivity around this time.

We rank Ph.D.s in each cohort based on their cumulative productivity at their 6th year after graduation. For each cohort, the total number of AER-equivalent pages and papers produced by Ph.D.s at the 99th, 95th, 90th, 85th, 80th, 75th, and 50th (median) percentiles is reported in Table 3.
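Computationally, a table like this reduces to grouped quantiles. A pandas sketch follows, assuming a hypothetical data frame with one row per Ph.D. and columns `cohort` and `aer_pubs_6yr` (cumulative AER-equivalent publications at the 6th year after graduation).

```python
import pandas as pd

PERCENTILES = [0.99, 0.95, 0.90, 0.85, 0.80, 0.75, 0.50]

def percentile_table(df: pd.DataFrame) -> pd.DataFrame:
    """Rows: cohorts; columns: productivity percentiles within each cohort."""
    return (df.groupby("cohort")["aer_pubs_6yr"]
              .quantile(PERCENTILES)
              .unstack())
```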

Comparing different percentiles of the productivity distribution within any cohort reveals that productivity is very skewed. Based on AER-equivalent pages, a Ph.D. ranked in the 99th percentile is more than twice as productive as a Ph.D. ranked in the 95th percentile and a remarkable 60-70 times more productive than the median Ph.D. One obtains similar results if one looks at AER-equivalent publications instead of pages.

Comparing, on the other hand, a given percentile across cohorts, AER-equivalent pages fail to reveal a clear pattern, whereas AER-equivalent publications do. Especially when we aggregate cohorts 1990 and 1993 into one large cohort, and cohorts 1996 and 1999 into another, the downward trend as measured in AER-equivalent publications becomes evident at all percentiles reported here. These findings may have crucial implications for the tenure evaluation process of younger cohorts. Two Ph.D.s in different cohorts may be ranked at exactly the same percentile within their respective cohorts and yet the economist in the more recent cohort has fewer AER-equivalent publications than the economist in the earlier cohort.

Table 4 provides an overview of various productivity measures attained by an average member of each respective cohort at the end of their 6th year after graduation. We provide two measures for the number of total publications: one which splits credit for a publication equally between coauthors, and one which gives full credit to each of the coauthors. We also provide two versions of AER-equivalent publications and AER-equivalent pages. The first one is calculated as explained above, splitting credit equally between coauthors and using a continuous journal quality index. The second one is calculated by giving full credit to each of the coauthors.

At the end of 6 years, the cumulative productivity of graduates of the top 30 departments is consistent with the hypothesis that productivity is decreasing for younger cohorts. Based on the total number of raw publications (see the second row in Table 4), the 1987 cohort is 45% more productive than the 1999 cohort. This ratio drops to 29% when we assign full credit for coauthored publications; however, this is still quite substantial, and would certainly affect a tenure committee's view of a tenure candidate. To be more precise, at the end of 6 years, we test for each cohort pair the null hypothesis that cohort means are equal against the alternative hypothesis that the older cohort outperforms the younger one. Comparing the 1987 cohort to the 1990 cohort, 1990 to 1993, 1993 to 1996, and 1996 to 1999, we obtain p values of .0004, .35, .035, and .0039, respectively. For non-top 30 graduates, the story is less clear: the cohorts' productivity ranking does not follow a strictly monotonically decreasing trend, and cohorts do not line up as hypothesized from oldest to youngest when compared at 6 years after graduation.
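Tests of this form can be run as one-sided two-sample comparisons. The scipy sketch below uses Welch's t test as a plausible stand-in; the paper does not spell out the exact test statistic it employs.

```python
from scipy import stats

def older_outperforms_p(older, younger):
    """One-sided test of H0: equal cohort means vs. H1: older mean is greater."""
    result = stats.ttest_ind(older, younger, equal_var=False,
                             alternative="greater")
    return result.pvalue
```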

We get more definitive results for productivity when we look at AER-equivalent publications for graduates of top 30 as well as non-top 30 departments. One can see that the 1990 and 1993 cohorts and the 1996 and 1999 cohorts, respectively, have very similar productivity patterns (see Figure 6).

Thus, we aggregate these for both the top and non-top 30 departments. At the end of 6 years, the top 30 1987, 1990 + 1993, and 1996 + 1999 cohorts published 0.61, 0.42, and 0.37 AER-equivalent publications overall, respectively. We see a very striking decline in productivity over time: the middle cohorts are 13% more productive than the youngest cohorts, while the oldest cohort is 45% more productive than the middle and 65% more productive than the youngest. We test the null hypothesis that cohort means are equal against the alternative hypothesis that the older cohort outperforms the younger one. We obtain the following p values: 0 for 1987 and 1990 + 1993, 0 for 1987 and 1996 + 1999, and .0089 for 1990 + 1993 and 1996 + 1999.

[FIGURE 6 OMITTED]

For non-top 30 schools we find that at the end of 6 years, the 1987, 1990 + 1993, and 1996 + 1999 cohorts published 0.19, 0.16, and 0.12 AER-equivalent publications, respectively. We see an overall trend in which the middle cohorts are 33% more productive than the youngest cohorts, while the oldest cohort is 19% more productive than the middle and 58% more productive than the youngest. We test the null hypothesis that cohort means are equal against the alternative hypothesis that older cohorts outperform the younger ones. We obtain the following p values: .079 for 1987 and 1990 + 1993, .0003 for 1987 and 1996 + 1999, and .0003 for 1990 + 1993 and 1996 + 1999.

The strong tendency of individual cohorts to line up chronologically in decreasing productivity (with the 1999 cohort being only slightly more productive than the 1996 cohort) does not hold, however, if we switch our productivity measure from AER-equivalent publications to AER-equivalent pages. As shown in Table 4, the 1987 cohort is the most productive cohort; however, the other cohorts do not line up chronologically: the 1993 cohort achieves higher productivity than the 1990 cohort, and the 1999 cohort outperforms the 1996 cohort. Considering Ph.D.s from the top 30 departments, although the 1990 and 1993 cohorts together have higher average productivity than the 1996 and 1999 cohorts together, this is no longer the case when we assign full credit to each coauthor. Moreover, comparing different cohorts of top 30 departments under different policies for discounting coauthorship (namely equal credit and full credit), we see that the ratio of coauthor-not-discounted to coauthor-discounted AER-equivalence measures, denoted as the "ratio of full to equal credit" in Table 4, decreases from the 1987 cohort to the 1990 cohort, but then increases from older to younger cohorts. Thus, we find quality-discounted rates of coauthorship increasing as cohorts get younger. This is in line with our discussion of coauthorship patterns over the life cycle of cohorts in the previous subsection.

This suggests a very important methodological and policy question about the right measure of productivity: AER-equivalent publications or AER-equivalent pages? Moreover, should coauthors share the credit or should each be assigned full credit? Our position is that tenure committees look at lines on a CV first and page counts second, if at all. If this is true, then younger scholars look significantly less productive than their older colleagues. In part, this is because papers seem to have gotten longer on average over the years. This leads to a potential double whammy for assistant professors seeking tenure. First, by counting publications instead of pages, more recent tenure candidates will appear unfairly less productive than their colleagues who got tenure in the past. Second, a department that set a standard that could be met by, say, the top 20% of new tenure candidates who graduated in 1987 will find that it can be met by perhaps only 11% of those who graduated in 1999. This may be good or bad, but at least tenure committees should be aware of the implications of not adjusting tenure standards to reflect the current publishing environment of longer papers, lower acceptance rates, and longer delays.

Another useful way to think about how publishing patterns have changed is to look at the ratio of AER-equivalent publications to total publications. This is a measure of average publication quality across cohorts. One can think of this as a "signal-to-noise ratio" as it indicates what fraction of an AER-quality publication is contained in a given publication. As the publication quality of a cohort decreases, the ratio of AER-equivalent publications to total publications will decrease. Table 5 shows the total number of publications within 6 years after obtaining the Ph.D., the number of AER-equivalent publications, and the percentage of AER-equivalent publications in total publications (hence the signal-to-noise ratio) across cohorts. We calculate AER-equivalent publications using indices provided by Kalaitzidakis, Mamuneas, and Stengos (2003) as well as by Combes and Linnemer (2010). The AER-equivalent publications obtained from the two different indices are very highly correlated, thus yielding very similar trends.

While it seems clear that the 1987 cohort of top 30 departments' graduates had higher quality publications on average by any measure, the signal-to-noise ratio decreases from the 1987 cohort to the 1996 cohort. However, the 1999 cohort performs better on quality than the previous three cohorts. Overall, there is a clear trend of declining average publication quality among top 30 graduates except for the youngest cohort. Turning to cohorts of non-top 30 departments' graduates, we find it difficult to compare the 1987 cohort to the 1990 cohort: whether the 1987 cohort outperforms the 1990 cohort depends on the specific quality index, which can be interpreted as suggesting that the two cohorts do not differ much at all in quality. Younger cohorts perform worse than the oldest two cohorts, and the 1999 cohort performs slightly better than the 1996 cohort. Thus, we can say that the signal-to-noise ratio worsened across cohorts graduating from the mid-1980s until the late 1990s. This partially confirms the conventional wisdom that the enormous growth of new journals and publication outlets has led to a decline in overall average publication quality. (17)

V. LIFE CYCLE AND COHORT EFFECTS

To this point, our analysis has shown that the hypothesized gradual downward shift in productivity across cohorts is not entirely obvious when only annual productivity is considered. We detect, however, that older cohorts do outperform younger cohorts if we look at cumulative instead of annual productivity. Comparing AER-equivalent publications accumulated at the end of 6 years after graduation, we observe a gradual downward shift across cohorts. In this section we focus again on annual productivity of Ph.D.s. Our aim is to formally flesh out productivity differences across cohorts which manifest themselves clearly at a cumulative level but which are not as obvious at the annual level when only descriptive tools are used. The goal is to distinguish life cycle and cohort effects in the annual productivity measures. As Figures 2-6 reveal, an average Ph.D.'s annual publication productivity follows a distinct hump-shaped life cycle. The number of AER-equivalent publications achieved at any given time is affected both by the location on life cycle after graduation and by cohort specific effects.

A. Estimation Results and the Ellison Effect

We estimate a pooled Tobit model of annual research productivity where we treat the dependent variable as a latent variable. Our dependent variable is the annual research output measured as the number of AER-equivalent publications for a given individual at a given time (which is zero for nearly 60% of all observations in the raw data). Explanatory variables include time polynomials, dummy variables for the graduation year, and a dummy variable to indicate whether the graduate is from a top-30 department. The dummy variables for the graduation year are the key variables of interest as they allow us to test the null hypothesis that younger cohorts are less productive than older ones.

Annual research productivity of individual i at time t is the dependent variable, with time measured as "years after graduation." A third degree polynomial in time captures the life-cycle pattern of research productivity. All individuals' publication records from the first until the 6th year after graduation are covered because our aim is to compare cohorts within their "tenure-track" period which corresponds to approximately the first 6 years after graduation. This leaves us with a total of 42,924 observations. (18)

Cohort effects are captured by dummy variables for graduation years that span from 1986 to 1999, and individuals with the same year of graduation are described as belonging to the cohort for that year. Observations extend to graduates in the year 2000, but the last dummy is dropped to avoid collinearity between the cohort dummies. Thus, the marginal effect of a given cohort dummy variable shows how the respective cohort performs relative to graduates of the 2000 cohort. The cohort dummies are not interacted with time polynomials: we are assuming the year of graduation affects the level of the life cycle and not its slope at different points in the life cycle. If a slowdown in the publication process is occurring over time, the coefficients on the cohort dummies should decrease in value as we move in time from 1986 to 1999. Our hypothesis is that the coefficients on the graduation year dummies, that is, the cohort effects, will be highest in the initial year and decline over time as publication lags continue their upward trend. The specification also includes Top30, a dummy variable that takes on the value 1 if individual i is a graduate of a top 30 Ph.D. program. This just shifts the conditional mean of the flow of publications but does not alter the shape of the life-cycle profile or the within-group year effects.
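Standard Python statistics libraries do not ship a ready-made Tobit, so the left-censored likelihood can be coded directly. The sketch below is our reconstruction of the estimation problem under the specification just described, with a hypothetical design-matrix layout; it is not the authors' code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_tobit(y, X):
    """MLE for a Tobit left-censored at zero: y* = X @ b + e, y = max(y*, 0)."""
    def negloglik(params):
        b, log_sigma = params[:-1], params[-1]
        sigma = np.exp(log_sigma)  # reparameterize to keep sigma positive
        xb = X @ b
        ll = np.where(
            y > 0,
            norm.logpdf(y, loc=xb, scale=sigma),  # uncensored observations
            norm.logcdf(-xb / sigma),             # probability mass at zero
        )
        return -ll.sum()

    start = np.zeros(X.shape[1] + 1)
    return minimize(negloglik, start, method="BFGS")

# X would stack: a constant, t, t^2, t^3 (years since graduation),
# cohort dummies for 1986-1999 (2000 omitted), and the Top30 indicator;
# y is the annual number of AER-equivalent publications.
```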

Marginal effects for coefficients from four different Tobit models are reported in Table 6. The four Tobit models differ by their dependent variables. Column (1) reports the marginal effects from regressing annual number of AER-equivalent publications adjusted for coauthorship (where credit from publication is equally shared among all coauthors) on life-cycle effects, cohort effects, and a dummy variable capturing the ranking of the graduate institution. We use the significance of point estimates for cohort effects to determine whether a cohort outperforms another cohort in terms of research productivity.

[FIGURE 7 OMITTED]

Because one might claim that the coauthor-adjusted number of AER-equivalent publications is the conventional measure for research productivity that is used in most cases, and because we have been focusing on this particular productivity measure in the previous sections of the paper, we put this measure under the spotlight in this section. However, we run the same Tobit model for three other productivity measures: (a) the annual number of AER-equivalent publications without adjustment for coauthorship where each coauthor gets full credit for the publication (column (2)), (b) the annual number of AER-equivalent publications where AER-equivalence is based on a discrete ranking (19) of journals (column (3)), and (c) the total number of annual publications without any quality weights (column (4)).

Evidently, cohort effects decrease from older to younger cohorts, which means that the predicted difference in the annual number of publications is higher between the 2000 cohort and any cohort from the late 1980s than between the 2000 cohort and any cohort from the late 1990s. This observation is robust across all four measures of research productivity that we use in Table 6. In order to better visualize the pattern revealed by the cohort effects of graduates over 14 years, Figure 7 plots the cohort effects from columns (1), (2), and (3) in Table 6.

All three show very similar trends. Differences in productivity between the 2000 cohort and other cohorts are more pronounced when we assign full credit to each coauthor instead of discounting for coauthorship. It is evident that there is a downward shift that begins with sharp declines in the late 1980s but seems to settle into a steady-state pattern in the late 1990s. In the last few years, the annual flow differences across cohorts are too small numerically to be statistically distinguishable. Thus, most of the evidence of the slowdown comes in the first half of the cohort sample. The year 1986 seems somewhat of an outlier, so we discount that evidence, but subsequent nearby years are higher than later years by a statistically significant amount.

Marginal effects of the Tobit model are evaluated around the conditional mean of the dependent variable and show how the conditional mean is affected by the life-cycle effects, cohort effects, or the institution dummy. In order to give a better sense of the magnitude of the marginal effects, we provide conditional and unconditional means of the respective productivity measure at the bottom of Table 6. Owing to the nonlinearity of the Tobit model and our extensive use of dummy variables in this estimation, only limited intuition can be gained from studying the numerical values of the marginal effects. Instead of analyzing their quantitative aspects, we focus on their sign and significance. The dummy variable for graduates of top 30 departments is found to be highly significant, which confirms our earlier discussions and descriptive findings about the performance differences between graduates of top 30 and non-top 30 departments.

Results in Table 6 show that different ways of measuring research productivity yield different results in terms of statistical significance: the 2000 cohort's research productivity cannot be statistically distinguished from that of any cohort after 1991 when we use the annual number of AER-equivalent publications with full credit to each coauthor (column (2)) as the measure of research productivity. On the other hand, when we use the annual number of total publications (column (4)) instead, the picture changes: in this case, the 2000 cohort is significantly outperformed by almost all other cohorts, except for the 1997 and 1998 cohorts.

In order to compare cohorts' research productivity pairwise, we make use of the significance levels of cohort effects. This can be done by comparing the point estimate of the marginal effect of a graduation year dummy to the 95% confidence interval of another graduation year dummy's marginal effect. If the point estimate lies to the right of the 95% confidence interval of the other cohort's marginal effect, then we say that the latter cohort is outperformed by the former one. A more rigorous method, which we employ in our analysis below, is to run separate Tobit models, each of which takes one of the 15 cohorts as the base. The signs and significance levels of the cohort effects then reveal whether the base cohort is outperformed by other cohorts or outperforms them, and at exactly what significance level. For this analysis we measure research productivity in terms of the annual number of AER-equivalent publications first; then we discuss other alternatives.
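Procedurally, this amounts to refitting the model once per base cohort and reading off one-sided p values for the cohort dummies. A sketch follows; it uses OLS via statsmodels as a stand-in for the Tobit purely to keep the example short, and the column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

def pairwise_matrix(df: pd.DataFrame, cohorts, alpha=0.05):
    """'+' in cell (row, col) when cohort `row` significantly outperforms `col`."""
    out = pd.DataFrame("", index=cohorts, columns=cohorts)
    for base in cohorts:
        fit = smf.ols(
            "output ~ t + I(t**2) + I(t**3) + top30"
            f" + C(cohort, Treatment(reference={base}))",
            data=df,
        ).fit()
        for other in cohorts:
            if other == base:
                continue
            term = f"C(cohort, Treatment(reference={base}))[T.{other}]"
            coef, p_two_sided = fit.params[term], fit.pvalues[term]
            if coef > 0 and p_two_sided / 2 < alpha:  # one-sided comparison
                out.loc[other, base] = "+"
    return out
```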

Results of the pairwise comparisons of cohorts' research productivity are summarized in Table 7. A "+" sign indicates that members of the cohort indicated in the row statistically significantly outperform members of the cohort indicated in the column in terms of research productivity.

If older cohorts outperform younger cohorts, the "+" signs should appear above the diagonal of the matrix in Table 7, and this is true of almost all comparisons. An average member of the 1986 cohort outperforms average members of all subsequent cohorts; the 1990, 1991, and 1992 graduates, on average, consistently outperform graduates from later years; and the 1989 graduates, on average, outperform only the 1997 and 1998 graduates. There are only two cases in which a younger cohort outperforms an older one: 1991 outperforms 1989, and 1999 outperforms 1997, both at the 10% significance level. Thus, after controlling for life-cycle effects, we reach the following conclusion: there is a significant decrease in Ph.D.s' annual publication productivity from the late 1980s until the mid-1990s. The performance of graduates from 1995 to 2000 cannot be statistically distinguished, except that 1999 statistically significantly outperforms 1997.

As mentioned above, whether one cohort statistically significantly outperforms another depends on how we measure annual research productivity. To this end, we run the same procedure as in Table 7 using the other three productivity measures. Tables A5, A6, and A7 document pairwise comparisons of cohorts when we use AER-equivalent publications with full credit, AER-equivalent publications with a discrete quality ranking, and the number of publications without quality adjustment, respectively. As the Tobit estimates in Table 6 already suggest, the different productivity measures yield different point estimates and yet robust conclusions. In all three cases documented in Tables A5, A6, and A7, the "+" signs are almost always found above the diagonal of the respective matrix. Hence, using different productivity measures does not change our basic conclusions from Table 7: Ph.D.s' annual publication productivity drops significantly from the late 1980s until the mid-1990s, and the research productivities of the 1995 to 2000 cohorts cannot be statistically distinguished, except in a few cases.

Research productivity differences across cohorts are most obvious when we use the number of total publications to measure research productivity, as shown in Table A7. However, we doubt that this is a reliable measure, because quality is an essential part of measuring and evaluating research output. Our other three productivity measures integrate publication quality and thus establish an AER-equivalence for annual research productivity. Comparing these three measures, differences across cohorts are more pronounced when we use the annual number of AER-equivalent publications calculated from a discrete journal ranking. Because journals are grouped into tiers in this ranking, a shift of publications across tiers generates a larger and, more importantly, discontinuous downward shift in measured research productivity than would occur under a continuous journal ranking. The results in Table A6 suggest that younger cohorts may be publishing significantly less in top-tier journals and more in second-tier or third-tier journals than older cohorts. The results in Table 7, based on the annual number of AER-equivalent publications calculated from a continuous journal ranking, yield fewer statistically significant productivity differences in the pairwise comparisons than the discrete ranking does. Finally, when we assign each coauthor full credit for a publication rather than discounting for coauthorship (while retaining the continuous ranking), differences across cohorts become even less pronounced. This is in line with our earlier finding that younger cohorts tend to coauthor more than older cohorts. Increased coauthorship may simply reflect a shift in social norms in the profession, or it may be a strategy to get around the Ellison effect. Either way, it does not appear to save younger cohorts from being outperformed by older ones, as Table A5 shows.
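To make the weighting schemes concrete, the sketch below computes AER-equivalent publication counts for an invented two-paper record under the continuous ranking, once with full credit and once with equal credit to each coauthor. The journal weights are taken from Table A3; the record itself is fabricated purely for illustration.

    import pandas as pd

    # Continuous journal weights from Table A3 (subset).
    weights = {
        'American Economic Review': 1.000,
        'Journal of Political Economy': 0.652,
        'Economic Inquiry': 0.060,
    }

    # An invented six-year publication record for one graduate.
    record = pd.DataFrame({
        'journal':   ['Journal of Political Economy', 'Economic Inquiry'],
        'n_authors': [2, 1],
    })

    record['w'] = record['journal'].map(weights)
    full_credit = record['w'].sum()                          # coauthorship ignored
    equal_credit = (record['w'] / record['n_authors']).sum() # weight split evenly
    print(full_credit, equal_credit)  # 0.712 vs. 0.386

A discrete ranking would instead map each journal to a tier-level conversion rate before summing, so a paper slipping from one tier to the next moves the measure by a discrete step rather than sliding continuously.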

B. Research Productivity within and across Quintiles

As shown in Section IV.A, the distribution of research productivity among Ph.D.s is extremely skewed: the top 5% of Ph.D.s produce about 40%, and the top 20% about 80%, of all publications in their respective cohorts. Similar skewness shows up across all cohorts. Our analysis so far has compared cohorts based on their average performance. The extreme skewness of the productivity distribution within cohorts, however, suggests that comparing the output of different productivity percentiles across cohorts would be interesting. To this end, we rank all Ph.D.s within a given cohort by the number of AER-equivalent publications (20) they have achieved at the end of the 6th year after graduation. We then compare the annual productivity across cohorts of Ph.D.s in the third quintile range (between the 40th and 60th percentiles), then those in the second quintile range (between the 60th and 80th percentiles), and finally those in the top quintile range. We run pooled Tobit estimations with the same specification as above on these three quintile ranges. (21) Marginal effects for the top three quintile ranges are reported in Table 8.
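A minimal sketch of the quintile assignment, assuming a data frame cum6 with one row per graduate, a cohort column, and an aer_pubs column holding cumulative AER-equivalent publications at the end of year 6 (the names are ours, for illustration only):

    import pandas as pd

    def assign_quintiles(cum6: pd.DataFrame) -> pd.DataFrame:
        """Rank graduates within each cohort by cumulative 6-year output,
        then cut the percentile ranks into five equal ranges."""
        out = cum6.copy()
        out['pct'] = out.groupby('cohort')['aer_pubs'].rank(pct=True)
        # Label 1 = top quintile range (P80-P100), as in Tables 8-10.
        out['quintile'] = pd.cut(out['pct'], bins=[0, .2, .4, .6, .8, 1],
                                 labels=[5, 4, 3, 2, 1])
        return out

Ranking on cumulative rather than annual output is what prevents a productive but volatile graduate from drifting across ranges from year to year (see footnote 21); the pooled Tobit is then re-estimated separately on each of the top three ranges.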

Interestingly, the positive impact of having graduated from a top 30 department, which is statistically significant for average members of cohorts (see Table 6), is statistically significant only for graduates in the top quintile range; its marginal effect is insignificant for the second and third quintile ranges. This suggests that the less productive graduates of top departments are not significantly different from typical graduates of any Ph.D. program. For all three quintile ranges, the cohort effects display a decreasing trend as we move from older to younger cohorts. Pairwise comparisons of cohorts for the second and first quintile ranges are reported in Tables 9 and 10, respectively.

Although pairwise comparisons of cohorts within the top three quintile ranges yield results broadly similar to the pairwise comparison of cohorts' average performers (see Table 7), they reveal an interesting pattern. Comparing the research productivity of all publishing Ph.D.s in the second quintile of their respective cohorts, we observe that the 1986, 1987, and 1988 cohorts outperform most of the younger cohorts, and the 1995, 1997, 1998, and 2000 cohorts are outperformed by most of the older cohorts (see Table 9). Table 10 shows how the productivity of the first quintiles compares across cohorts: the 1990, 1991, and 1992 cohorts dominate almost every cohort after 1994, and the 1994 cohort dominates every cohort after 1996. Thus, the region above the diagonal of the "pairwise comparison matrix" in Table 10 displays more "+" signs than that in Table 9.

Comparing cohorts' research productivity by productivity quintile reveals an interesting dimension of the Ellison effect: the higher the quintile we compare across cohorts, the more evident the dominance of older cohorts over younger cohorts and the chronological ordering of cohorts in publication productivity become. This means the top performers of each cohort are hit most severely by the publication bottleneck, that is, by the Ellison effect. It suggests, in turn, that the top performers of the youngest generation are likely to fall furthest short of productivity expectations formed on the basis of the historical publishing environment, while middle and lower performers may not look much different from their predecessors. Top departments in particular should be aware of these facts when evaluating junior faculty for tenure.

VI. CONCLUSION

Ellison (2002) documents that most journals today require more than double the time they required 30 years ago to evaluate a submitted paper. It is only natural to wonder how this longer "time to build" production process for published manuscripts affects younger Ph.D.s, and in particular whether younger cohorts perform significantly worse than older cohorts in terms of research output. Promotions, job offers, and tenure decisions in academia are based on an individual's publication record, which is compared not only with those of cohort peers but also with those of older cohorts, because institutions are concerned with maintaining their established standards. Assuming that younger cohorts are no less able and no less diligent than older cohorts, a downward trend in publication records as cohorts get younger has important policy implications for the economics Ph.D. job market.

Reconstructing the journal publication records of 14,271 graduates of U.S. and Canadian Ph.D.-granting economics departments from 1986 to 2000, we find strong evidence of a productivity decrease as we move from older to younger cohorts. There is a clear downward shift that begins with sharp declines in the late 1980s and seems to settle into a steady-state pattern in the late 1990s. In the last few years of the sample, the annual flow differences across cohorts are too small to be statistically distinguishable. Looking at the cumulative number of AER-equivalent publications reached at the end of 6 years after graduation, we see convincing evidence of the expected productivity decline for both top 30 and non-top 30 departments. Thus, unless we believe that recent graduates are fundamentally of poorer quality, the same quality of tenure candidate is significantly less productive today than 10 or 15 years ago.

If we instead use the number of AER-equivalent pages as the measure of productivity, the drop-off in productivity described above is not apparent. This is consistent with Ellison's (2002) documentation of the increasing length and decreasing number of published papers. It also exposes a significant methodological question about the best way to measure the productivity of departments, graduate programs, and individual scholars: productivity patterns over time look different depending on whether the number of papers or the number of pages is chosen as the basis of comparison. We argue that what the profession values when granting tenure, giving raises, or making senior hires is the number of lines on a CV and the quality of the research papers on those lines. It is much harder to distill this into a number of AER-quality-weighted pages, and we suspect that this is seldom attempted in practice.

APPENDIX

[FIGURE A1 OMITTED]
TABLE A1
Number of Ph.D.s in Economics by Data Source

Year AEA Hasselback Overlap Total

1986 264 227 61 425
1987 597 216 95 714
1988 787 196 94 883
1989 953 230 147 1,035
1990 947 164 107 1,001
1991 905 178 122 956
1992 928 155 106 970
1993 1,074 173 110 1,132
1994 1,021 182 122 1,077
1995 1,025 170 109 1,078
1996 955 155 104 1,002
1997 935 167 107 990
1998 981 178 113 1,040
1999 866 182 106 936
2000 969 181 110 1,032

Note: The overlap indicates the number of Ph.D.s common to
the AEA and Hasselback data.

TABLE A2
Number of Publications

Year Number

1985 9,918
1986 9,872
1987 9,918
1988 10,552
1989 10,767
1990 11,254
1991 11,905
1992 13,108
1993 13,492
1994 14,374
1995 15,825
1996 17,692
1997 18,385
1998 19,869
1999 20,818
2000 21,835
2001 22,271
2002 21,991
2003 23,510
2004 25,618
2005 25,976
2006 19,722

TABLE A3
Journal Weights Relative to the American Economic
Review

Journal Index

 1. American Economic Review 1.000
 2. Econometrica 0.968
 3. Journal of Political Economy 0.652
 4. Journal of Economic Theory 0.588
 5. Quarterly Journal of Economics 0.581
 6. Journal of Econometrics 0.549
 7. Econometric Theory 0.459
 8. Review of Economic Studies 0.452
 9. JBES 0.384
10. Journal of Monetary Economics 0.364
11. Games and Economic Behavior 0.355
12. Journal of Economic Perspectives 0.343
13. Review of Economics and Statistics 0.280
14. European Economic Review 0.238
15. JEEA 0.238
16. International Economic Review 0.230
17. Economic Theory 0.224
18. Journal of Human Resources 0.213
19. Economic Journal 0.207
20. Journal of Public Economics 0.198
21. Journal of Economic Literature 0.188
22. Economics Letters 0.187
23. Journal of Applied Econometrics 0.166
24. JEDC 0.145
25. Journal of Labor Economics 0.128
26. JEEM 0.119
27. RAND Journal of Economics 0.114
28. Scandinavian Journal of Economics 0.107
29. Journal of Financial Economics 0.099
30. OBES 0.084
31. Journal of International Economics 0.078
32. Journal of Mathematical Economics 0.076
33. JEBO 0.071
34. Social Choice and Welfare 0.069
35. AJAE 0.062
36. IJGT 0.061
37. Economic Inquiry 0.060
38. World Bank Economic Review 0.057
39. Journal of Risk and Uncertainty 0.056
40. Journal of Development Economics 0.055
41. Land Economics 0.051
42. IMF Staff Papers 0.051
43. Canadian Journal of Economics 0.051
44. Public Choice 0.050
45. Theory and Decision 0.049
46. Economica 0.046
47. Journal of Urban Economics 0.044
48. IJIO 0.043
49. JLEO 0.041
50. Journal of Law and Economics 0.039
51. National Tax Journal 0.039
52. Journal of Industrial Economics 0.039
53. Journal of Economic History 0.038
54. Oxford Economic Papers 0.037
55. Journal of Comparative Economics 0.034
56. World Development 0.032
57. Southern Economic Journal 0.031
58. Explorations in Economic History 0.030
59. Economic Record 0.029
60. Journal of Banking and Finance 0.026
61. Contemporary Economic Policy 0.024
62. Journal of Population Economics 0.024
63. JFQA 0.021
64. JITE 0.020
65. Applied Economics 0.020

Notes: Journal of Business and Economic Statistics (JBES),
Journal of the European Economic Association (JEEA), Journal
of Economic Dynamics and Control (JEDC), Journal of
Environmental Economics and Management (JEEM), Oxford
Bulletin of Economics and Statistics (OBES), Journal of Economic
Behavior and Organization (JEBO), American Journal of Agricultural
Economics (AJAE), International Journal of Game Theory (IJGT),
International Journal of Industrial Organization (IJIO), Journal of
Law, Economics, and Organization (JLEO), Journal of Financial
and Quantitative Analysis (JFQA), Journal of Institutional and
Theoretical Economics (JITE).

TABLE A4
Top 30 Economics Departments in the United States and Canada

Rank Ordered by Faculty Productivity

1. Harvard University
2. University of Chicago
3. University of Pennsylvania
4. Stanford University
5. MIT
6. UC-Berkeley
7. Northwestern University
8. Yale University
9. University of Michigan
10. Columbia University
11. Princeton University
12. UCLA
13. New York University
14. Cornell University
15. University of Wisconsin-Madison
16. Duke University
17. Ohio State University
18. University of Maryland
19. University of Rochester
20. University of Texas, Austin
21. University of Minnesota
22. University of Illinois
23. UC-Davis
24. University of Toronto
25. University of British Columbia
26. UC-San Diego
27. University of Southern California
28. Boston University
29. Pennsylvania State University
30. Carnegie Mellon University

Rank Ordered by Productivity of Ph.D.s

1. MIT
2. Princeton University
3. Harvard University
4. University of Rochester
5. California Institute of Technology
6. Yale University
7. Northwestern University
8. Carnegie Mellon University
9. University of Chicago
10. UC-San Diego
11. University of Pennsylvania
12. Stanford University
13. University of Toronto
14. University of Western Ontario
15. University of Minnesota
16. Brown University
17. University of British Columbia
18. Columbia University
19. SUNY, Stony Brook
20. UCLA
21. University of Iowa
22. UC-Berkeley
23. University of Virginia
24. Duke University
25. Queen's University
26. University of Wisconsin-Madison
27. University of Michigan
28. Johns Hopkins University
29. New York University
30. McMaster University

Rank Quality Index (a) Publishing Ph.D.s (b)

1. 0.801 71%
2. 0.741 81%
3. 0.694 70%
4. 0.674 90%
5. 0.602 74%
6. 0.596 67%
7. 0.578 70%
8. 0.544 63%
9. 0.522 60%
10. 0.502 85%
11. 0.487 61%
12. 0.477 73%
13. 0.421 65%
14. 0.401 74%
15. 0.3842 61%
16. 0.3841 68%
17. 0.354 74%
18. 0.353 55%
19. 0.329 38%
20. 0.323 49%
21. 0.317 68%
22. 0.316 64%
23. 0.296 54%
24. 0.290 62%
25. 0.271 63%
26. 0.270 61%
27. 0.265 54%
28. 0.262 56%
29. 0.260 48%
30. 0.251 65%

Note: Departments that are ranked top 30 in only one of the two
rankings are in italics.

(a) "Quality Index" is the average value of AER-equivalent
publications per research-active graduates at the end of 6 years after
graduation.

(b) "Publishing Ph.D.s" is the ratio (percentage) of graduates who
have at least one publication within 6 years after graduation to total
number of graduates (from corresponding economics departments in
"Ordered by Productivity of Ph.D.s" column) between 1986 and 2000.

TABLE A5
Comparison across Cohorts: AER-Equivalent Publications with Full
Credit

 1987 1988 1989 1990 1991 1992 1993 1994

1986 + + + + + + + +
1987 + + + + + +
1988 + (+) (+) + +
1989
1990
1991 (+) + (+)
1992
1993
1994
1995
1996
1997
1998
1999

 1995 1996 1997 1998 1999 2000

1986 + + + + + +
1987 + + + + + +
1988 + + + + + +
1989 +
1990 (+) (+) + +
1991 + + + + + +
1992 + (+) + +
1993 (+)
1994 +
1995
1996
1997
1998
1999 (+)

Notes: "+" indicates the row cohort outperformed the column cohort at
the 5% level of significance, and "(+)" indicates the row cohort
outperformed the column cohort at the 10% level of significance.

TABLE A6
Comparison across Cohorts: AER-Equivalent Publications (Discrete
Quality Ranking)

 1987 1988 1989 1990 1991 1992 1993 1994

1986 + + + + + + +
1987 + + + + + +
1988 + (+) + + +
1989
1990 (+)
1991 (+) +
1992
1993
1994
1995
1996
1997
1998
1999

 1995 1996 1997 1998 1999 2000

1986 + + + + + +
1987 + + + + + +
1988 + + + + + +
1989 + + +
1990 + + + + + +
1991 + + + + + +
1992 + + + + +
1993 + + +
1994 + + + +
1995
1996
1997
1998
1999 (+) +

Notes: "+" indicates the row cohort outperformed the column cohort at
the 5% level of significance, and "(+)" indicates the row cohort
outperformed the column cohort at the 10% level of significance.

TABLE A7
Comparison across Cohorts: Number of Publications (Equal Credit
without Quality Weights)

 1987 1988 1989 1990 1991 1992 1993 1994

1986 + + + + + +
1987 + (+) + + +
1988 + + +
1989
1990 (+)
1991 (+) (+)
1992
1993
1994
1995
1996
1997
1998
1999

 1995 1996 1997 1998 1999 2000

1986 + + + + + +
1987 + + + + + +
1988 + + + + + +
1989 + + +
1990 + + + + + +
1991 + + + + + +
1992 + + + + (+) +
1993 + + +
1994 + + + +
1995 (+)
1996 (+) +
1997
1998
1999 (+) +

Notes: "+" indicates the row cohort outperformed the column cohort at
the 5% level of significance, and "(+)" indicates the row cohort
outperformed the column cohort at the 10% level of significance.


doi: 10.1111/j.1465-7295.2012.00480.x

ABBREVIATIONS

AEA: American Economic Association

AER: American Economic Review

CV: Curriculum Vitae

IBSS: International Bibliography of Social Sciences

MIT: Massachusetts Institute of Technology

REFERENCES

Collins, J. T., R. G. Cox, and V. Stango. "The Publishing Patterns of Recent Economics Ph.D. Recipients." Economic Inquiry, 38(2), 2000, 358-67.

Combes, P.-P., and L. Linnemer. "Where Are the Economists Who Publish? Publication Concentration and Rankings in Europe Based on Cumulative Publications." Journal of the European Economic Association, 1(6), 2003, 1250-308.

--. "Inferring Missing Citations, A Quantitative Multi-Criteria Ranking of All Journals in Economics." GREQAM Working Paper No. 2010-28, 2010.

Coupe, T. "Revealed Performances: Worldwide Rankings of Economists and Economics Departments, 1990-2000." Journal of the European Economic Association, 1, 2003, 1309-45.

Davis, J. C., and D. M. Patterson. "Determinants of Variations in Journal Publication Rates of Economists." The American Economist, 45(1), 2001, 86-91.

Ellison, G. "The Slowdown of the Economics Publishing Process." Journal of Political Economy, 110(5), 2002, 947-93.

Fafchamps, M., M. J. van der Leij, and S. Goyal. "Scientific Networks and Co-Authorship." University of Oxford Economics Discussion Paper No. 256, 2006.

Goyal, S., M. J. van der Leij, and J. Moraga-Gonzalez. "Economics: An Emerging Small World." Journal of Political Economy, 114(2), 2006, 403-32.

Hutchinson, E. B., and T. L. Zivney. "The Publication Profile of Economists." Journal of Economic Education, 26(1), 1995, 59-79.

Kalaitzidakis, P., T. P. Mamuneas, and T. Stengos. "Rankings of Academic Journals and Institutions in Economics." Journal of the European Economic Association, 1(6), 2003, 1346-66.

Koenker, R. "Quantile Regression for Longitudinal Data." Journal of Multivariate Analysis, 91, 2004, 74-89.

Levin, S. G., and P. E. Stephan. "Research Productivity over the Life Cycle: Evidence for Academic Scientists." American Economic Review, 81(1), 1991, 114-31.

Rauber, M., and H. W. Ursprung. "Life Cycle and Cohort Productivity in Economic Research: The Case of Germany." German Economic Review, 9(1), 2008, 431-56.

Stock, W. A., and J. J. Siegfried. "Where Are They Now? Tracking the Ph.D. Class of 1997." Southern Economic Journal, 73(2), 2006, 472-88.

JOHN P. CONLEY, MARIO J. CRUCINI, ROBERT A. DRISKILL and ALI SINA ONDER *

* We wish to thank John Siegfried and the AEA staff, particularly Elizabeth Braunstein, for making JEL publication data available to us, as well as Jonathan Lee and Peter Kozciuk for excellent research assistance. We thank Ted Bergstrom, Andreea Mitrut, Laurent Simula, seminar participants at Vanderbilt University, Uppsala University, University of Gothenburg, and University of Hohenheim, session participants in the Public Economic Theory Meeting 2011, the Canadian Economic Association 2011 and the European Economic Association and the Econometric Society European 2011 Meetings, and two anonymous referees for helpful comments. We take responsibility for any errors that remain.

Conley: Professor, Department of Economics, Vanderbilt University, Nashville, TN 37027. Phone 1-615-322-2920, Fax 1-615-343-8495, E-mail j.p.conley@vanderbilt.edu

Crucini: Associate Professor, NBER and Department of Economics, Vanderbilt University, Nashville, TN 37235. Phone 1-615-322-7357, Fax 1-615-343-8495, E-mail mario.j.crucini@vanderbilt.edu

Driskill: Professor, Department of Economics, Vanderbilt University, Nashville, TN 37235. Phone 1-615-322-2128, Fax 1-615-343-8495, E-mail robert.a.driskill@vanderbilt.edu

Onder: Assistant Professor, Department of Economics and Uppsala Center for Fiscal Studies (UCFS), Uppsala University, Uppsala, SE 75313, Sweden. Phone 46-18-471-5116, Fax 46-18-471-1478, E-mail alisina.onder@nek.uu.se

(1.) This percentage is also found for a small sample of graduates from 1997 by Stock and Siegfried (2006), for a sample of graduates from 1969-1988 by Hutchinson and Zivney (1995), and for a sample of European economists by Combes and Linnemer (2003).

(2.) We use the ranking from Coupe (2003) to determine the top 30 U.S. and Canadian economics departments. His ranking appears highly correlated with other popular rankings.

(3.) Moreover, there is no upward drift in the fraction not publishing in the first 6 years, as might be the case if young researchers were becoming increasingly discouraged and abandoning peer-reviewed research.

(4.) Later in the paper we go into more detail about the rankings we used and the various robustness checks we carried out with respect to this.

(5.) Ellison (2002) documents that s has increased over time at a greater rate than the number of high-quality journal pages published. While he speculates on the causes of this change, he does not offer conclusive evidence of a particular cause. Thus, we treat this as exogenous.

(6.) Ellison finds that articles increased in page length about 33%, from about 18 to 24 pages.

(7.) Note that the increase in manuscript length that Ellison documents implies that if the annual page budgets of journals have not increased (which is substantially the case, at least at the top journals), then the acceptance rate, a, should fall. Add to this that the number of submissions appears to be rising each year and that, for the few journals that actually report acceptance rates, the reported rates have fallen, and there is strong reason to believe that acceptance rates at economics journals are generally lower in more recent periods.

(8.) Fafchamps, van der Leij, and Goyal (2006) and Goyal, van der Leij, and Moraga-Gonzalez (2006) construct a similar sample of published papers, but focus on the nature of the coauthor network rather than changes in publication rates across cohorts.

(9.) Combes and Linnemer (2010) group journals into six quality groups: AAA, AA, A, B, C, and D. We use the AER conversion rate of 0.12 for all journals in categories B, C, and D.

(10.) Whenever we refer to AER-equivalent pages or AER-equivalent publications in this paper, the AER equivalence is obtained by employing the indices provided by Kalaitzidakis, Mamuneas, and Stengos (2003) unless otherwise noted.

(11.) Kalaitzidakis, Mamuneas, and Stengos (2003) provide impact-, age-, and self-citation-adjusted quality coefficients for academic economics journals. They adjust these quality coefficients to account for differences in journals' numbers of pages and articles. To ensure complete convertibility between our "AER-equivalent pages" and "AER-equivalent publications" throughout the analysis, we use the page-adjusted quality coefficients as the journal index. Because the correlation between the page-adjusted and article-adjusted quality coefficients is very high (0.94), the cost of using only page-adjusted coefficients is very low compared to the gain of complete convertibility between "AER pages" and "AER publications" for interpretive purposes. See Kalaitzidakis, Mamuneas, and Stengos (2003) for a detailed discussion of the interrelation of page-adjusted and article-adjusted quality coefficients.

(12.) Ph.D. cohorts defined in this way are also used in subsequent parts of the paper, except for the regression analysis.

(13.) See for example Davis and Patterson (2001).

(14.) Coupe (2003) ranks economics departments worldwide by the productivity of their faculty. There are only two economics departments within Coupe's top 30 that are outside the United States and Canada: the London School of Economics and the University of Oxford. Because our dataset consists of U.S. and Canadian economics departments' Ph.D.s, we drop the London School of Economics and the University of Oxford and include Coupe's numbers 31 and 32 (both in the United States) instead.

(15.) See for example Collins et al. (2000).

(16.) See for example Combes and Linnemer (2003) and Rauber and Ursprung (2008).

(17.) One should note that graduating cohorts have been approximately the same size from 1989 to 2000. The suggested increase in publications by younger cohorts in lower quality outlets might be a consequence of increased publication lags, of an increased supply of economics Ph.D.s from non-U.S. and non-Canadian institutions (hence increased competition from the rest of the world), or of a mix of the two.

(18.) Publication records for most cohorts extend beyond 6 years. If we used all available years for each cohort in our pooled Tobit model, we would be using 81,051 observations. Estimating the model on these observations and correcting for the loss in the time dummies' efficiency due to the unbalanced panel, we obtain qualitatively the same results as with only 6 years as far as the significance and signs of the time dummies are concerned.

(19.) Discrete ranking of journals is obtained from Combes and Linnemer (2010), as in the previous section.

(20.) These are calculated using continuous journal rankings and assigning equal share to each coauthor.

(21.) We determine quintile ranges based on cumulative productivity at the end of the 6th year after graduation because annual productivity is highly volatile: a highly productive graduate may fall into the lowest quintile range in some years. As a result, determining graduates' productivity quintiles in this manner and comparing the same quintile range across cohorts using marginal effects from the pooled Tobit regression proves more sensible than running quantile regressions on annual productivity (see, e.g., Koenker 2004).
TABLE 1
The Effect of Lags, Acceptance Rate, and
Manuscript Length on CVs

 Length of Vitae

 (1) (2) (3) (4)
 1-Year 2-Year Longer Low Acceptance
Year Delay Delay Manuscript Rate

1 0.60 0 0.45 0.36
2 1.28 0.60 0.96 0.80
3 2.03 0.80 1.52 1.30
4 2.18 1.48 2.11 1.87
5 3.65 1.84 2.74 2.50
6 4.52 2.58 3.40 3.18

TABLE 2
Intellectual Lorenz Curve

 Percent AER Pages

 1987 1990 1993 1996 1999

Top 1% 13.3 14.5 16.2 13.7 15.7
Top 5% 42.7 45.7 44.2 43.1 45.5
Top 10% 61.7 64.4 62.8 62.9 63.7
Top 20% 81.6 82.5 81.7 82.1 82.7

 Percent AER Publications

 1987 1990 1993 1996 1999

Top 1% 11.9 13.2 14.1 12.7 12.9
Top 5% 37.5 39.4 39.6 39.2 40.1
Top 10% 56.6 58.0 57.5 58.1 58.2
Top 20% 78.1 78.4 78.1 78.7 79.0

TABLE 3
Productivity Percentiles at the End of the 6th Year after Ph.D.

 AER-Equivalent Pages

Percentiles 1987 1990 1993 1996 1999

99th 70.0 57.2 69.6 57.3 65.1
95th 33.9 28.0 27.1 26.7 24.3
90th 20.5 14.5 15.9 15.0 15.0
85th 13.6 9.4 10.6 9.4 9.7
80th 8.4 6.2 7.3 6.2 6.3
75th 6.2 4.0 5.3 4.0 4.3
Median 1.1 0.9 1.0 0.9 0.9

 AER-Equivalent Publications

Percentiles 1987 1990 1993 1996 1999

99th 3.87 3.06 3.23 2.45 2.48
95th 2.00 1.48 1.33 1.28 1.22
90th 1.34 0.98 0.85 0.76 0.73
85th 0.99 0.62 0.61 0.52 0.51
80th 0.62 0.43 0.44 0.37 0.37
75th 0.45 0.31 0.30 0.26 0.26
Median 0.08 0.06 0.06 0.06 0.05

TABLE 4
Per Capita Output at the End of the 6th Year after Ph.D.

 Ph.D.s from Top 30

Equal Credit to Each Coauthor 1987 1990 1993 1996 1999

Total pages 58.0 52.8 56.7 54.2 51.7
Total publications 3.58 3.04 2.99 2.76 2.47
Pages per publication 16.2 17.4 19.0 19.6 20.9
AER pages 9.95 7.56 8.14 7.32 8.04
AER publications 0.61 0.43 0.41 0.36 0.37
Full credit to each coauthor
Total pages 82.0 76.5 82.9 81.5 82.3
Total publications 4.94 4.33 4.27 4.07 3.84
Pages per publication 16.6 17.7 19.4 20.0 21.4
AER pages 14.9 10.9 12.2 11.4 13.6
AER publications 0.89 0.62 0.6 0.56 0.61
Ratio of "Full" to "Equal"
 credit
AER pages 1.50 1.44 1.50 1.56 1.69
AER publications 1.46 1.44 1.46 1.56 1.65

 Ph.D.s from Non-Top 30

Equal credit to each coauthor 1987 1990 1993 1996 1999

Total pages 33.8 34.2 36.3 33.2 38.8
Total publications 2.57 2.38 2.36 2.0 2.16
Pages per publication 13.2 14.4 15.4 16.6 18.0
AER pages 2.42 2.29 2.33 1.88 2.07
AER publications 0.19 0.17 0.14 0.11 0.12
Full credit to each coauthor
Total pages 51.0 51.2 55.3 51.4 63.6
Total publications 3.83 3.57 3.54 3.09 3.49
Pages per publication 13.3 14.3 15.6 16.6 18.2
AER pages 4.04 3.7 3.9 3.04 3.55
AER publications 0.3 0.26 0.23 0.18 0.2
Ratio of "Full" to "Equal"
 credit
AER pages 1.67 1.62 1.67 1.62 1.71
AER publications 1.58 1.53 1.64 1.64 1.67

TABLE 5
Aggregate Cohort Output

 Total Publication Output

 1987 1990 1993 1996 1999

Top 30 Ph.D.s
Total publications 1988 2531 2792 2468 2075
AER (Combes) 772 877 964 843 756
AER (Kalaitzidakis) 340 359 387 325 313
Non-top 30 Ph.D.s
Total publications 1004 1494 1527 1228 1549
AER (Combes) 250 377 358 271 352
AER (Kalaitzidakis) 72 105 93 68 86
Top 30 Ph.D.s relative to
 non-top 30 Ph.D.s
Total publications 1.98 1.69 1.83 2.01 1.34
AER (Combes) 3.09 2.33 2.69 3.11 2.15
AER (Kalaitzidakis) 4.72 3.42 4.16 4.78 3.64

 "Signal-to-Noise" Ratio

 1987 1990 1993 1996 1999

Top 30 Ph.D.s
Total publications
AER (Combes) 0.388 0.347 0.345 0.342 0.364
AER (Kalaitzidakis) 0.171 0.142 0.139 0.132 0.151
Non-top 30 Ph.D.s
Total publications
AER (Combes) 0.249 0.252 0.234 0.221 0.227
AER (Kalaitzidakis) 0.072 0.070 0.061 0.055 0.056
Top 30 Ph.D.s relative to
 non-top 30 Ph.D.s
Total publications
AER (Combes)
AER (Kalaitzidakis)

Notes: The rows labeled Combes and Kalaitzidakis use AER-equivalent
weights from the Combes and Linnemer (2010) and the Kalaitzidakis,
Mamuneas, and Stengos (2003) studies to aggregate publications in each
cohort.

TABLE 6
Pooled Tobit Model-Marginal Effects

 (1) (2)

Top30 0.0208 *** 0.0306 ***
Life-cycle effects
t 0.0359 *** 0.0484 ***
[t.sup.2] -0.0054 *** -0.0066 ***
[t.sup.3] 0.00022 0.00023
Cohort effects
1986 0.0218 *** 0.031 ***
1987 0.0158 *** 0.0185 ***
1988 0.0121 *** 0.0125 ***
1989 0.004 0.0007
1990 0.0059 ** 0.0049
1991 0.0088 *** 0.0092 **
1992 0.0061 ** 0.0056
1993 0.003 -0.0001
1994 0.0041 * 0.0031
1995 0.00032 -0.002
1996 0.0005 -0.001
1997 -0.0023 -0.0064
1998 -0.0008 -0.0021
1999 0.0015 0.0009
E (y) 0.0472 0.0719
E (y|x, y > 0) 0.1865 0.2759
Observations 42924 42924

 (3) (4)

Top30 0.0375 *** 0.058 ***
Life-cycle effects
t 0.0771 *** 0.2152
[t.sup.2] -0.0116 *** -0.0319 ***
[t.sup.3] 0.0005 * 0.0014 *
Cohort effects
1986 0.0454 *** 0.1168 ***
1987 0.0361 *** 0.093
1988 0.0282 *** 0.0846 ***
1989 0.011 ** 0.041 ***
1990 0.0189 *** 0.0645 ***
1991 0.0208 *** 0.0672
1992 0.0164 *** 0.06 ***
1993 0.0105 ** 0.044 ***
1994 0.0139 *** 0.0509
1995 0.0042 0.0242 *
1996 0.0064 0.0325 **
1997 -0.0003 0.0093
1998 0.0013 0.0136
1999 0.0084 * 0.0336
E (y) 0.1276 0.406
E (y|x, y > 0) 0.3542 0.9737
Observations 42924 42924

* Significant at 10%; ** significant at 5%; *** significant at 1%.

TABLE 7

Comparison across Cohorts: AER-Equivalent Publications with
Equal Credit

 1987 1988 1989 1990 1991 1992 1993 1994

1986 + + + + + + +
1987 + + + + + +
1988 + + + + +
1989
1990
1991 (+) + (+)
1992
1993
1994
1995
1996
1997
1998
1999

 1995 1996 1997 1998 1999 2000

1986 + + + + + +
1987 + + + + + +
1988 + + + + + +
1989
1990 + + + + (+) +
1991 + + + + + +
1992 + + + + (+) +
1993
1994 + + (+)
1995
1996
1997
1998
1999 (+)

Notes: "+" indicates the row cohort outperformed the column cohort
at the 5% level of significance, and "(+)" indicates the row
cohort outperformed the column cohort at the 10% level of significance.

TABLE 8
Marginal Effects for Top Three Quintiles

 P40-P60 P60-P80 P80-P100

Top30 -0.0004618 0.0003573 0.0201221

Life-cycle effects

t 0.0075388 *** 0.0230354 *** 0.0923817 ***
[t.sup.2] -0.0014535 *** -0.0036719 ** -0.0106398
[t.sup.3] 0.0000899 ** 0.0001701 0.0001321

Cohort effects

1986 0.0034581 *** 0.0138791 *** 0.1060625 ***
1987 0.0048398 *** 0.0146142 *** 0.0588261 ***
1988 0.0039493 *** 0.0113483 *** 0.046402 ***
1989 0.0015597 ** 0.003843 0.0173163
1990 0.0026034 *** 0.0064763 ** 0.0239718 **
1991 0.0025138 *** 0.0064635 ** 0.0373514 ***
1992 0.0028776 *** 0.005512 ** 0.0287218 **
1993 0.0016973 ** 0.002337 0.0104932
1994 0.0024074 *** 0.0073632 *** 0.020059 **
1995 0.0016029 ** 0.0004858 0.0065441
1996 0.0022369 *** 0.0033282 0.004813
1997 0.0007187 -0.0005396 -0.0068057
1998 0.0002142 0.0008649 -0.000829
1999 0.0018777 *** 0.0026219 -0.0029431

* Significant at 10%; ** significant at 5%; *** significant
at 1%.

TABLE 9
Publication Comparisons across Cohorts (P60-P80)

 1987 1988 1989 1990 1991 1992 1993 1994 1995

1986 + + + + + + +
1987 + + + + + + +
1988 + + + +
1989
1990 +
1991 +
1992 +
1993
1994 +
1995
1996
1997
1998
1999

1987 1996 1997 1998 1999 2000

1986 + + + + +
1987 + + + + +
1988 + + + + +
1989
1990 + + +
1991 + + +
1992 + + +
1993
1994 + + +
1995
1996
1997
1998
1999

Note: A "+" indicates the row cohort outperformed the column cohort at
the 5% level of significance.

TABLE 10
Publication Comparisons across Cohorts (P80-P100)

 1987 1988 1989 1990 1991 1992 1993 1994 1995

1986 + + + + + + + + +
1987 + + + + + +
1988 + + + + +
1989
1990
1991 + +
1992 +
1993
1994
1995
1996
1997
1998
1999

 1996 1997 1998 1999 2000

1986 + + + + +
1987 + + + + +
1988 + + + + +
1989 + +
1990 + + + + +
1991 + + + + +
1992 + + + + +
1993
1994 + + + +
1995
1996
1997
1998
1999

Note: A "+" indicates the row cohort outperformed the column cohort at
the 5% level of significance.