THE PUBLISHING PATTERNS OF RECENT ECONOMICS PH.D. RECIPIENTS.
JEFFERY T. COLLINS, RICHARD GUY COX, and VICTOR STANGO [*]
We present publication data for recent graduates of 50 economics
Ph.D. programs. The data show that publishing output is highly
concentrated among graduates of the top programs; the top three
programs, for example, generate more than 25% of aggregate publishing
output in our sample. We use the data to construct a set of program
rankings based on both per capita and aggregate graduate publication and
a comparison of faculty performance to graduate performance. The
graduate/faculty comparison reveals that programs may be identical in
the output of their faculties but starkly different in the output of
their graduates. (JEL A10, A14)
I. INTRODUCTION
This article examines the publishing patterns of recent graduates
from 50 highly regarded economics doctoral programs. The purpose of this
study is to inform faculty, university administrators, and prospective
and current graduate students who continually seek information
concerning the relative performance of economics departments. Most
existing studies rank departments by the publication or citation record
of their faculty. Such articles provide faculty and administrators with
a comparison of institutions based on output measures that may, for
example, inform an existing faculty member about the relative strength
of her or his colleagues. They may be less informative to other readers,
such as prospective graduate students, departments hiring freshly minted
Ph.D.'s, and administrators evaluating doctoral programs.
Prospective students, search committees, and administrators are
likely to be concerned more with graduate performance than with faculty
performance. Although faculty-based rankings are useful, it is not clear
that faculty success translates into graduate success. The argument
could be made that students benefit from exposure to prolific
researchers. Conversely, students may have little contact with faculty
whose time is primarily allocated to research and thus gain little from
their experience. To the extent that the former is true across
institutions, one would expect rankings based on student output to mimic
faculty rankings using the same criterion. To the extent the latter is
true, deviation from faculty-based rankings would be expected. Knowing
about such deviations would be important both to prospective graduate
students and employers.
In this article, we present a number of performance measures based
on the publication output of programs' graduates. We record
publications in a set of 36 journals through the summer of 1997 for a
set of 3,206 students who received their degrees from 50 graduate
programs from 1987 to 1992. For each program in our data set, we can
provide a fairly comprehensive set of statistics describing the
publishing output of its graduates.
The data provide three pieces of information. First, the data allow
us to describe general patterns of publication for a large cohort of
recent Ph.D.'s in economics. Second, the data yield rankings based
on per capita graduate performance for our set of 36 journals and a
subset of five top journals. We also present rankings based on aggregate
output using both sets of journals. Our results suggest that the
inequalities between top programs and lower-ranked programs are
pronounced--more so in terms of graduate output than in terms of faculty
output. Last, we rank programs based on the relative performance of
their graduates to their faculty. We are able to make this comparison
because our performance measures are comparable to those used in
faculty-based rankings. We find that, even for programs with very
similar levels of faculty output, there may be vast differences in
graduate output.
II. RELEVANT LITERATURE
Previous departmental rankings vary primarily in the time period
examined, the number of departments examined, and the set of journals
used. [1] For our purposes, there are two papers that are important to
mention. First, Laband [1985] is one of the few papers to use graduate
publications and citations to construct rankings. Second, Scott and
Mitias [1996] is the most recent and complete ranking of departments by
faculty publication, and we use their method of calculating output in
order to obtain some comparability of results. The Scott and Mitias
article ranks the top 240 universities in the U.S. from 1983 to 1993 and
uses two sets of journals to determine both per capita and aggregate
output of faculty from each institution.
III. DATA AND DESCRIPTIVE STATISTICS
Our sample contains 3,206 observations from 50 institutions
spanning the period 1987-92. [2] Each observation contains data for one
Ph.D. graduate in economics and includes information on the granting
institution, the year granted, and a complete listing of publications in
a sample of 36 journals. [3] For each publication, we have also compiled
the number of pages in the article and the number of authors given
credit for the publication. Of the 3,206 observations, we classify 1,091
as placed in a domestic academic institution; this allows us to
construct rankings using the subset of graduates who have taken domestic
academic positions. [4]
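To fix ideas, the contents of a single observation can be sketched as follows; the field names are ours and are meant only to summarize what each record contains, not to reproduce the actual coding of the data set.

```python
# Illustrative sketch of one observation in the data set (field names are
# ours; the study's actual coding scheme is not reproduced here).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Publication:
    journal: str      # one of the 36 sampled journals
    pages: float      # article length, in journal pages
    n_authors: int    # number of authors credited on the article

@dataclass
class Graduate:
    institution: str                  # Ph.D.-granting program
    year_granted: int                 # degree year, 1987-92
    placed_domestic: bool             # placed in a U.S. academic institution
    publications: List[Publication] = field(default_factory=list)
```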
We measure publication output as do Scott and Mitias [1996] in
their faculty-based rankings. Our sets of journals are identical to
theirs: a large set of 36, and a smaller "top five"; we
present data based on both of these sets. [5] We adjust for journal page
size using their weights. As they do, we divide pages in each article by
n, the number of authors. Our sample period (1987-92) is also comparable
to theirs (1984-93). These similarities are convenient because they
allow us to compare our graduate publication rankings to their faculty
publication rankings using a consistent metric and during roughly the
same period.
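As a rough sketch of the resulting measure, each article contributes its page count, scaled by a journal-specific size weight and divided by the number of authors. The weights shown below are placeholders rather than the actual Scott and Mitias [1996] values.

```python
# Sketch of the per-graduate output measure described above. PAGE_WEIGHTS
# stands in for the Scott and Mitias [1996] journal size weights, which are
# not reproduced here; unlisted journals default to a weight of 1.0.
PAGE_WEIGHTS = {"American Economic Review": 1.0}  # illustrative values only

def weighted_pages(publications):
    """publications: iterable of (journal, pages, n_authors) tuples."""
    total = 0.0
    for journal, pages, n_authors in publications:
        weight = PAGE_WEIGHTS.get(journal, 1.0)
        total += weight * pages / n_authors   # split page credit across authors
    return total

# Example: a 20-page solo article and a 30-page article with two coauthors
# yield 20 + 15 = 35 weighted pages (with unit weights).
print(weighted_pages([("American Economic Review", 20, 1), ("Econometrica", 30, 2)]))
```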
Our fifty institutions are a "top 50" based on Scott and
Mitias [1996] and Hansen [1991]. This is an effort to adjust the Scott
and Mitias rankings by accounting for the perception-based quality tiers
presented in Hansen. Obviously, the selection of a "top 50" is
a subjective exercise. However, it is important to note that the purpose
of this study is not to provide a list of top 50 programs but to rank
our selection of the top 50. We also note that, although the selection
of a top 50 in terms of faculty publications is difficult, graduate
output is very highly concentrated at the top schools--it is unlikely
that we are leaving out a school that is having a large impact on the
profession through its graduates' publications.
Tables IA and IB present some descriptive statistics for the
sample, using data for those graduates who are domestically placed.
Table IA presents some summary statistics regarding publication in both
the top 36 and the top five journals from Scott and Mitias [1996]. As a
general indicator of how output varies with program quality, we group
our 50 programs into four tiers and present statistics stratified by
these tiers. [6] For each row, we present the total number of
Ph.D.'s, the average number of pages per graduate, the average
number of publications per Ph.D., the average article length, and the
average number of authors per publication. The Number of Publications
variable is measured as the number of publications in which the graduate
receives publication credit. [7]
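The distinction between the two counting conventions can be illustrated with a small hypothetical example: a coauthored article counts as one full publication in the Number of Publications variable, but its pages are divided among its authors in the page measure.

```python
# Hypothetical example of the two counting conventions: publication counts
# give full credit for coauthored work, while pages are divided by the
# number of authors (journal size weights omitted for brevity).
pubs = [
    {"pages": 18.0, "n_authors": 1},   # solo-authored article
    {"pages": 24.0, "n_authors": 2},   # coauthored article
]
n_publications = len(pubs)                                      # 2 publications
pages_credit = sum(p["pages"] / p["n_authors"] for p in pubs)   # 18 + 12 = 30 pages
```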
As the statistics indicate, graduates of Tier 1 programs are
slightly more likely to receive domestic academic placements than any
other group; 39% of these graduates are domestically placed. There is no
clear trend in placement percentage for the remaining tiers. The trend
in publication output is much clearer: the top programs produce
graduates who publish significantly more than do graduates of lower-tier
programs. This trend is more pronounced among the pool of domestically
placed graduates and even more pronounced in the set of top five
journals. For example, in the top 36 journals, domestically placed Tier
1 graduates publish five to six times as many pages as domestically
placed Tier 4 graduates, but in the top five journals they publish more
than 15 times as many pages. There are some other systematic differences
across these tiers. Average article length is longer for graduates of
higher-tier programs. Also, graduates of lower-tier programs are more
prone to coauthorship than graduates of higher-tier programs; for
domestically placed graduates, the average number of authors per
publication is 1.56 for Tier 1 graduates and 1.88 for Tier 4 graduates.
Table IB shows cumulative distributions of number of publications
and pages per capita. These figures show some stark differences in these
distributions. Consider that a plurality of Tier 1 graduates publish
four or more papers, whereas a plurality of graduates from all other
tiers publish no papers at all. These figures reinforce the view that
the top schools truly dominate the profession in terms of producing
graduates who publish.
IV. PERFORMANCE MEASURES
Table II uses the publication data to rank our 50 programs based on
pages per capita in the top 36 journals. We restrict the calculations to
include only the students we classify as placed domestically;
calculating rankings based on the average over all graduates had little
effect on the order of rankings. [8] The table also includes pages per
capita and rank based on publication in Scott and Mitias's top five
journals. The last columns show the number of domestically placed
graduates from the program, the total number of graduates in our sample
from that program, and the percentage of graduates who are domestically
placed. This gives an idea of the size and placement patterns of
programs.
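A minimal sketch of how such a per capita ranking can be assembled appears below; it assumes each record carries a program name, a domestic-placement flag, and a weighted page total as defined earlier, and the records themselves are hypothetical.

```python
# Sketch of the per capita ranking: average weighted pages per domestically
# placed graduate, by program. Records and values are hypothetical.
from collections import defaultdict

def per_capita_ranking(records):
    """records: iterable of (program, placed_domestic, weighted_pages)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for program, placed_domestic, pages in records:
        if placed_domestic:               # restrict to domestic academic placements
            totals[program] += pages
            counts[program] += 1
    averages = {p: totals[p] / counts[p] for p in counts}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
```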
M.I.T. tops this ranking; on average, its graduates publish more
than 64 weighted pages during the 5-10 years for which they are in the
sample. Carnegie-Mellon and the University of Washington might be viewed
as the biggest surprises in the top ten; they place fifth and eighth.
Note, however, that both of these programs have low percentages of
students who are domestically placed, relative to other programs in the
top ten. These are also small programs, as measured by representation in
our sample; they do well in per capita terms but fare poorly in any
ranking based on aggregate program output. Similar things can be said
about the California Institute of Technology and Iowa, which are tenth
and thirteenth. In general, the ranking based on publication in the top
five journals is similar, although both the University of Washington and
the University of Pennsylvania are schools with much lower top five
rankings than top 36 rankings.
In Table III, we present rankings based on aggregate program output
in our sample. [9] Relative to the per capita rankings, which are
evaluations of a program's representative domestically placed
graduate, we view this as a measure of the total impact that a
program's graduates have on the profession. The order of this
ranking is not surprising. M.I.T., Chicago, Princeton, Harvard,
Berkeley, and Stanford are the top six. What is surprising in this
ranking is the dominance of M.I.T. graduates. Their aggregate output in
the top 36 journals is nearly 30% higher than the output of the
second-ranked group of graduates and alone makes up more than 10% of all
publication by graduates in these journals during our sample. Their
dominance in the set of top five journals is even more striking; their
total is nearly 60% higher than that for the second-ranked program and
makes up nearly 20% of total output in these journals.
In general, aggregate output is highly concentrated at the top
schools; these programs tend to be bigger, and their graduates publish
more than graduates of smaller programs do. The four-program
concentration ratio for output in the top 36 journals is 0.32, and it is
0.45 for output in the top five journals. Aggregate output drops
significantly outside of the top 12 programs in the top 36 journals and
drops significantly outside of the top five programs in the top five
journals. Again, we come away with the view that very few programs
consistently produce graduates who publish at elite levels.
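The four-program concentration ratio is the standard share-of-the-top-four calculation; a minimal sketch follows, with aggregate program totals passed in as plain numbers.

```python
# Sketch of the four-program concentration ratio: the share of aggregate
# graduate output accounted for by the four largest programs.
def concentration_ratio(program_totals, k=4):
    """program_totals: iterable of aggregate weighted pages, one per program."""
    totals = sorted(program_totals, reverse=True)
    grand_total = sum(totals)
    return sum(totals[:k]) / grand_total if grand_total else 0.0
```

Applied to the aggregate page totals in Table III, this calculation reproduces the 0.32 and 0.45 figures cited above.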
V. GRADUATE/FACULTY COMPARISON
We now turn to another question. How does graduate performance, as
we measure it, compare to faculty performance? In order to get at this
issue, Table IV compares the publication data from Table II to the
faculty-based measures in Table IV of Scott and Mitias [1996]. Using
differences in rank number would skew the comparison, so we use the
difference between studentized graduate and faculty pages per capita;
this normalization transforms the pages per capita values into variables
with mean zero and variance equal to one. [10]
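In notation introduced here only for exposition (it does not appear in the original tables), the comparison measure for program i can be written as follows.

```latex
% Notation ours: g_i and f_i are graduate and faculty pages per capita for
% program i; means and standard deviations are taken across the 50 programs.
d_i \;=\; \frac{g_i - \bar{g}}{s_g} \;-\; \frac{f_i - \bar{f}}{s_f}
```

Because each studentized term has mean zero and unit variance across the 50 programs, programs are compared in standard-deviation units rather than by rank order or by the raw ratio of graduate to faculty output.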
The results of this table show that even within groups of programs
with similar faculty output, graduate output may vary quite widely. For
example, the University of Washington, Cornell, and Michigan State have
roughly equal faculty output, but Washington's graduates produce
significantly more output than graduates of these other programs. Note
that M.I.T. is ranked relatively low using this measure; this reflects
the fact that its faculty output is an extreme positive outlier, rather
than any inability to produce successfully publishing graduates.
VI. DISCUSSION AND CONCLUSION
The purpose of this study is twofold. First, it provides useful
information by ranking programs based on an alternative metric: graduate
publication, rather than faculty publication. This metric is likely to
be the more relevant one for prospective graduate students or programs
hiring new faculty members. Our results suggest that, although
departments consisting of prolific faculty tend to produce prolific
students, there are significant differences between peer institutions in
terms of graduate performance. As an illustration, there are nine
programs on our list whose faculty output in Scott and Mitias [1996] is
between 20 and 25 pages per capita. The differences in graduate output
among these schools are striking; the mean is 15, the standard deviation
is ten, and the average levels of graduate output range from a high of
35 pages per capita to a low of zero pages per capita. For a prospective
graduate student, this information might be an important influence on
the choice of a program. For a university hiring a new faculty member,
consider that the difference in Scott and Mitias [1996] between the
fiftieth and the sixtieth ranked school based on aggregate faculty
output is roughly 120 pages. [11] Even from programs with similar
faculty rankings, one could easily produce this difference by replacing
four faculty members from peer schools with low graduate output with
four from schools with higher graduate output.
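The arithmetic behind this illustration, using only the figures just cited, is roughly the following.

```latex
% Illustrative arithmetic only: with peer programs whose graduate output
% ranges from roughly 0 to 35 weighted pages per capita, four hires drawn
% from the high end (about 30 pages per capita) rather than the low end
% (about 0) of that range account for roughly
4 \times (30 - 0) \;=\; 120 \ \text{weighted pages},
% about the aggregate faculty-output gap between the fiftieth- and
% sixtieth-ranked departments in Scott and Mitias [1996].
```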
A second contribution of this article is that it provides some
general information regarding the publishing patterns of a large cohort
of academics. For example, we see systematic patterns in the
distribution of publications, the number of publications, the average
length of papers, and the extent of coauthorship across different groups
of graduates. These patterns raise some interesting questions. Why is
it, for example, that lower-tier graduates tend to collaborate more than
higher-tier graduates? Why is average article length so much higher for
higher-tier graduates?
There are other issues that we do not address here but should be
considered. One potentially important issue is how much of the
publishing that takes place early in one's career is the
student's dissertation and immediate follow-up papers. If this
represents a significant portion of the publications in our study, then
the dissertation advisor's role would be a large factor in our
findings. We may also be measuring faculty quality rather than solely
students' training and career potential. Perhaps the results
indicate cultural differences across schools in terms of the input
offered by advisors or differences in funding for graduate research. It
may be important to examine the life cycle of the typical economist in
terms of placement and publication; this may vary across cohorts of
different vintage and across program qualities within a given cohort.
Our results suggest some behavioral differences between graduates of
different programs; it would be interesting to examine the sources and
consequences of these differences more fully.
(*.) Thanks to the referees and editor for constructive comments on
an earlier draft.
Stango: Assistant professor, Department of Economics, University of
Tennessee
Collins: Professor, College of Business Administration, Lincoln
Memorial University, Harrogate
Cox: Graduate student, Department of Economics, University of
Tennessee, Knoxville
(1.) Several authors have examined economics program performance
since the mid-1970s. The studies typically vary by choice of time period
and journal set (e.g., Graves et al. [1982], Hirsch et al. [1984], and
Tschirhart [1989]), although some occasionally use alternative output
measures such as citation or origin of journal referees (e.g., Gearity
and McKenzie [1978] and Dean [1976]). Finally, the consistency of
various alternative measures was examined and determined to be robust in
Stolen and Gnuschke [1977].
(2.) Our list of Ph.D. recipients comes from the Econlit database
(which lists Ph.D. dissertations in addition to journal publications).
We have an average of 540 observations per year, which covers nearly the
entire population of Ph.D.'s that are granted annually in
economics.
(3.) The data exclude graduates of business or policy programs.
Thus, for example, graduates of University of Chicago Graduate School of
Business are not included in the data set; only graduates of the
University of Chicago Department of Economics are included.
(4.) We classified graduates as placed based on the following
criteria. First, if the "affiliation" field of the
graduate's most recent publication (in Econlit) showed a domestic
academic placement, we classified the graduate as placed domestically.
As a follow-up, we checked each name against the 1998 Prentice-Hall
Guide to Economics Faculty and the online American Economic Association
Directory of Members.
(5.) The Top 36 are American Economic Review, Econometrica,
Economic Inquiry, Economic Journal, Economica, Industrial and Labor
Relations Review, International Economic Review, Journal of Business,
Journal of Business and Economic Statistics, Journal of Development
Economics, Journal of Econometrics, Journal of Economic Dynamics and
Control, Journal of Economic History, Journal of Economic Theory,
Journal of Finance, Journal of Financial Economics, Journal of Labor
Economics, Journal of Human Resources, Journal of International
Economics, Journal of International Money and Finance, Journal of Law
and Economics, Journal of Law, Economics, and Organization, Journal of
Legal Studies, Journal of Monetary Economics, Journal of Money, Credit,
and Banking, Journal of Political Economy, Journal of Public Economics,
Journal of Regional Science, Journal of Urban Economics, National Tax
Journal, Public Choice, Quarterly Journal of Economics, RAND, Review of
Economic Studies, Review of Economics and Statistics, and Southern
Economic Journal. The Top 5 are: American Economic Review, Econometrica,
Journal of Political Economy, Quarterly Journal of Economics, and the
Review of Economics and Statistics.
(6.) Our tiers (listed at the bottom of Table I) are based on the
rankings presented in the National Research Council's 1993
publication Research Doctorate Programs in the United States.
Carnegie-Mellon appears nowhere in the National Research Council
ranking, which may be a clerical omission or a failure by
Carnegie-Mellon to report survey data. We nonetheless include it in our
third tier, because it appears in the top 30 in nearly every faculty- or
reputation-based ranking of economics departments. We note that this
grouping is intended only as a general classification in order to allow
us to aggregate the descriptive statistics, and has no relationship per
so to our rankings presented below.
(7.) We also constructed rankings using number of publications
rather than number of pages, but we do not present them here because
doing so left the rankings unchanged. These rankings are available on
request.
(8.) The two measures would be different based on cross-sectional
variation in (a) the average output of those students who are
domestically placed and (b) the percentage of students who are
domestically placed. To show how these differences might be important,
we include both the average output of a program's domestically
placed students and the percentage who are domestically placed.
(9.) Thus, this measure includes graduates who are placed in
domestic academic positions.
(10.) To see why using rank order differences would skew the
results, consider a program that is ranked first in faculty output. The
difference between its graduate rank and its faculty rank can never be
negative, whereas the difference can take on positive or negative values
for a program ranked twenty-fifth in faculty output. We use the
difference in normalized output, rather than the raw ratio of graduate
to faculty output, because the dispersion in graduate output dominates
the dispersion in faculty output. A ranking based on the raw ratio would
replicate the rankings based on graduate output.
(11.) See Scott and Mitias [1996, Table III, p. 387].
REFERENCES
Dean, J. W. "An Alternative Rating System for University
Economic Departments." Economic Inquiry, March 1976, 146-53.
Gearity, D. M., and R. B. McKenzie. "The Ranking of Southern
Economics Departments: New Criterion and Further Evidence."
Southern Economic Journal, October 1978, 608-14.
Graves, Philip E., James R. Marchand, and Randall Thompson.
"Economics Departmental Rankings: Research Incentives, Constraints,
and Efficiency." American Economic Review, December 1982, 1131-41.
Hansen, W. Lee. "The Education and Training of Economics
Doctorates: Major Findings of the Executive Secretary of the American
Economic Association's Commission on Graduate Education in
Economics." Journal of Economic Literature, September 1991,
1054-87.
Hirsch, Barry T., Randall Austin, John Brooks, and J. Bradley Moor.
"Economics Departmental Rankings: Comment." American Economic
Review, September 1984, 827-33.
Laband, David N. "An Evaluation of 50 Ranked Economics
Departments--By Quantity and Quality of Faculty Publications and
Graduate Student Placement and Research Success." Southern Economic
Journal, July 1985, 216-40.
National Research Council. Research Doctorate Programs in the
United States. Washington, D.C.: National Research Council, 1993.
Scott, Loren C., and Peter M. Mitias. "Trends in Rankings of
Economics Departments in the U.S.: An Update." Economic Inquiry,
April 1996, 378-400.
Stolen, J. D., and J. E. Gnuschke. "Reflections on Alternative
Rating Systems for University Economics Departments." Economic
Inquiry, April 1977, 277-82.
Tschirhart, John. "Ranking Economics Departments in Areas of
Expertise." Journal of Economic Education, Spring 1989, 199-222.
TABLE I
Publishing Statistics

A. Level of Output, Average Article Length, and Coauthorship, by Tier of
Graduate Program of Origin, Domestically Placed Graduates

Top 36 Journals

        Domestically Placed
        Graduates             Pages/    Number of       Average    Authors/
Tier    (% of Total Grads)    Capita    Publications    Length     Paper
1       286 (39%)             44.70     3.82            17.50      1.56
2       372 (31%)             22.60     2.24            16.00      1.66
3       243 (29%)             14.99     1.59            15.49      1.79
4       190 (35%)              8.23     1.08            11.98      1.88
All     1,091                 24.20     2.31            15.89      1.68

Top 5 Journals

        Pages/    Number of       Average    Authors/
Tier    Capita    Publications    Length     Paper
1       14.13     1.24            17.12      1.72
2        4.86     0.49            16.49      1.83
3        2.31     0.26            13.25      1.69
4        0.86     0.15            10.33      2.10
All      6.03     0.58            15.84      1.77

B. Distribution of Number of Publications and Pages Published in Top 36
Journals, by Tier of Graduate Program, Domestically Placed Graduates

Number of Publications

Tier    >0      >1      >2      >3
1       87%     74%     56%     44%
2       69%     51%     43%     24%
3       62%     49%     39%     12%
4       49%     39%     34%      7%
All     69%     55%     46%     24%

Pages

Tier    >0      >20     >50     >80
1       87%     67%     36%     18%
2       68%     42%     14%      4%
3       62%     24%      7%      2%
4       49%     11%      4%      1%
All     68%     68%     68%      7%

Note: "Top 36" and "Top 5" are the journal sets of Scott and Mitias [1996].

Tier 1: Chicago, Harvard, M.I.T., Princeton, Stanford, Yale. Tier 2: Brown,
Cal. Tech, U.C. Berkeley, U.C.L.A., U.C.S.D., Columbia, Cornell, Duke,
Michigan, Minnesota, Northwestern, U. Penn., Rochester, Wisconsin. Tier 3:
Boston U., Carnegie-Mellon, Illinois, Iowa State, Iowa, Johns Hopkins,
Maryland, Michigan State, U.N.C., N.Y.U., Ohio State, Pittsburgh, Texas
A & M, Virginia, Wash. U., U. Wash. Tier 4: U.C. Davis, U.C.S.B., Florida,
Georgetown, Houston, Indiana, N.C. State, Massachusetts, Penn State,
Purdue, Rutgers, U.S.C., SUNY Stonybrook, Texas.
TABLE II
Ranking of 50 Top Graduate Programs by Pages per Domestically Placed
Graduate, for Students Graduating 1987-92

                       Top 36            Top 5             Graduates
School                 Rank    Pages     Rank    Pages     Placed    Total    Percentage
M.I.T.                  1      64.1       1      21.0      54        144      38
Princeton               2      54.7       2      15.8      44         95      46
Northwestern            3      52.0       6      12.6      17         53      32
Chicago                 4      43.5       3      15.4      50        152      33
Carnegie-Mellon         5      41.3       4      13.4       4         14      29
Yale                    6      38.4       7      10.7      42        101      42
Harvard                 7      36.0       5      12.7      46        132      35
U. Washington           8      35.0      25       2.4      15         57      26
Brown                   9      34.3      12       7.5      14         41      34
Cal. Tech              10      34.1      16       4.3       6         12      50
U. Penn.               11      29.8      31       1.6      30        121      25
Stanford               12      29.5      10       8.2      50        116      43
Iowa                   13      26.2      18       3.7       7         14      50
U.C.L.A.               14      26.1      15       4.5      28        122      23
Berkeley               15      25.2       8       9.0      48        149      32
Virginia               16      22.8      26       2.2      11         35      31
Columbia               17      22.3       9       8.5      32        103      31
Minnesota              18      21.5      14       4.8      40        155      26
Michigan               19      20.7      21       3.2      49        127      39
N.Y.U.                 20      19.5      11       7.8      13         57      23
Johns Hopkins          21      19.5      13       5.6      10         47      21
Penn State             22      19.4      24       2.4      13         33      39
Wash. U.               23      17.7      28       2.1      21         48      44
U.C.S.B.               24      17.2      20       3.2      16         39      41
Rochester              25      16.9      19       3.6      10         24      42
U.N.C.                 26      16.6      30       1.7      13         31      42
Illinois               27      16.4      27       2.2      20         50      40
Wisconsin              28      15.2      17       4.3      36        106      34
Texas A & M            29      14.5      33       1.4      12         47      26
U.C.S.D.               30      14.4      37       1.0      17         31      55
Maryland               31      14.0      22       3.2      25         72      35
Texas                  32      13.6      23       2.6      15         47      32
Indiana                33      11.3      50       0.0      26         51      51
Michigan State         34      11.0      50       0.0      27         87      31
U.S.C.                 35      10.5      50       0.0       3         19      16
Duke                   36      10.2      36       1.0      24         52      46
Cornell                37      10.0      40       0.4      21         90      23
Boston U.              38       8.5      29       2.0       7         84       8
Florida                39       7.2      41       0.4      13         26      50
Ohio State             40       6.6      35       1.0      24         81      30
Purdue                 41       6.2      50       0.0      29         89      33
U.C. Davis             42       6.0      32       1.5      14         38      37
Pittsburgh             43       4.7      50       0.0      16         63      25
N.C. State             44       4.7      38       0.7      13         49      27
Houston                45       3.5      39       0.6      10         21      48
Rutgers                46       2.9      50       0.0      18         38      47
Iowa State             47       2.5      34       1.2      18         63      29
U. Mass.               48       0.3      50       0.0      14         33      42
Georgetown             50       0.0      50       0.0       3         28      11
SUNY Stonybrook        50       0.0      50       0.0       3         32       9

"Rank" and "Pages" refer to the Top 36 and Top 5 journal sets; "Placed" is
the number of graduates placed in domestic academic positions, and
"Percentage" is the share of a program's graduates so placed.
Note: The sample includes publications between
January 1987 and summer (July) 1997.
TABLE III
Ranking of 50 Top Graduate Programs by Aggregate Page Total, for Students
Graduating 1987-92
Set of Journals:
Top 36 Top 5
School Rank Pages Rank Pages
M.I.T. 1 5,996 1 2,029
Chicago 2 4,648 3 1,284
Princeton 3 4,145 2 1,305
Harvard 4 3,132 4 1,133
Berkeley 5 2,897 5 909
Stanford 6 2,547 7 522
Michigan 7 2,454 12 309
Yale 8 2,401 6 717
Minnesota 9 2,399 9 377
U.C.L.A. 10 2,172 8 383
U. Penn. 11 1,885 16 174
Northwestern 12 1,804 11 324
U. Washington 13 1,357 24 99
Columbia 14 1,084 10 361
Brown 15 1,066 14 271
Wisconsin 16 1,010 15 254
N.Y.U. 17 848 18 165
Texas 18 751 13 272
Johns Hopkins 19 732 19 162
Illinois 20 692 34 55
Boston U. 21 679 17 170
Cornell 22 651 39 33
Maryland 23 649 20 127
Ohio State 24 640 25 98
Indiana 25 616 21 118
Rochester 26 601 27 83
Wash. U. 27 583 23 100
Michigan State 28 575 49 0
Texas A & M 29 562 29 74
Penn State 30 551 30 65
U.C.S.D. 31 549 37 44
Duke 32 527 38 44
Virginia 33 502 33 57
Purdue 34 502 46 4
U.N.C. 35 458 42 21
U.C.S.B. 36 441 26 96
Cal. Tech 37 391 35 46
Iowa 38 359 32 60
Rutgers 39 284 40 28
Pittsburgh 40 269 22 115
Carnegie-Mellon 41 266 28 82
SUNY Stonybrook 42 234 47 0
Iowa State 43 229 31 63
U.C.Davis 44 193 36 46
N.C. State 45 180 41 22
U.S.C. 46 141 43 10
Florida 47 140 44 9
Georgetown 48 109 48 0
Houston 49 69 45 5
U. Mass. 50 13 50 0
TABLE IV
Graduate/Faculty Output Comparison: Ranking by Difference between
Normalized Graduate and Normalized Faculty Pages per Capita

                      Output                   Normalized Score
School                Graduates    Faculty     Graduates    Faculty     Difference    Rank
U. Washington 35.0 24.7 1.02 -0.39 1.42 1
Wash. U. 17.7 13.5 -0.16 -1.35 1.19 2
Northwestern 52.0 42.1 2.18 1.11 1.07 3
N.Y.U. 19.5 17.8 -0.03 -0.98 0.94 4
U.N.C. 16.6 16.8 -0.23 -1.06 0.83 5
Columbia 22.3 21.4 0.16 -0.67 0.83 6
Iowa State 2.5 6.0 -1.19 -1.99 0.80 7
Chicago 43.5 39.9 1.60 0.92 0.68 8
Berkeley 25.2 25.5 0.36 -0.32 0.67 9
Johns Hopkins 19.5 22.8 -0.04 -0.55 0.51 10
Purdue 6.2 12.9 -0.94 -1.40 0.46 11
Stanford 29.5 31.6 0.65 0.21 0.44 12
Iowa 26.2 29.3 0.42 0.01 0.41 13
Carnegie-Mellon 41.3 41.3 1.45 1.04 0.41 14
Princeton 54.7 52.0 2.36 1.95 0.41 15
Rutgers 2.9 10.9 -1.17 -1.57 0.40 16
Illinois 16.4 22.5 -0.25 -0.57 0.33 17
Harvard 36.0 38.3 1.09 0.78 0.31 18
U. Mass. 0.3 10.7 -1.34 -1.59 0.24 19
Indiana 11.3 20.1 -0.59 -0.78 0.19 20
U.C.S.B. 17.2 25.5 -0.19 -0.32 0.12 21
Brown 34.3 39.3 0.97 0.87 0.11 22
Penn State 19.4 27.5 -0.04 -0.15 0.10 23
Texas A & M 14.5 23.9 -0.37 -0.45 0.08 24
Pittsburgh 4.7 17.1 -1.04 -1.04 0.00 25
U.C.L.A. 26.1 34.2 0.42 0.43 -0.01 26
Yale 38.4 44.0 1.25 1.27 -0.01 27
Cal. Tech. 34.1 41.0 0.96 1.01 -0.05 28
Michigan 20.7 30.8 0.05 0.14 -0.09 29
U.S.C. 10.5 23.1 -0.65 -0.52 -0.12 30
Wisconsin 15.2 28.2 -0.33 -0.09 -0.24 31
Michigan State 11.0 25.0 -0.61 -0.36 -0.25 32
Texas 13.6 27.2 -0.44 -0.17 -0.26 33
Virginia 22.8 35.0 0.19 0.50 -0.31 34
Cornell 10.0 25.9 -0.68 -0.28 -0.40 35
Florida 7.2 23.8 -0.87 -0.46 -0.41 36
Duke 10.2 27.5 -0.67 -0.15 -0.53 37
U.C.S.D. 14.4 31.3 -0.38 0.18 -0.56 38
Maryland 14.0 31.4 -0.41 0.19 -0.60 39
M.I.T. 64.1 71.5 3.00 3.62 -0.62 40
Georgetown 0.0 21.6 -1.36 -0.65 -0.71 41
Boston U. 8.5 28.8 -0.79 -0.03 -0.75 42
U. Penn. 29.8 46.3 0.66 1.47 -0.80 43
Ohio State 6.6 28.2 -0.91 -0.09 -0.83 44
U.C. Davis 6.0 28.9 -0.96 -0.03 -0.93 45
Minnesota 21.5 41.3 0.10 1.04 -0.93 46
Rochester 16.9 39.9 -0.21 0.92 -1.13 47
N.C. State 4.7 30.8 -1.04 0.14 -1.18 48
Houston 3.5 30.2 -1.13 0.09 -1.21 49
SUNY Stonybrook ... N/A ... ... ... ...
Note: "Normalized Score" transforms the per capita
figure into a variable with [mu] = 0 and [sigma] = 1.