Fishes, ponds, and productivity: student-advisor matching and early career publishing success for economics PhDs.
Hilmer, Michael J. ; Hilmer, Christiana E.
I. INTRODUCTION
Research productivity is an important concern for members of the
academe. Publishing success is important for individual departments
because more productive faculty increases the department's research
profile, which in turn increases the department's professional
reputation. As evidence, Thursby (2000) and Ehrenberg and Hurst (1998)
find that a department's reputation in the National Research
Council (NRC) ratings increases with published research, while Smyth
(1999) finds that pages published in the top 5 journals have a greater
impact than pages published in other journals. Publishing success is
important for individual faculty members because peer-reviewed
publications have long been shown to be economic currency for academic
economists (Bratsberg, Ragan, and Warren 2003; Moore, Newman, and
Turnbull 1998; Sauer 1988 among others). Research promise is important
for graduate students because students with the greatest promise for
productive careers are likely to receive the best initial job placements
(Buchmueller, Dominitz, and Hansen 1999; Krueger and Wu 2000; Long
1978). Indeed, according to a survey of hiring departments (Carson and
Navarro 1988), every single top 20 domestic economics program considered
the quality of one's research agenda to be of moderate to great
importance in the hiring decision.
An important question then is how to identify which factors are
associated with the likelihood of having a more productive research
career. To date, empirical studies have focused on the program from
which the student graduated (recent examples include Collins, Cox, and
Stango 2000; Davis, Huston, and Patterson 2001; Stock and Alston 2000).
This focus implicitly assumes that because only the very best students
become enrolled in elite programs, graduates of highly ranked programs
should hold the greatest promise for publishing success. (1) In support,
Coupe (2001) and Buchmueller, Dominitz, and Hansen (1999) find that
students graduating from top programs are more likely to publish in core
economics journals, while Hogan (1981) finds that students graduating
from programs with more active faculty publish more journal articles.
Before placing total faith in these results, however, members of
the academe should ask whether such findings hold across all points in
the graduate distribution or whether they are only accurate predictors
on average. Previously published data clearly suggest the latter. For
instance, Buchmueller, Dominitz, and Hansen (1999) find that 30% of Tier
1 graduates do not publish any academic articles within the first 9 yr
after PhD receipt, while 50% do not publish any top 50 articles (as
defined by Liebowitz and Palmer 1984). In other words, there is a
"large amount of unexplained variability" in the future
success of a program's graduates (Krueger and Wu 2000, 93).
Consequently, it is likely that significant overlap exists between the
performance of top program graduates and lesser program graduates, with
significant portions of the latter outperforming significant portions of
the former.
With this in mind, the question becomes: is there a readily
observable factor that is a less noisy predictor of a student's
likelihood for publishing success? We argue that there is. During a
student's postgraduate education, one of the most significant
influences is his or her dissertation advisor. We posit that for timing
reasons, advisors will possess additional insight into a student's
true research potential that was not available to the admission
committee when deciding whether to admit the student into the program.
Consequently, we expect a student's dissertation advisor to be a
more informed predictor of his or her early career productivity than the
program from which he or she receives the PhD. (2) In other words, we
believe that considering the identity of a student's dissertation
advisor in addition to the quality of the program from which the student
receives the PhD should help reduce the uncertainty inherent in
predicting the student's position within the early career
productivity distribution.
The current study is the first to address this possibility. Using
readily available data sources, we are able to uniquely identify a
student's primary dissertation advisor as well as his or her
graduate program, dissertation field, sex, domestic/international
status, first postgraduate job, and several different measures of his or
her early career research productivity. Based on Coupe's (2003)
global rankings of the top 1,000 economists, we are able to assign the
advisor's "ranking" within the profession. Combining
these factors, we are able to examine the relationship between the
ranking of a student's dissertation advisor and the student's
early career research productivity.
We analyze a sample of 1,892 individuals receiving economics PhDs
from top 30 programs during a 5-yr period in the early 1990s. Our main
finding is that, controlling for program quality, student-advisor match
is a significant predictor of early career research productivity,
especially for publications in top 36 economics journals. Additionally,
controlling for advisor rank significantly reduces the estimated
productivity differences due to program quality, ceteris paribus,
suggesting that much of the estimated productivity difference previously
attributed to differences in program quality might actually be explained
by differences in the student-advisor match. (3) Finally, simulations
based on our regression results suggest that potentially significant
overlaps exist in the cross-program productivity distribution, as we
find that while Tier 1 graduates with star advisors are statistically
more productive than everyone else, Tier 2 and Tier 3 graduates with
star advisors perform as well as Tier 1 graduates with lower ranked
advisors and Tier 2 and Tier 3 graduates with lower ranked advisors
perform as well as Tier 1 graduates with unranked advisors.
[FIGURE 1 OMITTED]
II. THEORETICAL BACKGROUND
Our central prediction is that students with more prominent
dissertation advisors will be more productive researchers in their early
careers than otherwise similar students with less prominent advisors. We
make this argument based on observations about the timing of the process
by which economics PhD students become matched with their dissertation
advisors as opposed to their PhD programs. We start by noting that
thousands of potential students apply to PhD programs in economics each
year. (4) Presumably, each program desires to enroll the best possible
class of entering PhD students. While several different factors might go
into the determination of the best class (for a detailed discussion, see
Attiyeh and Attiyeh 1997), Krueger and Wu (2000, 81) state that
"one important consideration is the eventual job placement and
professional success of their graduates." In a world with perfect
information about a student's research ability, students with the
highest potential for future success should be admitted to the best
programs. The information available when making admission decisions is
imperfect, however, as it is mostly based on the student's
standardized test scores, prior academic performance, and letters of
recommendation. As Cushing and McGarvey (2004) argue, such observable
measures of a student's potential likely vary little across
advanced-degree aspirants, as those desiring to further their educations
are mostly drawn from the top tail of the student distribution. In other
words, because nearly all applicants, especially to top programs, are
high achieving students with top references, there will be
"considerable uncertainty in forecasting which applicants will be
successful economists" (Krueger and Wu 2000, 93).
This uncertainty likely results in "errors" in the
admissions process, whereby many students who fail to publish in their
early career have their PhDs minted from top programs, while many
students who are relatively prolific in their early careers have their
PhDs minted by lesser programs. This overlapping distribution of early
career research productivity is clearly demonstrated in Figure 1 for the
data set we analyze below. Given that we analyze a sample of students
receiving PhDs from top 30 economics programs between 1990 and 1994, the
box plots should be interpreted as follows: the ends of each box mark
the number of articles published per year between PhD receipt and
December 2002 by the students at the 25th and 75th percentiles within
Tier 1, Tier 2, and Tier 3 programs, respectively; the line in the
middle of each box marks the median student; and the right-most
whisker marks the most productive student whose count does not qualify
as an outside value. As such, the box plots indicate that there is
significant overlap in the distributions of early career research
productivity across graduates from different program tiers. In other
words, it should be clear from Figure 1 that program rank alone is not a
sufficiently accurate predictor of a student's likelihood for early
career publishing success.
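As an illustration of the statistics underlying such a box plot (a sketch only, not the authors' code; the counts below are hypothetical, and the conventional 1.5 x interquartile-range rule for flagging outside values is assumed):

```python
import statistics

# Hypothetical per-year article counts for graduates of one program tier
# (illustrative numbers only, not the study's data).
counts = [0.0, 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 0.9, 1.2, 1.5, 3.0]

# Quartiles: the ends of the box sit at the 25th and 75th percentiles,
# and the line in the middle of the box sits at the median.
q1, median, q3 = statistics.quantiles(counts, n=4)
iqr = q3 - q1

# The whisker extends to the most extreme observation that is not an
# "outside value" (a point beyond 1.5 interquartile ranges of the box).
upper_fence = q3 + 1.5 * iqr
upper_whisker = max(x for x in counts if x <= upper_fence)

print(q1, median, q3, upper_whisker)
```

Under these conventions the most productive student (3.0 articles per year) is plotted as an outside value, and the whisker stops at the next student down, which is exactly the overlap pattern the cross-tier comparison in Figure 1 turns on.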
Consider now the process through which PhD students become matched
with their dissertation advisors. A student normally seeks out an
advisor only after he or she has completed significant amounts of
coursework, taken and passed preliminary exams, and potentially worked
for several terms as a research assistant. As such, by the time a
student formally requests that a given faculty member serve as his or
her advisor, the faculty member will have access to significant
additional information as to the student's likely research
potential. Because of this additional readily observable information, we
expect advisors to be much better informed as to a given student's
likelihood for future publishing success than the admission committee
was when deciding to admit the student into the program. As a result,
the signal provided by the student-advisor match should be less noisy
than the signal provided by the student-program match, and we would
expect students with more prominent advisors to be more productive in
their early careers than otherwise similar students with less prominent
advisors.
As with most educational outcomes, there are two potential reasons
why students matched with more prominent advisors might become more
productive researchers. The first is a human capital argument (Becker
1964). Namely, more prominent economists are more prominent because they
themselves are prodigious publishers. It is therefore possible that they
pass along some of their productive knowledge to their advisees. The
other is a signaling argument (Spence 1973) that by becoming paired with
a prominent advisor, graduate students are revealing themselves to
possess the characteristics that will make them successful publishers in
their early careers. We note that this may result from either advisors
screening out lower ability/less motivated students or lower
ability/less motivated students choosing to work with less demanding
faculty members. For the purposes of this study, it is not necessary to
endorse one of these explanations over the other, as they both predict
greater success for students working with highly ranked advisors.
Nonetheless, a very interesting question for future research might be
whether our predicted outcome is due to a signaling or human capital
effect.
III. DATA
A major innovation of this study is the construction of a
first-of-its-kind data set that matches economics PhD recipients to
their dissertation advisors, peer-reviewed publication records, graduate
programs, dissertation fields, sex, domestic/international status, and
first postgraduation jobs. The Dissertation Abstracts database
(published by ProQuest Information and Learning) contains extensive
information on more than 1.2 million dissertations accepted at
accredited North American educational institutions since 1861. In 1990,
the database started including the name of the student's
dissertation advisor for the vast majority of dissertations filed. (5)
We collected information on 1,892 dissertations filed in economics
fields between 1990 and 1994 for students graduating from top 30
economics programs and reporting the identity of their dissertation
advisor. To make sure that we do not include students writing on
economic topics but belonging to different academic disciplines, we
cross-reference our list with the "Doctoral Dissertations in
Economics Annual List" published each December in the Journal of
Economic Literature. The 5-yr time frame is somewhat arbitrary but is
chosen for the following reasons. We begin in 1990 because that is the
first year in which dissertation advisors are included in the majority
of student records. We collect data over multiple years to avoid any
single-year aberrations that might bias the results. (6) Finally, we cut
off the time frame in 1994 because tenure is usually granted after a
faculty member's sixth year, and therefore, at the time we
collected the data, we were observing the student's productivity up
to the point where their initial tenure decision was made (with a lag to
allow for time in press). (7)
Individual-specific peer-reviewed publication data as of December
2002 are collected from Econlit, which is the American Economic
Association's bibliography of economics literature throughout the
world. The database contains information on articles published in more
than 700 journals, including all the major field and general interest
economics journals. In other words, while some publications may not be
contained in Econlit, they are likely published in more obscure, less
respected journals, or in the words of Coupe (2003, 1310), "one can
claim with a slight exaggeration, first, that if one is not in Econlit,
one did not do academic research in economics and second, that these
journals together form the 'economics literature'." To
define research productivity, we consider three commonly used metrics.
The first two are the total number of articles published and the total
number of articles published in top 36 economics journals according to
Scott and Mitias (1996). The third is included to address the concern
that "an article is not an article" and follows Sauer (1988)
by calculating a measure of pages published that is weighted for journal
quality, number of authors, and number of characters per page (AEQ pages).
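A back-of-the-envelope version of such a quality- and coauthor-adjusted page count can be sketched as follows (the specific quality weights and the characters-per-page benchmark here are hypothetical illustrations, not the weights Sauer 1988 actually uses):

```python
# Sketch of an AER-equivalent (AEQ) page measure: each article's pages
# are scaled by a journal quality weight and a characters-per-page
# factor, then divided among coauthors. All weights below are made up.

def aeq_pages(articles):
    total = 0.0
    for a in articles:
        # Assume roughly 3,000 characters per benchmark-journal page.
        size_factor = a["chars_per_page"] / 3000.0
        total += a["pages"] * a["quality_weight"] * size_factor / a["n_authors"]
    return total

publications = [
    {"pages": 20, "quality_weight": 1.00, "chars_per_page": 3000, "n_authors": 1},  # top journal, solo
    {"pages": 30, "quality_weight": 0.25, "chars_per_page": 2400, "n_authors": 2},  # field journal, coauthored
]

print(round(aeq_pages(publications), 2))
```

The point of the adjustment is visible immediately: the longer coauthored field-journal article contributes only a fraction of the AEQ pages of the shorter solo article in a top journal.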
To rank economics programs, we follow the three-tier ranking
presented in Siegfried and Stock (2001). The three tiers correspond to
programs 1-6, 7-15, and 16-30, respectively, in the 1995 NRC rankings of
PhD-granting economics programs. (8)
To rank dissertation advisors, we use the global top 1,000
economist ranking of Coupe (2003). This ranking is based on a weighted
average of 11 different historically used metrics of research
productivity. (9) By calculating a weighted average of these metrics,
each of which was developed in response to perceived weaknesses in
previous methodologies, Coupe is hoping to avoid the complaint that
"we were disadvantaged by the specific weighting scheme."
Overall, we define an advisor as being ranked among the worldwide top
250 ("star" advisors), ranked between 251 and 1,000
("lower ranked" advisors), or not ranked in the top 1,000
("unranked" advisors).
Finally, as previous studies by Davis, Huston, and Patterson
(2001), Collins, Cox, and Stango (2000), and Buchmueller, Dominitz, and
Hansen (1999) indicate, an important determinant of a student's
future productivity is whether he or she holds a research-oriented job.
We define research-oriented jobs as those in the academic sector or with
the Federal Reserve. (10) To determine a student's first
postgraduation job, our initial source is the self-reported information
contained in the American Economic Association's Directory of
Members. For students whose information was not listed, we turn to the
author affiliation in Econlit for the first article published after the
student received his or her PhD.
IV. DESCRIPTIVE ANALYSIS
A. Student-Program and Student-Advisor Match
Table 1 provides a summary analysis of several different aspects of
the student-program and student-advisor matchings. Overall, roughly 28%,
39%, and 33% of the students in our sample received their PhDs from Tier
1, Tier 2, and Tier 3 programs, respectively. Given the different number
of programs in each tier, we observe more graduates of top programs,
with an average of roughly 17.5 students per year observed graduating
from each Tier 1 program as opposed to averages of roughly 16.6 and 8.4
observed graduating from each Tier 2 and Tier 3 program. The higher
concentration of students within top programs is not surprising given
Coupe's (2001) estimate that 80% of PhDs in economics graduate from
just 20 programs and the finding of Pieper and Willis (1999) that top 10
schools produce 47% of the economics faculty at PhD-granting
institutions. As might be expected, there appears to be an increased
supply of ranked advisors within Tier 1 programs, as nearly 65% of Tier
1, 47% of Tier 2, and 32% of Tier 3 advisors are ranked among
Coupe's top 1,000 worldwide economists.
Turning to the middle panel, the clear plurality of students,
roughly 41%, work with unranked dissertation advisors. This is not
surprising given that there are far more than 1,000 global academic
economists. (11) Nonetheless, nearly three-fifths of our students did
have their dissertations directed by a top 1,000 advisor. Turning to the
distribution of advisors, among those we observe directing at least one
dissertation, roughly 18% are stars, 29% are lower ranked, and 53% are
unranked.
The bottom panel of Table 1 combines student demand and advisor
supply by presenting the cross-distribution of students by program tiers
and advisor ranking groups. While we find that the percentage of
students working with star faculty decreases with program tier (50%,
22%, and 18%), our summary statistics suggest that there are significant
cross-tier overlaps in the quality of advisors with which students are
able to work. Specifically, while 18% of Tier 3 students work with star
advisors, 50% of Tier 1 and 78% of Tier 2 students work with either
lower ranked or unranked advisors.
Table 2 presents the distribution of completed dissertations
directed across all advisors we observe lead supervising at least one
completed dissertation between 1990 and 1994. Overall, we observe 741
different lead advisors with 341, or nearly 46%, being ranked among
Coupe's top 1,000. (12) A clear majority of advisors maintain
lighter loads, with nearly 47% lead supervising only one dissertation
and nearly 69% averaging one or fewer per year. At the opposite end, six
faculty members supervise an average of three or more dissertations per
year with the maximum number of advisees being 21, or 4.2 students per
year. It further appears that ranked advisors tend to carry larger
advising loads, as the percentage of advisors who are ranked in the top
1,000 increases with the number of total advisees supervised. These
trends are broadly consistent with those observed in a study of all
dissertations filed at Cornell University from 1996 to 2002 (Crosta and
Packman 2005). During that period, social science faculty chaired an
average of 1.13 dissertations with the top 10% chairing 55% of all
dissertations. Van Ours and Ridder (2003) explain at least a part of
this trend by observing that in the Netherlands, dissertation
supervisors with good research records are paired with better students
and are thus more likely to supervise completed dissertations than
faculty with lesser records who are paired with students who are more
likely to stop short of completion.
B. Early Career Productivity
To move the analysis toward early career productivity, Table 3
presents average values across program tier and advisor rank for each of
our productivity metrics. Overall, by December 2002, our sample of
1990-1994 PhD recipients averaged 4.12 total articles, 0.35 top 5
articles, 1.30 top 36 articles, and 13.69 AEQ pages. These publication
data appear generally consistent with previously published data
(Buchmueller, Dominitz, and Hansen 1999; Coupe 2001), as roughly 68% of
our students publish at least one article within their first 8-12 yr
after graduation. They also illustrate the difficulty associated with
publishing in the very best general interest journals, as only slightly
less than 40% of our students are able to publish a top 36 article in
their early careers (while less than one in seven are able to publish in
a top 5 journal).
The middle panel of Table 3 suggests that large cross-tier
differences exist, with Tier 1 graduates being, on average, more
productive across all metrics than Tier 2 and Tier 3 graduates. Perhaps
most significantly, these average differences are largest for the
highest quality publications. Namely, while Tier 1 students average
roughly 50% more total articles than Tier 2 and Tier 3 students, they
average more than twice as many AEQ pages and more than 2.2 times as
many top 36 articles. In other words, it appears that the biggest
difference between students graduating from elite programs and students
graduating from lesser programs occurs in the propensity to publish in
the very best economics journals. Finally, the bottom panel suggests
similar patterns for students with star advisors relative to those with
lower ranked and unranked advisors.
V. EMPIRICAL RESULTS
The next step in our analysis is to empirically assess the degree
to which the rank of a student's dissertation advisor affects his
or her early career productivity. To isolate this effect, we estimate
productivity functions for each of our three metrics that control for
the rank of a student's dissertation advisor, the rank of his or
her PhD program, and other individual characteristics. Following
standard form, our estimation equations can be written as:
(1) P_i = β_0 + β_1 A_i + β_2 Q_i + β_3 X_i + β_4 O_i + ε_i,
where P_i represents one of the three productivity measures, A_i is
the rank of the student's dissertation advisor, Q_i is the reputation
rank of the student's PhD program, X_i is a vector of individual
characteristics, and ε_i is an error term. The individual
characteristics we
consider are whether the student is male or international, the field in
which the student's dissertation is filed, the number of years
since the student received his or her PhD, and whether the
student's first job was research oriented. As demonstrated in Table
2, advisors differ greatly in their propensity to take on advisees. The
number of other advisees with whom a student's advisor works might
have competing effects on his or her future productivity because, on the
one hand, the increased student load could force the advisor to devote
less time to each student, thereby harming the student's learning.
On the other hand, anecdotal evidence suggests that prominent advisors
might take on increased student loads due to their love of mentoring,
and thus, they may actually devote more time to each of their students
than would have other advisors with smaller student loads. To account
for these possibilities, our specification also includes O_i, the
number of other completed dissertations that the student's advisor
directed during our sample period. Our main parameters of interest are
β_1 and β_2, which indicate the effects that the rank of a student's
dissertation advisor and the reputation rank of a student's PhD
program have on his or her early career productivity, all else
constant.
We note two important estimation concerns associated with our
empirical approach. First, our total and top 36 article measures are
count data with a large mass at 0 because many students have not
published, especially within top 36 journals. Such count data are
normally modeled with either a Poisson or a negative binomial
regression, both of which account for the skewed distributions of the
dependent variables (Cameron and Trivedi 1998). A well-known problem
with the Poisson distribution is the presumed equality of the
conditional mean and variance functions. For each productivity
measure, tests for overdispersion reject the assumption of
equidispersion, indicating that the Poisson is not the appropriate
distribution. As a result, we estimate each of our count
data productivity functions with the negative binomial regression model
as that distribution accounts for the skewness of the data without
requiring equality between the conditional mean and variance.
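The overdispersion logic can be illustrated with a small simulation (a sketch only, not the tests used in the paper; formal tests are regression based, per Cameron and Trivedi 1998, whereas this simply contrasts the mean and variance of counts generated as a gamma-Poisson mixture, which is one standard way to motivate the negative binomial):

```python
import math
import random
import statistics

random.seed(0)

def poisson(lam):
    # Knuth's method for drawing a Poisson count with mean lam.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Negative binomial counts as a gamma-Poisson mixture: each "student"
# draws an individual publication rate, then a Poisson count at that
# rate. Unobserved heterogeneity in rates inflates the variance.
shape = 0.8          # gamma shape; smaller means more heterogeneity
mean_rate = 4.0      # target mean publication count
counts = []
for _ in range(5000):
    rate = random.gammavariate(shape, mean_rate / shape)
    counts.append(poisson(rate))

m = statistics.mean(counts)
v = statistics.variance(counts)
# Under a pure Poisson, v would approximately equal m; here v >> m.
print(round(m, 2), round(v, 2), v > m)
```

A Poisson model forced onto such data would understate the variance badly, which is why a rejection of equidispersion points toward the negative binomial.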
Second, as noted by Buchmueller, Dominitz, and Hansen (1999) and
others, a student's initial job placement is endogenous because the
initial placement that a student receives likely influences his or her
need and/or desire to publish in the early career. Failing to account
for this endogeneity would cause us to overestimate the true
relationship between the student's initial job placement and his or
her early career productivity. The ideal correction for such potential
endogeneity would be an instrumental variables approach. Unfortunately,
as with Buchmueller, Dominitz, and Hansen (1999), we lack readily
observable factors that would influence the student's initial job
placement without also potentially influencing his or her early career
productivity. As a result, we are unable to employ the desired
instrumental variable correction. Nonetheless, we do not want to totally
ignore the potential influence of the endogenous initial job placement.
We, therefore, employ an approach similar to that in Buchmueller,
Dominitz, and Hansen (1999) whereby we estimate our productivity
functions both with and without the first-job research-job variable in
an effort to gauge the impact that the relationship between a
student's advisor and his or her initial job placement has on the
student's early career publishing success. (13)
A. Are Students with Higher Ranked Advisors More Productive?
To examine the effect that the student-advisor match has beyond the
initial student-program match, Table 4 presents results of estimating
Equation (1) for each of our three productivity measures that have been
converted to marginal effects. The first three columns present results
that do not control for initial job placement, while the last three
present results that include the first-job research-job variable. In
every column, advisor rank and program rank are entered as sets of dummy variables, with the omitted group being Tier 3 graduates with unranked
advisors. Hence, the marginal effects presented in Table 4 represent the
estimated differences in each of our productivity measures for students
having an advisor belonging to a given ranking group or graduating from
a given program tier relative to otherwise similar Tier 3 students with
unranked advisors.
Comparing the first three columns of results to the second three
suggests that at least part of the impact that advisor rank (and program
tier) has on early career publishing success takes place through the
impact that advisor rank (and program tier) has on the likelihood of
receiving an initial placement in a research-oriented job. Specifically,
after including the first-job research-oriented variable, the estimated
effect of advisor rank decreases by up to one-third, while the estimated
effect of program tier decreases by more than one-half. Consequently,
the discussion below focuses on the results that control for the type of
initial placement that a student receives.
While not presented here for the sake of brevity, comparing these
results to results that include program tier but not advisor rank
suggests two major findings. First, after adding controls for the global
rank of a student's advisor, the estimated differences between Tier
1, Tier 2, and Tier 3 graduates decrease in magnitude by roughly
one-third for each metric. At the same time, the estimated log
likelihoods increase by amounts large enough to suggest that our
controls for advisor rank are statistically significant. (14) These
results are consistent with our central hypothesis, as they suggest that
significant portions of the difference between graduates of top- and
lower ranked programs might be explained by the match between the
student and his or her dissertation advisor.
Second, and most importantly for this analysis, after controlling
for the quality of program from which a student graduates, students with
ranked advisors, and especially those with star advisors, are
statistically more likely to publish across all metrics than students
with unranked advisors. Specifically, holding program quality constant,
we estimate that students with star advisors produce 1.45 more total
articles, 0.87 more top 36 articles, and 9.69 more AEQ pages than
otherwise similar students with unranked advisors, while students with
lower ranked advisors produce 0.65 more total articles, 0.62 more top 36
articles, and 6.74 more AEQ pages than otherwise similar students with
unranked advisors. Hence, the estimated differences suggest that the
student-advisor match provides a strong signal as to whether the student
will publish any articles and an especially strong signal as to the
likelihood that a student will publish in top economics journals early
in his or her career.
Turning to the remaining variables, our results suggest that,
controlling for program reputation, years since PhD receipt, and
domestic/international status, males publish statistically
significantly more total and top 36 articles, but not more AEQ pages,
than otherwise similar females, while, all else constant,
international students publish less across all three metrics than
otherwise similar domestic students. These results are consistent with previous findings
in Buchmueller, Dominitz, and Hansen (1999). The number of other
advisees that a student's advisor supervises during our 5-yr period
is never estimated to have a statistically significant effect on the
student's early career productivity. Finally, while not presented
here, as with Buchmueller, Dominitz, and Hansen (1999), we find few
statistically significant differences across fields of study, suggesting
that, all else equal, a student's chosen field does not have much
impact on his or her early career productivity.
B. Fishes and Ponds: Do Cross-Program Productivity Distributions
Overlap?
As a final exercise, we calculate predicted productivity measures
for a hypothetical student under each of the possible advisor rank/
program tier combinations. In this case, our hypothetical student is a
male, domestic student holding a research job, having had his PhD for
the sample average 9.81 yr, and having an advisor who supervised the
sample average 3.91 other completed dissertations. In essence, the
results in Table 5 replicate the experiment of: (a) sending the
hypothetical student to a program in each quality tier and having him
work with a star, a lower ranked, and an unranked advisor within each of
those programs, (b) having the student attain a research-oriented job,
(c) observing his early career productivity in each instance, and (d)
comparing the results.
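The experiment can be sketched numerically. In a negative binomial model the conditional mean takes the form exp(x'β), so the predicted count for each advisor-rank/program-tier cell comes from plugging the hypothetical student's characteristics into the exponential mean function (every coefficient value below is made up for illustration and is emphatically not an estimate from Table 4):

```python
import math

# Hypothetical coefficients for a negative binomial mean function
# exp(x'beta); illustrative values only, NOT the paper's estimates.
beta = {
    "const": -0.5,
    "star_advisor": 0.60, "lower_ranked_advisor": 0.30, "unranked_advisor": 0.0,
    "tier1": 0.50, "tier2": 0.20, "tier3": 0.0,
    "male": 0.10, "research_job": 0.80,
    "years_since_phd": 0.12, "other_advisees": 0.0,
}

def predicted_articles(advisor, tier):
    # Linear index for the hypothetical student: male, research job,
    # sample-average years since PhD (9.81) and other advisees (3.91).
    index = (beta["const"] + beta[advisor] + beta[tier]
             + beta["male"] + beta["research_job"]
             + beta["years_since_phd"] * 9.81
             + beta["other_advisees"] * 3.91)
    return math.exp(index)

cells = {(a, t): predicted_articles(a, t)
         for a in ("star_advisor", "lower_ranked_advisor", "unranked_advisor")
         for t in ("tier1", "tier2", "tier3")}

# With these illustrative coefficients, a star advisor at a Tier 2
# program out-predicts an unranked advisor at a Tier 1 program.
print(cells[("star_advisor", "tier2")] > cells[("unranked_advisor", "tier1")])
```

Because the mean function is multiplicative, a large enough advisor-rank coefficient can offset a program-tier deficit, which is the mechanism behind the "big fish in a small pond" groupings reported in Table 5.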
A notable finding emerges from Table 5. Figure 1 suggests that
there is significant overlap in the productivity distributions of
students graduating from Tier 1, Tier 2, and Tier 3 programs. The
predicted productivity measures in Table 5 suggest that the
student-advisor match might help reduce the noise associated with
determining a student's research potential. Namely, outcomes for
our hypothetical student fall into four statistically different groups.
The student is predicted to be most productive across all three metrics
if he attends a Tier 1 program and works with a star advisor. This is
not at all surprising. What is potentially surprising is the pairings
belonging to the remaining outcome groups. The second most productive
grouping is for star advisors at Tier 2 programs, lower ranked advisors
at Tier 1 programs, and star advisors at Tier 3 programs. The third most
productive grouping is for unranked advisors at Tier 1 programs, lower
ranked advisors at Tier 2 programs, and lower ranked advisors at Tier 3
programs. The least productive grouping is for unranked advisors at Tier
2 and Tier 3 programs. In other words, Tier 2 and Tier 3 graduates
working with star advisors are predicted to do as well as all Tier 1
graduates not working with star advisors, while Tier 2 and Tier 3
graduates working with lower ranked advisors are predicted to do as well
as Tier 1 graduates working with unranked advisors. Put another way, it
appears that there are potentially tangible benefits to being a
"big fish in a small pond" in that students might perform
better if they attend a lower ranked program but are able to work with a
more prominent advisor.
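The grouping described above can be read directly off the predicted total-article counts in Table 5. The following minimal sketch (values transcribed from Table 5; the tuple labels are our own shorthand) sorts the nine advisor-rank/program-tier cells:

```python
# Predicted total articles for the hypothetical student, by
# (advisor rank, program tier) cell; values transcribed from Table 5.
predicted = {
    ("Star", 1): 9.326, ("Star", 2): 8.576, ("Lower ranked", 1): 8.523,
    ("Star", 3): 8.492, ("Unranked", 1): 7.875, ("Lower ranked", 2): 7.773,
    ("Lower ranked", 3): 7.690, ("Unranked", 2): 7.125, ("Unranked", 3): 7.041,
}

# Order the cells from most to least productive.
ranking = sorted(predicted, key=predicted.get, reverse=True)
for advisor, tier in ranking:
    print(f"{advisor}, Tier {tier}: {predicted[(advisor, tier)]:.3f}")

# A star advisor at a Tier 3 program (8.492) outranks an unranked
# advisor at a Tier 1 program (7.875): the "big fish in a small pond"
# pattern described in the text.
```

Sorting the cells this way reproduces the four productivity groupings discussed above without any further estimation.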
C. Are the Results Robust to Different Specifications of Advisor
Rank and Program Tier?
Given that advisor quality and program rank could clearly be
entered into our productivity functions in a number of different ways
(for instance, as linear measures, series of dummy variables, or a
number of quadratic terms), we believe that it is important to note why
we choose our specific three-tier definition of advisor rank and program
tier. Entering advisor rank and program tier as linear measures requires
the assumption that there are distinct, constant differences between
advisors (and programs) of a given rank and advisors (and programs) who
are ranked either one position higher or one position lower. Given the
imperfect science associated with quantifying research productivity (as
evidenced by the extensive literature attempting such rankings), we
believe that it is possible to quibble over whether a given individual
(or program) should be ranked, say 17th or 18th. At the same time,
however, we believe that broader groupings are highly accurate in terms
of relative research productivity. This view is similar to that of
Kingston and Smart (1990, 149) who suggest that such a categorical approach is preferable to a linear specification when comparing
graduates of different-quality colleges because "it is likely that
differences throughout most of the academic hierarchy are
inconsequential [which would imply only a small overall effect of
program rank] ... but that going to an elite school does make a
difference." This is the very argument that Stock and Alston (2000)
employ to motivate their use of the three-tier program quality measure
in their analysis of the effect of program quality on initial job
placements and is presumably why other studies, such as Buchmueller,
Dominitz, and Hansen (1999), employ a categorical approach to defining
program quality.
While not presented here, we note that our results appear to be
robust across the numerous alternative specifications of advisor rank
and program tier that we estimated. For example, in a regression run
only for students with star or ranked advisors, when entered as a dummy
variable, our estimated marginal effect for star advisors is 0.8911.
When entered as a continuous measure, on the other hand, the estimated
marginal effect is 0.00162, suggesting that, all else equal, every one
position increase in an advisor's relative standing is associated
with his or her student averaging 0.00162 more total articles. Given
that the middle ranking in the star advisor category (advisors ranked
1-250) is 500 positions above the middle ranking in the ranked advisor
category (advisors ranked 251-1,000), the results suggest that a student
of the former-type advisor would average 0.8137 more total articles than
a student of the latter-type advisor, ceteris paribus.
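As a back-of-the-envelope check of that comparison (the band midpoints below are our own reading of the 1-250 and 251-1,000 rank bands, and the 0.8137 figure in the text implies an unrounded coefficient slightly above 0.00162):

```python
# Midpoints of the two advisor rank bands.
star_mid = (1 + 250) / 2         # 125.5
ranked_mid = (251 + 1000) / 2    # 625.5
gap = ranked_mid - star_mid      # 500 positions

# Continuous-measure marginal effect: total articles per one-position
# improvement in the advisor's relative standing.
effect_per_position = 0.00162
implied_difference = gap * effect_per_position

print(implied_difference)  # roughly 0.81 more total articles
```

The implied difference of roughly 0.81 total articles is close to the 0.8911 marginal effect from the dummy-variable specification, which is the sense in which the two specifications agree.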
Another potential concern is that our categorical approach to
defining program rank might be masking potential cross-program
differences in the relationship between the advisor-student match and
early career publishing success. For example, because Harvard has
significantly more advisors ranked in Coupe's top 1,000 (49 as
opposed to 34 for Chicago, 29 for MIT, 27 for Stanford, 23 for
Princeton, and 18 for Yale), it could be possible that the statistical
significance of the advisor rank variable for Tier 1 programs is simply
picking up some sort of "Harvard effect." To address this
concern, we estimated separate productivity functions that included an
exhaustive set of 29 program dummies (Harvard omitted) rather than our
categorical variables. Again, the results were nearly identical in
significance and magnitude to those reported in the text. Moreover,
simple correlations between advisor rank and student productivity for
every program with sufficiently large numbers of student observations
yield the expected positive correlations.
Finally, we perform two additional robustness checks. First, to
investigate the possibility that the
estimated relationships might differ across program tiers, we estimated
the models separately for each program tier, finding that the star and
lower ranked advisor variables were positive and statistically
significant within each program tier for all three productivity
measures. Second, to loosen the restriction that the effect of advisor
rank is the same across the different program tiers, we estimated
productivity functions that included interaction terms between advisor
rank and program tier, finding that the interaction terms were
statistically insignificant and that their inclusion did not alter the
estimated relationship between advisor rank and early career
productivity in any case.
VI. CONCLUSIONS
This paper is the first to examine the effect that the
student-advisor match has on a student's early career productivity.
Regression results confirm the significance of working with a ranked
dissertation advisor, as we find that students working with ranked
advisors average significantly more publications, especially in terms of
top 36 articles and AEQ pages, than students working with unranked
advisors, ceteris paribus. This result holds even after controlling for
the first job that a student holds. Consequently, it appears that the
"quality" of a student's dissertation advisor is an
important predictor of early career success beyond the reputation of the
program from which the student graduates. Moreover, in many cases, this
advisor effect appears to outweigh the school effect to such an extent
that students attending lower ranked programs but working with superstar
advisors are predicted to publish significantly more total and top 36
articles and more quality-adjusted pages than students attending
top-ranked programs but working with less prolific advisors.
Our results are potentially important for economics departments
that are considering which applicants to pursue on the job market,
especially those lower ranked departments that might be choosing
between students from the lower end of the ability distribution at top
programs and students from the upper end of the distribution at lower
ranked programs. As a specific example, Smyth (1999)
estimates that by publishing one additional 10-page article in a top 5
journal, the Department of Economics at Louisiana State University could
increase its NRC ranking from 51 to 40 and its overall professional
perception from "adequate" to "strong." Our results
suggest that by hiring a Tier 2 student with a star advisor instead of a
Tier 1 student with an unranked advisor, the department could increase
its productivity by nearly 5 AEQ pages, thereby significantly increasing
its professional reputation.
Not only are our results important to hiring committees, but they
are potentially important to current and potential economics PhD
students as well. Namely, our results suggest that students attending
lower ranked programs may outperform students attending top 6 programs
if they are able to work with a superstar advisor. Given the importance
that hiring departments place on research potential (Carson and Navarro
1988), these results suggest that students might actually be better off
attending lower ranked programs and having the opportunity to work with
more respected advisors than by attending a top program, falling in the
lower tails of the student ability distribution, and being left to work
with a lesser known advisor. In other words, it appears that many
students might benefit by being a "big fish in a small pond"
rather than a "small fish in a big pond." These results are
also likely to translate into labor market success. For instance, Sauer
(1988) estimates that each additional AEQ page published in the
top-ranked journal increases salary by 0.17% ($151.36 in 2006 dollars),
so the above-cited 5-AEQ page increase from having a star advisor at a
Tier 2 program as opposed to an unranked advisor at a Tier 1 program
likely results in tangible monetary rewards.
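To put these figures in rough dollar terms (the base salary below is simply backed out from the reported 0.17% and $151.36 and is illustrative only, as is treating the nearly 5-page gap as exactly 5 pages):

```python
# Sauer (1988): each additional AEQ page in the top-ranked journal
# raises salary by 0.17%, reported as $151.36 in 2006 dollars.
per_page_pct = 0.0017
per_page_dollars = 151.36
implied_base_salary = per_page_dollars / per_page_pct  # roughly $89,000

# Approximate annual gain from the ~5-AEQ-page advantage of a star
# advisor at a Tier 2 program over an unranked advisor at a Tier 1
# program.
pages_gained = 5
annual_gain = pages_gained * per_page_dollars

print(round(implied_base_salary), round(annual_gain, 2))
```

Under these assumptions, the advisor-match advantage translates into several hundred 2006 dollars of salary per year, recurring over a career.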
ABBREVIATIONS
MIT: Massachusetts Institute of Technology
NRC: National Research Council
NYU: New York University
OLS: Ordinary Least Squares
UC: University of California
REFERENCES
Attiyeh, G., and R. Attiyeh. "Testing for Bias in Graduate
School Admissions." Journal of Human Resources, 32, 1997, 524-48.
Bauwens, L. "A New Method to Rank University Research and
Researchers in Economics in Belgium." CORE Discussion Paper, 1998.
Becker, G. S. Human Capital. 1st ed. New York: National Bureau of
Economic Research, 1964.
Bratsberg, B., J. F. Ragan, and J. T. Warren. "Negative
Returns to Seniority: New Evidence in Academic Markets." Industrial
and Labor Relations Review, 56, 2003, 306-23.
Buchmueller, T. C., J. Dominitz, and W. L. Hansen. "Graduate
Training and the Early Career Productivity of Ph.D. Economists."
Economics of Education Review, 18, 1999, 65-77.
Cameron, A. C., and P. K. Trivedi. Regression Analysis and Count
Data. Cambridge: Cambridge University Press, 1998.
Carson, R., and P. Navarro. "A Seller's (and
Buyer's) Guide to the Job Market for Beginning Academic
Economists." Journal of Economic Perspectives, 2, 1988, 137-48.
Collins, J. T., R. G. Cox, and V. Stango. "The Publishing
Patterns of Recent Economics Ph.D. Recipients." Economic Inquiry,
38, 2000, 358-67.
Coupe, T. "Basic Statistics for Economists." Unpublished
manuscript, 2001.
--. "Revealed Performances: Worldwide Rankings of Economists
and Economics Departments, 1990-2000." Journal of the European
Economic Association, 1, 2003, 1309-45.
Creel, M. D., and J. B. Loomis. "Theoretical and Empirical
Advantages of Truncated Count Data Estimators for Analysis of Deer
Hunting in California." American Journal of Agricultural Economics,
72, 1990, 431-41.
Crosta, P. M., and I. G. Packman. "Faculty Productivity in
Supervising Doctoral Students' Dissertations at Cornell."
Economics of Education Review, 24, 2005, 55-65.
Cushing, M. J., and M. G. McGarvey. "Sample Selection in
Models of Academic Performance." Economic Inquiry, 42, 2004,
319-22.
Davis, J. C., J. H. Huston, and D. M. Patterson. "The
Scholarly Output of Economists: A Description of Publishing
Patterns." Atlantic Economic Journal, 29, 2001, 341-49.
Ehrenberg, R. G., and P. J. Hurst. "The 1995 Ratings of
Doctoral Programs: A Hedonic Model." Economics of Education Review,
17, 1998, 137-48.
Ellison, G. "The Slowdown of the Economics Publishing
Process." Journal of Political Economy, 110, 2003, 947-93.
Hirsch, B. T., R. Austin, J. Brooks, and J. B. Moore.
"Economics Departmental Rankings: Comment." American Economic
Review, 74, 1984, 822-26.
Hogan, T. D. "Faculty Research Activity and the Quality of
Graduate Training." Journal of Human Resources, 16, 1981, 400-15.
Kalaitzidakis, P., T. P. Mamuneas, and T. Stengos. "European
Economics: An Analysis Based on Publications in the Core Journals."
European Economic Review, 43, 1999, 1150-68.
Kingston, P., and J. Smart. "The Economic Payoff to Prestigious
Colleges," in The High Status Track: Studies of Elite Private
Schools and Stratification, edited by P. Kingston and L. S. Lewis.
Albany, NY: SUNY, 1990.
Krueger, A. B., and S. Wu. "Forecasting Job Placements of
Economics Graduate Students." Journal of Economic Education, 31,
2000, 81-94.
Laband, D. N., and M. Piette. "The Relative Impact of
Economics Journals." Journal of Economic Literature, 32, 1994,
640-66.
Liebowitz, S. J., and J. P. Palmer. "Assessing the Relative
Impacts of Economic Journals." Journal of Economic Literature, 22,
1984, 77-88.
Long, J. S. "Productivity and Academic Position in the
Scientific Career." American Sociological Review, 43, 1978,
889-908.
Moore, W. J., R. J. Newman, and G. K. Turnbull. "Do Academic
Salaries Decline with Seniority?" Journal of Labor Economics, 16,
1998, 352-66.
Pieper, P. J., and R. A. Willis. "The Doctoral Origins of
Economics Faculty and the Education of New Economics Doctorates."
Journal of Economic Education, 30, 1999, 80-8.
Sauer, R. "Estimates of the Returns to Quality and
Coauthorship in Economic Academia." Journal of Political Economy,
96, 1988, 855-66.
Scott, L. C., and P. M. Mitias. "Trends in Rankings of
Economics Departments in the U.S.: An Update." Economic Inquiry,
34, 1996, 378-400.
Siegfried, J. J., and W. Stock. "So You Want to Earn a Ph.D.
in Economics? How Long Do You Think It Will Take?" Journal of Human
Resources, 36, 2001, 364-78.
Smyth, D. J. "The Determinants of the Reputations of Economics
Departments: Pages Published, Citations and the Andy Rooney Effect." American Economist, 43, 1999, 49-58.
Spence, A. M. "Job Market Signaling." Quarterly Journal
of Economics, 87, 1973, 355-74.
Stock, W. A., and R. M. Alston. "Effect of Graduate-Program
Rank on Success in the Job Market." Journal of Economic Education,
31, 2000, 389-401.
Thursby, J. G. "What Do We Say about Ourselves and What Does
It Mean? Yet Another Look at Economics Department Research."
Journal of Economic Literature, 38, 2000, 383-404.
van Ours, J. C., and G. Ridder. "Fast Track or Failure: A
Study of the Graduation and Dropout Rates of Ph.D. Students in
Economics." Economics of Education Review, 22, 2003, 157-66.
Wu, S. "Recent Publishing Trends at the AER, JPE, and
QJE." Unpublished manuscript, Hamilton College, 2004.
(1.) While the language used in much of the previous research leads
us to infer that the authors were implicitly making this assumption, we
note that such a stringent assumption is not required for the prediction
to hold. Specifically, even if it is not the case that "only the
very best students" enroll in elite programs, the graduates of such
programs will still be predicted to be the most productive if those
programs are the best trainers of future academic economists.
(2.) We realize that faculty might choose to take on advisees for
reasons other than their potential research ability. Unfortunately, it
is not possible to control for such unobserved factors.
(3.) We understand that observed differences in student-advisor
matches are affected by cross-program differences in the supply of
faculty with differing levels of research prominence. As such, the
observed differences relating to advisor research prominence will to
some extent be picking up the "faculty quality" element of
cross-program quality differences.
(4.) As a specific example, Krueger and Wu (2000) report that 344
students applied to a given top 5 economics program in 1989, with 65
being admitted and 27 choosing to enroll.
(5.) We do note that there are a number of PhD recipients during
the period from 1990 to 1994 for whom no dissertation advisor is
specified. While small, the percentage is largest for 1990 graduates
from the University of Chicago. The percentage missing decreases quickly
over time and is practically nonexistent for the last 3 yr of our study
period.
(6.) We estimated all models with smaller samples of years without
significant differences in the results.
(7.) There is evidence of an increasing slowdown in the economics
publication process. Specifically, Ellison (2003) finds that the average
publication time had increased from 6 mo to over 2 yr.
(8.) Tier 1 programs are Harvard, Chicago, MIT, Stanford,
Princeton, and Yale. Tier 2 programs are UC Berkeley, Pennsylvania,
Northwestern, Minnesota, UCLA, Columbia, Michigan, Rochester, and
Wisconsin. Tier 3 programs are UC San Diego, NYU, Cornell, Cal Tech,
Maryland, Boston University, Duke, Brown, Virginia, North Carolina,
University of Washington Seattle, Michigan State, Illinois, Washington
University (St. Louis), and Iowa.
(9.) These metrics represent different methods for weighting
publications according to journal quality, article length, authorship
configuration, and article impact and include total number of articles
and pages in all journals; total articles, adjusted articles, total
pages, and adjusted pages according to Laband and Piette (1994); total
articles in ten top journals (Kalaitzidakis, Mamuneas, and Stengos
1999); total articles with page size corrections in 24 top journals
(Hirsch et al. 1984) and 36 top journals (Scott and Mitias 1996); and
the methods for calculating an article's impact factor, based on
citations, presented in Laband and Piette (1994) and Bauwens (1998).
(10.) According to author affiliation statistics in Wu (2004),
among the 25 programs that publish more than 1% of all articles in the
American Economic Review, 22 are top-ranked economics programs and 3 are
members of the Federal Reserve System.
(11.) According to Coupe (2003), between 1969 and 2000, close to
131,000 individuals contributed articles to the economics literature.
Among these, 71,983 contributed only one article, while 1,230
contributed ten or more articles with the maximum for one individual
being 238.
(12.) A natural question is what the 659 members of the top 1,000
who do not appear as advisors in our sample are doing. Among
Coupe's 1990-2000 top 1,000 economic publishers, 486 were
affiliated with top 30 U.S. economics
programs, 145 with other U.S. PhD-granting economics departments, 45
with other U.S. academic programs, 2 with U.S. agricultural economics
programs, 52 with foreign academic programs, 40 with the U.S. Federal
Reserve System, 14 with the World Bank or International Monetary Fund, 7
with U.S. government agencies, 6 with think tanks, and 5 with private
firms.
(13.) One notable difference between our approach and that of
Buchmueller, Dominitz, and Hansen (1999) is that we estimate our
productivity functions using a negative binomial specification, while
they estimate theirs using ordinary least squares (OLS). Buchmueller,
Dominitz, and Hansen (1999) motivate their approach based on the fact
that OLS is a more general specification. An important factor in our
choice is that we wish to eventually calculate predicted productivity
values, and in their comparison of out-of-sample predictions, Creel and
Loomis (1990) find that "count data models predict substantially
better than do OLS." Nonetheless, we note that as with Buchmueller,
Dominitz, and Hansen (1999), we estimated our results both ways and
"the sign and significance of key parameter estimates did not
differ substantially across specifications."
(14.) The likelihood ratio statistics for total articles, top 36
articles, and AEQ pages are 374.08, 294.22, and 156.58, respectively.
Each of these values is above the tabled chi-square value of 11.35 at a
significance level of .01.
MICHAEL J. HILMER and CHRISTIANA E. HILMER *
* We would like to thank Ron Ehrenberg, Dan Goldhaber, Kangoh Lee,
Jayson L. Lusk, James F. Ragan, Jr., Mark Showalter, and seminar
participants at Virginia Tech for helpful comments on previous drafts.
M. J. Hilmer: Assistant Professor. Department of Economics, San
Diego State University, San Diego, CA 92182-4485. Phone 1-619-594-5662,
Fax 1-619-594-5062, E-mail mhilmer@mail.sdsu.edu
C. E. Hilmer: Assistant Professor, Department of Economics, San
Diego State University, San Diego, CA 92182-4485. Phone 1-619-594-5860,
Fax 1-619-594-5062, E-mail chilmer@mail.sdsu.edu
TABLE 1
Summary Distribution of Students and Advisors
(a) Students and Advisors by Program Tier
Tier 1 Tier 2 Tier 3
Student observations 524 743 625
Percentage of students 0.277 0.393 0.330
Total advisors 190 272 279
Percentage of advisors 0.653 0.465 0.323
ranked in top 1,000
(b) Students and Advisors by Advisor Ranking Groups
Lower
Star Ranked Unranked
Student observations 536 573 783
Percentage of all 0.283 0.303 0.414
students
Total advisors 132 213 396
Percentage of all 0.178 0.287 0.534
advisors
(c) Students across Advisor Rank and Program Tiers
Program Tier
Advisor Rank Tier 1 Tier 2 Tier 3
Star 0.500 0.219 0.178
Lower ranked 0.269 0.366 0.256
Unranked 0.231 0.415 0.566
TABLE 2
Distribution of Advisees across Advisors
All Advisors Ranked Advisors
Number of Total Total Total Total
Advisees Observations Percentage Observations Percentage
1 343 46.29 105 30.61
2 142 19.16 66 46.48
3 91 12.28 51 56.04
4 57 7.69 40 70.18
5 37 4.99 27 72.97
6 28 3.78 19 67.86
7 6 0.81 6 100.00
8 15 2.02 11 73.33
9 6 0.81 4 66.67
10 5 0.67 3 60.00
11 2 0.27 1 50.00
12 3 0.40 3 100.00
13 -- -- -- --
14 -- -- -- --
15 2 0.27 2 100.00
16 -- -- -- --
17 -- -- -- --
18 1 0.13 1 100.00
19 1 0.13 1 100.00
20 1 0.13 1 100.00
21 1 0.13 -- --
Total 741 -- 341 46.02
TABLE 3
Summary Research Productivity by Program and Advisor Rank
Publish Any Publish Top
Articles 36 Articles Total Articles
All students 0.681 0.394 4.119 (5.486)
Program rank
Tier 1 0.781 0.542 5.489 (6.279)
Tier 2 0.661 0.376 3.795 (5.059)
Tier 3 0.622 0.293 3.355 (5.043)
Advisor rank
Star 0.778 0.560 5.481 (6.163)
Lower ranked 0.675 0.428 4.347 (5.644)
Unranked 0.619 0.257 3.019 (4.578)
Top 36 Articles AEQ Pages
All students 1.295 (2.483) 13.730 (27.854)
Program rank
Tier 1 2.216 (3.415) 24.899 (38.806)
Tier 2 1.073 (2.046) 11.098 (22.871)
Tier 3 0.787 (1.703) 7.494 (17.749)
Advisor rank
Star 2.118 (3.066) 22.794 (34.508)
Lower ranked 1.464 (2.681) 15.680 (30.743)
Unranked 0.608 (1.503) 6.098 (15.896)
TABLE 4
Marginal Effects for Negative Binomial Regressions Controlling
for Advisor Rank and Program Tier
Total Articles Top 36 Articles
Advisor rank
Star 1.6710 ** (0.4074) 1.1688 ** (0.1948)
Lower ranked 1.2262 ** (0.3358) 0.9758 ** (0.1586)
Program rank
Tier 1 1.6963 ** (0.4097) 1.0623 ** (0.1899)
Tier 2 0.4664 (0.3006) 0.2866 (0.1183)
First-job type
Research position -- --
Individual characteristics
Years since PhD 0.3541 ** (0.0891) 0.0854 ** (0.0322)
International student -0.8741 ** (0.2537) -0.3370 ** (0.0926)
Male 1.0320 ** (0.2702) 0.2536 ** (0.1007)
Advisor time constraint
Other students supervised 0.0000 (0.0317) -0.0002 (0.0113)
Log likelihood -4,616.66 -2,652.62
R^2 .0151 .0424
alpha 1.5903 (0.0691) 2.3965 (0.1492)
AEQ Pages Total Articles
Advisor rank
Star 13.2519 ** (3.3032) 1.4506 ** (0.3117)
Lower ranked 11.9321 ** (2.7052) 0.6482 ** (0.2448)
Program rank
Tier 1 14.6468 ** (3.5038) 0.8336 ** (0.2866)
Tier 2 4.1986 (1.8942) 0.0836 (0.2253)
First-job type
Research position -- 4.1040 ** (0.2231)
Individual characteristics
Years since PhD 0.6456 (0.5179) 0.2086 ** (0.0676)
International student -2.4872 * (1.4603) -0.4962 ** (0.1939)
Male 1.5332 (1.6575) 0.9362 ** (0.2051)
Advisor time constraint
Other students supervised -0.0194 (0.1873) -0.0114 (0.0242)
Log likelihood -4,805.17 -4,429.62
R^2 .0122 .0550
alpha 7.7999 (0.3411) 1.1692 (0.0560)
Top 36 Articles AEQ Pages
Advisor rank
Star 0.8650 ** (0.1346) 9.6845 ** (2.3493)
Lower ranked 0.6214 ** (0.1060) 6.7381 ** (1.7160)
Program rank
Tier 1 0.5270 ** (0.1152) 6.6990 ** (2.0257)
Tier 2 0.0991 (0.0817) 1.7346 (1.2750)
First-job type
Research position 1.3011 ** (0.0814) 14.2184 ** (1.4755)
Individual characteristics
Years since PhD 0.0376 * (0.0226) 0.3165 (0.3583)
International student -0.1661 ** (0.0654) -1.9280 ** (1.0659)
Male 0.2203 ** (0.0698) 1.3361 * (1.1726)
Advisor time constraint
Other students supervised -0.0040 (0.0079) -0.0897 (0.1302)
Log likelihood -2,505.51 -4,731.88
R^2 .0955 .0273
alpha 1.6147 (0.1125) 6.7764 (0.3044)
Notes: Entries listed in the column heading are the dependent
variables. Regressions also include dummy variables indicating the
field in which the student's dissertation was written.
** Significant at 5% level; * significant at 10% level.
TABLE 5
Predicted Differences in Research Productivity Measures
Total Top 36
Articles Articles AEQ Pages
Star, Tier 1 9.326 (0.237) 3.267 (0.336) 34.691 (0.509)
Star, Tier 2 8.576 (0.233) 2.839 (0.329) 29.727 (0.493)
Lower ranked, Tier 1 8.523 (0.240) 3.023 (0.342) 31.745 (0.522)
Star, Tier 3 8.492 (0.229) 2.740 (0.321) 27.992 (0.493)
Unranked, Tier 1 7.875 (0.238) 2.402 (0.336) 25.007 (0.518)
Lower ranked, Tier 2 7.773 (0.231) 2.595 (0.327) 26.781 (0.487)
Lower ranked, Tier 3 7.690 (0.226) 2.496 (0.320) 25.046 (0.489)
Unranked, Tier 2 7.125 (0.227) 1.974 (0.320) 20.043 (0.483)
Unranked, Tier 3 7.041 (0.221) 1.875 (0.310) 18.308 (0.480)