Classroom experiments: not just fun and games.
Durham, Yvonne; McKinnon, Thomas; Schulman, Craig, et al.
I. INTRODUCTION
Historically, economics was the classic example of a science in
which laboratory methods are impossible. Consistent with this view,
economics has traditionally been taught as a theory-intensive science
rather than as an experimental one. However, the view that economics is
not a laboratory science has been changing, in large part, due to the
increase in research conducted using experimental methods. In addition,
as evidence of the benefits of active learning for students has been
gathered, leaders in the field of education have been urging faculty to
actively engage students in the process of learning. Introducing
experimental methods into the economics classroom is a natural step,
warranted both by the evolution of the discipline itself and the need to
more actively involve students in the learning process.
This paper presents results from a research project examining the
effectiveness of using economics experiments in the classroom. The
project involved designing course packages for the principles of
microeconomics and principles of macroeconomics classes that integrate
basic economic experiments into the curricula and then evaluating their
success as a teaching tool. The evaluation phase focused on assessing
the impact of this curriculum on student performance, student attitudes
towards economics, and economic knowledge retention.
The project was conducted over a three-year period. During the
initial phase of the research project, both the course materials and an
assessment instrument for evaluating the effectiveness of those
materials were developed. (1) Next, a controlled experiment using
students at the University of Arkansas was conducted. Students who
enrolled in principles of macroeconomics and principles of
microeconomics courses were separated into control and treatment
classes. Both groups were taught using a general lecture/class
discussion format, but experiments designed to illustrate specific
economic concepts were used in the treatment sections in place of the
additional lecture, class discussion, and examples used in the control
classes for those topics. Relative performance on the assessment
instrument serves as the basis for evaluating the impact of the new
curricula on student learning, while controlling for other factors that
might affect student performance.
An attitude survey was administered to the students at the
beginning and at the end of the courses in both the treatment and
control groups to gain insight into their attitudes towards the study of
economics and whether those attitudes changed over the course of the
semester. (2) In addition to assessing the impact of classroom
experiments on student performance and attitudes towards economics,
their impact on knowledge retention was also investigated. Students were
tracked into a required advanced business course where their performance
on a test of material covered in the introductory economics courses was
used to evaluate the impact of classroom experiments on their retention
of this material.
Although we had no a priori expectations of what the specific
outcomes from the evaluative portion would be, we hypothesized that
students would benefit overall from the use of experiments and that this
benefit would be dispersed differentially based on student learning
styles. We also anticipated that we would find an improved attitude
towards the study of economics and an increase in knowledge retention
when experiments were implemented as active learning tools.
II. BACKGROUND
In their 1985 edition of Principles of Economics, Samuelson and
Nordhaus argued:
One possible way of figuring out economic laws ... is by controlled
experiments ... Economists [unfortunately] ... cannot perform the
controlled experiments of chemists or biologists because they
cannot easily control other important factors. Like astronomers or
meteorologists, they generally must be content largely to observe.
(1985, 8)
This view of economics has been slowly changing. In fact, in the
1992 edition of their text, Samuelson and Nordhaus acknowledged the
virtue of controlled economic experiments. As researchers started to
view economics as a laboratory science, some economics instructors also
began incorporating this idea into their classrooms. Bartlett and King
(1990, 186) argue that this is a necessary step, indicating that
"Economists need to rethink and reorganize their courses in
order to bring the pedagogy of undergraduate economics courses into line
with the practices of the discipline." Although classroom
experiments have been increasing in popularity, most economics
instructors still rely on the traditional methods used to teach
economics.
Leaders in the field of higher education have been urging college
and university faculty to actively engage students in the process of
learning. Active learning has been found to be superior to lectures in
promoting the development of thinking and writing skills. In addition,
Bonwell and Eison (1991) indicate that students seem to prefer
strategies promoting active learning to traditional lectures. As
Bergstrom and Miller (2000) note, experiments allow students to be both
participants and observers, thereby providing them with this opportunity
to be actively involved in learning.
A. How is Economics Being Taught?
Despite the research on the benefits of active learning, several
survey studies indicate that traditional lecture methods still dominate
university classrooms. Thielens (1987) and Benzing and Christ (1997)
both found lecturing to be the dominant mode of instruction for
professors across various disciplines, including economics. Becker and
Watts (1996, 2001) reported similar findings from two successive surveys
in 1995 and 2000, both of which indicated that "chalk and
talk" dominated undergraduate economics classrooms, with over 80%
of class time spent lecturing. While extensive use of the lecture format
is not necessarily a bad thing for all students, it may not suit some
students' learning styles.
B. The Impact of Classroom Experiments on Learning
Many of the educators using experiments and games in the classroom
assert that they make a difference. For example, Bergstrom and Miller
(2000) have high praise for the effect of classroom experiments on both
the students and the instructor:
We have tried it and it works ... they [students] are enthusiastic
about what they are doing. They love getting involved with markets
and then figuring out what happened rather than simply being
lectured at. They have fun. As instructors, we feel the same way.
This classroom experience is a lot more rewarding than trying to
interest sleepy students in abstractions with which they have no
experience. Evidence from their performance on homework and
examinations suggests that students are learning well. (2000, vi)
While the use of experiments in economic classrooms is strongly
supported by those who are already using them, much of this evidence is
anecdotal.
Calls for controlled analysis of the impact of classroom
experiments began soon after instructors started using them. As Fels
(1993, 365) notes, the evidence in support of classroom experiments has
often amounted to the proponents saying "'This is what I do,
and I like it. So do my students.' ... No serious attempt, however,
has been made to evaluate any of them [classroom experiments]."
Because using classroom experiments is costly to an instructor, both in
terms of start-up costs and valuable classroom time, an instructor needs
to be fairly certain that the benefit to the students will outweigh these costs. While those who use experiments in the classroom believe
that they are beneficial, the need exists for a body of data to support
this belief and to explain the way in which experiments impact the
educational experience. If no significant effect on learning is found,
then the educators currently praising this method may simply be
responding to the enthusiasm and excitement that experiments generate in the
classroom. This would be a significant finding as well.
Although not conclusive, there have been several recent efforts
made to provide this evidence in a variety of settings. Dickie (2006)
assesses the impact of classroom experiments on student scores on the
Test of Understanding in College Economics (TUCE) (Saunders 1991). He
finds significantly larger TUCE score improvement when students in two
principles of microeconomics courses taught with experimental materials
are compared to those in a concurrent course taught in the traditional
manner, with some evidence that adding grade incentives for success in
the experiments negatively impacts these benefits. He also finds that
the impact of classroom experiments varies with student aptitude.
Emerson and Taylor (2004) examine data from two sections of
introductory microeconomics taught with significant use of classroom
experiments and seven sections taught in the traditional lecture/class
discussion format during one semester, and they find that students in
the experiment sections experienced a significantly larger improvement
in their TUCE scores than those in the lecture/discussion sections.
They find that classroom experiments do not significantly affect
performance on the departmental final exam, student evaluations, or
class attrition rates. The authors suggest that further studies should
examine this impact with varying class sizes and in ways that allow for
a more explicit accounting of unobserved instructor effects.
Yandell (1999) finds that student performance on the final exam in
an integrated micro- and macroeconomics principles course taught using
more experiments is not significantly different from that of students in
the micro principles sections taught with only a few experiments in the
previous year. Frank (1997) finds that students involved in an
experiment about common-property resources performed better on a test
covering that material than students who were not involved with the
experiment. Gremmen and Potters (1997) find that students who
participated in a macroeconomic simulation game performed significantly
better on an exam covering that material than their counterparts who had
received lectures on the same material.
Cardell et al. (1996) examined the effects of re-organizing
economic instruction into a laboratory format. Laboratory sessions
involved the use of personal computers, data on actual economic
behavior, and some experiments. This change was implemented in an
intermediate macroeconomics course at Denison University, and the
authors found a positive and significant difference in TUCE score
improvements for students in the laboratory course as compared to
students 10 years previously. At Washington State University (WSU), the
change was implemented in introductory courses, and the authors found no
net positive statistical impact of the laboratory experiment on TUCE
scores when compared to those of students concurrently enrolled in the
introductory courses with the standard lecture/discussion group format.
The results from these studies provide some evidence that classroom
experiments make a difference, but they are not conclusive. Dickie
(2006), Emerson and Taylor (2004), Frank (1997), and Gremmen and Potters
(1997) find that their measures of performance are improved with the use
of experiments. The Denison demonstration, while not strictly an
evaluation of classroom experiments, indicates that the laboratory work
in intermediate economics may have affected student learning when
measured by the TUCE. However, since no control group was used, it is
unclear whether other factors could be causing this change.
Yandell (1999) and the results from the WSU experiment indicate
that there is no significant difference between the students who are
exposed to experiments and those who are not. In Yandell (1999), the
comparison is between students in two different classes, and the
students in the non-experiment course were actually exposed to two of
the experiments. Therefore, in some sense, it examines the effects of
additional experiments on performance.
C. Other Considerations
The current study is a more comprehensive examination of the
effectiveness of classroom experiments than those discussed above, and
it addresses several of the issues that Emerson and Taylor (2004)
suggest. We look at students in both the principles of microeconomics
and macroeconomics and examine whether experiments add to their
understanding of the particular topics addressed by these experiments.
In addition to simply examining academic performance, we attempt to
discover whether students with different learning styles benefit
differentially from the use of experiments and what the impact of
classroom experiments is on both student attitudes towards economics and
their retention of economic knowledge. This is done while controlling
for instructor effects, class time, and class size.
Learning Styles. It is well known that individuals learn in many
different styles. Different teaching methodologies may be more or less
productive for students with different learning styles. Siegfried and
Fels (1979) find that different teaching methods have little impact on
student learning. However, Charkins, O'Toole, and Wetzel (1985)
point out that one reason for this finding may be that introducing
different teaching methods may cause the distribution of benefits to
vary--certain techniques may help some students learn, but may hinder others. Therefore, there would be no reason to expect any significant
change in aggregate results. Borg and Shapiro (1996) find that
personality type significantly affects performance in the college
economics classroom. While introducing classroom experiments may not
significantly affect aggregate performance, it may positively impact
certain types of learners. We hypothesize that the use of classroom
experiments will differentially benefit students based on their
particular learning styles. This would argue for the use of a variety of
teaching methods, including experiments, in the economics classroom.
Attitudes. Improving student attitudes towards economics may
enhance learning and is therefore an important component of the learning
experience. Karstensson and Vedder (1974) find that students more
favorably disposed towards economics perform better in economics
classes. Charkins, O'Toole, and Wetzel (1985) find that the greater
the divergence between teaching style and learning style, the less
positive the student attitudes, and the smaller the gain in achievement.
They argue that instructors can improve economic understanding and
attitudes towards economics by utilizing varied teaching methods. For
this reason, we hypothesize that the use of experiments in the classroom
will result in a positive change in student attitudes towards economics.
Retention. Walstad (2001) finds the rather troubling result that
economics study at the university level seems to have little effect on
what students know about basic economics when they graduate and
afterward. Walstad and Allgood (1999) find only a two-point difference
on a 15-item test between the scores of college seniors who had or had
not taken an economics course. Given this poor retention performance,
finding a teaching method that improves retention would be a significant
result. We hypothesize that the use of experiments in the classroom will
increase the retention of economic knowledge.
III. EXPERIMENTAL METHODS AND PROCEDURES
The experimental method used in this study employs a treatment
group/control group design. The treatment and control groups were
separated across semesters to allow for control of many confounding factors such as instructor, class time, class duration, and class size.
During the test period (Fall 2000 to Spring 2002) a total of 16 class
sections were included in the experiment--two sections of the principles
of macroeconomics and two sections of the principles of microeconomics
per semester for four semesters. The four fall semester class sections
were the control sections, while the four class sections in each of the
spring semesters were treatment sections. (3)
To account for instructor effects, the macroeconomics sections were
taught by one instructor and the microeconomics sections by two other
instructors. Additionally, to examine class size effects, class size was
held constant across control and treatment sections at 120 seats the
first year and 60 seats the second year. Class times were also held
constant across control and treatment sections, and all classes were
taught during the morning hours. Elements of the experimental design
included to control for contributing factors other than the treatment
are presented in Table 1.
During the semester, students were required to sit for several
within-term exams and a final exam. Items from the instrument developed
during the initial phase of the project were incorporated into these
exams, along with other items relevant to the material covered by the
exam. Relative performance on an item-by-item basis provides a test of
the impact of classroom experiments on student learning. Data on
additional factors thought to affect performance, such as ACT scores,
attendance, grade point average (GPA), etc., were also collected. Table
2 presents the means and associated means difference test for each of
these variables for both the control and treatment groups. In the
microeconomics courses, there is a small but significant difference in
the means for the control and treatment groups for college GPA at the 5%
level and Age, ACT scores, and high school GPA at the 10% level. For the
macroeconomics courses, there is a small but significant difference in
mean ACT scores and the proportion of students who were business majors
at the 5% level. (4) The means for the other explanatory variables are
not significantly different across the control and treatment samples.
While the experimental design allows for control of important
factors such as instructor and class time, it does not allow us to
control for possible inherent differences between the fall and spring
semesters. Recent experience at the University of Arkansas does not
suggest any systematic difference in either enrollment or student
performance across the fall and spring semesters in the two principles
courses. (5) For our sample, as discussed above, there are some
significant differences in high school and college GPAs, ACT scores, and
the proportion of business majors across the fall and spring semesters,
but the sizes of these differences are quite small, and we are able to
control for them in our statistical analysis. Although we cannot
completely rule out unseen semester effects, those that are apparent do
not appear to be large and our analysis can account for them.
Another concern with running the control sections during the fall
semester and the treatment sections during the spring semester is that
students might try to self-select. This would have been very difficult
to do during the first year of the study because students had no way of
knowing that treatment sections would occur in the spring. It is
possible that they may have figured this out during the second year. In
an effort to explore this possibility of self-selection, we examined the
number of students who withdrew from a control section and appeared in a
treatment section of the same course in the following semester or
withdrew from a treatment section and appeared in a control section of
the same course the following semester. This occurred 20 times in the
micro classes and eight times in the macro classes, with only a small
number (5/20 and 4/8) of these occurring during the second year of the
study. (6) Additionally, it does not appear that students in a treatment
section of one of the courses attempted to take the treatment section of
the other course since only five students from the study appeared in
both treatment groups. There were also no significant differences across
treatments in course GPAs during the first year to provide students with
an impetus to favor one course over the other. During the second year,
one of the micro instructors had a significantly higher GPA in the
treatment section than in the control section. However, since this could
probably not have been predicted by the students, it is likely not the
cause of any self-selection bias.
Although the TUCE has generally been the instrument used to
evaluate student knowledge of economics, we chose to create a new
instrument that would indicate whether students are learning the
specific material that the experiments were designed to teach. Three
multiple-choice questions were designed for each topic. The first
question in each set was designed to measure knowledge and
comprehension, the lowest level of Bloom's (1956) taxonomy of
cognition. The second question was designed to measure simple
application, a somewhat higher level of cognition. The final question
dealt with the highest level of cognition, namely analysis, synthesis,
or evaluation. The questions were included as part of the
instructors' regular exams, and performance on these particular
items is used to assess differences in knowledge gained. (7,8)
The control sections were conducted using a traditional
lecture/class discussion format. The treatment sections supplemented the
lecture and class discussion with either eight (micro) or five (macro)
classroom experiments, each designed to illustrate a key economic
concept. The concepts covered are shown in Tables 3 and 4. Students were
not assigned grades for their performance in the experiments. The pacing
of material during the semester was closely matched between the control
and treatment groups so that each group received approximately the same
amount of "contact time" for a particular set of material.
Matching the pace of coverage in the treatment and control groups--by
supplementing the traditional lecture method with additional discussion,
examples, and problems in the control group--provides a more robust test
of whether any measured treatment effects are due to the use of
classroom experiments rather than simply additional time spent on a
topic. The same textbook was used in both the control and treatment
sections of each course. In order to keep the workload as consistent as
possible between the control and treatment sections, no additional
homework was assigned over the experiments. The only case of extra work
occurring outside of class with the treatment group was the monopoly experiment in the microeconomics course. The students were asked to
individually access the program outside of class in one of the computer
labs. Students could complete the experiment easily in under an hour.
The fact that no additional homework over the experiments was assigned
might possibly lower the benefit of using experiments in the classroom,
and therefore bias our results against finding a significant treatment
effect.
In addition to the controls noted above, a variety of other types
of information was gathered from the students. The VARK learning style
questionnaire (Fleming and Bonwell 1998) was administered at the
beginning of the semester to determine students' preferred learning
styles. An attitude survey was also given to the students, both at the
beginning and at the end of the course. An analysis of the responses to
this survey provides us with an indication of how experiments affect
attitudes toward economics. In order to assess whether students taught
introductory economics using classroom experiments retain more knowledge
than those taught in the standard manner, students were tracked into a
required advanced business course. Items from the assessment instrument
were incorporated into a pretest administered at the beginning of this
course. Student performance on this pretest was used to assess the
impact of classroom experiments on students' retention of economic
knowledge.
IV. DATA AND METHODOLOGY
As noted above, student performance on specific questions related
to each of the target concepts is the metric used to assess the effect
of the classroom experiments on student learning. Three exam questions
were associated with each of the concepts. Average performance on the
set of three questions related to each concept and average overall
performance on all the questions (24 total for microeconomics and 15 for
macroeconomics) became the dependent variables in logistic regressions
to test for a treatment effect. (9) The specific control variables used
in the estimation are those defined in Table 2, with the addition of
Treatment, a dummy variable set equal to 1 if a student was part of a
treatment section and 0 if they were part of a control section, and
Instructor, which is a dummy variable set equal to 1 for one of the
microeconomics instructors.
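The specification above can be illustrated with a minimal sketch. This is not the authors' code or data: the synthetic sample, the variable names (act, gpa, treatment), and the coefficient values are all illustrative assumptions. It simply shows a logistic regression of a binary performance outcome on a treatment dummy plus controls, fit by Newton-Raphson.

```python
import numpy as np

# Illustrative sketch (not the study's actual data or estimates):
# logistic regression of a binary "answered correctly" outcome on a
# treatment dummy and two controls, fit by Newton-Raphson.
rng = np.random.default_rng(0)
n = 1585                              # total students in the study
act = rng.normal(24, 3, n)            # ACT score control (assumed)
gpa = rng.normal(3.0, 0.5, n)         # college GPA control (assumed)
treatment = rng.integers(0, 2, n)     # 1 = treatment section, 0 = control

# Simulate the outcome with a small "true" treatment effect (0.20)
# so the sketch has something to recover.
logit = -4.0 + 0.10 * act + 0.50 * gpa + 0.20 * treatment
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), act, gpa, treatment])
beta = np.zeros(X.shape[1])
for _ in range(25):                   # Newton-Raphson iterations
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)              # score of the log-likelihood
    hess = (X * (p * (1.0 - p))[:, None]).T @ X   # information matrix
    beta += np.linalg.solve(hess, grad)

print("estimated treatment coefficient:", beta[3])
```

With the fixed seed, the estimated coefficient on the treatment dummy lands near the simulated value of 0.20, which is the quantity whose sign and significance the regressions in Tables 3 and 4 report.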
The results discussed below are based on an analysis of data for a
total of 1585 students--754 students in the microeconomics sections and
831 in the macroeconomics sections. Student withdrawals and missing data
left a varying number of observations for estimation. Regression results
for student performance on individual concepts and on the complete set
of assessment questions are presented in Table 3 for the microeconomics
sections and Table 4 for the macroeconomics sections.
To assess the extent to which the treatment effect varies by
student learning style, the treatment variable was split into five
different dummy variables set equal to 1 if the student was part of a
treatment section and exhibited a particular learning style: multimodal,
visual, aural, read-write, or kinesthetic. Results for these regressions
are presented in Tables 5 and 6.
To assess whether the treatment had an impact on student attitudes
towards the study of economics, three additional regressions were
estimated. The students' average attitude scores (across eight
questions, each question allowing a 1-5 rating) on the survey
administered at the beginning of the semester, the end of the semester,
and the change in the attitude scores were dependent variables with
control variables the same as the performance regressions. Results for
these regressions are presented in Table 7.
Lastly, the impact of classroom experiments on the retention of
economic knowledge was examined with two regressions using student
performance on the retention exam as the dependent variable, with
treatment dummies and ACT scores as control variables. Results for these
regressions are shown in Table 8.
V. RESULTS
Examining the regression results for student performance on All
Concepts, the last column in Table 3 for the microeconomics courses and
Table 4 for the macroeconomics courses, we find a positive and
significant treatment effect. (10) Students in the treatment group
scored significantly higher on the assessment questions, indicating that
participation in classroom experiments enhances student performance on
average, regardless of learning style.
A. Concept Performance
Microeconomics. In the microeconomics courses, in addition to
Treatment, all other control variables except Ethnicity and Business
Major are significant determinants of student performance on All
Concepts (at the 5% level). The well-documented gender effect is evident
here: Jensen and Owen (2001), Robb and Robb (1999), Dynan and Rouse
(1997), Feiner and Roberts (1995), and Ferber (1995) find various
reasons for better academic performance by males in economics courses.
As hypothesized and consistent with previous research of Marburger
(2001), Durden and Ellis (1995), and Romer (1993), attendance positively
affects performance. Surprisingly, Small Class has a significantly
negative impact on performance, perhaps because University of Arkansas
principles sections tend to be large. Therefore, the instructors
involved are accustomed to teaching these courses with larger class sizes.
Examining the effects of the experiments on knowledge of individual
concepts in the microeconomics sections, we find that four experiments
positively and significantly affect performance (three at the 5% level
and one at the 10% level in a one-tailed test), two are insignificant,
and two negatively and significantly affect performance. The four
experiments that have a positive and significant impact on performance
(resource allocation, demand and supply, cartels, and public goods) are
some of the more lengthy and complicated experiments in the group. These
experiments tend to use a significant portion of the class period and
involve several decision-making periods. It is certainly possible that
one or both of these characteristics is important in enhancing learning.
The two experiments that do not have a significant impact on
performance (comparative advantage and production and costs) are
shorter, with one being a demonstration and the other involving only a
single decision. Both the shorter amount of time and the opportunity to
make only one decision may work to make these experiments less
effective. It is also possible that the concepts themselves are
straightforward enough in nature and that experiments are not a superior
method of teaching this material. The fact that participation in these
activities does not significantly enhance performance simply indicates
that the lecture/discussion format is as effective as using
these particular experiments to teach these specific concepts.
More troublesome are those experiments that appear to negatively
and significantly affect performance. For those topics, the
lecture/discussion format may be a superior teaching method. The
activities that appear to be inferior to lecture/discussion are the
diminishing marginal utility (mu) demonstration and the monopoly
experiment. Again, the diminishing marginal utility activity is a
demonstration in which the students observe the behavior of only a few
volunteers from the class; therefore, although it differs from a
lecture or reading, most students do not get a hands-on experience.
This hands-on experience may be one of the characteristics of most
experiments that adds value. The monopoly experiment, as indicated by
some student feedback, may be a bit too complicated, perhaps obscuring
the basic lesson to be learned from the experiment. It is also possible
in both of these experiments that students are learning things that are
not being captured by the assessment questions. We measure performance
on specific questions that were designed to ascertain if students are
learning what we expect them to learn. These questions may not capture
what students are actually learning from these experiments.
The results from the regression analysis allow us to determine
which of the variables are significant factors in determining
performance, but the magnitude of this effect is also a concern. The
marginal effect of the treatment, which measures the increase in the
average student's percentage score (either on the three questions
for each concept or the set of all 24 questions in the overall case)
owing to the treatment, can be calculated. (11) The marginal treatment
effects for the microeconomics sections can be found in Table 3. The
demand and supply experiment and the cartel game have the largest
impacts on performance (39.03 and 13.68 percentage points,
respectively), while the impacts of the resource allocation and public
goods experiments are a bit more modest. The diminishing marginal
utility and monopoly experiments have moderate impacts, but as discussed
above, in the wrong direction. Overall, participating in this particular
set of microeconomic classroom experiments increases the average
student's score on all the assessment questions by 3.24 percentage
points.
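The marginal-effect calculation described above can be sketched as follows. The coefficient values and variable names here are illustrative assumptions, not the paper's estimates; the point is only the mechanics: the marginal treatment effect is the change in the average student's predicted probability of a correct answer when the treatment dummy flips from 0 to 1, holding the other controls fixed.

```python
import numpy as np

# Hypothetical fitted logistic-regression coefficients (illustrative
# values, not the paper's estimates).
beta = {"const": -3.5, "act": 0.10, "gpa": 0.60, "treatment": 0.15}

# Synthetic control variables standing in for the student sample.
rng = np.random.default_rng(1)
act = rng.normal(24, 3, 1000)
gpa = rng.normal(3.0, 0.5, 1000)

def predict(treat):
    """Predicted probability of a correct answer at a given treatment value."""
    z = (beta["const"] + beta["act"] * act + beta["gpa"] * gpa
         + beta["treatment"] * treat)
    return 1.0 / (1.0 + np.exp(-z))

# Average partial effect of treatment, in percentage points: the mean
# difference in predicted probabilities with the dummy at 1 versus 0.
ape = 100.0 * np.mean(predict(1) - predict(0))
print(f"marginal treatment effect: {ape:.2f} percentage points")
```

Computed this way, the effect is expressed in the same units as the percentage-point figures reported for the individual experiments in Table 3.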
Macroeconomics. For the macroeconomics sections, shown in Table 4,
student performance on All Concepts is significantly affected by
Treatment and all other control variables except Age and Business Major.
Once again, performance on the assessment instrument is better with a
large class size. While we hypothesized that attendance would improve
performance, it is, surprisingly, negatively related to performance,
though the relationship is only weakly significant (at the 10% level in
a one-tailed test).
This may be an artifact of the way in which attendance was measured.
Because of the large number of students involved and the fact that the
instructors did not usually take attendance, a graduate student recorded
attendance on randomly selected days throughout the semester. Therefore,
it is possible that our random sample of attendance may not be a good
indicator of overall attendance in the macroeconomics sections.
If we consider specific macroeconomic concepts, knowledge of equity
and efficiency, money creation, and the federal funds market is
positively and significantly affected by participation in these
experiments (all at the 5% level in a one-tailed test). The effect of
the savings and consumption activity, while negative, is not
significant. However, the effect of the Consumer Price Index (CPI)
experiment is significant and negative. Once again, the insignificant
effect of the savings and consumption activity may simply be the case of
a concept that is transparent enough that an experiment or demonstration
is not superior to lecture and classroom discussion. The negative impact
on performance for the CPI experiment may be due, in large part, to the
instructor's discomfort with this particular experiment. He
indicated that he was not initially at ease with the experiment and felt
that it did not go well in a couple of his classes. This stresses an
important intuitive point. For experiments to be most useful as a
teaching tool, instructors need to find those experiments that they feel
comfortable implementing.
Again, in order to evaluate the magnitude of the impact of these
experiments on performance, the marginal effects are calculated.
Clearly, the marginal effects of the individual macroeconomic
experiments tend to be larger than those of the microeconomic
experiments. The experiments that significantly and positively affect
performance have relatively large impacts, ranging from 22.02 to 46.13
percentage points for individual concepts. This larger impact for
macroeconomic concepts is an important finding since the use of
experiments tends to be more common in microeconomics courses than in
macroeconomics courses. The results here indicate that perhaps, from the
students' point of view, this practice should be reversed. Overall,
participation in this set of macroeconomic experiments raises the
average student's score on the set of macroeconomic assessment
questions by 9.63 percentage points.
B. Learning Styles
While there are significant gains overall from the treatment
effect, the magnitude of the gain does appear to be somewhat larger
for students with particular learning styles. The majority of the
students in this sample (72.6% in microeconomics and 69% in
macroeconomics) are multimodal learners. Kinesthetic learners make up
14.8% of the micro students and 17.5% of the macro ones. In both
classes, read-write learners make up roughly 4%, aural 7%, and visual 2%
of the students. We had no a priori hypotheses about the manner in which
particular learning styles would interact with the treatment to affect
student performance. The results of the analysis of learning style
effects, along with the marginal treatment effects, can be found in
Tables 5 and 6.
For the microeconomic concepts, the performance on All Concepts of
multimodal learners (at 5%) and kinesthetic learners (at 10%), which
together make up 87.4% of the students, is significantly improved with
the use of experiments. In fact, the average student in each of these
categories sees an increase in his/her percentage score on the
assessment instrument of 3.23 and 3.56 percentage points, respectively.
The visual, aural, and read-write learners do not show significant
improvement when experiments are used. These students do just as well
with the lecture/discussion method.
In the case of the macroeconomic concepts, the data suggest that
there are significant gains to all types of learners except read-write
learners. Again, the overall size of these impacts is larger in the
macroeconomics case than in microeconomics (even after accounting for
the smaller total number of macroeconomic assessment questions), ranging
from 7.63 to 16.74 percentage-point improvements. Again, read-write
learners (who make up only 4% of this sample) are not significantly
affected.
C. Attitudes
Attitudes toward the study of economics in the treatment groups
improved over the course of study (see Table 7). We hypothesized that
classroom experiments would improve student attitudes toward economics.
For the microeconomics sections, beginning attitudes were not
significantly different from the control group, but final attitudes were
significantly more favorable. The change was positive and statistically
significant (at the 10% level in a one-tailed test). For the macro
sections, beginning attitudes of the treatment group were significantly
less favorable than the control group, but by the end of the semester,
were not significantly different. Once again, the attitude change was
positive and statistically significant. Experiments clearly positively
affect students' attitudes toward the study of economics. This is
consistent with the observation that experiments generate enthusiasm in
the classroom and is likely an important contribution of classroom
experiments to the educational experience.
D. Retention
The question of the impact of using experiments on student
retention of knowledge is addressed in Table 8. The students who took
the retention exam in the upper division business course fall into one
of two categories--they either participated in some combination of a
micro control, macro control, micro treatment, or macro treatment
section or they were not a part of the initial study. Results from two
regressions examining retention levels are presented here.
The first model regresses student performance on the pretest, which
covers the 13 concepts (from both micro and macro) used in the study,
on four dummy variables representing the four experimental categories
and on student ACT scores. If students participated in either one of the micro
or macro treatment sections, their performance on the retention exam was
significantly and positively affected. Students in the control groups
also scored higher on the exam than those not in the study, but students
in the treatment sections performed even better. This might be expected,
since even students in the control groups had previously been tested on
these concepts. The second model classifies students only as
either part of the treatment group or not. The results from this
regression indicate that exposure to experiments in the principles
course significantly improves performance on the retention exam. The
marginal effect of the treatment is 4.66 percentage points.
VI. CONCLUSIONS
Our results indicate that classroom experiments improve student
performance on questions covering the topics that the experiments are
designed to explore. This is true in the case of both microeconomic and
macroeconomic principles, although the impact seems to be larger for the
macroeconomic concepts. While experiments tend to be used more
frequently in microeconomics courses than in macroeconomics courses,
this outcome would indicate that they may be more valuable in the
macroeconomics classroom.
A more detailed look at the data indicates that students seem to
benefit more from some experiments than from others. This may be
explained in part by the varying lengths of and number of decisions
involved in each experiment. Another possible factor may be the nature
of the material being demonstrated. Experiments may be very useful for
demonstrating concepts that are either less concrete or more complicated
to understand. For topics that are less complex in nature, experiments
may do nothing more than generate enthusiasm, which is important in and
of itself; in those cases, they are another, equally effective way to
transmit knowledge, and students enjoy them.
The data on learning styles suggest that gains vary across students
with different preferred learning styles, in conjunction with whether
the course is a macroeconomics or a microeconomics course. Kinesthetic
and multimodal learners (which make up the vast majority of students in
this sample) are significantly affected in both courses, while visual
and aural learners are significantly affected only in the macroeconomics
course. It appears that read-write learners perform just as well in the
standard lecture/discussion format in both classes.
The use of classroom experiments is a significant factor in the
improvement of student attitudes toward the study of economics for both
the macroeconomics and microeconomics courses. This is a significant
finding. Better attitudes toward economics are desirable for several
reasons. As previously mentioned, research suggests that attitudes
affect cognitive gains and retention. Moreover, better attitudes will
likely result in students enrolling in advanced economics courses.
Favorable attitudes may also give immediate results in better attendance
and student involvement in the course. While certain experiments may not
improve performance relative to the standard lecture/discussion teaching
mode, one could argue for their use simply because they help improve
student attitudes toward economics.
In addition to the impact of experiments on performance and
attitude, we find some evidence that students who experience economic
experiments in their principles courses retain more knowledge of the
concepts covered than those who do not. While the size of the impact is
not large, it is significant and positive.
This study suggests that classroom experiments have significant
impacts on the educational experience. First, using experiments
increases students' cognitive gains overall, but may be more
helpful for teaching some topics than others. Second, classroom
experiments impact students differently depending on their learning
styles. In the case of multimodal or kinesthetic learners, experiments
facilitate learning more effectively than lecture/discussion in both
micro and macro principles. Read-write learners are not significantly
affected. Third, student attitudes towards economics are improved by the
use of experiments, which may mean more enthusiasm for learning and a
better class atmosphere. Finally, the use of classroom experiments
increases knowledge retention. The results from this study provide
evidence that the start-up and class time costs of implementing
experiments in the principles classroom may very well be balanced by
improved knowledge, attitudes, and retention.
doi: 10.1111/j.1465-7295.2006.00003.x
REFERENCES
Bartlett, R. L., and P. G. King. "Teaching Economics as a
Laboratory Science." Journal of Economic Education, 21, 1990,
181-93.
Becker, W. E., and M. Watts. "Chalk and Talk: A National
Survey on Teaching Undergraduate Economics." American Economic
Review Papers and Proceedings, 86, 1996, 448-53.
--. "Teaching Methods in U.S. Undergraduate Economics
Courses." Journal of Economic Education, 32, 2001, 269-79.
Benzing, C., and P. Christ. "A Survey of Teaching Methods
among Economics Faculty." Journal of Economic Education, 28, 1997,
182-8.
Bergstrom, T. C., and J. H. Miller. Experiments with Economic
Principles. 2nd ed. New York: McGraw-Hill Companies Inc., 2000.
Bloom, B. S., ed. Taxonomy of Educational Objectives. New York:
David McKay Company Inc., 1956.
Bonwell, C. C., and J. A. Eison. "Active Learning: Creating
Excitement in the Classroom." ASHE-ERIC Higher Education Report No.
1. Washington, DC: The George Washington University, School of Education
and Human Development, 1991.
Borg, M. O., and S. L. Shapiro. "Personality Type and Student
Performance in Principles of Economics." Journal of Economic
Education, 27, 1996, 3-25.
Cardell, N. S., R. Fort, W. Joerding, F. Inaba, D. Lamoreaux, R.
Rosenman, E. Stromsdorfer, and R. Bartlett. "Laboratory-Based
Experimental and Demonstration Initiatives in Teaching Undergraduate
Economics." American Economic Review Papers and Proceedings, 86,
1996, 454-64.
Charkins, R. J., D. M. O'Toole, and J. N. Wetzel.
"Linking Teacher and Student Learning Styles with Student
Achievement and Attitudes." Journal of Economic Education, 12,
1985, 111-20.
Dickie, M. "Do Classroom Experiments Increase Learning in
Introductory Microeconomics?" Journal of Economic Education, 37,
2006, 267-88.
Durden, G. C., and L. V. Ellis. "The Effects of Attendance on
Student Learning in Principles of Economics." American Economic
Review Papers and Proceedings, 85, 1995, 343-6.
Dynan, K., and C. E. Rouse. "The Underrepresentation of Women
in Economics: A Study of Undergraduate Economics Students." Journal
of Economic Education, 28, 1997, 350-68.
Emerson, T. L., and B. A. Taylor. "Comparing Student
Achievement across Experimental and Lecture-Oriented Sections of a
Principles of Microeconomics Course." Southern Economic Journal,
70, 2004, 672-93.
Feiner, S., and B. Roberts. "Using Alternative Paradigms to
Teach about Race and Gender: A Critical Thinking Approach to
Introductory Economics." American Economic Review, 85, 1995,
367-71.
Fels, R. "This Is What I Do, and I Like It." Journal of
Economic Education, 24, 1993, 365-70.
Ferber, M. A. "The Study of Economics: A Feminist
Critique." American Economic Review, 85, 1995, 357-61.
Fleming, N. D., and C. C. Bonwell. VARK. 1998.
Frank, B. "The Impact of Classroom Experiments on the Learning
of Economics: An Empirical Investigation." Economic Inquiry, 35,
1997, 763-9.
Gremmen, H., and J. Potters. "Assessing the Efficacy of Gaming
in Economic Education." Journal of Economic Education, 28, 1997,
291-303.
Jensen, E. J., and A. L. Owen. "Pedagogy, Gender, and Interest
in Economics." Journal of Economic Education, 32, 2001, 323-43.
Karstensson, L., and R. K. Vedder. "A Note on Attitude as a
Factor in Learning Economics." Journal of Economic Education, 5,
1974, 109-11.
Marburger, D. R. "Absenteeism and Undergraduate Exam
Performance." Journal of Economic Education, 32, 2001, 99-109.
Nunnally, J. C., and I. Bernstein. Psychometric Theory. 3rd ed. New
York: McGraw-Hill Companies Inc., 1994.
Robb, R. E., and A. L. Robb. "Gender and the Study of
Economics: The Role of Gender of the Instructor." Journal of
Economic Education, 30, 1999, 3-19.
Romer, D. "Do Students Go to Class? Should They?" Journal
of Economic Perspectives, 7(3), 1993, 167-74.
Samuelson, P. A., and W. D. Nordhaus. Principles of Economics.
12th ed. New York: McGraw-Hill Companies Inc., 1985.
Saunders, P. E. Test of Understanding in College Economics. Third
Edition: Examiner's Manual. New York: Joint Council on Economic
Education, 1991.
Siegfried, J. J., and R. Fels. "Research on Teaching College
Economics: A Survey." Journal of Economic Literature, 17, 1979,
923-69.
Thielens, W., Jr. "The Disciplines and Undergraduate
Lecturing." Paper presented at an annual meeting of the American
Educational Research Association, Washington, DC, April 1987, ED 286
436, 57 pp. MF-01 : PC-03.
Walstad, W. B. "Improving Assessment in University
Economics." Journal of Economic Education, 32, 2001, 281-94.
Walstad, W. B., and S. Allgood. "What Do College Seniors Know
about Economics?" American Economic Review, 89, 1999, 350-54.
Yandell, D. "Effects of Integration and Classroom Experiments
on Student Learning and Satisfaction." Working Paper, University of
San Diego, 1999.
(1.) Complete descriptions and citations for the experiments used
in this study and the assessment instrument developed are available upon
request.
(2.) The attitude survey is available upon request.
(3.) Due to an inadvertent error in the placing of the questions on
exams in the macro sections during the fall of 2001, the control
sections for the second year were rerun during the fall semester of
2002.
(4.) Note that the Small Class, Gender, Ethnicity, and Business
Major means represent the proportion of the total population in those
respective categories. Although withdrawal rates are not included as an
explanatory variable, our analysis of that data indicates that those
rates did not significantly differ across the control and treatment
groups.
(5.) Examination of course GPAs and enrollments across the two
semesters for the two years previous to this study indicates no
significant difference between fall and spring semester grades or
enrollments.
(6.) The regressions discussed in the latter part of this paper
were run without these 28 students, and no significant change occurred.
(7.) The questions on the assessment instrument were carefully
reviewed by the authors and instructors involved. Nunnally and Bernstein
(1994) indicate that "content validity primarily rests on rational
rather than empirical grounds." They argue that potential users
should agree that the procedure is sensible and that the important
content has been sampled and cast into the test items. Although the
primary test of validity occurs rationally, they also suggest an
empirical test that provides important circumstantial evidence that the
instrument is valid. This test involves computing the biserial
correlation between the score on the item and the score on the overall
exam. Nunnally and Bernstein indicate that most item-total correlations
range from 0 to 0.4 and use, as an "arbitrary guide," a
cut-off value of either 0.2 or 0.3 to define a discriminating item. When
biserial correlations are computed for the questions used, 29/39 have a
correlation of 0.3 or higher, 34/39 have a correlation of 0.25 or
higher, and 36/39 have a correlation of 0.2 or higher.
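The item-total check described in this footnote can be sketched in code. The snippet below uses the point-biserial form of the correlation (the Pearson correlation between a 0/1 item score and the total score), which is a common practical stand-in for the biserial correlation Nunnally and Bernstein discuss; the response data shown are made up for illustration, not the study's actual exam responses.

```python
import numpy as np

def item_total_correlations(responses):
    """responses: (n_students, n_items) array of 0/1 item scores.
    Returns the point-biserial correlation of each item with the
    total exam score."""
    responses = np.asarray(responses, dtype=float)
    totals = responses.sum(axis=1)
    return np.array([np.corrcoef(responses[:, j], totals)[0, 1]
                     for j in range(responses.shape[1])])

# Illustrative data: 6 students answering 3 items (1 = correct).
scores = [[1, 1, 0],
          [1, 0, 0],
          [0, 1, 1],
          [1, 1, 1],
          [0, 0, 0],
          [1, 1, 0]]
r = item_total_correlations(scores)
# Items with r of 0.2 (or 0.3) and above would count as
# "discriminating" under the Nunnally-Bernstein guide cited above.
```

In practice one would compute each item's correlation with the total of the *remaining* items to avoid the item correlating with itself; the simpler item-vs-total form shown here matches the footnote's description.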
(8.) An initial analysis of the impact of experiments on the
"depth" of student learning, as characterized by Bloom's
taxonomy, provides no clear evidence of an impact in either direction.
Results from this analysis are available upon request.
(9.) In the concept regressions, the dependent variable can take on
one of four values--0, 1/3, 2/3, or 1--since there are three questions.
In the overall micro and macro regressions, it can take one of 25 or 16
values, respectively.
(10.) The p-values indicated in the tables are for two-tailed tests. Note that our hypothesis is that the use of experiments will
improve student performance. Therefore, our test is a one-tailed test.
The relevant p-value is effectively half that shown on the tables.
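The halving rule in this footnote is simple enough to state as a small helper. The function below is an illustrative sketch, not part of the study's code; it assumes a symmetric test statistic, so the one-tailed p-value is half the two-tailed value when the estimate has the hypothesized sign.

```python
def one_tailed_p(two_tailed_p, estimate, hypothesized_sign=1):
    """Convert a two-tailed p-value to a one-tailed p-value.

    If the estimate agrees with the hypothesized sign, the one-tailed
    p-value is half the two-tailed value; otherwise it is 1 - p/2.
    Assumes a symmetric test statistic (e.g., t or z)."""
    if estimate * hypothesized_sign > 0:
        return two_tailed_p / 2
    return 1 - two_tailed_p / 2

# Example using the Treatment coefficient for All Concepts in the
# micro regressions (0.141, two-tailed p = 0.002): the one-tailed
# p-value relevant to the hypothesis of improved performance is half.
p_one = one_tailed_p(0.002, 0.141)
```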
(11.) Note that since Treatment is a dummy variable, the marginal
treatment effect measures the vertical shift in the cumulative logistic
distribution function measured at the mean of the right-hand-side
variables.
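The calculation in this footnote can be sketched as follows: for a logit-style model, switching the Treatment dummy from 0 to 1 shifts the predicted score along the cumulative logistic distribution function, evaluated at the means of the other right-hand-side variables. The coefficients and means below are hypothetical placeholders, not the paper's estimates.

```python
import math

def logistic_cdf(z):
    """Cumulative logistic distribution function."""
    return 1.0 / (1.0 + math.exp(-z))

def marginal_treatment_effect(intercept, beta_treatment, betas, x_means):
    """Percentage-point change in the predicted score from switching
    the Treatment dummy from 0 to 1, holding the other regressors at
    their sample means."""
    base = intercept + sum(b * x for b, x in zip(betas, x_means))
    return 100 * (logistic_cdf(base + beta_treatment) - logistic_cdf(base))

# Hypothetical example: one control variable (coefficient 0.05) held
# at its mean of 21.0, intercept -1.8, treatment coefficient 0.14.
effect = marginal_treatment_effect(-1.8, 0.14, [0.05], [21.0])
```

Because the logistic CDF is nonlinear, the same treatment coefficient implies different marginal effects at different points of the distribution, which is why the footnote fixes the evaluation point at the means of the right-hand-side variables.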
YVONNE DURHAM, THOMAS MCKINNON, and CRAIG SCHULMAN *
* The authors gratefully acknowledge the financial support of the
National Science Foundation, Grant No. DUE-9950825.
Durham: Associate Professor, Western Washington University,
Bellingham, WA 98225. Phone 1-360-650-2794, Fax 1-360-650-6315, E-mail
yvonne.durham@wwu.edu
McKinnon: University Professor Emeritus, University of Arkansas,
Fayetteville, AR 72701. Phone 1-479-575-3266, Fax 1-479-575-3241, E-mail
tmckinnon@walton.uark.edu
Schulman: Visiting Associate Professor, Texas A&M University
and Principal, LECG, LLC, College Station, TX 77845. Phone
1-979-694-5790, Fax 1-979-694-5790, E-mail cschulman@lecg.com
TABLE 1
Elements of the Experimental Design
Treatment/
Class/Section Class Size Class Time (a) Control
Macro Section 1-F 2000 120 8:00-9:20 a.m. Control
Macro Section 2-F 2000 9:30-11:00 a.m.
Macro Section 3-S 2001 8:00-9:20 a.m. Treatment
Macro Section 4-S 2001 9:30-11:00 a.m.
Macro Section 5-F 2001 60 8:00-9:20 a.m. Control
Macro Section 6-F 2001 9:30-11:00 a.m.
Macro Section 7-S 2002 8:00-9:20 a.m. Treatment
Macro Section 8-S 2002 9:30-11:00 a.m.
Micro Section 1-F 2000 120 8:30-9:20 a.m. Control
Micro Section 2-F 2000 9:30-10:20 a.m.
Micro Section 3-S 2001 8:30-9:20 a.m. Treatment
Micro Section 4-S 2001 9:30-10:20 a.m.
Micro Section 5-F 2001 60 8:30-9:20 a.m. Control
Micro Section 6-F 2001 9:30-10:20 a.m.
Micro Section 7-S 2002 8:30-9:20 a.m. Treatment
Micro Section 8-S 2002 9:30-10:20 a.m.
(a) Eighty-minute class periods meet twice per week for 16 weeks.
Fifty-minute class periods meet three times per week for 16 weeks.
TABLE 2
Explanatory Variable Means
Microeconomics Sections
p-Value for
Overall Control Treatment Difference
Variable Mean Mean Mean Test
Small class 0.344 0.344 0.343 0.977
Age 20.750 20.964 20.541 0.056
Gender 0.593 0.584 0.602 0.623
Ethnicity 0.102 0.096 0.108 0.581
ACT composite 21.561 21.303 21.812 0.089
GPA 2.640 2.586 2.692 0.038
HS GPA 3.364 3.328 3.399 0.077
Attend 0.621 0.611 0.631 0.255
Business major 0.793 0.789 0.797 0.799
N 754 375 379
Macroeconomics Sections
p-Value for
Overall Control Treatment Difference
Variable Mean Mean Mean Test
Small class 0.298 0.291 0.305 0.664
Age 19.757 19.699 19.812 0.557
Gender 0.526 0.543 0.509 0.329
Ethnicity 0.075 0.077 0.073 0.836
ACT composite 22.189 21.852 22.513 0.033
GPA 2.786 2.792 2.781 0.828
HS GPA 3.456 3.445 3.466 0.579
Attend 0.761 0.770 0.753 0.560
Business major 0.613 0.647 0.580 0.047
N 831 405 426
Notes: Small Class: Dummy variable set equal to 1 if the section was
a 60-seat section. As these smaller sections were all taught in the
second year, this also serves as a dummy for the second year. Age: Age
of the student. Gender: Dummy variable set equal to 1 if the student
was male. Ethnicity: Dummy variable set equal to 1 if the student was
African-American. Hispanic, or an American Indian. ACT Composite:
Student's composite score on the ACT exam. GPA: Student's cumulative
college grade point average (four point scale) at the beginning of the
semester they were observed. The student's high school GPA (HSGPA) was
used if they were observed in their first college semester. Attend:
Student's attendance rate (randomly sampled throughout the semester).
Business Major: Dummy variable set equal to 1 if the student was a
business major.
TABLE 3
Microeconomic Regression Results: Concepts
Resource Comparative Demand &
Allocation Advantage Supply
Intercept -1.244 -4.167 -4.685
(0.370) (0.000) (0.001)
Treatment 0.285 0.178 1.651
(0.158) (0.279) (0.000)
Instructor -0.072 -0.404 -1.488
(0.728) (0.017) (0.000)
Small Class -0.085 -1.057 -0.911
(0.714) (0.000) (0.000)
Age 0.011 0.046 0.073
(0.838) (0.282) (0.149)
Gender 0.085 0.199 -0.218
(0.688) (0.255) (0.296)
Ethnicity -0.342 0.041 -0.019
(0.289) (0.878) (0.951)
ACT Composite 0.081 0.058 0.094
(0.007) (0.018) (0.001)
GPA 0.415 0.633 0.384
(0.035) (0.000) (0.046)
Attend 0.117 0.508 0.464
(0.821) (0.231) (0.360)
Business 0.254 -0.013 0.152
Major (0.353) (0.953) (0.571)
Degrees of
Freedom 559 576 560
R-Squared 0.056 0.127 0.257
Diminishing Production &
MU Cost Monopoly
Intercept -2.103 -2.904 -2.303
(0.096) (0.029) (0.044)
Treatment -0.809 0.189 -0.485
(0.000) (0.333) (0.004)
Instructor 0.654 0.453 -1.556
(0.001) (0.023) (0.000)
Small Class 0.699 0.455 -1.461
(0.002) (0.037) (0.000)
Age 0.078 0.054 0.053
(0.098) (0.278) (0.209)
Gender 0.195 0.671 -0.001
(0.331) (0.001) (0.995)
Ethnicity 0.183 -0.863 -0.008
(0.548) (0.007) (0.977)
ACT Composite 0.033 0.114 0.024
(0.257) (0.000) (0.356)
GPA 0.654 0.562 0.661
(0.001) (0.005) (0.000)
Attend 0.274 0.939 -0.300
(0.573) (0.066) (0.500)
Business 0.024 0.057 0.204
Major (0.923) (0.832) (0.375)
Degrees of
Freedom 507 544 544
R-Squared 0.145 0.174 0.256
Public All
Cartels Goods Concepts
Intercept -7.157 -2.639 -1.834
(0.000) (0.013) (0.000)
Treatment 0.723 0.848 0.141
(0.000) (0.000) (0.002)
Instructor 0.568 1.051 -0.146
(0.004) (0.000) (0.002)
Small Class -0.160 0.394 -0.102
(0.461) (0.024) (0.048)
Age 0.136 0.048 0.023
(0.005) (0.217) (0.042)
Gender 0.402 0.103 0.116
(0.054) (0.539) (0.017)
Ethnicity 0.205 -0.153 -0.096
(0.520) (0.555) (0.192)
ACT Composite 0.101 0.106 0.043
(0.001) (0.000) (0.000)
GPA 0.614 0.424 0.265
(0.003) (0.009) (0.000)
Attend 0.459 0.430 0.354
(0.378) (0.308) (0.003)
Business 0.613 0.090 0.056
Major (0.022) (0.676) (0.368)
Degrees of
Freedom 537 534 602
R-Squared 0.128 0.230 0.292
Marginal Treatment Effects
Resource Allocation 2.61#
Comparative Advantage 4.34
Demand & Supply 39.03*
Diminishing MU -6.27*
Production & Cost 3.12
Monopoly -11.37*
Cartels 13.68*
Public Goods 3.18*
All Concepts 3.24*
Notes: p-values for two-tailed tests are given in parentheses.
For the marginal treatment effects, # indicates significance at the
10% level and * indicates significance at the 5% level.
TABLE 4
Macroeconomics Regression Results: Concepts
Equity vs. Savings & Money
Efficiency Consumption Creation
Intercept -3.859 -1.822 -4.338
(0.024) (0.136) (0.006)
Treatment 1.316 -0.047 2.323
(0.000) (0.756) (0.000)
Small Class 0.634 -1.077 -1.888
(0.012) (0.000) (0.000)
Age 0.026 0.054 -0.048
(0.709) (0.268) (0.440)
Gender 0.545 0.147 0.206
(0.015) (0.353) (0.305)
Ethnicity -1.520 -0.493 -0.666
(0.000) (0.097) (0.081)
ACT Composite 0.094 0.044 0.080
(0.002) (0.041) (0.004)
GPA 0.611 0.355 0.806
(0.002) (0.011) (0.000)
Attend -0.281 -0.015 -0.152
(0.298) (0.939) (0.529)
Business 0.107 -0.106 -0.184
Major (0.649) (0.528) (0.390)
Degrees of Freedom 623 607 595
R-Squared 0.149 0.108 0.336
CPI Federal All
Bias Funds Market Concepts
Intercept -3.610 -3.024 -1.762
(0.014) (0.027) (0.000)
Treatment -1.352 1.266 0.393
(0.000) (0.000) (0.000)
Small Class -0.789 -0.396 -0.318
(0.000) (0.044) (0.000)
Age 0.085 -0.008 0.012
(0.150) (0.887) (0.466)
Gender 0.026 0.455 0.174
(0.893) (0.010) (0.001)
Ethnicity -0.495 -0.152 -0.331
(0.164) (0.648) (0.001)
ACT Composite 0.148 -0.011 0.047
(0.000) (0.659) (0.000)
GPA 0.552 0.431 0.226
(0.001) (0.006) (0.000)
Attend 0.014 0.076 -0.086
(0.951) (0.719) (0.192)
Business 0.208 0.421 0.017
Major (0.298) (0.025) (0.769)
Degrees of Freedom 613 595 635
R-Squared 0.194 0.111 0.275
Marginal Treatment Effects
Equity vs. Efficiency 22.02*
Savings & Consumption -0.99
Money Creation 46.13*
CPI Bias -13.59*
Federal Funds Market 22.93*
All Concepts 9.63*
Notes: p-values for two-tailed tests are given in parentheses.
For the marginal treatment effects, # indicates significance at the
10% level and * indicates significance at the 5% level.
TABLE 5
Microeconomics: Learning Style Effects
Resource Comparative Demand &
Allocation Advantage Supply
Intercept -1.170 -4.211 -4.692
(0.400) (0.000) (0.001)
Treatment & Multimodal 0.328 0.281 1.601
(1.134) (0.119) (0.000)
Treatment & Visual -0.592 1.012 -0.230
(0.550) (0.219) (0.812)
Treatment & Aural -0.233 11.119 1.904
(0.705) (0.855) (0.002)
Treatment & Read-Write 1.123 -0.105 1.012
(0.145) (0.869) (0.178)
Treatment & Kinesthetic 0.192 -0.263 2.110
(0.601) (0.385) (0.000)
Instructor -0.089 -0.413 -1.490
(0.667) (0.015) (0.000)
Small Class -0.087 -1.050 -0.877
(0.709) (0.000) (0.000)
Age 0.007 0.046 0.073
(0.896) (0.279) (0.150)
Gender 0.086 0.224 -0.243
(0.690) (0.205) (0.246)
Ethnicity -0.371 0.053 -0.024
(0.252) (0.842) (0.939)
ACT Composite 0.080 0.058 0.096
(0.008) (0.018) (0.001)
GPA 0.406 0.636 0.372
(0.040) (0.000) (0.053)
Attend 0.179 0.512 0.483
(0.731) (0.228) (0.340)
Business Major 0.290 0.001 0.142
(0.292) (0.995) (0.598)
Degrees of Freedom 555 572 556
R-Squared 0.061 0.134 0.266
Diminishing Production &
MU Cost Monopoly
Intercept -2.053 -2.889 -2.291
(0.101) (0.030) (0.043)
Treatment & Multimodal -0.696 0.201 -0.598
(0.001) (0.347) (0.001)
Treatment & Visual -1.091 -1.391 -0.778
(0.375) (0.142) (0.335)
Treatment & Aural -1.571 0.268 0.235
(0.008) (0.650) (0.630)
Treatment & Read-Write 1.477 0.927 -1.767
(0.093) (0.232) (0.008)
Treatment & Kinesthetic -1.363 0.163 -0.014
(0.000) (0.641) (0.963)
Instructor 0.620 0.427 -1.536
(0.002) (0.033) (0.000)
Small Class 0.662 0.458 -1.437
(0.004) (0.037) (0.000)
Age 0.075 -0.054 0.053
(0.112) (0.273) (0.205)
Gender 0.230 0.643 -0.023
(0.252) (0.002) (0.897)
Ethnicity -0.278 -0.900 0.039
(0.359) (0.005) (0.888)
ACT Composite 0.031 0.114 0.024
(0.282) (0.000) (0.356)
GPA 0.641 0.552 0.672
(0.001) (0.006) (0.000)
Attend 0.395 1.000 -0.346
(0.415) (0.051) (0.435)
Business Major 0.069 0.075 0.178
(0.783) (0.781) (0.439)
Degrees of Freedom 503 540 540
R-Squared 0.165 0.180 0.270
Public All
Cartels Goods Concepts
Intercept -7.159 -2.644 -1.830
(0.000) (0.013) (0.000)
Treatment & Multimodal 0.750 0.714 0.140
(0.001) (0.000) (0.005)
Treatment & Visual 0.784 0.435 0.000
(0.405) (0.565) (0.541)
Treatment & Aural 0.287 0.963 0.191
(0.615) (0.036) (0.173)
Treatment & Read-Write 0.277 1.441 0.156
(0.719) (0.120) (0.387)
Treatment & Kinesthetic 0.830 1.294 0.155
(0.019) (0.000) (0.068)
Instructor 0.578 1.064 -0.149
(0.004) (0.000) (0.001)
Small Class -0.149 0.371 -0.099
(0.497) (0.035) (0.057)
Age 0.134 0.050 0.023
(0.006) (0.199) (0.045)
Gender 0.424 0.076 0.111
(0.044) (0.652) (0.023)
Ethnicity 0.222 -0.152 -0.100
(0.487) (0.560) (0.177)
ACT Composite 0.102 0.106 0.043
(0.001) (0.000) (0.000)
GPA 0.608 0.421 0.265
(0.003) (0.009) (0.000)
Attend 0.461 0.436 0.360
(0.379) (0.302) (0.002)
Business Major 0.619 0.069 0.058
(0.022) (0.751) (0.356)
Degrees of Freedom 533 530 598
R-Squared 0.130 0.237 0.294
Marginal Treatment Effects Multimodal Visual Aural
Resource Allocation 2.96 -7.54 -2.60
Comparative Advantage 6.87 24.74 2.26
Demand & Supply 37.92* -4.47 44.29*
Diminishing MU -5.15* -9.62 -16.93*
Production & Cost 3.30 -31.34 4.32
Monopoly -13.80* -17.44 5.85
Cartels 14.11* 14.62 6.00
Public Goods 2.83* 1.94 3.44*
All Concepts 3.23* -3.39 4.37
Marginal Treatment Effects Read-Write Kinesthetic
Resource Allocation 7.47 1.82
Comparative Advantage -2.49 -6.10
Demand & Supply 23.86 48.14*
Diminishing MU 4.37# -13.50*
Production & Cost 12.18 2.71
Monopoly -32.07* -0.35
Cartels 5.81 15.30*
Public Goods 4.29* 4.07*
All Concepts 3.59 3.56#
Notes: p-values for two-tailed tests are given in parentheses.
For the marginal treatment effects, # indicates significance at the
10% level and * indicates significance at the 5% level.
TABLE 6
Macroeconomics: Learning Style Effects
Equity vs. Savings & Money
Efficiency Consumption Creation
Intercept -4.049 -2.037 4.255
(0.019) (0.095) (0.007)
Treatment & Multimodal 1.173 -0.202 2.227
(0.000) (0.250) (0.000)
Treatment & Visual 2.827 0.493 2.341
(0.002) (0.437) (0.004)
Treatment & Aural 1.544 0.498 1.792
(0.008) (0.220) (0.001)
Treatment & Read-Write 1.070 -0.937 3.016
(0.108) (0.039) (0.000)
Treatment & Kinesthetic 1.485 0.308 2.547
(0.000) (0.193) (0.000)
Small Class 0.668 -1.045 -1.888
(0.008) (0.000) (0.000)
Age 0.032 0.065 -0.050
(0.641) (0.182) (0.430)
Gender 0.559 0.136 0.173
(0.013) (0.393) (0.392)
Ethnicity -1.480 -0.470 -0.664
(0.000) (0.111) (0.082)
ACT Composite 0.094 0.043 0.080
(0.002) (1.050) (0.004)
GPA 0.616 0.357 0.795
(0.002) (0.010) (0.000)
Attend -0.270 0.002 -0.147
(0.318) (0.990) (0.542)
Business Major 0.126 -0.083 -0.183
(0.591) (0.618) (0.394)
Degrees of Freedom 619 603 591
R-Squared 0.154 0.123 0.340
Federal
CPI Funds All
Bias Market Concepts
Intercept -3.640 -3.143 -1.832
(0.013) (0.021) (0.000)
Treatment & Multimodal -1.481 1.033 0.309
(0.000) (0.000) (0.000)
Treatment & Visual -1.041 1.979 0.705
(0.174) (0.005) (0.001)
Treatment & Aural -1.762 1.163 0.448
(0.000) (0.009) (0.001)
Treatment & Read-Write -1.693 0.894 0.163
(0.002) (0.073) (0.298)
Treatment & Kinesthetic -0.884 1.886 0.600
(0.002) (0.000) (0.000)
Small Class -0.788 -0.359 -0.307
(0.000) (0.068) (0.000)
Age 0.090 0.001 0.017
(0.128) (0.983) (0.321)
Gender -0.009 0.417 0.165
(0.963) (0.018) (0.002)
Ethnicity -0.490 -0.133 -0.320
(0.168) (0.687) (0.001)
ACT Composite 0.145 -0.014 0.046
(0.000) (0.556) (0.000)
GPA 0.556 0.429 0.225
(0.001) (0.006) (0.000)
Attend 0.024 0.095 -0.080
(0.917) (0.652) (0.222)
Business Major 0.219 0.451 -0.028
(0.274) (0.015) (0.624)
Degrees of Freedom 609 591 631
R-Squared 0.201 0.127 0.293
Marginal Treatment Effects Multimodal Visual Aural
Equity vs. Efficiency 20.42* 31.35* 14.30*
Savings & Consumption -4.37 9.16 9.24
Money Creation 43.81* 46.59* 32.98*
CPI Bias -15.66* -9.15 -20.78*
Federal Funds Market 17.66* 40.44* 20.54*
All Concepts 7.63* 16.74* 10.93*
Marginal Treatment Effects Read-Write Kinesthetic
Equity vs. Efficiency 19.14 23.76*
Savings & Consumption -12.16* 5.99#
Money Creation 61.55* 51.50*
CPI Bias -19.46* -7.25*
Federal Funds Market 14.72# 38.13*
All Concepts 4.06 14.42*
Notes: p-values for two-tailed tests are given in parentheses.
For the marginal treatment effects, # indicates significance
at the 10% level and * indicates significance at the 5% level.
TABLE 7
Attitudes
Microeconomics
Beginning Ending Attitude
Attitude Attitude Change
Intercept 3.279 2.493 -0.858
(0.000) (0.000) (0.140)
Treatment -0.047 0.182 0.140
(0.457) (0.046) (0.138)
Instructor -0.104 -0.014 0.125
(0.109) (0.883) (0.198)
Small Class 0.040 0.087 0.078
(0.577) (0.388) (0.458)
Age -0.005 -0.017 -0.015
(0.735) (0.412) (0.466)
Gender 0.036 0.365 0.363
(0.589) (0.000) (0.000)
Ethnicity -0.153 -0.319 -0.059
(0.151) (0.037) (0.726)
ACT Composite 0.009 0.011 0.013
(0.325) (0.400) (0.352)
GPA 0.027 0.013 -0.056
(0.676) (0.889) (0.558)
Attend 0.000 0.449 0.439
(0.999) (0.087) (0.106)
Business Major 0.071 0.192 0.134
(0.411) (0.114) (0.286)
Degrees of Freedom 473 340 291
R-Squared 0.022 0.116 0.088
Macroeconomics
Beginning Ending Attitude
Attitude Attitude Change
Intercept 4.589 3.940 -0.698
(0.000) (0.000) (0.227)
Treatment -0.263 -0.062 0.206
(0.000) (0.352) (0.002)
Instructor
Small Class -0.094 -0.072 0.023
(0.153) (0.391) (0.785)
Age -0.026 -0.003 0.029
(0.149) (0.907) (0.220)
Gender 0.053 0.110 0.051
(0.365) (0.110) (0.454)
Ethnicity -0.206 -0.209 -0.061
(0.043) (0.112) (0.636)
ACT Composite -0.004 -0.011 -0.008
(0.636) (0.268) (0.402)
GPA -0.073 0.011 0.074
(0.154) (0.855) (0.247)
Attend -0.107 0.137 0.218
(0.115) (0.073) (0.003)
Business Major 0.050 0.149 0.147
(0.435) (0.047) (0.054)
Degrees of Freedom 540 474 427
R-Squared 0.067 0.034 0.057
Notes: p-values for two-tailed test are given in parentheses.
TABLE 8
Retention Results
Variable Model A Model B
Intercept -2.088 -1.998
(0.000) (0.000)
Micro Control 0.122
(0.131)
Macro Control 0.139
(0.094)
Micro Treatment 0.162 0.187
(0.059) (0.004)
Macro Treatment 0.193
(0.024)
ACT Composite 0.085 0.083
(0.000) (0.000)
Degrees of Freedom 502 505
R-Squared 0.201 0.193
Notes: p-values for a two-tailed test are given in parentheses.
The p-value for [H.sub.0]: Micro Control = Macro Control =
0 is 0.105. The Marginal Treatment Effect = 4.66.