WITHIN-COLLEGE HUMAN CAPITAL AND RACIAL ETHNIC DIFFERENCES IN ACADEMIC PERFORMANCE*
Kenneth I. SpennerDuke University
Sarah MustilloPurdue University
Nathan D. MartinDuke University
April 2, 2008
Word Count: 8,978
Running Head: Within-College Human Capital and Academic Performance
*Direct correspondence to Kenneth Spenner, Department of Sociology, Duke University, Durham, North Carolina, 27708; email: [email protected]. The authors gratefully acknowledge the support for this research provided by grants from the Andrew W. Mellon Foundation and Duke University. The authors bear sole responsibility for the contents of the paper.
Ken Spenner is Professor of Sociology and Psychology, and Director of the Markets & Management Studies Program at Duke University. Over the years his research has focused on careers, work and personality, and technology and market transitions in Eastern Europe. In recent years his research has centered on various aspects of undergraduate education at an elite university, using the Campus Life and Learning data.
Sarah Mustillo is an Associate Professor of Sociology and faculty associate with the Center for Aging and the Life Course at Purdue University. Her research focuses on medical sociology and quantitative methodology with particular interests in child mental health, family well-being and longitudinal models. Substantively, much of her work investigates the ways in which adverse mental health outcomes are transmitted from parent to child. Quantitatively, her work involves issues of modeling change over time with categorical dependent variables. Additionally, she has interests in racial and ethnic differences in health and educational achievement.
Nathan Martin is a doctoral student in the Department of Sociology at Duke University and research assistant for the Campus Life and Learning Project. His general research and scholarly interests include education, globalization, labor and work, social theory, and inequality. He is currently working on his dissertation, which explores social class in contemporary US post-secondary education. A recent article (with David Brady) examining unionization in less developed countries appeared in American Sociological Review.
WITHIN-COLLEGE HUMAN CAPITAL AND RACIAL ETHNIC DIFFERENCES IN ACADEMIC PERFORMANCE
ABSTRACT
The academic performance gap has received substantial scholarly attention. Most studies of human capital investigate pre-college explanations for the gap or, in panel studies, fixed independent variables. Our paper addresses this lacuna by investigating within-college variations in human capital in three areas: academic and intellectual skills; self-esteem, academic self-confidence, and student identity; and academic effort and engagement. How do variations during college in these capital areas affect trajectories of academic performance as measured by semester-by-semester grade point average? Data for this analysis come from the Campus Life and Learning Project, a panel study of two recent cohorts of students attending Duke University, surveyed before college and during the first, second and fourth college years. The data show that black students narrow the achievement gap observed at the start of college by about 60 percent over the college career. A latent growth curve model is used to estimate the effects of fixed and changing capital measures on academic performance trajectories. Consistent with theorizing, four of nine indicators of within-college capital have statistically significant but small associations with increases in GPA: work hard attributions, the importance of a good student identity, self-assessed ability, and self-assessed smartness. A fixed effects model generally confirms these results and eliminates unobserved heterogeneity as an explanation. Within-college capital variations do help us understand the racial ethnic performance gap but do not explain a large share of the gap.
WITHIN-COLLEGE HUMAN CAPITAL AND RACIAL ETHNIC DIFFERENCES IN ACADEMIC PERFORMANCE
Gaps in academic performance among racial and ethnic groups persist at all levels of
education in the United States. Bowen and Bok’s (1998) seminal study refocused attention on
such gaps in higher education. They found the differences are large (for example, 30 percentile
points between black and white college students in the College and Beyond sample), enduring
over recent decades, and perhaps even somewhat larger in the most selective institutions. Most
studies that attempt to explain the gap in terms of human capital factors focus on pre-college
differences in human capital and generally find that pre-college factors account for up to about
one-half of the black-white gap (Bowen and Bok 1998; Massey et al. 2003; see Jencks and
Phillips [1998] for review). Below we present new evidence that the academic achievement gap
among racial and ethnic groups at an elite university declines substantially over the course of the
college career even after controlling for pre-college factors. This raises the important question:
what accounts for the decline?
The research that attempts to understand the gap contains several lacunae. First, the
typical study design is cross-sectional or a limited panel, and includes controls for pre-college
socioeconomic and family background factors, and perhaps single-point in time measures of
within-college capital such as test scores or major choice. Studied outcomes include verbal,
quantitative and subject matter competence, cognitive skills and growth, grades, persistence in
school, and graduation rates. Fine-grained studies of within-college capital acquisition are in
more limited supply (Pascarella and Terenzini [2005] provide a comprehensive review). For
example, how do effort, time use and academic engagement differ over the course of college
among white, black, Asian and Latino students? Do racial and ethnic differentials in academic
self-concept and identities help us understand trajectories of academic achievement?
Second, the major methodological challenge to such designs is unobserved heterogeneity.
How do we know the design measured all of the capital factors that might explain the gap? Omitted, unmeasured factors will bias estimates of the gap (Allison 1994; Halaby 2004). A few studies in economics attempt to control for unobserved heterogeneity, but few if any studies in education and sociology using college samples attempt such controls, for example with fixed-effects panel designs (Pascarella and Terenzini 2005: 67). Our study design offers a
contribution in attempting to control for unobserved heterogeneity in a multi-point panel design
taken over the college years, with fixed and random effects statistical models.
Our research design features a prospective panel study of two entering cohorts at an elite
Southern university. A probability sample of entering students was surveyed in the summer
before starting college and then in the spring semester of the first, second and fourth college
years. As a dependent variable for academic achievement we use semester grade point average
(GPA) in the spring semester of each survey year as taken from Registrar’s records.
WITHIN-COLLEGE CAPITAL
Human capital generally refers to the knowledge, skills, health and values that people
possess (Becker 1975). Human capital is distinct from the financial capital and physical assets
that people hold. We also hold it conceptually distinct from (but correlated with) cultural
(Bourdieu 1986) and social capital (Coleman 1988), which we address, or will address, elsewhere with our panel study (XXXXX 2005; XXXXX 2008). For this paper, our interest is in forms of
human capital that are changeable during college and that might be germane to academic
performance. Our reading of the literature identifies three sub-types of human capital within
college populations: (1) academic and intellectual skills; (2) global and specific (academic) self-
esteem, academic self-confidence, and identity; and (3) academic effort and engagement. This
conceptualization is not exhaustive but rather is aimed at capturing the major variations, and
ones that are susceptible to reasonable measurement in a panel design.
Academic and Intellectual Skills
Pascarella and Terenzini (2005: 65-75) review a large body of research that shows an
increase by a quarter to nearly a full standard deviation over the college career in a set of subject-
matter and general academic competencies and skills. These skills include: understanding
fundamental concepts or theories, applying knowledge and concepts, analyzing ideas and
arguments, synthesizing and integrating information, oral expression and writing. The measures
in various studies include both standardized and self-report measures of skills (Maes et al. 1996;
Thorndike and Andrieu-Parker 1992). These skills are related to earned grades.
The few studies that examine racial and ethnic differences in changes in these skills report inconsistent findings. For example, Myerson and colleagues (1998) find that black students experience
larger gains in the Armed Forces Qualification Test (a composite of verbal and quantitative
scores) compared with a cohort of white students in the National Longitudinal Study of Youth,
after controlling for SES and age. In contrast, Flowers (2002) reports that black students
experienced significantly smaller gains on standardized tests compared with white students,
using data from the National Study of Student Learning and data from 56 institutions in the
College Basic Academic Subject Examination, and a more extensive set of controls including
demographics, pre-college test scores, indicators of academic effort, and social involvement in
college life.
Global and Specific Self-Esteem, Academic Self-Confidence and Identity
Self-esteem (global) generally refers to an individual’s attitude toward the self as a
totality (Rosenberg et al. 1995). Self-esteem has been linked to academic performance in a
number of studies (Osborne 1995; Massey et al. 2003; Morgan and Mehta 2004), including racial
and ethnic differences in this relationship. Few studies address within-college differences, and most studies
are interested in self-esteem as an outcome rather than a cause of academic performance, a
possibility that we will examine.
Rosenberg and colleagues (1995) suggest that specific self-esteem -- in this case
academic self-concept -- is more closely related to behavioral outcomes including academic
performance as measured by grades compared with global self-esteem. Data from the Youth in
Transition Study support their argument for high school boys. We will examine variations in
both global and specific (academic) self-esteem.
Closely related to self-esteem, academic self-confidence refers to a person’s judgment
about the likelihood that their actions will lead to a successful academic outcome. Psychological
research has long held (Bandura 1982) that people who are confident in their ability are more
likely to endure and persist in the face of difficulty, likely a resource in challenging academic
settings.
Identities refer to self-in-role meanings for role incumbents and group members (i.e.,
parent, fraternity member, good student) (see Burke [2004] for a theoretical statement and review;
Reitzes and Burke [1980] specifically address the college student identity). Identities have
affective, motivational and behavioral implications. For the college student identity, one key
dimension of role performance is academic achievement. The college student identity might
vary over the college career as a function of earlier academic performances, one’s social
networks, and intellectual climate in courses and social settings. We posit that closer
identification with being a good student will yield stronger academic performance and improved
academic performance over time. As a corollary, those who identify more strongly with the good student role should invest more time and energy in academic matters and in engagement with the academic enterprise in college, which introduces the third area of within-college capital.
Academic Effort and Engagement
Research in education, economics and sociology shows that college students vary in the
levels of effort and academic engagement in college and these variations are positively
associated with a range of outcomes, including test scores, subject matter competencies, and
other measures of academic performance and skill acquisition (Astin 1993; Pascarella and
Terenzini [2005:119-120] provide a review). Our focus will be on several sub-dimensions of this
broader concept.
Effort attributions refer to the self-inferences that students make about the role of their
own effort in academic tasks, for example the extent to which they worked hard in challenging
academic courses. At the level of individual challenging courses, we find that effort attributions
do predict course grade outcomes (XXXXX 2006).
A number of studies address the role of time use, or hours spent studying, as an indicator
of effort and academic engagement (Schuman et al. 1985; Rau and Durand 2000; Stinebrickner
and Stinebrickner 2004). Most studies show small-to-modest positive effects of time spent
studying and in related activities on indicators of academic performance, although there is
disagreement about the size of effects and the optimal measures of time use (see exchange
between Schuman [2001] and Rau [2001] and detailed methodological work by Juster and
Stafford [1991]). In general, multiple and more detailed measures provide larger estimates of the
effects of study time on outcomes (Stinebrickner and Stinebrickner 2004). We posit that time
use varies over the college career and we investigate whether such variations are related to
trajectories of academic performance. One weakness of time use measures is that they do not
directly capture the efficiency with which students expend time, which might vary substantially
over time and is a function of a myriad of factors. Our design includes some information on
effective time management, which we bring in to adjust for this limitation.
Finally, the larger education literature suggests multiple dimensions of academic
engagement are positively associated with knowledge acquisition and academic performance
(Pascarella and Terenzini 2005). This might include not only time allocation but engagement of
faculty and fellow students to work on challenging material, use of tutoring and academic skills
centers and advisors, particularly for students having academic difficulty, and engaging
resources at the class level that might assist in challenging classes, such as teaching assistants,
organized review sessions, and peer study groups. More generally, we posit that academic
engagement varies over the college career, and perhaps by racial and ethnic group, and may be
associated with trajectories of academic achievement.
DESIGN, MEASURES AND METHODS
Study Design
The larger research design, The Campus Life and Learning Project (CLL), involves a
multi-year, prospective panel study of two consecutive incoming cohorts of students admitted to
Duke University and who accepted admission (incoming classes of 2001 and 2002; graduating
classes of 2005 and 2006). Duke is a private research university located in Durham, North
Carolina with a total undergraduate enrollment of about 6,000 students from the United States
and a number of foreign countries. The sampling frame included undergraduate students who
planned to enroll in the college of arts & sciences and in the engineering school. The design
randomly selected about one-third of white students, about two-thirds of Asian students, all black
and Latino students, and about one-third of bi- and multi-racial students, based upon self-
reported racial ethnic status on the admission application form.1 In contrast to studies that
examine samples from multiple institutions, this study is designed to capture the rich details of
students’ experiences at a single institution with multiple data points and merges of various types
of institutional data, usually unavailable in other studies.
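Because selection probabilities differ by group, analyses of these data must be weighted back to the full entering cohorts. A minimal sketch of how design weights follow from the sampling fractions described above; the fractions here are the approximate values from the text, not the actual CLL selection probabilities:

```python
# Illustrative sampling fractions, approximating those described above;
# the actual CLL selection probabilities are assumptions here.
sampling_fraction = {
    "white": 1 / 3,
    "asian": 2 / 3,
    "black": 1.0,
    "latino": 1.0,
    "bi_multi_racial": 1 / 3,
}

# A design weight is the inverse of the probability of selection, so that
# weighted statistics generalize to the sampled population.
design_weight = {group: 1 / f for group, f in sampling_fraction.items()}
```

Under this scheme each sampled white student stands in for roughly three admitted white students, while each black or Latino student represents only themselves.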
The final sample for both cohorts included 1536 students (602 whites, 290 Asians, 340
blacks, 237 Latinos, and 67 bi- or multi-racial). Respondents were surveyed in the summer
preceding college enrollment. About 77 percent of sample members (n = 1181) completed the
mail questionnaire and about 91 percent of these respondents provided signed release to their
institutional records as well. Refusals were low at 1.8 percent of sample members. Response
rates to in-college waves, administered by mail and web (senior year only) were 71 percent for
the first year, 65 percent for the second year, and 59 percent for the senior year. Our analysis
sub-sample for this paper includes all students who participated in at least one wave and
provided signed release to their institutional records (n=1255). Elsewhere we have provided
detailed comparisons of non-response bias, possible drop-out bias, and patterns of missing data,
and generally conclude that the effects are quite small (XXXXX 2005).2 The Appendix provides further information.

1 For the actual placement of respondents in racial ethnic categories we used US Census-type questions that measure first whether or not the respondent is Hispanic and then elicit a racial category. Virtually all of our "Hispanic" respondents also reported their race as white, so we classify this group as Latino. Other groups are placed on the basis of the race question, which includes bi- and multi-racial options. If data were missing on the census questions, we used the admission form race item when possible. The race category frequencies differ slightly here from previous publications with the CLL data as we previously did not make the missing data adjustment (XXXXX 2005).
Measures
Dependent Measures
Our dependent measure is semester GPA as taken from Registrar’s records for the end of
the spring semester of the first, second and fourth college years. The in-college surveys were
administered at the start of the spring semester. A majority of those who responded had done so
by Spring Break; over 95 percent of those who responded had done so before the end of the
semester and the reporting of final grades. This should minimize causal order problems with
independent measures taken early in the semester relative to grades assigned at the end of
the semester.
In using grade performance as a dependent variable we do not assume a perfect measure
of intellectual and cognitive development and learning. Grades are an imperfect measure in
other respects as well. Grading standards and their use might vary across academic departments. Grades reflect a range of student inputs from prior academic achievement to
personal traits, and even situational sources of variation. We use grades because they are
sociologically important. As Pascarella and Terenzini (2005: 396) note, grades are the lingua
franca of the undergraduate academic world. First, they are instrumental in determining student
academic standing, honors, admissions to some programs, and degree completion. Second,
grades represent important signals for access to post-graduate education programs and professional schools, and for job interviews and labor market placement. They are socially recognized as indicators of potential in future role positions. Third, grades potentially feed back into a range of social psychological outcomes such as self-confidence in achievement settings, and expectations and aspirations for future educational, occupational and income achievements.

2 Elsewhere we also provide detailed comparisons between Duke and other elite universities in the United States and other major research universities in the United States (XXXXX 2005). In short, our study is not designed to be representative of the population of U.S. colleges and universities. Rather, it is fairly representative of highly selective institutions of higher education in the United States. Again, some evidence suggests the achievement gap is largest in these types of institutions (Bowen and Bok 1998).
Figure 1 reports the semester GPAs (on a four-point scale) by racial ethnic group for CLL
respondents. Several patterns are apparent. First, CLL respondents replicate well-known national differences in college grades (Bowen and Bok 1998; Massey et al. 2003). In the first college
year, Asian and white students score from one-tenth to two-tenths of a letter grade higher than
bi-multi-racial and Latino students, and four-to-five tenths of a letter grade higher compared with
black students. Second, racial ethnic differences in grades are at their maximum in the first
college year, and the gaps narrow progressively over the remainder of the college career. The
decline in the gap is quite dramatic. For example, by the second semester of the senior year, the
achievement gap between black and white students has narrowed from .45 of a letter grade to .18
of a letter grade, a reduction of 60 percent. Gaps between other groups have narrowed by the
senior year. Third, most of the narrowing of the gap occurs in the second and third college years,
a time when students have settled into college majors, minors and certificate programs, and
students’ experiences with large classes have declined or largely ended at Duke. These
differences form the basis for the comparisons that follow.
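The 60 percent figure follows directly from the two reported gaps; a quick check using the rounded values from the text:

```python
# Rounded black-white gaps reported in the text, in letter-grade points.
gap_first_year = 0.45
gap_senior_year = 0.18

# Proportional reduction in the gap over the college career.
reduction = (gap_first_year - gap_senior_year) / gap_first_year
print(f"{reduction:.0%}")  # → 60%
```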
Time-Invariant Measures
Table 1 reports descriptive statistics for the independent measures, both fixed and
changing (i.e., measured at multiple time points in college), by racial ethnic group. Given the
breakdown by racial ethnic category, the data in Table 1 are unweighted; model estimates that
follow use weighted data. We include a standard set of fixed controls, including race, sex,
mother’s and father’s education, US citizenship, and SAT mathematics score. SAT verbal scores
are highly correlated with SAT mathematics scores so we use the mathematics score alone.
Other time-invariant measures include four social psychological indicators of pre-college
academic capital: good student identity, a two-item scale for self-perceived ability (in last
challenging high school mathematics and English courses), academic self-confidence in mathematics, and academic self-confidence in English. Also, we include a measure of time
management taken in the first college year. Finally, we control for college major (natural
science/engineering, social science, humanities, and undeclared; the latter category refers to a
handful of students who left college prior to declaring an academic major). If a student had
multiple majors that include a natural science or engineering discipline, we coded major as
natural science/engineering; in other cases we coded the first major listed.
Time-Varying Measures
Table 1 also includes the focal nine indicators of within-college capital as measured in
each survey year. Self-assessed academic skills are measured with an 8-item scale (Cronbach’s
alpha: .79, .78 and .79 for the first, second and senior year scales). The items include
remembering factual knowledge; understanding fundamental concepts or theories; applying
knowledge, concepts or theories to a specific situation; analyzing ideas, arguments; synthesizing
and integrating information; conducting research in a specific field; oral expression; and writing
skills. The response scale was 5 points, ranging from very low to very high. White and bi-multi-
racial students rate themselves the highest at each survey year. Interestingly, Asian students rate themselves the lowest even though they have the highest SAT scores by a clear margin, and attain a higher first-semester GPA than other racial ethnic groups. Self-assessed academic skills
increase by two-thirds to three-quarters of a standard deviation over the college career for all
racial ethnic groups, increasing the most for black students (by .85 of the senior year standard
deviation).
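The scale reliabilities above are reported as Cronbach's alpha. A minimal sketch of the standard computation, run here on simulated 5-point items rather than the CLL data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

# Illustrative data: 8 correlated items on a 1-5 scale (not the CLL items).
rng = np.random.default_rng(2)
latent = rng.normal(0, 1, (500, 1))
items = np.clip(np.rint(3 + latent + rng.normal(0, 1, (500, 8))), 1, 5)
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

Because the simulated items share a common latent factor, alpha lands well above the conventional .70 threshold, in the same range as the .78-.79 values reported for the 8-item skills scale.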
Academic self-confidence is measured by a single question in the most challenging class
set of questions in each survey year (5-point scale from not at all confident to extremely confident).
Bi-multi-racial students report the highest levels in each survey year, while blacks and Asians in
the senior year report the lowest. All groups gain in confidence over college, although the gain
for Asian students is minimal. Work hard attributions also come from a single question in the
most challenging class loop in each survey year. The distributions show only minor variations
over time and by racial ethnic group.
Active learning behaviors were measured by six binary items in the most challenging
class loop (first and second year surveys only). These scales included items like studying with
students from the class, receiving informal tutoring, use of review sessions, and receiving
tutoring from the university’s academic skills center. Black students were somewhat more likely
to engage in these behaviors; Asian students were the least likely. Good student identity was
measured on a five point scale (ranging from not at all important to extremely important). Black
and bi-multi-racial students consider this identity to be the most important while white students
report the lowest levels by a clear margin. The importance of the good student identity declines
progressively over the college career for all racial ethnic groups.
Global self-esteem was measured with three items from the Rosenberg self-esteem scale
(Rosenberg 1989) (Cronbach’s alpha: .66, .70 and .71 for the first, second and senior year
scales). As found in other studies, black students tend to report somewhat higher levels of self-
esteem compared with other racial ethnic groups. Academic (specific) self-esteem was measured
with two questions asked in each wave. Self-assessed ability refers to students’ judgments about
their ability compared with others in the most challenging class taken in the previous semester. Self-assessed smartness refers to students' judgments in each wave about how smart they are
compared to the average Duke student. These two items are generally similar to academic self-
concept measures used by Rosenberg and colleagues (1995) and Morgan and Mehta (2004).
Finally, hours spent studying was measured with a single item (referent: a typical non-exam
week) in each survey year, and was part of a larger set of time-use items. Asian and bi-
multiracial students spent more time studying outside of class, particularly in the first college
year; Latino students generally report the lowest levels in each survey year.
Methods
For the longitudinal analyses, we estimated both latent growth curve models and fixed
effects models. The latent growth curve models include a latent intercept and latent slope, which
allow individuals to vary on their initial GPA and their rate of change over time. In a latent
growth curve model (also known as a random coefficients model), time-specific individual-level measures are assumed to contain input from two sources: the latent process under consideration and random error. If we assume that the process of interest follows a linear pattern over time, the individual measures can be modeled with an individual-specific intercept and slope across time plus error. With time-varying covariates, the Level 1 equation is:

yit = αi + βi·t + Σw γw·wit + εit

where yit is the response variable for individual i at time t; αi is a subject-specific intercept term; βi·t is the subject-specific slope multiplied by time; γw·wit is the time-specific influence of covariate w for individual i at time t; and εit is the disturbance for individual i at time t. This portion of the model captures within-individual change over time and is equivalent to the Level 1 sub-model in the HLM framework. In a structural equation modeling framework, the variance of the errors can be fixed or forced to be equal across time. We allow them to vary.
The second level of the model allows the random intercepts and slopes to be a function of covariates. In this model, the random intercept and slope are allowed to correlate. The Level 2 equations are:

αi = μα + Σk γαk·xki + ζαi
βi = μβ + Σk γβk·xki + ζβi

In these equations, αi and βi are the intercept and slope for individual i, and μα and μβ are the means of the intercept and slope when the x variables equal zero. The remaining part of each equation sums, over the K time-invariant variables, the effect of each predictor on the random intercept and slope, and includes a disturbance term (ζαi and ζβi) representing deviation from the mean intercept and slope for individual i, respectively.
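This two-level random-coefficients structure can be sketched with a general-purpose mixed model routine. The sketch below uses simulated data and the `MixedLM` estimator from statsmodels rather than the Mplus specification the authors estimate; variable names and parameter values are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate a small panel: n students observed at four yearly waves.
n, waves = 200, 4
student = np.repeat(np.arange(n), waves)
time = np.tile(np.arange(waves), n)            # slope loadings 0, 1, 2, 3
study_hrs = rng.normal(15, 4, n * waves)       # a time-varying covariate
a_i = 3.0 + rng.normal(0, 0.3, n)              # random intercepts
b_i = 0.1 + rng.normal(0, 0.05, n)             # random slopes
gpa = (a_i[student] + b_i[student] * time
       + 0.01 * study_hrs + rng.normal(0, 0.2, n * waves))

df = pd.DataFrame({"gpa": gpa, "student": student,
                   "time": time, "study_hrs": study_hrs})

# Random-coefficients growth model: random intercept and random slope on
# time (allowed to covary) plus a time-varying covariate at Level 1.
model = smf.mixedlm("gpa ~ time + study_hrs", df,
                    groups="student", re_formula="~time")
result = model.fit()
print(result.summary())
```

With enough students, the fixed-effect estimates recover the simulated mean slope (0.1 per year) and covariate effect, while the random-effects covariance matrix captures between-student variation in intercepts and slopes.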
Although the distribution of the GPA variable is skewed, violating the assumption of a multivariate normal distribution, we use a linear model. We tried several alternative analytic strategies, including different modeling techniques and different transformations of GPA, but the findings were similar across models. Figure 2 provides a graphic depiction of the model we estimate.
The models were run with weights and robust standard errors using Mplus 5.0 (Muthén
and Muthén 2007). Factor loadings on the intercept were fixed to 1.0, while the factor loadings
on the slope were fixed at 0, 1, 2 and 3 for the four years of college. Within-college human
capital variables were included as time-varying covariates (w), while demographics, final major,
and pre-college human capital variables were included as time-invariant covariates (X). The
time-invariant covariates affected both the latent intercept and the latent slope of GPA, while the time-varying covariates captured the effect of within-college capital variables on GPA, above and beyond the random growth process of GPA (Bollen and Curran 2006). We first estimated an
unconditional linear growth model to examine whether the characteristics of individual
trajectories of GPA varied across subjects and then we estimated the model with only racial
ethnic group indicators (results not shown) to assess racial ethnic differences in GPA without any
explanatory variables and whether there was significant residual variance (i.e., was there a
growth process to explain?). Finally, we tested the model with the human capital variables as
time-varying covariates to examine their association with deviation from each subject’s predicted
GPA trajectory (Bryk and Raudenbush 1992; Curran and Hussong 2002; Curran, Muthén, and
Harford 1998; Bollen and Curran 2006).
We used full-information maximum likelihood (FIML) estimation for our model because not all of the subjects responded at every wave. FIML requires the assumption that data are missing at random (MAR).3 Using all available data, FIML computes a case-wise likelihood function using only those variables that are observed for individual i. Additionally, data from partially complete cases contribute to the estimation of parameters that involve missing data. Simulations have shown that under the MAR assumption, FIML outperforms list-wise deletion and multiple imputation in terms of both bias and efficiency (Enders and Bandalos 2001). We include two measures of fit, the RMSEA and the standardized root mean square residual, and both indicate good model fit.
Because random effects models assume that the disturbances from the Level 2 equations are uncorrelated with the predictors, we also present fixed effects models of GPA on the time-varying covariates. Fixed effects models do not require this assumption and are preferred where such correlation exists (Allison 2005; Halaby 2004).4
3 Although the MAR assumption is difficult, if not impossible, to test, we attempted to explore its feasibility by examining missing value patterns on covariates, based on subjects’ values on the same covariates at different waves. For example, to explore whether academic self-confidence was plausibly missing at random in the sophomore year, we looked at missing values on academic self-confidence in the freshman and senior years to see if there was any association between scores at other waves and the probability of being missing at the wave in question. Although this technique does not provide formal evidence for the MAR assumption, it gives us confidence that the assumption is plausible in our data and that the use of FIML is justified.
4 We ran the Hausman test on unweighted data because Stata does not allow probability weights for random effects models; accordingly, we estimated both a fixed effects model and a corresponding random effects model without weights for the purposes of the test. (The assumption that one of the estimators has minimal asymptotic variance is also violated with probability-weighted observations.) Beyond the weighting issue, the random effects model in the Hausman comparison differs from the model we present in other regards: it omits the time-varying covariates so that it matches the fixed effects model, and it was run in the same statistical software as the fixed effects model, whereas the model presented in the paper was estimated in an SEM framework (which allows weighting). Although the Hausman test on unweighted data favors the fixed effects model, we present both the fixed effects model and the latent growth curve model, because the latent growth curve model answers important questions not answered by the fixed effects model and fit tests indicate acceptable model fit.
Our model is based on the equation
Y_it = α_i + β_1X_1it + β_2X_2it + … + β_kX_kit + e_it
where α_i is a unique constant for each individual i that controls for time-invariant characteristics.
An individual’s GPA at each time point is treated as a deviation from that individual’s mean GPA across all years of college:
Y_it − Ȳ_i = β_1(X_1it − X̄_1i) + β_2(X_2it − X̄_2i) + … + β_k(X_kit − X̄_ki) + (e_it − ē_i)
Because this transformation examines within-person differences, we cannot include time-invariant covariates, as their demeaned terms reduce to 0; however, the fixed effects account for all time-invariant characteristics of individuals. The coefficients from this model describe the within-subject effects of the time-varying covariates. Unlike the growth curve model, the fixed effects model employs list-wise deletion on a person-year basis. Therefore, we used a multiple imputation procedure to impute the missing data before running the fixed effects model.5 FE models were run in Stata 10 (StataCorp 2007) and include probability weights and robust standard errors.
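The within transformation can be illustrated with a short sketch. The following is a toy Python/NumPy example with invented data, not the authors' Stata code; it shows that demeaning removes the person-specific constants and recovers the slope on a time-varying covariate:

```python
import numpy as np

# Toy sketch of the within (fixed effects) transformation: demean Y and X
# within each individual, then fit OLS on the demeaned data. All data are
# invented for illustration; this is not the paper's estimation code.
ids = np.repeat([0, 1, 2], 4)                        # 3 students x 4 years
x = np.array([1., 2, 3, 4, 2, 2, 3, 5, 0, 1, 1, 2])  # a time-varying covariate
alpha = np.array([2.8, 3.2, 3.0])                    # person-specific constants
y = alpha[ids] + 0.05 * x                            # GPA with true beta = .05

def within(v, ids):
    """Return v_it minus each individual's mean."""
    means = np.array([v[ids == i].mean() for i in np.unique(ids)])
    return v - means[ids]

y_w, x_w = within(y, ids), within(x, ids)
beta_hat = (x_w @ y_w) / (x_w @ x_w)   # OLS slope on demeaned data
print(round(beta_hat, 4))              # recovers beta; the alphas drop out
```

Because the person-specific constants are eliminated by demeaning, any time-invariant covariate would likewise vanish, which is why the fixed effects model includes only time-varying predictors.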
FINDINGS
Growth Curve Model
5 ICE in Stata (Royston 2005) performs multiple imputation via chained equations (van Buuren et al. 1999). The procedure cycles 10 times through an iterative multivariable regression for all variables with missing values, after which missing values are filled in by drawing at random from the conditional distribution of the missing observations. The imputation model is repeated for each variable in the analytic model with missing data, using the appropriate family and link function for the distribution of that variable. Five imputed data sets were generated; the fixed effects model was run on all five and the results combined across models (Rubin 1987).
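Rubin's combining rules cited in the footnote can be sketched as follows. This is a Python/NumPy illustration with invented coefficient and standard error values; Stata's own routines were used in the analysis:

```python
import numpy as np

# Sketch of Rubin's (1987) rules for combining estimates across five imputed
# data sets. The per-dataset coefficients and standard errors are invented.
est = np.array([0.052, 0.048, 0.055, 0.050, 0.047])  # per-dataset estimates
se  = np.array([0.021, 0.020, 0.022, 0.021, 0.020])  # per-dataset std. errors

m = len(est)
q_bar = est.mean()            # combined point estimate
w_bar = (se ** 2).mean()      # within-imputation variance
b = est.var(ddof=1)           # between-imputation variance
t = w_bar + (1 + 1 / m) * b   # total variance of the combined estimate
print(round(q_bar, 4), round(np.sqrt(t), 4))
```

The total variance adds a between-imputation component to the average within-imputation variance, so the combined standard error properly reflects uncertainty due to the missing data.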
Table 2 presents the parameters for the random effects growth curve model for GPA.
The table includes three types of parameter estimates for the effects of independent variables, both fixed and changing: estimates of effects on the intercept (the starting value for GPA), estimates of effects on the slope of GPA (how a variable affects change in GPA over college), and, in the time-varying equations, effects of the time-varying within-college capital variables on GPA in the first, second, and fourth college years, independent of the expected GPA trajectory. That is, the effects of the time-varying covariates can be interpreted as disturbances that shift trajectories up or down.
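To make the three kinds of parameters concrete, consider a toy simulation (Python/NumPy, with invented parameter values; the actual model was estimated in an SEM framework): each student draws a person-specific intercept and slope, and a time-varying covariate shifts GPA around the implied trajectory.

```python
import numpy as np

# Toy simulation of the growth curve structure: person-specific intercepts and
# slopes define each student's GPA trajectory, and a time-varying covariate z
# shifts GPA around that trajectory. All parameter values are invented.
rng = np.random.default_rng(0)
n, years = 1000, np.arange(4)
intercept = 3.0 + rng.normal(0, 0.20, n)   # starting GPA varies by student
slope = 0.05 + rng.normal(0, 0.02, n)      # growth rate varies by student
z = rng.normal(0, 1, (n, 4))               # standardized time-varying covariate
gamma = 0.04                               # effect of z: shifts the trajectory
gpa = intercept[:, None] + slope[:, None] * years + gamma * z

print(np.round(gpa.mean(axis=0), 2))       # average trajectory rises by year
```

In this setup, the intercept and slope parameters describe the average trajectory and its person-to-person variation, while gamma captures the year-specific deviations that the time-varying equations estimate.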
The intercept estimates are analogous to the coefficients one sees in models of race
differences in academic achievement in a cross-sectional design (or panel design that does not
explicitly model change). Net of other variables in the equation, blacks start college with a .28
letter grade deficit in GPA compared with whites, and Latino and bi-multiracial students have a deficit of about one-eighth of a letter grade. Most of the other variables, including the pre-college capital variables, increase GPA at the start of the college career: being female, higher SAT (math) scores, high self-assessed academic ability, and stronger time management skills all raise the intercept. Also, eventual social science majors begin college with higher GPAs
compared with eventual natural science majors, by about one-tenth of a letter grade, net of other
variables in the equation. Perhaps the only surprise is the effect of pre-college self-confidence in
challenging mathematics courses, which is negative. This might reflect that those with the
highest mathematics skills (and confidence) often take advanced, demanding mathematics
courses in the first and second college semesters, which might produce somewhat lower grades.
Only one time-invariant independent variable exerts a significant effect on GPA slope (or
the growth in GPA) over the college years. As we saw in Figure 1, blacks increased GPA at a
faster rate compared with whites, while Asians increased at a somewhat slower rate. Black
students increase GPA by .04 of a letter grade per year compared with whites. While seemingly
small, this yearly effect of being black on change in GPA cumulates to about .16 of a letter grade over the college career. This effect is comparable to the corresponding estimate taken from a fixed effects model (see below). Further, this effect is net of the time-invariant and within-college capital variables as they might operate for black students.
Most important to our theorizing are the time-varying equations that provide estimates of
each changing within-college capital variable by year (first, second and fourth). Six of nine
within-college capital variables have significant effects on GPA for at least one time point. The
effects for four variables are consistent with theorizing in the literature. Importantly, most of the
effects are small-to-modest in size, so within college capital does not provide an overwhelming
explanation for the changes in GPA.
Changes in self-reported academic skills affect GPA in the second and fourth college years, but the effect is negative, not in the expected direction. Recall that most of the change in GPA
is in earlier college years. Also, active learning behaviors have a significant negative effect on
GPA only in the first year, contrary to our theorizing. The active learning behaviors measure
was taken from a set of questions on the most challenging class, as identified by students, in the
previous semester (first and second year surveys only). It could be that the measure is detecting
these behaviors for students who are struggling in the most challenging class, and hence our
measure is not a particularly effective one of broader active learning behaviors.
Attributions that students worked hard in challenging class situations have an expected
positive effect on GPA in the first college year. Reported hard work raises GPA by one-tenth of a letter grade compared with students who did not report the hard-work attribution in challenging class situations, controlling for other variables. This effect is not
statistically significant at conventional levels at later points in the college career.
Subscribing more strongly to the good student identity increases GPA. A one-unit increase
in the importance of being a good student is associated with .04 of a letter-grade increase in the
first college year, a .03 increase in the second college year, and a positive but not significant
increase in the senior year. This is consistent with our theorizing. The size of the coefficients
may not seem large, but these effects are above and beyond the regular growth process in GPA,
while controlling for other variables.
The effects of changes in global self-esteem are not significant in any of the college
waves, not a surprise given the earlier work by Rosenberg and colleagues (1995). On the other
hand, both indicators of specific, academic self-esteem have significant positive effects on
growth of GPA. Self-assessed ability significantly increases GPA in both the first and second
college years. Self-assessed smartness exhibits a positive effect in the first year of college, no
significant effect in the second year, and a significant negative effect in the senior year. Higher
self-assessments of smartness early in college yield positive grade returns, whereas by the
senior year those who judge themselves progressively smarter (compared with the average Duke
student) actually perform less well in terms of GPA, net of other variables in the equation.
Changes in time spent studying outside of class do not yield significant GPA gains in any
college year. This could be because our measure of time use is not specific enough. It could
reflect the fact that time spent studying is fairly stable over the college years for most students
(as distinct from between-student differences in study time). Recall, time management
efficiency had a fairly strong positive effect on the intercept for GPA, but small negative effects
on change in GPA over college.
Finally, we investigated whether changes in within-college capital variables might
operate differently for black students compared with other students. This seemed plausible given
the rather dramatic increase in black student GPA over the college career. None of the
coefficients for interaction terms for black by each within-college capital variable were
significant at conventional levels (results not shown). Changes in within-college capital appear
to operate similarly for black students and those from other racial ethnic groups.
Table 4 summarizes the black-white achievement gap by college year, displaying the
unadjusted grade gap, the adjusted gap as predicted with the growth curve model, the difference
between unadjusted and adjusted grade gaps, and difference as a percentage of the unadjusted
grade gap. The percentage offers one indication of the extent to which the model (and fixed and
changing capital variables) accounts for the gap. We focus on the black-white comparison as
this one has been most prominent in the literature and in policy debate. The unadjusted gap
declines steadily through the junior year of college, and then dramatically shrinks from the end
of the junior to the end of the senior year of college. Through the junior year, the growth curve model accounts for between one-third and 45 percent of the gap. The model is
fairly efficacious in helping us understand the gap and its narrowing. By the end of the senior
year, the model is much less efficacious, the gross gap has closed considerably, and the model
accounts for less than 10 percent of the gap. In one sense, the dramatically smaller gap provides
much less for the model to explain.
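The gap accounting summarized above rests on simple arithmetic; a minimal sketch with invented gap values (not the paper's estimates):

```python
# Share of the achievement gap accounted for by the model:
# (unadjusted gap - adjusted gap) / unadjusted gap. Values below are invented.
def share_explained(unadjusted, adjusted):
    return (unadjusted - adjusted) / unadjusted

print(round(share_explained(0.20, 0.12), 2))  # closing .08 of a .20 gap explains 40%
```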
Fixed Effects Model
Random effects growth curve models have the limitation of assuming that unmeasured
characteristics of individuals are uncorrelated with measured X’s in the estimation equation, and
that all stable characteristics of individuals that determine GPA have been included in the model.
Fixed effects models do not require the first assumption and control for the unmeasured
heterogeneity that might be implicated in the second assumption. In the extreme, if the apparent
racial ethnic gap in academic performance were all due to unmeasured factors, then there would
be little basis for our paper as the gap would disappear with control for unmeasured
heterogeneity (Allison 1994; Halaby 2004).
Table 3 presents a fixed-effects model that investigates this possibility. In the model we
include only racial ethnic category by time interactions, year, and changing X variables,
including a measure that treats major as time-varying. Several important results appear. First,
the black by year interaction (b = .06) is highly significant and is not dissimilar from the effect of
being black on GPA slope in the random effects growth curve model (b = .04). Second, the
significant coefficient for black by time indicates that the racial gap in academic performance is
not a simple function of unmeasured heterogeneity. A real gap persists even after removing the
effects of all stable individual characteristics, which is implied in a fixed effects model. Third,
two of six estimates of effect for changing X capital variables are significant and positive,
consistent with theorizing. Self-assessed academic skills have a small, positive, significant effect on GPA; in the growth curve model, this effect was negative or not significant. Work hard
attributions have a significant positive effect on GPA, consistent with what we found in the
growth curve model. Other changing X capital variables that were significant in the growth
curve model are generally positive but not significant in the fixed effects model. In part, this
may be because fixed effects methods ignore the between-person variation and focus only on the
within-person variation, and thus, the standard errors can be considerably higher than those
produced by random effects models (Allison 2005).
In summary, the estimates from the fixed effects model give us greater confidence in the
estimates from the growth curve model.
DISCUSSION
The academic performance gap has received substantial scholarly attention. Most studies
of human capital investigate pre-college explanations for the gap or, in a few panel studies, time-invariant independent variables. Our paper addresses this lacuna by investigating within-college variations
in human capital in three areas: academic and intellectual skills; global and specific self-esteem,
academic self-confidence, and student identity; and academic effort and engagement. How do
variations during college in these capital areas affect the trajectories of academic performance as
measured by semester-by-semester grade point average?
The racial ethnic gap in grades is at its largest in the first college semester and narrows
progressively over the remainder of college for all groups. In particular, black students’ gain in
GPA substantially narrows the gap with the highest performing racial ethnic groups.
We found that variations in four of nine indicators of within-college capital were
significantly associated with increases in GPA. Variations in work hard attributions, the
importance of a good student identity, and two measures of specific self-esteem -- self-assessed
ability and self-assessed smartness -- had statistically significant but generally small effects on
GPA, concentrated in the first and second college years as opposed to the senior year. In the
growth curve model, two additional within-college capital factors, self-assessed academic skills
and active learning behaviors, had small significant effects, but in ways opposite our theorizing.
A fixed effects model found one of these, self-assessed academic skills, did have significant
positive effects on GPA change, consistent with theorizing. The hours spent studying measure
had no significant effects on GPA in either model, likely attributable to our lack of a sufficiently
fine-grained measure. The fixed effects model clearly shows that we are not simply analyzing
unmeasured heterogeneity in the form of omitted, stable, person-specific characteristics. Within-college capital variations do help us understand the performance gap but do not explain a large
share of the gap. They are a small to modest factor.
Our findings have a number of limitations. First, the data refer to but one institution. The
findings likely generalize to other elite private institutions, but how far beyond that is an open
question. Second, the growth curve model invokes a strong assumption for multivariate
normality associated with full information maximum likelihood estimation. The distribution of
the dependent variable violates that assumption to some extent. However, our experimentation
with other functional forms, such as cumulative percentile rank in class, did not produce much
difference in findings. Third, several of our measures for time use and academic engagement
(e.g., active learning behaviors) are less than optimal. More fine-grained measures might
produce stronger results. More generally, future research might explore other indicators of
within-college capital and other indicators of the quality of collegiate educational experiences for
different racial ethnic groups.
An even larger question begs for attention. To wit: We now know that pre-college
factors explain at most a modest portion of the performance gap. Our results suggest that within-college factors, while a number of them are statistically significant, explain only a small portion
of the academic performance profiles for different racial ethnic groups. Other within-college
processes must be at work. What are they and how do they operate? Elsewhere we examined
the possible effects of processes associated with a stereotype threat explanation. Our results
suggested that, to the extent it operated, it was a small factor in explaining the gap with CLL data
(XXXXX 2006). Other prominent possibilities include forms of cultural and social capital in
college including diversity in networks, and academic climate as experienced by different racial
and ethnic groups. We plan to explore these possibilities in the future with the CLL data.
Appendix: Drop-out Bias, Response Bias and Missing Data
Registrar’s Office data provided information on students who were not enrolled at the end of the semester in each survey year. Non-enrollment might occur for multiple reasons, including academic or disciplinary probation, medical or personal leave of absence, dismissal, and voluntary (including a small number of transfers) or involuntary withdrawal. At the end of the first college year, fewer than one percent of students (n = 12) were not enrolled, about three percent by the end of the second year (n = 48), and just over five percent (n = 81) by the end of the senior year.
We combined all of these reasons and tested for differences in selected admissions file
information of those enrolled versus not enrolled at the end of each survey year. The test
variables included racial ethnic group, SAT verbal and mathematics score, high school rank
(where available), overall admission rating (a composite of five different measures), parental
education, financial aid applicant status, high school type (public, private non-religious, or private religious), and US citizenship. Of over 40 statistical tests, only two produced significant differences (p < .05): at the end of the first year, dropouts had SAT-verbal scores of 734 versus 680 for non-dropouts; by
the end of the fourth college year those who had left college had an overall admissions rating of
46.0 (on a 0-60 scale) while those in college had an average rating of 49.7. No other differences
were significant. We conclude that our data contain very little drop-out bias.
We conducted similar tests for respondents versus non-respondents for each wave for the
same variable set plus college major (4 categories, engineering, natural science/mathematics,
social science, humanities), whether or not the student was a legacy admission, and GPA in the
semester previous to the survey semester. Seven variables show no significant differences or
only a few small sporadic differences (one wave but not others), including racial ethnic category,
HS rank, admissions rating, legacy, citizenship, financial aid applicant, and major group.
Several other variables show more systematic differences. First, non-respondents at every wave have lower SAT scores (mathematics: 9-15 points, roughly one-tenth to one-fifth of a standard deviation; verbal: 18-22 points, roughly one-third of a standard deviation). Second, non-respondents have slightly better educated parents at waves one and three, but not at waves two and four. Third, non-respondents at every wave are less likely to be from a public high school and somewhat more likely to be from a private (non-religious) high school. Fourth, non-respondents have somewhat lower GPAs in the previous semester compared with respondents (by about one-quarter of a letter grade).
These differences are somewhat inconsistent or non-synchronous in that they include
lower SAT and GPA for non-respondents, but higher parental education and private (more
expensive) high schools. In general, the non-response bias is largest in the pre-college wave and
smaller in the in-college waves even though the largest response rates are in the pre-college
wave. In general, we judge the non-response bias as relatively minor or small on most variables
and perhaps modest on SAT measures.
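The standardized differences quoted above (score points expressed as fractions of a standard deviation) come from a simple calculation; a sketch with invented means and standard deviation:

```python
# Standardized mean difference: raw difference divided by the standard
# deviation. The means and SD below are invented for illustration.
def std_diff(mean_a, mean_b, sd):
    return (mean_a - mean_b) / sd

print(round(std_diff(700, 680, 60), 2))  # a 20-point gap with SD 60 is about one-third SD
```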
REFERENCES
Allison, Paul D. 1994. “Using Panel Data to Estimate the Effects of Events.” Sociological Methods and Research 23: 174-199.
------. 2005. Fixed Effects Regression Models for Longitudinal Data Using SAS. Cary, NC: SAS Publishing.
Astin, Alexander W. 1993. What Matters in College? Four Critical Years Revisited. San Francisco: Jossey-Bass.
Becker, Gary S. 1975. Human Capital: A Theoretical and Empirical Analysis, with Special Reference to Education. New York: National Bureau of Economic Research.
Bandura, Albert. 1982. “The Self and Mechanisms of Agency.” Pp. 3-39 in Psychological Perspectives on the Self, vol. 1, edited by J. Suls. Hillsdale, NJ: Erlbaum.
Bollen, Kenneth A. and Patrick Curran. 2006. Latent Curve Models: A Structural Equation Perspective. Hoboken, NJ: Wiley.
Bourdieu, Pierre. 1986. “The Forms of Capital.” Pp. 241-258 in Handbook of Theory and Research for the Sociology of Education, edited by J. Richardson. Westport, CT: Greenwood Press.
Bowen, William G. and Derek Bok. 1998. The Shape of the River: Long Term Consequences of Considering Race in College and University Admissions. Princeton, NJ: Princeton University Press.
Burke, Peter J. 2004. “Identities and Social Structure: The 2003 Cooley-Mead Award Address.” Social Psychology Quarterly 67: 5-15.
Bryk, Anthony S. and Stephen W. Raudenbush. 1992. Hierarchical Linear Models: Applications and Data Analysis Methods. Thousand Oaks: Sage.
Coleman, James S. 1988. “Social Capital in the Creation of Human Capital.” American Journal of Sociology 94 Supplement: S95-S120.
Curran, Patrick J. and Andrea M. Hussong. 2002. “Structural Equation Modeling of Repeated Measures Data: Latent Curve Analysis.” Pp. 59-86 in Modeling Intraindividual Variability with Repeated Measures Data: Methods and Applications. Edited by D.S. Moskowitz and S.L. Hershberger. Mahwah, NJ: Lawrence Erlbaum Associates.
Curran, Patrick J., Bengt O. Muthén, and Thomas C. Harford. 1998. “The Influence of Changes in Marital Status on Developmental Trajectories of Alcohol Use in Young Adults.” Journal of Studies on Alcohol 59: 647-658.
Enders, Craig K. and Deborah L. Bandalos. 2001. “The Relative Performance of Full Information Maximum Likelihood Estimation for Missing Data in Structural Equation Models.” Structural Equation Modeling 8: 430-457.
Flowers, L. 2000. “Cognitive Effects of College: Differences between African-American and Caucasian Students.” Ph.D. dissertation, University of Iowa, Iowa City.
Halaby, Charles N. 2004. “Panel Models in Sociological Research: Theory into Practice.” Annual Review of Sociology 30: 507-544.
Jencks, Christopher and Meredith Phillips (Eds.). 1998. The Black-White Test Score Gap. Washington, D. C.: Brookings Institution Press.
Juster, F. Thomas and Frank P. Stafford. 1991. “The Allocation of Time: Empirical Findings, Behavioral Models, and Problems of Measurement.” Journal of Economic Literature 29: 471-522.
XXXXX. 2008. “A Social Portrait of Legacies at an Elite Private University.” Unpublished manuscript, Department of Sociology, Duke University, Durham, NC.
Massey, Douglas S., Camille Z. Charles, Garvey F. Lundy, and Mary J. Fischer. 2003. The Source of the River: The Social Origins of Freshmen at America’s Selective Colleges and Universities. Princeton, NJ: Princeton University Press.
Morgan, Stephen L. and Jal D. Mehta. 2004. “Beyond the Laboratory: Evaluating the Survey Evidence for the Disidentification Explanation of Black-White Differences in Achievement.” Sociology of Education 77: 82-101.
Muthén, Bengt and Linda Muthén. 2007. Mplus Version 5. Los Angeles: Muthén and Muthén.
Myerson, Joel, M. Rank, F. Raines and M. Schnitzler. 1998. “Race and General Cognitive Ability: The Myth of Diminishing Returns to Education.” Psychological Science 9: 139-142.
Osborne, J. W. 1995. “Academics, Self-Esteem, and Race: A Look at the Underlying Assumptions of the Disidentification Hypothesis.” Personality and Social Psychology Bulletin 21: 449-455.
Pascarella, Ernest T. and Patrick T. Terenzini. 2005. How College Affects Students, Volume 2: A Third Decade of Research. San Francisco: Jossey-Bass.
Rau, William. 2001. “Response: To Replicate or Not to Replicate: Is that Schuman’s Question.” Sociology of Education 74: 75-77.
Rau, William and Ann Durand. 2000. “The Academic Ethic and College Grades: Does Hard Work Help Students to ‘Make the Grade’?” Sociology of Education 73: 19-38.
Reitzes, Donald C. and Peter J. Burke. 1980. “College Student Identity Measurement and Implications.” Pacific Sociological Review 23: 46-66.
Rosenberg, Morris. 1989. Society and the Adolescent Self-Image, Revised Edition. Middletown, CT: Wesleyan University Press.
Rosenberg, Morris, Carmi Schooler, Carrie Schoenbach and Florence Rosenberg. 1995. “Global Self-Esteem and Specific Self-Esteem: Different Concepts, Different Outcomes.” American Sociological Review 60: 141-156.
Royston, Patrick. 2005. “Multiple Imputation of Missing Values: Update.” The Stata Journal 5:1-14.
Rubin, Donald. 1987. Multiple Imputation for Non-Response in Surveys. New York: Wiley and Sons.
Thorndike, R. and J. Andrieu-Parker. 1992. “Growth in Knowledge: A Two-Year Longitudinal Study of Changes in Scores on the College Basic Subjects Examination.” Paper presented at the Annual Meetings of the American Educational Research Association.
Schuman, Howard. 2001. “Comment: Students’ Effort and Reward in College Settings.” Sociology of Education 74: 73-74.
Schuman, Howard, Edward Walsh, Camille Olson and Barbara Etheridge. 1985. “Effort and Reward: The Assumption that College Grades are Affected by Quantity of Study.” Social Forces 63: 945-966.
XXXXX. 2006. “Race, Stereotype Threat, and Classroom Performance: Tests of Social Psychological Mechanisms.” Unpublished manuscript, Department of Sociology, Duke University, Durham, NC.
XXXXX. 2005. “The Black-White Achievement Gap in the First College Year: Evidence from a New Longitudinal Case Study.” Pp. 187-216 in Research in Social Stratification and Mobility 22, edited by D. Bills. Amsterdam: Elsevier.
StataCorp. 2007. Stata Statistical Software: Release 10. College Station, TX: StataCorp LP.
Stinebrickner, Ralph and Todd R. Stinebrickner. 2003. “Time-Use and College Outcomes.” Journal of Econometrics 121: 243-269.
Maes, Barbara, Lisa Alstadt, Laura Underwood and Michael Boivin. 1996. “Evaluating Changes in Social Attitudes, Character Traits, and Liberal-Arts Abilities during a Four-Year Program at a Christian College.” Research on Christian Higher Education 3: 115-128.
Van Buuren, S., H.C. Boshuizen, and D.L. Knook. 1999. “Multiple Imputation of Missing Blood Pressure Covariates in Survival Analysis.” Statistics in Medicine 18: 681-694.
Figure 1. Semester Grade Point Averages by Racial Ethnic Group
[Line graph: mean semester GPA (vertical axis, 2.80 to 3.70) for the first and second semesters of each of the four college years, with separate lines for White, Black, Latino, Asian, and Bi-Multiracial students.]
Figure 2. Growth Curve Model
[Path diagram: latent intercept (α) and slope (β) factors underlie GPA in Years 1 through 4; time-invariant covariates (X1) predict the intercept and slope, and time-varying covariates (Z1, Z2, Z4) predict year-specific GPA.]
Table 1. Measures and Descriptive Statistics by Racial Ethnic Group, Campus Life and Learning Project Mean (Standard Deviation) by Racial Ethnic GroupVariable Metric/Notes White Black Latino Asian Bi-MultiracialRace Dummy variable for groups; US Census questions N = 528 227 214 229 57Female 1 = Female .48 .71 .48 .45 .64
0 = Male (.50) (.46) (.50) (.50) (.49)Mother's Education 1 = High school graduate or less 3.43 2.93 3.16 3.19 3.07
2 = Some college/vocational school (.99) (1.21) (1.15) (1.05) (1.18)Father's Education 3 = College graduate 4.01 3.17 3.60 4.02 3.46
4 = Some graduate school/Master's Degree (1.06) (1.39) (1.27) (1.08) (1.25)5 = Higher professional degree (PhD, JD, MD)
US Citizen 1 = US Citizen (native born and naturalized) .98 .93 .93 .72 .880 = Other (.16) (.26) (.26) (.45) (.33)
SAT-Mathematics Scholastic aptitude test (max. = 800) 723.03 635.46 682.83 760.20 698.65(57.30) (60.16) (58.48) (41.40) (71.02)
Good Student Identity 1 = Not at all important 4.30 4.67 4.52 4.37 4.57 (High School) 2 = Somewhat important (.80) (.59) (.83) (.93) (.80)
3 = Important4 = Very important5 = Extremely important
Self-Perceived Ability Sum of two items for abilities in last challenging mathematics and English course (max. = 10)
8.24 7.98 8.27 8.35 8.30 (High School) (1.20) (1.11) (1.11) (1.12) (1.06)High School Confidence: 1 = Not at all confident 3.68 3.43 3.66 3.74 3.48 Mathematics 2 = Somewhat confident (1.01) (1.07) (1.08) (1.05) (.95)High School Confidence: 3 = Confident 2.28 2.33 2.21 2.46 2.07 English 4 = Very confident (1.03) (1.06) (1.03) (1.06) (.99)
5 = Extremely confidentTime Management 1 = Not at all successful 2.78 2.45 2.54 2.52 2.75 (First Year) 2 = Somewhat successful (.93) (.95) (.97) (.89) (.89)
3 = Successful4 = Very successful5 = Extremely successful
35
Final Major Field: Dummy variable for groups; omitted category: natural sciences/engineering Natural Sciences .34 .22 .25 .55 .33
(.47) (.41) (.44) (.50) (.48) Social Sciences .45 .59 .59 .36 .42
(.50) (.49) (.49) (.48) (.50) Humanities .19 .16 .14 .07 .23
(.39) (.37) (.34) (.26) (.42)Academic Skills Self-assessed academic skills; 8-item scale
(max. = 40) First Year 29.28 27.83 28.68 28.91 29.37(3.98) (3.43) (3.83) (3.75) (3.93)
Second Year 29.41 28.55 29.04 28.71 29.77(4.01) (3.38) (3.61) (4.03) (3.69)
Fourth Year 32.14 31.51 31.78 31.15 32.32(3.80) (4.31) (3.95) (3.64) (3.51)
Academic Self Confidence Confidence in most challenging class First Year 1 = Not at all confident 2.54 2.24 2.41 2.60 2.70
2 = Very confident (.97) (.88) (.97) (1.04) (1.00) Second Year 3 = Confident 2.55 2.28 2.54 2.59 2.58
4 = Somewhat confident (1.04) (.91) (.96) (.99) (1.01) Fourth Year 5 = Extremely confident 2.80 2.65 2.85 2.64 3.16
(1.02) (1.00) (1.02) (.98) (1.07)Word Hard Attributions Succeed in most challenging class because worked
hard? First Year .80 .76 .80 .76 .861 = Yes (.40) (.43) (.40) (.43) (.35)
Second Year 0 = No .80 .84 .78 .79 .84(.40) (.37) (.41) (.41) (.37)
Fourth Year .83 .79 .81 .79 .89(.38) (.41) (.39) (.41) (.32)
Active Learning Behaviors Scale for number of active learning behaviors and activities used to address challenges in most challenging class (max. = 6)
First Year 2.26 2.55 2.44 2.34 2.45(1.33) (1.52) (1.46) (1.26) (1.58)
Second Year 2.21 2.63 2.53 2.44 2.37(1.23) (1.40) (1.22) (1.24) (1.09)
36
Good Student Identity: Importance of good student identity to overall identity
  (1 = Not at all important; 2 = Somewhat important; 3 = Important; 4 = Very important; 5 = Extremely important)
  First Year        4.02 (.91)   4.40 (.71)   4.22 (.93)   4.20 (.98)   4.55 (.66)
  Second Year       3.89 (.99)   4.23 (.83)   4.02 (.92)   4.16 (1.00)  4.17 (.93)
  Fourth Year       3.96 (.94)   4.07 (.89)   3.97 (.99)   4.12 (.99)   4.06 (1.03)

Global Self Esteem: Sum of 3 items (max. = 15); extent to which respondent agrees that: on the whole, satisfied with self; do not feel useless at times (reflected); do not wish could have more self-respect (reflected)
  First Year       10.41 (2.72)  10.59 (2.80)  10.39 (2.66)   9.60 (2.50)   9.66 (2.54)
  Second Year      10.64 (2.93)  11.09 (2.97)  10.96 (2.68)   9.99 (2.55)  10.31 (2.79)
  Fourth Year      11.00 (2.91)  11.42 (2.80)  10.95 (2.75)  10.50 (2.64)  10.60 (2.71)

Hours Spent Studying: Hours spent during a typical week studying or doing homework; recoded from 6 discrete categories to a continuous scale by recoding each value to the midpoint of the category range
  First Year       10.89 (5.11)  10.43 (5.14)  10.17 (5.16)  11.39 (5.10)  11.86 (4.93)
  Second Year      10.73 (4.96)  10.59 (4.98)  10.01 (4.95)  10.72 (5.06)  10.86 (5.08)
  Fourth Year      10.01 (5.24)   9.06 (5.11)   9.30 (5.24)   8.95 (5.29)   9.52 (4.82)

Self-Assessed Ability: Ability comparisons to other students in most challenging class
  (1 = Very much below average; 2 = Below average; 3 = Average; 4 = Above average; 5 = Very much above average)
  First Year        3.26 (.86)   2.74 (.81)   3.09 (.78)   3.40 (.90)   3.43 (.76)
  Second Year       3.28 (.90)   2.80 (.88)   3.16 (.84)   3.42 (.93)   3.19 (.91)
  Fourth Year       3.43 (.80)   3.14 (.85)   3.27 (.80)   3.44 (.86)   3.45 (.76)

Self-Assessed Smartness: Smartness comparisons to the average Duke student
  (1 = Not nearly as smart as average; 2 = Somewhat less smart than average; 3 = As smart as the average Duke student; 4 = Somewhat smarter than average; 5 = Much smarter than average)
  First Year        3.33 (.87)   2.99 (.77)   3.14 (.85)   3.54 (.83)   3.25 (.78)
  Second Year       3.38 (.87)   3.10 (.71)   3.24 (.79)   3.52 (.82)   3.45 (.77)
  Fourth Year       3.57 (.80)   3.16 (.77)   3.47 (.85)   3.68 (.76)   3.62 (.85)
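The hours-spent-studying recode described above (six ordered survey categories mapped to the midpoints of their hour ranges) can be sketched as follows. The specific category boundaries below are illustrative assumptions; the actual cut-points used in the Campus Life and Learning survey are not reported in this excerpt.

```python
# Map each discrete survey category to the midpoint of its hour range.
# NOTE: these boundaries are hypothetical placeholders, not the survey's
# actual response categories.
CATEGORY_MIDPOINTS = {
    1: 1.0,    # e.g., "0-2 hours"   -> midpoint 1.0
    2: 4.0,    # e.g., "3-5 hours"   -> midpoint 4.0
    3: 8.0,    # e.g., "6-10 hours"  -> midpoint 8.0
    4: 13.0,   # e.g., "11-15 hours" -> midpoint 13.0
    5: 18.0,   # e.g., "16-20 hours" -> midpoint 18.0
    6: 25.0,   # e.g., "more than 20 hours" -> assigned value for the open-ended top category
}

def recode_hours(category: int) -> float:
    """Return the continuous hours value assigned to a discrete survey category."""
    return CATEGORY_MIDPOINTS[category]

print([recode_hours(c) for c in (1, 3, 6)])  # -> [1.0, 8.0, 25.0]
```

Midpoint recoding of this kind preserves the ordering of the categories while letting the variable enter the models as an (approximately) continuous measure; the open-ended top category requires an assigned value rather than a true midpoint.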
Table 2. Effects of Selected Independent Variables on the Intercept and Slope Parameters of a Linear Growth-Curve Model Predicting Student Grade Point Average across the College Career

                                          Intercept Parameter      Slope Parameter
Time-Invariant Independent Variables      Coef.     Robust SE      Coef.     Robust SE
Race/Ethnicity (ref: White)
  Black                                   -.28 **     .05           .04 *      .02
  Latino                                  -.12 **     .04           .02        .01
  Asian                                    .02        .04          -.03        .02
  Bi-Multiracial                          -.13 *      .05           .04        .02
Female                                     .11 **     .03           .00        .01
Mother's Education                        -.03        .02           .01        .01
Father's Education                         .02        .02           .00        .01
US Citizen                                -.01        .05           .02        .02
SAT-Mathematics                            .00 **     .00           .00        .00
Good Student Identity (High School)        .00        .02           .01        .01
Self-Perceived HS Ability                  .06 **     .02           .00        .01
HS Confidence: Mathematics                -.08 **     .02           .01        .01
HS Confidence: English                     .02        .02           .00        .01
Time Management (Year One)                 .13 **     .02          -.01        .01
Major Field (ref: Natural Sciences)
  Social Sciences                          .09 **     .03          -.01        .01
  Humanities                               .07        .05           .00        .02
Intercept                                  .84 **     .28           .35 **     .11

                                    First Year GPA     Second Year GPA    Fourth Year GPA
Time-Varying Independent Variables  Coef.  Robust SE   Coef.  Robust SE   Coef.  Robust SE
Self-Assessed Academic Skills        .00     .00       -.01 *   .00       -.01 *   .00
Academic Self Confidence             .01     .02        .01     .02        .00     .02
Work Hard Attributions               .10 **  .04        .06     .04       -.01     .05
Active Learning Behaviors           -.03 **  .01       -.01     .01         --      --
Good Student Identity                .04 *   .02        .03 *   .02        .02     .02
Global Self Esteem                   .00     .01        .01     .01        .00     .01
Hours Spent Studying                 .00     .00        .00     .00        .00     .00
Self-Assessed Ability                .05 *   .02        .06 **  .02        .02     .02
Self-Assessed Smartness              .05 *   .02        .02     .02       -.05 *   .02

Residual Variances                  Coef.  Robust SE
First Year GPA                       .10 **  .01
Second Year GPA                      .09 **  .01
Third Year GPA                       .08 **  .01
Fourth Year GPA                      .07 **  .01
Intercept                            .11 **  .01
Slope                                .01 **  .00

RMSEA (Root Mean Square Error of Approximation) = .043
SRMR (Standardized Root Mean Square Residual) = .036
Note: Weighted estimates; N = 1,255
* p < .05, ** p < .01
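The structure Table 2 reports, with time-invariant covariates predicting person-specific intercept and slope parameters and time-varying covariates entering each year's GPA equation, corresponds to a standard linear growth-curve model. A generic sketch (the symbols are ours, not the authors' notation) is:

```latex
% Level 1: GPA of student i at occasion t, with time-varying covariates W
\mathrm{GPA}_{it} = \alpha_i + \beta_i\, t + \textstyle\sum_{k} \gamma_{kt}\, W_{kit} + \varepsilon_{it}

% Level 2: person-specific intercept and slope, with time-invariant covariates X
\alpha_i = \alpha_0 + \textstyle\sum_{j} a_j\, X_{ji} + u_i, \qquad
\beta_i  = \beta_0  + \textstyle\sum_{j} b_j\, X_{ji} + v_i
```

Here the $a_j$ and $b_j$ are the intercept- and slope-parameter coefficients in the top panel of Table 2, the $\gamma_{kt}$ are the year-specific time-varying coefficients in the middle panel, and the variances of $u_i$, $v_i$, and $\varepsilon_{it}$ appear as the residual variances in the bottom panel.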
Table 3. Fixed Effects Estimates of the Effects of Race/Ethnicity and Time-Varying Characteristics on Student Grade Point Average across the College Career

                                    Coefficient   Robust SE
Race/Ethnicity
  Black x Year                        .06 **        .02
  Latino x Year                       .02           .01
  Asian x Year                       -.04 **        .02
  Bi-Multiracial x Year               .02           .02
Year                                  .08 **        .01
Major Field
  Social Sciences                     .02           .04
  Humanities                          .04           .05
  Undeclared                          .03           .04
Self-Assessed Academic Skills         .01 *         .00
Academic Self Confidence              .01           .01
Work Hard Attributions                .05 *         .02
Good Student Identity                 .02           .01
Global Self Esteem                    .01           .01
Hours Spent Studying                  .00           .00
Self-Assessed Ability                 .02           .02
Self-Assessed Smartness              -.01           .02
Constant                             2.65 **        .12

Note: Weighted estimates; N = 3,137 observations / 1,119 subjects
* p < .05, ** p < .01
Table 4. Unadjusted and Adjusted Black-White Achievement Gap by Year in College

College Year   Unadjusted Gap   Adjusted Gap(a)   Difference   % of Gross Gap
     1             .422             .282             .140            33
     2             .391             .245             .146            37
     3             .375             .205             .170            45
     4             .179             .165             .014             8

(a) As predicted using the growth curve model
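The last two columns of Table 4 follow arithmetically from the first two: the difference is the unadjusted gap minus the model-adjusted gap, and the percent of the gross gap is that difference divided by the unadjusted gap. A quick check of the reported rows:

```python
# Black-White GPA gaps by college year: (unadjusted, adjusted), from Table 4.
gaps = {1: (0.422, 0.282), 2: (0.391, 0.245), 3: (0.375, 0.205), 4: (0.179, 0.165)}

for year, (unadjusted, adjusted) in gaps.items():
    difference = unadjusted - adjusted            # portion of the gap the model accounts for
    pct_of_gross = round(100 * difference / unadjusted)  # share of the gross gap, in percent
    print(year, round(difference, 3), pct_of_gross)
```

Running this reproduces the Difference and % of Gross Gap columns (33, 37, 45, and 8 percent), confirming the table's internal consistency.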