Take a dozen eggs. Better still, take several dozen eggs and compare them to another several dozen eggs. Eggs are eggs, and an omelette they make.
However, from the individual-differences perspective, humans differ. Brighter kids learn faster, about 5 times faster than their slower classmates. Take a whole school district and you will find a few children who learn 7 times faster, hence the assumption that subjects are interchangeable cannot hold.
Here’s a deal: experimental designs will improve if researchers measure, even very briefly, the ability and personality of their experimental subjects. Even a simple brief vocabulary test, plus a digit span test or speeded coding task, would provide useful information; and if parents could be persuaded to do the same, we would have a handle on a major source of unexamined variance in experimental designs.
As Sara says: there is a world outside of experimental designs.
USING INTELLIGENCE TO PREDICT RESPONSE-TO-INTERVENTION: AN APPLICATION OF INTEGRATIVE DATA ANALYSIS IN PROJECT KIDS
Sara A. Hart, Florida State University, firstname.lastname@example.org
A growing body of work suggests that the individual traits a child brings into an intervention project have an interactive effect on literacy learning. Even within intervention studies shown to be impactful at the mean level, there are individual differences in how children respond to the intervention.
I contend that there are numerous, typically unmeasured, sources of these individual differences. In this talk I will present data examining the role of both crystallized and fluid intelligence in predicting individual differences in response-to-intervention, with data pooled across multiple projects, allowing for generalization beyond any given intervention protocol. Integrative Data Analysis (IDA; Curran & Hussong, 2009) was used to pool raw Project KIDS data from 545 kindergarten and first-grade children (mean age 5.6 years) who had previously been in the treatment group of one of three literacy-based randomized controlled trial interventions.
IDA allows raw data from each project to be combined while controlling for sources of heterogeneity such as age and project. Reading was measured as pre- and post-intervention scores on the Woodcock-Johnson Tests of Achievement Letter-Word Identification (LWID) subtest; crystallized intelligence was measured as the pre-test mean raw score across the KBIT-2 Verbal Knowledge and Riddles subtests; and fluid intelligence was measured as the pre-test raw score on the KBIT-2 Matrices subtest.
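As a purely illustrative sketch of the pooling idea, raw scores from several projects can be stacked into one dataset and project membership plus age regressed out before further modelling. The actual study used moderated nonlinear factor analysis; the simple least-squares residualization below, with simulated data and made-up project offsets, only conveys the "combine, then control heterogeneity" logic.

```python
# Illustrative sketch of IDA-style pooling, NOT the study's actual method
# (which used moderated nonlinear factor analysis). All data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Simulated raw pre-test scores from three hypothetical intervention
# projects, each with its own mean level; total n = 545 as in the abstract.
frames = []
for pid, (offset, n) in enumerate([(0.0, 180), (2.0, 185), (-1.0, 180)]):
    age = rng.uniform(5.0, 7.0, n)
    score = offset + 1.5 * age + rng.standard_normal(n)
    frames.append(pd.DataFrame({"project": pid, "age": age, "score": score}))
pooled = pd.concat(frames, ignore_index=True)

# Control for project and age: regress score on project dummies plus age,
# keep the residual as a heterogeneity-adjusted score.
X = pd.get_dummies(pooled["project"], prefix="proj", dtype=float)
X["age"] = pooled["age"]
coef, *_ = np.linalg.lstsq(X.to_numpy(), pooled["score"].to_numpy(), rcond=None)
pooled["adjusted"] = pooled["score"] - X.to_numpy() @ coef

# Adjusted scores no longer differ systematically by project.
print(pooled.groupby("project")["adjusted"].mean().round(3))
```

Because the regression includes a dummy for every project, the adjusted scores have mean zero within each project by construction, which is the sense in which project-level heterogeneity has been "controlled for".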
As a first step of IDA, moderated nonlinear factor analysis was used to create scale scores that are invariant across projects for the constructs of interest. I then used SAS PROC MIXED to calculate covariance-adjusted scores modelling change from pre-test to post-test on LWID, operationalizing “response-to-intervention”. Quantile regression was then used to model both crystallized and fluid intelligence as predictors of response-to-intervention.
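The quantile-regression step can be sketched as follows. This is a hypothetical illustration, not the study's SAS code: the data are simulated, `gc` stands in for a crystallized-intelligence score, and `gain` for the covariance-adjusted pre-to-post change. Quantile regression fits a linear model at each chosen quantile of the outcome by minimizing the pinball (check) loss, so the slope can differ across the response distribution.

```python
# Hypothetical sketch of quantile regression for response-to-intervention.
# Variable names (gc, gain) and the simulated data are illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 545                        # sample size matching the pooled data
gc = rng.uniform(0.0, 1.0, n)  # simulated crystallized-intelligence score
# Simulate a gain score whose dependence on gc grows across its distribution,
# mimicking an effect that is larger among the strongest responders.
gain = 0.3 * gc + (0.5 + gc) * rng.standard_normal(n)

def pinball_loss(params, tau):
    """Mean pinball (check) loss for a linear conditional-quantile model."""
    intercept, slope = params
    resid = gain - (intercept + slope * gc)
    return np.mean(np.maximum(tau * resid, (tau - 1.0) * resid))

def fit_quantile(tau):
    """Estimate intercept and slope of the conditional tau-quantile of gain."""
    res = minimize(pinball_loss, x0=[0.0, 0.0], args=(tau,),
                   method="Nelder-Mead")
    return res.x

slope_low = fit_quantile(0.10)[1]   # effect among the weakest responders
slope_high = fit_quantile(0.90)[1]  # effect among the strongest responders
print(f"slope at 10th percentile: {slope_low:.2f}")
print(f"slope at 90th percentile: {slope_high:.2f}")
```

Comparing the fitted slopes at low and high quantiles is what lets the analysis say whether the predictor matters more for students who gained the most, which an ordinary mean regression would average away.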
The models indicated that both crystallized and fluid intelligence were statistically significant predictors across the distribution of response-to-intervention, although for both predictors the effect was significantly larger among the students who made the greatest gains from the intervention.
These results indicate that brighter children do even better in an intervention that is impactful for most students. Although certainly not surprising for an ISIR audience, child traits such as intelligence are not often included when determining response-to-intervention in education studies, and I argue that intelligence is an important moderator that should be considered. Beyond these specific findings, I will discuss how we will use these pooled data to explore many other sources of moderation of response-to-intervention, including other cognitive traits, behavioral traits, the environment, and family history. This work will expand our understanding of how and why some children are more successful when receiving gold-standard educational interventions.