If you've been following the news lately, you may have heard a lot of debate about the academic profiles of college athletes. In a recent report published by CNN, Mary Willingham, a learning specialist at UNC, argued that from 2004 to 2012 a large number of athletes were reading far below a college level. University officials have publicly disputed Willingham's conclusions, citing, among other issues, a significant misinterpretation of the data and insufficient data for drawing conclusions about students' reading levels.
As you try to make sense of this debate, you might wonder:
- Why are these findings so important?
- How are reading skills assessed?
- What does "reading at grade level" really mean?
Reading to learn is a critical component of academic success and lifelong learning. That's true of everything from understanding directions to learning independently with text-based resources. For this reason, reading skills have proven highly predictive of academic achievement and attrition; thus, measuring and monitoring students’ reading development is crucial.
Now, let's wade into reading assessment!
How are reading skills assessed?
To sufficiently measure progress, we need to know what comprises reading proficiency. As outlined by the National Reading Panel, reading involves five cognitive components:
- Phonemic awareness
- Phonics
- Fluency
- Vocabulary
- Comprehension
Environmental attributes (e.g., access to print resources, at-home instruction), prior knowledge, cognitive functioning, motivation, and interest must also be considered. In short, cognitive development, exposure to text, prior experiences, and interests are just a few of the factors that shape a developing reader. Simply developing an impressive vocabulary or having access to a library of books does not mean a student will be a great reader.
What does "reading at grade level" really mean?
Well, it depends. When drawing conclusions from a reading assessment, one must consider the following:
- What and how were components of reading assessed?
- How were (or how should) the results be interpreted?
The most common reading assessments focus primarily on the cognitive components of reading. Such assessments (e.g., state-developed standardized tests) generally consist of questions or subtests designed to measure specific constructs (vocabulary knowledge, decoding skills, comprehension, print knowledge, etc.). These assessments can help indicate a student's reading-related competencies, but they do not account for affective or environmental factors.
Some reading assessments are informal; however, for accountability purposes, criterion- or norm-referenced standardized tests are more commonly used. Criterion-referenced tests measure students' mastery of a given concept or skill. In this case, reading at grade level would suggest the student demonstrated sufficient mastery of reading-related concepts as established by the test maker. On the other hand, norm-referenced tests compare students' performance to that of a norming sample, a group of similar individuals (e.g., students of the same age/grade) assessed using the same test. So when a third-grade student is said to be reading at grade level, it means the student's score fell approximately at the average for third-grade students in the norming sample.
As a result, norm-referenced scores actually say little about whether the student has achieved a particular degree of proficiency. Rather, they reflect how the student performed compared to other test-takers.
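To make the norm-referenced idea concrete, here is a hypothetical sketch of how a score might be converted into a percentile rank against a norming sample. All the numbers (mean of 200, standard deviation of 15) are invented for illustration; real tests rely on large norming samples and published score-conversion tables, not a simple normal-curve formula.

```python
# Hypothetical sketch: converting a raw score to a norm-referenced
# percentile rank. The norming-sample statistics below are made up.
from statistics import NormalDist

# Suppose the third-grade norming sample scored mean 200, SD 15.
NORM_MEAN, NORM_SD = 200, 15

def percentile_rank(raw_score: float) -> float:
    """Percent of the norming sample scoring at or below raw_score,
    assuming scores are approximately normally distributed."""
    return NormalDist(NORM_MEAN, NORM_SD).cdf(raw_score) * 100

# A student scoring 200 lands at the 50th percentile -- "at grade
# level" -- yet the percentile alone reveals nothing about which
# specific reading skills the student has or has not mastered.
print(round(percentile_rank(200)))  # 50
print(round(percentile_rank(215)))  # 84 (one SD above the mean)
```

Notice that the output is purely relative: if the whole norming sample read poorly, a "50th percentile" student could still lack important skills.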
Is the reading-at-grade-level delineation even fair?
The complex nature and limitations of measurements can make it difficult to draw reasonable conclusions. For example, a proficient reader might do poorly on a particular assessment simply because they were not interested in the text. Tests that are too comprehensive can cause fatigue, while tests that are too short may not paint the whole picture. Moreover, the performance of a first-grade student turning seven years old late in the school year could be compared against that of another first-grader who turned seven the previous September.
All of which raises the question: is the reading-at-grade-level delineation even fair?
When analyzing reading data, one must consider the following:
- Breadth: What components were assessed? What components were not considered? Were all subtests of the assessment administered?
- Depth: How were the components assessed? Were the questions written at a reasonable level for the population?
- Interpretation: How were the results interpreted? Were they norm-referenced, criterion-referenced, etc.? Who comprised the norming sample? How were the cut-scores established?
As this overview makes clear, reading assessments are—like people who read—complex. People should dig deeper than the headlines or sound bites on this topic before drawing conclusions.