Why a New Baseline
As educators across the nation analyze last year’s statewide assessment results in preparation for the new school year, I urge caution in how those results are interpreted. Many are describing these post-pandemic results as a “new baseline,” but it is important to recognize explicitly why they constitute a new baseline: it is not just because test administration was interrupted.
While it may be tempting to draw a dashed line over the missing year(s) of statewide testing and connect results from spring 2019 to the latest results, that connection is false. The assessment results from spring 2021 (or 2022, depending on when state testing fully resumed in your region) are a new baseline because of the myriad new variables influencing post-pandemic results that did not exist during the last regular year of large-scale testing. We must be mindful about how we use these results because the world changed around the assessments, our schools, and our students.
There are far too many new and poorly understood variables from the interruption to adequately interpret changes in student performance. Therefore, any comparison to pre-pandemic performance intended to infer a trajectory or calculate any kind of post-pandemic growth must be made with great caution, if at all. Comparing to pre-pandemic performance looks backward in a way that offers little guidance and is likely to lead practitioners into paralysis by analysis.
Data collection during the pandemic was inconsistent, and many groupings are not mutually exclusive, which further complicates grouping students from the most recent administrations in any meaningful way for large-scale analysis. For example, access to technology, internet connectivity, curriculum adaptability to remote instruction, instructional practices, and adult support (whether from school or outside it) are so complex and varied that researchers will be examining cross-sections of student performance in relation to students’ pandemic experiences, both in and outside of school, for decades to come. Additionally, sound analysis will be highly contextual, will require considerable knowledge of local conditions during the pandemic for meaningful interpretation, and will carry limitations such as sample size and generalizability.
I urge practitioners to use the most recent large-scale academic assessment results as one data point, among many, indicating student academic strengths and weaknesses post-pandemic. Then dig into the real work and leave these baseline results behind. It is our responsibility as educators to meet our students where they are today and to match their needs with proven, high-impact instructional strategies, coupled with quality curricular materials and underpinned by social-emotional supports, so that every student progresses in their own academic journey following these unprecedented events. The annual statewide assessment results will become more informative as this baseline year recedes and more years of results accumulate that can be reliably compared to gauge progress.
While progress often feels painfully slow day to day, education is the epitome of a long game.