The FAST measures were developed primarily to serve as a universal screening tool: a tool used to identify students who may be at risk of not meeting end-of-year outcomes and to assist schools in evaluating the early literacy “Health and Well-Being” of the system. As such, it is important that these measures be administered according to the published standardized administration and scoring rules so that the resulting data can be interpreted accurately. It is also important to understand the limitations of these measures so that inappropriate instructional conclusions are minimized.
Issue: Student Preparation for Screening – Long Term
Issue: Student Preparation for Screening – Immediately Preceding the Screening
Issue: Invalidating a Universal Screening Event and Retesting Students
Issue: Fluency Interventions
Issue: Universal Screening and Progress Monitoring Measures Being Administered by Trained, Iowa TIER Certified Staff
Issue: Student Preparation for Screening – Long Term
Do: Provide robust, evidence-based universal tier instruction (with targeted and intensive instruction as necessary) using practices and materials that are grounded in the Iowa Core and the Iowa Early Learning Standards.
Do not: Provide routine instruction on the specific isolated skills/tasks measured by the FAST assessments. Examples include, but are not limited to, instruction in reading nonsense words, instruction in sight words (beyond instruction on these words as it naturally occurs within the materials and resources currently in use), and instruction in letter names apart from letter sounds.
Why not: The specific FAST measures and items were selected so that educators can quickly and accurately make judgments about students’ progress toward more global end-of-year outcomes and evaluate the sufficiency of universal tier instruction for the class or grade level. These measures contain only a small sampling of skills, are not a complete representation of all the skills to be addressed during instruction, and were selected because of their unique ability to predict end-of-year performance. During daily instruction, caution must be exercised not to spend too much instructional time on isolated skills (particularly those that appear on the FAST assessment) at the expense of literacy instruction that uses authentic reading tasks covering a wide range of grade-appropriate skills. In the example noted above regarding the teaching of nonsense words, not only do we violate many best practices of early literacy instruction, we also run the risk of further confusing some of our most at-risk readers as they apply their skills to decode unknown real words. Furthermore, when we engage in such narrow instructional practices, we limit our ability to generalize student performance on this assessment to a broader range of skills in the area of focus.
Do not: Allow students access to assessment materials for any reason other than when they are being assessed during universal screening or progress monitoring.
Why not: These measures serve the same purpose that temperature, weight, blood pressure, and similar indicators serve during a physical wellness check (i.e., to predict physical well-being using quickly administered assessments). If students have been given an opportunity to specifically practice the items contained on the FAST measures (apart from normally occurring instruction using authentic tasks), educators may not be able to accurately judge a student’s progress toward end-of-year outcomes, nor will they be able to accurately evaluate the sufficiency of core instruction. The most likely outcome would be an under-identification of students needing support and an overestimation of the sufficiency of core instruction. This would be similar to sucking on an ice cube before having our temperature taken with an oral thermometer: we might be “sick,” but the indicator has been inappropriately affected, and thus our conclusions and next steps might be altered.
Issue: Student Preparation for Screening – Immediately Preceding the Screening
Do: Encourage the student to put forth their best performance.
Do not: Encourage the student to read as fast as they can.
Why not: When students read as “fast as they can,” they are generally not practicing good reading behaviors. Such practices tend to increase student error rates and significantly reduce the student’s ability to comprehend and derive meaning from the text, which is the main goal of reading! Another unintended consequence of this type of encouragement is that it might artificially increase the number of words students read in one minute and therefore increase the number and percent of false negatives (students not predicted to be at risk by the screener who nevertheless fail to meet end-of-year outcomes). As a result, students who might need additional instruction and more frequent monitoring might not be offered these services. Although the school may look better in the moment (i.e., have more students at or above benchmark), the data may be misleading and ultimately will not provide the school with the catalyst for “getting even better.”
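To make the false-negative idea concrete, here is a minimal, purely hypothetical sketch (in Python, not part of any FAST tool) that compares screener decisions against end-of-year outcomes; the cut score and student data below are invented solely for illustration.

```python
# Purely hypothetical sketch: a "false negative" is a student the screener
# did NOT flag as at risk who nevertheless missed the end-of-year outcome.
# The cut score and student records below are invented for illustration only.

# (words correct per minute at screening, met end-of-year outcome?)
students = [(62, True), (38, False), (55, False), (71, True), (44, False)]

CUT_SCORE = 50  # hypothetical benchmark; real benchmarks come from published norms

flagged_at_risk = sum(1 for wcpm, _ in students if wcpm < CUT_SCORE)
false_negatives = sum(
    1
    for wcpm, met_outcome in students
    if wcpm >= CUT_SCORE and not met_outcome  # not flagged, yet missed the outcome
)

print(f"Flagged as at risk: {flagged_at_risk} of {len(students)}")
print(f"False negatives:    {false_negatives}")

# Encouraging students to "read as fast as they can" inflates wcpm, pushing
# more students above the cut score and increasing the false-negative count.
```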
Issue: Invalidating a Universal Screening Event and Retesting Students
Do: Take into account any “environmental and situational” issues such as distractions, illness, mood, and rapport when analyzing a universal screening event for validity (i.e., the believability of the scores), should it be determined that these issues may have adversely affected the student’s performance. In making this decision, you are encouraged to consult with your internal coach, compare the student’s previous universal screening status to current status, and use other collected student performance data.
Do not: Unilaterally decide to invalidate a student’s score because you feel as though the student could do better if given another chance.
Why not: Given the current FAST assessment procedures, any student who is retested for any reason is retested using the same probes. Thus, upon a second administration using the same probes, there is a high likelihood that the student’s performance will improve simply because of immediate practice effects. Therefore, to the extent possible, try to initially assess students in an environment that is free from distractions, and if you sense that a student is sick or otherwise in a “state of mind” that is not conducive to providing their best performance, it might be best to wait and assess the student on a different day. Retesting should be a very rare occurrence, and the decision to retest should never be made unilaterally. It is suggested that schools develop a policy regarding this issue.
Issue: Fluency Interventions
Do: Provide students with an opportunity to experience a wide variety of rich connected text, frequently incorporating vocabulary study and other “making meaning” activities.
Do not: Engage only in “repeated reading” sessions where the sole goal is to read faster and farther than the time before.
Why not: CBMr, which is essentially a measure of a student’s ability to accurately and effortlessly read connected text, is considered a general outcome measure in the area of reading. This means that for a student to read connected text in an accurate and effortless manner, they must efficiently and effectively orchestrate a large number of subskills (decoding, knowledge of vocabulary, use of background knowledge, prosody, etc.) during a reading activity; the more successful they are in this endeavor, the better their fluency. Students who have better-developed subskills generally have more reserve cognitive capacity to devote to understanding the meaning of the text and therefore tend to read in a much more accurate and fluent manner. On the other hand, students who have less-developed subskills (such as decoding) generally devote more of their cognitive energy to the basic use of those subskills, have less capacity to focus on understanding the meaning of the text, and therefore read in a less accurate and fluent manner. The very act of reading a selection of connected text several times in a short period has its benefits: it trains the student to understand what it “feels” like to read a passage in an effortless manner and provides good practice with prosody and with the text-specific vocabulary and decoding patterns. However, instruction that works to build a wider variety of skills that generalize across time and text selections must also be provided. Gains in these broader areas will typically result in a student who can read novel text in a more accurate and effortless manner with greater understanding.
Issue: Universal Screening and Progress Monitoring Measures Being Administered by Trained, Iowa TIER Certified Staff
Do: Train all staff (certified and classified) in the standardized administration and scoring of all of the FAST measures that they will be expected to administer.
Do: Utilize the web-based training materials on the Iowa TIER Knowledge Base to assist in the training of current and new staff.
Do: Frequently revisit the standardized administration and scoring of the FAST measures with existing staff, as loss of fidelity can occur over time.
Do not: Assign staff with little or no training in the administration and scoring of the FAST measures to a universal screening team or to progress monitoring responsibilities.
Do not: Assume that staff will remember the “finer points” of the administration and scoring from one year/testing period to the next.
Why not: Although the FAST measures have documented technical adequacy for universal screening and progress monitoring purposes, like all other educational assessments they are not perfect and are not without error (false positives and false negatives). The goal, therefore, is to administer and score each measure for each student according to the established standardized administration and scoring rules in order to minimize measurement error and make the resulting interpretations as reliable and valid as possible. Standardized means giving the test the same way each time to each student, so that any performance differences between students are most likely due to differences in skills and not to differences in administration and scoring. Schools should regularly emphasize the importance of fidelity of administration and scoring; weaving quick checks into staff meetings before each benchmark period is a great way to help ensure this fidelity and adherence to standardization.