The article “Using Literacy Boost to Inform a Global, Household-Based Measure of Children’s Reading Skills”, by Manuel Cardoso and Amy Jo Dowd, examines UNICEF’s efforts to strengthen education data collection on children’s reading skills through household surveys. In response to the Sustainable Development Goals’ (SDGs) call for a greater focus on inclusiveness, equity and quality in education, areas for which comparatively little data currently exist, UNICEF set out to design an effective measure of learning outcomes to help monitor education-related SDG targets.
Against this background, work began in 2014 on the development of a new measure of children’s reading skills. UNICEF convened a technical advisory group composed of leading experts from various institutions (Pratham, the Education for All Global Monitoring Report, Research Triangle Institute, Save the Children, Southern Methodist University and the UNESCO Institute for Statistics) to provide technical advice and support in developing a methodology for capturing data on early reading skills among children aged 7–14 years as a new module of UNICEF’s Multiple Indicator Cluster Survey (MICS).
As a household survey, MICS aims to capture learning outcomes for children both in and out of school, at primary and secondary levels, which also allows the feasibility of the instrument to be assessed across these different groups.
The technical advisory group studied the strengths and weaknesses of several options for a reading skill assessment that could be administered within roughly two minutes. On the basis of its findings, the group recommended measures along the lines of Save the Children’s Literacy Boost initiative. This led to a collaboration between UNICEF and Save the Children to investigate how well a streamlined version of the Literacy Boost approach could provide MICS with an accurate measure of reading outcomes.
Save the Children’s Literacy Boost initiative aims to improve learning outcomes for primary-aged children, both in and out of school, by focusing on five core reading skills: letter knowledge, phonemic awareness, vocabulary, reading fluency and comprehension. Drawing on secondary analysis of existing school-based assessments of reading at Grades 2 and 3 in Bangladesh, Burundi, India, Kenya, Lao People’s Democratic Republic (PDR), the Philippines and Viet Nam, and taking into account the interview time constraints imposed by MICS as well as considerations around scoring in the field, UNICEF finalised four indicators for measuring children’s reading abilities:
- Oral Reading Accuracy assesses a child’s print-decoding skills: the child reads a short story (60–70 words) aloud and is considered accurate if at least 90% of the words are read correctly.
Indicators 2 and 3 test reading comprehension:
- Literal Comprehension tasks require a reader to recover information given in the passage
- Inferential Comprehension tests whether the child is able to connect facts in the text in order to answer questions
- The Overall Indicator calculates the percentage of children who achieve proficiency in all three tasks
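The classification logic behind these indicators can be sketched in a few lines. This is an illustrative example, not the authors’ code: the function name and inputs are hypothetical, and only the 90% oral-reading-accuracy cutoff and the pass-all-tasks rule come from the text.

```python
def is_reader_with_comprehension(words_read_correctly: int,
                                 words_in_story: int,
                                 literal_correct: bool,
                                 inferential_correct: bool) -> bool:
    """A child counts as a reader with comprehension only if they pass
    oral reading accuracy (>= 90% of words read correctly) AND both
    the literal and the inferential comprehension tasks."""
    accuracy = words_read_correctly / words_in_story
    return accuracy >= 0.90 and literal_correct and inferential_correct

# A child reads 62 of 65 words correctly (~95%) and answers both questions:
print(is_reader_with_comprehension(62, 65, True, True))   # True
# At 55 of 65 words (~85%), the accuracy threshold is missed:
print(is_reader_with_comprehension(55, 65, True, True))   # False
```

The Overall Indicator is then simply the share of surveyed children for whom this function returns true.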
Compared to Save the Children’s 8–10-question approach, the proposed MICS module uses only 2–3 questions to identify, through the four-indicator measure above, whether a child is a reader with comprehension. The present study examines the level of agreement between the two approaches, i.e. whether the results of the short MICS method are similar to the findings of the more elaborate Literacy Boost approach.
A comparison of the two approaches, using the aforementioned multi-country data sets from school-based assessments of reading at Grades 2 and 3, shows that the initially proposed 2-question MICS method (1 literal and 1 inferential question) tends to overestimate the number of children identified as readers with comprehension: 81% of the children classified as readers by the Save the Children approach were also identified as readers by MICS, but of those identified as readers by the 2-question MICS approach, only 56% were confirmed by Save the Children’s more extensive measure. When UNICEF redesigned the module as a 3-question approach (2 literal and 1 inferential question), the level of agreement between the two approaches rose to a satisfactory 75%. Overall, the results showed a fair degree of consistency between the proposed 3-question MICS approach and the more extensive measure, making the streamlined approach acceptable.
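The agreement figures above are conditional rates: in each direction, one approach’s readers are taken as the reference set and we ask what share of them the other approach also labels as readers. A minimal sketch of that calculation follows; the child labels here are made up for illustration, and only the structure of the comparison follows the text.

```python
def conditional_agreement(reference: list, candidate: list) -> float:
    """Of the children labelled readers by `reference`, return the share
    also labelled readers by `candidate` (labels are booleans)."""
    overlap = [c for r, c in zip(reference, candidate) if r]
    return sum(overlap) / len(overlap)

# Hypothetical labels for ten children (True = reader with comprehension).
literacy_boost = [True, True, True, True, False, False, False, False, True, False]
mics_short     = [True, True, True, False, True, True, False, False, True, False]

# Share of Literacy Boost readers that the short module also catches:
print(conditional_agreement(literacy_boost, mics_short))  # 0.8
# Share of short-module readers that the full measure confirms:
print(conditional_agreement(mics_short, literacy_boost))  # ~0.67
```

Note the asymmetry: the two directions can differ sharply, which is exactly the 81% vs. 56% pattern that revealed the 2-question module’s overestimation.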
Outlining the immediate next steps, the researchers pointed to the need to develop general guidelines and specific tools for pilot field testing of the proposed MICS measure in at least two countries, followed by evaluation studies of the results.