The literacy technique developed for use in international literacy testing, and then carried into programs, is a constructed reading method. It is a set of processes, referred to as constructs, developed for testing. The technique known as information-processing directs people to read and respond to texts in very particular ways. Those processes are put to use in all test tasks in all domains developed for international literacy testing, including the more recently developed problem-solving in technology-rich environments.
Information-processing requires a reader to superficially scan texts for bits of information, and then make a match between the text and the test question. Additional elements, or what test developers call complexity variables, are carefully constructed to make the matching process more difficult. One of the key elements used to determine the difficulty of test items is something called a distractor. You may be familiar with this device from multiple-choice testing. The distractor is used to confuse and side-track the test-taker by presenting plausible information that closely resembles the correct answer. In multiple-choice testing, two plausible answers may be listed, but only one is correct. In international literacy testing and some spin-offs, the distractors appear in the text itself, and are then reflected in the answer choices.
First, to fully understand the information-processing technique it’s useful to know what it doesn’t measure. It is only superficially related to the vast field of cognitive studies and its use of an information-processing metaphor to understand the brain and the processing of sensory information, including text. While test developers borrowed the term information-processing, they did not draw on reading research within the field of cognitive studies to build their model. Nor was the method derived from an in-depth examination of what people actually do when reading various sorts of texts. Once an initial model was developed, test developers did enter a workplace to see if the model would connect to reading activity in some way. They looked only for the bits of reading-related activity that fit the model, lifted the activity out of its context, and proclaimed that the model did indeed reflect reading activity at work.
Test developers described their interest in creating a measure of cognitive processing rather than literacy knowledge. They were not concerned with measuring people’s use of language conventions when reading, but were more interested in measuring people’s ability to manipulate their thinking in a textual context. Test designers were figuring out a way to measure cognitive abilities and functioning in the context of an increasingly textualized society. Arguably, they created what is essentially an IQ test for the information society. (I am only starting to investigate the longstanding interest in intelligence and social engineering that is carried into the OECD project. Others have completed related research here, here and here that investigates the OECD’s social management and social ordering interests, and the technologies designed to accomplish those interests, such as international literacy testing.)
To construct their model, the designers of the international literacy survey drew from functional literacy tests in use in the 1970s. They took two things from those early functional literacy tests: 1) some general design principles regarding the use of snippets of texts and documents people may encounter in their lives, and 2) an analysis of the types of errors people made on the tests.
Designers were particularly interested in errors involving a misunderstanding or mis-match between the test questions and the required responses. They were not interested in linguistic errors, like word choice. The mis-match errors were referred to as processing errors, which then became the categories or complexity variables used to build degrees of difficulty into the test. Test-takers are directed to locate information, cycle through the text to search for a match, integrate one piece of information with another and generate information not found in the text in order to respond to a question. The information-processing technique is derived from the mistakes and misunderstandings people made when taking a functional literacy test.
An element of the skills versus tasks debate, the devaluation of school-based approaches and texts, connects directly to one of the early goals of test developers: to develop a testing method that was distinct from models of testing (and teaching) commonly used in the school system. Creating a test that disregards various linguistic abilities such as decoding, vocabulary, grammatical knowledge, summarizing what was read, and finding the main idea (the very skills that make up the reading comprehension approach used in schools) becomes a problem when the methods re-enter an education system through various spin-offs and related learning activities.
The error-derived information-processing technique is carried into the Essential Skills framework, the OALCF, spin-off tests like the ESEE and OALCF Milestones, and learning activities. Although adaptations and modifications are made, enough basic design principles and methods remain to create a unique form of testing and reading. In order for learners to complete test questions, they must learn the reading and testing technique. We have extensive data that describes the disruption, confusion, and frustration this has caused in the context of the OALCF Milestones.
The emphasis on the value of a task-based approach in the OALCF, and the subsequent de-emphasis and devaluation of schooling approaches and reading comprehension, has led to the creation of a perverse reading pedagogy: one that is based on an error analysis and is designed to interfere with a person’s ability to connect with a text and derive meaning from it.
The skills versus tasks debate, which is the direct result of years of government-funded curriculum development and reform initiatives in the LBS system, has subsumed discussions of what people actually do when engaged in a variety of literacy activities in various settings for various purposes. No one seems to talk about literacy development from the standpoint of what people do (within formally organized programs and outside of them) as they carry on with their day-to-day lives in a highly textualized society.
In Part 3, I explore how the difference between a task-based approach and a skills-based approach plays out in an example test task.
In Part 4, I look at the development and implications of a perverse pedagogy.