Using an international literacy testing spin-off to measure program performance in Ontario’s colleges and universities

The effort to use an adult literacy assessment spin-off in an education system has moved into the big leagues here in Ontario. The Higher Education Quality Council of Ontario (HEQCO), an agency funded by the Ministry of Advanced Education and Skills Development (which also funds LBS), has just announced the start of an ambitious testing project using Education and Skills Online in Ontario’s colleges and universities.

HEQCO’s news release and a previous article highlight the motivations and aims of what is called the Essential Adult Skills Initiative (EASI). Initially, 350 students entering various college programs will be tested in October, followed by a second round of testing with a different set of graduating students in February 2017. University testing is scheduled to follow. This is not so much an assessment of individual learner gains as a quality assurance project for postsecondary institutions.

Results will be used to gauge the ability of postsecondary institutions to provide “critical employability skills,” along with the “effectiveness” of students’ programs in supplying those skills. Behind nebulous terms like core skills, quality enhancement and evidence-based practice lies an endeavour to articulate and measure postsecondary learning outcomes so that the government can devise a new outcomes-based funding formula.

It’s all very familiar to those working in LBS who have been on the front lines of the Ontario government’s efforts to tie funding to test results using an international adult literacy testing spin-off. The current effort in LBS is focused on a different tool, the Essential Skills for Employment and Education (ESEE), and presents unique challenges as a result. Still, ESEE and Education and Skills Online draw on the same test design principles and methods, and there are enough similarities between them to predict the challenges ahead. Project coordinators at HEQCO could learn a lot from the experiences of LBS learners and educators, particularly those in college upgrading programs.

During the past few years, HEQCO has been engaged in extensive work to articulate a comprehensive set of postsecondary outcomes. However, this initial attempt to pilot what it has deemed an appropriate outcomes measurement tool demonstrates the many potential pitfalls and misuses of an outcomes project. The Essential Adult Skills Initiative is operating under two major misconceptions about what is actually being measured and about the limitations of what can be inferred from the results.

The main misconception, which I wrote about in my previous post, is that the test results tell us something about an individual’s employability. Using the results to make inferences and predictions about people’s abilities on the job, or even in their daily lives, has been soundly denounced by the American test designers and repudiated by the current manager of the international testing project at the Organisation for Economic Co-operation and Development (OECD). Here’s a run-down of what they have said about this practice (details and links are here):

  • William Thorn, who oversees the Programme for the International Assessment of Adult Competencies (PIAAC) at the OECD, said that statements connecting the results with predictions of one’s overall capabilities and potential productivity are “manifestly false” and merely a “supplementary interpretation” of the test results. In addition, the levels used to convey results are “not normative” and do not represent standards; they are merely “heuristic devices.”
  • Test designers, including lead designer Irwin Kirsch of the Educational Testing Service (ETS), have said the results can’t be used to describe “what literacy skills are essential for individuals to succeed”; and the data can’t be used to “say what specific level of prose, document, or quantitative skill [recategorized as literacy, numeracy, and problem-solving in technology-rich environments] is required to obtain, hold, or advance in a particular occupation…”.
  • The misuse of the results, combined with the misunderstood constructs, has contributed to what one reading researcher, Thomas Sticht, calls “maliteracy practice” and the “defamation and gross misrepresentation of adult literacy competence.”

Closer to home, an Ontario-based researcher, Tannis Atkinson, has described how the results are used as “a mechanism for identifying problematic individuals and sub-populations,” leading to disturbing and detrimental social categorization projects and related policy practices.

Despite the repudiation, denunciation and criticism of the misuse of results from international literacy tests and their spin-offs, the statements endure. In a recently published Canada West Foundation report, Smarten Up: It’s Time to Build Essential Skills, authors Janet Lane and T. Scott Murray write the following (see page 29):

It is estimated that fully 49 per cent of the adult Canadian population aged 16 and older have only Level 1 and 2 skills. Half of our adult population lacks the literacy skills to compete fully and fairly in the emerging knowledge-intense global economy.

This report, along with its recycled statements, also appears in the Ministry of Advanced Education and Skills Development policy vision document Building the Workforce of Tomorrow: A Shared Responsibility. The aspect of the ministry’s vision that focuses on measuring learning outcomes to “increase the employment readiness” of postsecondary graduates, and on measuring adult literacy and numeracy rates, is essentially based on a myth and a falsehood.

The second major misconception is that results from Education and Skills Online actually provide useful information about literacy and numeracy development, which can then be used to inform instruction. As a quality assurance project, the initiative’s key aim is to “assess the effectiveness of [students’] programs, identify weaknesses and address them.” However, Education and Skills Online is devoid of pedagogically useful information.

The test design, originally developed in the 1980s for population testing in the US and later used for international testing, doesn’t actually incorporate any theoretical principles or knowledge of reading or mathematical development into its constructs of literacy, numeracy and problem-solving in technology-rich environments. The construct developed for the testing initiative is intended to get at cognitive processing, using text and carefully constructed textual manipulations as a stimulus. An individual’s literacy and numeracy development is not directly tested. One has to be able to read and perform basic calculations to take the test, of course, but those textual skills and knowledge are simply borrowed in order to get at some understanding of one’s ability to locate and assemble bits of decontextualized information, think quickly and multitask.

This locating-information construct may work for population testing and for correlating scores with variables such as employment status, income, health indicators and occupation (although many literacy and reading researchers say it doesn’t; see here, here and here, for example). It becomes very problematic, however, when used within an education system to gauge individual and program performance. What pedagogically sound and supportive interpretations can be made when one is testing a superficial and disengaged reading process that disregards deep and careful reading, and the ability to analyse, synthesize and make meaning of the text? Postsecondary educators are potentially being held accountable for a reading pedagogy that is counterproductive, to say the least, and potentially detrimental. Do employers need people who move through information in a superficial and disengaged manner? Is this the “core skill” they are after?

Based on international adult literacy testing projects of the past 20 years, we can likely predict who will falter when taking the test and be deemed to have “weaknesses” or be labelled “deficient”: students with less test-taking experience and a more tentative literacy and language repertoire to draw on, often because English or French is not their first language; those unable to work through challenges in a testing situation, perhaps due to previous negative experiences in the school system; and students who have been away from formal education or out of work for extended periods.

All of these students could be doing well in their classes and programs, where they have the time, space and support to acquire particular literacy practices that are fundamentally important to their programs, professions and careers. Test results, then, may or may not align with their grades. What will get reported? (The headlines are predictable: Universities don’t teach the basics or Colleges inflate grades.) More worrisome, how will results be correlated with demographic information and particular student groups? How does any of this lead to a meaningful discussion of improvements in the system? And although there are no plans to publicly rank the institutions (yet), which ones will be near the bottom?

Then, if results are actually tied to funding, what policy distortions and perverse pedagogical efforts will be introduced to improve test scores? Who will spend more time learning the testing pedagogy in order to improve their scores, taking away valuable time needed to learn the actual literacy and numeracy skills and knowledge of their particular programs, academic disciplines and careers?

The time, effort and money being spent on a project that degrades the system and is potentially detrimental to students is disheartening to say the least.


2 thoughts on “Using an international literacy testing spin-off to measure program performance in Ontario’s colleges and universities”

  1. Christine – it will be interesting to see what counts as performance in the “pay-for-performance” model announced as the Government of Canada’s first social finance project.

    Investors will be reimbursed and may receive up to an additional 15% as a return on investment if a demonstrated skills gain is achieved.

    References are made to better employment outcomes, accelerating inclusive growth, and diminishing social and economic disparities, but I suspect demonstrated gains in those areas are not required for a return on investment.
    http://www.collegesinstitutes.ca/news-centre/news-release/ground-breaking-social-finance-pilot-assists-unemployed-canadians/

    Alan


    1. Thanks for sharing yet another example of the (mis)use of international literacy testing and its perpetuation of the falsehood that a certain score or level will change one’s life chances on its own. I followed your link and then looked at the partners’ sites. I agree with you—it seems clear that only a score increase will trigger a return on investment:
      “If participants in the ESSF project achieve a demonstrated skills gain, the initial investments will be reimbursed, and investors may receive up to an additional 15% as a return on investment.”
      Looks like there will be a pre-test using a TOWES tool, 24-60 hours of training, followed by a post-test.
      There is no mention of employment outcomes (do people actually get jobs?) or learning outcomes (will anyone get a recognized credential?). So what’s in it for the students at the four participating colleges (Douglas College in BC, Confederation College in Thunder Bay, Collège Lionel-Groulx in Quebec and Saskatchewan Polytechnic)? Why in the world are social investors led to believe that a score increase on its own will “diminish social and economic disparities”? There are no real performance measures in ESDC’s scheme, only an abstract score without actual value.
      Not only that, but the pressure on students to perform and measure up is huge. Investors front the costs of the program, and students must then demonstrate a score increase in order for the Government of Canada to pay back the investors. No score increase, no return on investment. The students get nothing out of this, yet carry the burden for the overall outcome of the project. Arguably, the investors don’t get much out of this either. Even if they do see a return (after all, if you teach to the test, you’ll likely see a score increase), what does it matter? Will the increased score lead directly to a job or higher wages? Will their investment actually assist anyone?

