Messages of failure and not being “normal” are built into the ESEE

In a previous post, I wrote about the difficulty of the Essential Skills for Education and Employment (ESEE), an assessment being piloted as part of the Learner Gains Research Project (LGRP). The research involves 1800 learners who take the ESEE at the beginning and end of their time in a program, to see whether the test could be used in all LBS programs (by some/most/all learners?) to generate accountability measures aligned with international literacy testing scores.

Based on what I’ve learned, things aren’t going so well for many learners participating in the LGRP. I’ve heard from a couple of study participants through the blog who said the test is simply too difficult, takes far too much time (at least twice the recommended time, and sometimes longer), and is inappropriate for LBS learners—even those in college programs, who tend to have the highest levels of education.

I was also involved in a small study designed to collect feedback from educators and program coordinators involved in the LGRP. People I spoke to also said the test is too difficult, much too long and completely inappropriate. It isn’t connected to the curricula in use (the K-12 curriculum and predominant approaches to teaching literacy). It isn’t even directly connected to the OALCF, as educators are discovering when they get different results from the OALCF Milestones and the ESEE.

I have learned that some learners are simply refusing to take the post-test after negative experiences with the pre-test. I’ve also been told that some programs are losing learners. After taking the pre-test, they simply leave. Other learners stay and have shared mostly negative responses with instructors and coordinators, including frustration, incredulity and tears. One instructor said some of the learners she works with were “decimated” after taking the test.

Learners are put in an untenable situation. In addition to the very difficult texts they are asked to read (Grades 11-12 on average), there are disturbing messages of failure and inadequacy built into the ESEE.
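(A brief aside on that grade-level estimate: for readers who want to try this kind of analysis on the test’s reading passages themselves, here is a minimal sketch in Python using the Flesch-Kincaid grade-level formula with a very crude syllable counter. It is offered only as an illustration of how a readability estimate can be produced, not as the method behind my own analysis.)

```python
import re

def flesch_kincaid_grade(text):
    """Approximate Flesch-Kincaid grade level of a passage.

    Formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text) or ["a"]

    def count_syllables(word):
        # Crude heuristic: count vowel groups, ignore a trailing silent 'e'.
        word = word.lower()
        count = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and count > 1:
            count -= 1
        return max(1, count)

    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Paste in a reading passage from the test to get a rough grade estimate.
print(round(flesch_kincaid_grade("Paste an ESEE reading passage here."), 1))
```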

Students who complete the test see the messages in the test’s summary report (one is available here on page 37), reproduced below. At the end of the test, a report displays the results and an accompanying explanation. The explanation is needed because the test results aren’t meaningful on their own. Attempts to make the score meaningful are bewildering, demeaning and baseless.

Here is what the learner sees:

You received two Essential Skills scores:

1.     A score in the form of a level, such as Level 1. These scores start at Level 1 and can go as high as Level 4.

2.     A number score found in brackets, such as (250). Number scores normally range between 200 and 300. It varies, but most jobs require reading skills at 250 or higher.

The first part of the summary contains a statement that is simply bewildering: test-takers receive “a score in the form of a level.” So do they receive a score or a level?  The next sentence only adds to the confusion: “These scores start at Level 1 and can go as high as Level 4.” Huh?

The second part explains the meaning of a number score using the example of 250. The sentence reads: “Number scores normally range between 200 and 300.” A couple of things are happening here. First, a 100-point scale is presented along with a reference to a halfway point—250. This is readily interpreted to mean that 200-300 is basically the same as 0-100. Supporting this interpretation is the 250 example. The score is mentioned twice and seems to have significance. It falls precisely at the halfway mark of 200-300, just like 50 falls halfway between 0 and 100. A score of 250 is perceived to be the same as 50%. Students then conclude that anything less than 250 is a failure.

One instructor I spoke to said she attempted to explain that the scoring system works differently, but students aren’t convinced. After all, they are making reasonable assumptions based on the information provided and their experience with testing.

Back to the part that may have jumped out at you: “Number scores normally range between 200 and 300.” Those who score less than 200 or at the low end of the 200s will not only interpret the score as a dismal failure, but could then perceive themselves to be outside the normal range! In other words, they could interpret a score of 210 as receiving 10% on a test, and more dismally, as not being normal. It’s not apparent where this statement about normal ranges came from. This is not a norm-referenced test.
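To make the arithmetic behind that misreading explicit, here is a small sketch (in Python, purely as an illustration of the learner’s implicit reasoning; this is not how the ESEE actually converts or reports scores):

```python
def perceived_percentage(score, low=200, high=300):
    """Map an ESEE-style number score onto the familiar 0-100 scale,
    the way a test-taker reading the summary statement might."""
    return (score - low) / (high - low) * 100

for score in (250, 210):
    print(f"A score of {score} reads like {perceived_percentage(score):.0f}%")

# A score of 250 reads like 50%
# A score of 210 reads like 10%
```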

The statement is particularly unjustified when one considers the test developers’ own findings, based on initial piloting of the ESEE completed last year. The average score of the 503 LBS learners who completed the reading section was only 207 (page 15 of this report). They had a pretty good idea that the average LBS learner would not score anywhere near 250.

There is one more part to the statement: “Most jobs require reading skills at 250 or higher.” Who the heck says so? One-third of learners in programs are employed, most of those participating in the ESEE are (based on some preliminary feedback I received) scoring in the low 200s, and this is exactly on par with previous results. So why the unsubstantiated and baseless 250 cut-off?

Place yourself in the learner’s chair, staring at the computer screen after receiving a score of 207 or so. You’ve just spent two to three hours or more doing a test for reasons that aren’t very clear to you, except that you’ve been asked to do so. Then you see a statement that suggests you aren’t “normal” and may not be capable of working. If this were you, what would you do at this point? Get up and leave, never to return to an LBS program? Cry in frustration?

Yet, there’s more.

If a student is not able to complete any of the six initial locator questions, which are written at a senior high school reading level, he or she will see the following statement:

You did not correctly answer enough questions to proceed to Part 2. Your organization will speak with you about next steps.

Seriously.

Again, put yourself in the place of the learner. You are asked to take the test soon after walking into a program. Most adults who decide to register in a program have already had negative experiences in the school system. They may have thought long and hard about returning to an education program. In addition, they may have recently attempted to register in an adult secondary credit program or a college program, only to be told that their skills aren’t adequate. The individual with past negative experiences, likely with a recent experience of rejection, then sees the above statement.

There are two parts to the statement that can crush people. First, students receive yet another failure message: you didn’t “correctly answer enough questions” so you can’t proceed.

Then they read that someone in authority “will speak with you about next steps.” It sounds like a call to the principal’s office. The statement insinuates that some sort of transgression has occurred. It also suggests the learner could be told that he or she may not be able to register or participate in the LBS program, the program of last resort. Why else would someone need to speak with you “about next steps” if you are already registered?

How would you feel? What would you do? Do you add one more message of inadequacy to the ones already received? Or, hopefully, brush it off and conclude the test is utter nonsense and a waste of your time? But then what?  Will the program make you go through this again? How much of this experience and messaging stays with you?

Program educators and coordinators also receive unsettling messages that could lead them to question the eligibility of adult learners who cannot complete the six-item locator test (even though its texts are written at a senior high school level). Perhaps they don’t belong in the program. Perhaps the ministry will not fund the program for registering students who can’t get past the screening test. One can’t help but wonder if this is the ministry’s ultimate goal.

7 thoughts on “Messages of failure and not being “normal” are built into the ESEE”

  1. OMG, Christine Pinsent-Johnson… While you are entirely entitled to your own opinions, you seriously need to take a deep breath and muster a dose or two of objectivity. Your twice, thrice when not four times removed bits and pieces of anecdotal evidence from a few select people do not pass the sniff test of anything worth calling evidence-based. With every single one of your rants you sound more pamphleteer than the researcher you claim to be. And this is me the citizen commenting rather than the bureaucrat. Respectfully.

    1. What exactly are you disputing? Perhaps you see the results statements differently. Do you think they provide substantiated, relevant, easy-to-understand and supportive feedback to the learners and educators who work with them?
      My conclusions and analysis in this post are my own. My critical commentary is based on a content analysis that is informed by my experiences overseeing a small qualitative study related to the LGRP, my own understanding of international literacy test methods, feedback from blog readers and nearly two decades of experience working in LBS programs. My aim is to alert readers to serious issues with a test that is intended to be used to make decisions about program funding.
      Call my posts rants or pamphleteering, but in both posts that address the ESEE I intentionally examine aspects of the test that anyone can access, analyse, and then draw their own conclusions. I welcome all comments that are focused on the two main conclusions I have made so far.
      First, the reading items are written at Grades 11-12, a level that is simply inappropriate and unfair for broad use in LBS. (Has the ministry attempted to carry out its own readability analysis? If so, can we compare results?) Second, the results statements and related scoring thresholds are bewildering, unsubstantiated and potentially harmful to learners and individual programs. Why was a 250 cut-off established? Why is 200-300 considered a normal range? Why is the term normal even used? Why are test-takers led to believe they aren’t employable if they don’t score at 250? Why didn’t test developers adjust their results statements when they had data from hundreds of LBS learners that showed their average score was only 210?
      Referring to all aspects of the post as anecdotal information and questioning my intentions and credibility is a convenient way to dismiss and continue to ignore serious problems. While it may be easy to discredit what I say, what about those directly involved in the study? Why not rely on what they have to say? Has the ministry even asked for their feedback?
      If you provide your mailing address, I can send you my next pamphlet.

    2. Mario, if it wasn’t for Christine and the “few” practitioners who have the courage to call things the way they see them, no one would have any idea about what may or may not be driving the developments in recent years. Your department and the SDNDF team had ample opportunities to engage with researchers like Christine and myself; we have asked for information to contextualize findings and we have offered our time to work with you. I can tell you that there are not only a “few” voices out there, and I can tell you that the feedback we received about research like the SDNDF-funded project on the role of the digital milestones is genuine and not singular. It was fortunate for the researchers that this project, much like the OALCF evaluation, led to discussions about wider issues with the assessments. We have been here and available to engage with you and contribute to the LBS Program; it’s up to you to take us up on the offer.

  2. I have read and re-read this post and feel sadder every time I do – it sends me back to the LSAL at Portland State. All of the research reports are freely available at: http://lsal.pdx.edu/reports.html

    Here is what Stephen Reder concludes,

    According to LSAL research, the initial impact of adult literacy and essential skills programs is best measured in terms of changing literacy practices. Over time, these changes in practice will lead to increased proficiency levels and enhanced economic development. To support adults along these life-wide and lifelong pathways, we should be guided by William Butler Yeats: “Education is not filling a bucket, but lighting a fire.”

    Thank you for this timely look at the perhaps unintended, but nevertheless potentially negative, consequences of testing.

    1. Thanks for introducing Stephen Reder’s very important longitudinal study (10 years long!) that this conclusion is based on. I will be looking at a couple of things related to the Longitudinal Study of Adult Literacy (LSAL) in future posts, including more information about its important conclusions regarding the challenges in measuring literacy proficiency gains.

  3. Thanks for this, Christine. I share your concerns and outrage. When I am working with LBS programs I hear these concerns, but also hopes that this too will pass. I’m concerned that it won’t, and that it further narrows what counts as literacy and thus excludes many people. I recall Premier Wynne’s speech as the minister of education at the CMEC conference some years ago, when she said that low literacy skills are to blame for the economy’s bad performance. There were literacy learners in the room! And there are many people who hold down jobs, pay taxes, and support their families doing work that they are apparently not skilled for. If there is a skills gap, I am impressed that people find ways to perform on their jobs nonetheless, but these skills don’t seem to count for anything.

    1. It seems most politicians and policy makers are easily duped by the literacy productivity myth. Completely false statements about one’s ability to “function” in society and contribute to the economy, based on international literacy testing results, have been used to perpetuate the myth. It now seems to be perfectly acceptable to use literacy scores and levels to judge individual worth, and even determine who is and isn’t worthy of government support and access to education and training programs. We can see this happening at the federal level, where the government recently disbanded all supports for adult literacy and accompanying support for the unemployed, and shifted its support (using the Canada Jobs Grant) to those already working and with greater access to learning and training opportunities. If a citizen doesn’t produce a decent return on the investment of education and training dollars, then they aren’t deemed deserving of government spending. We’ve moved past the era of simply assigning blame and are now actively punishing those relegated to “low literacy levels” by withholding equitable access to education and training supports. Will the Ontario government follow a similar path as the feds? It does seem they are headed that way. They have embraced the misinformed and spurious connections between individual literacy abilities and economic growth, and are actively developing the means and measures to demonstrate it isn’t worth spending on those with less than a high school education and “low literacy levels.”
