Long-Term Impact Statements: Misunderstandings, Fundamental Flaws and More Design Inequities
Long-term impact statements drive the LBS system: they are used to develop the measures meant to demonstrate overall system effectiveness and efficiency, and they guide decisions about the types of measures used.
There are four long-term impact statements. Two address employment and two indirectly address literacy development.
The two employment-related long-term impacts miss the mark, as their overt focus on employment and the labour market stifles opportunities for programs to demonstrate their effectiveness in helping adults access other education and training programs. The two literacy development statements are derived from the OECD’s international literacy testing project and are saturated with misunderstandings about testing.
Employment related long-term impacts miss the mark
The employment-related impacts are measured using data collected directly from learners at entry and exit. We can discern the key interests of the ministry when examining their annual data reports. I have no insights into how the reports are used within MAESD, and have to assume that they are distributed and discussed at high levels when reviewing the overall operation of the LBS system. A key principle is to compare basic inputs and outcomes when making judgements about program operations.
What set of data is examined at entry and exit in LBS? The category called labour force attachment, whether one is employed or a student and to what extent, is examined at both entry and exit, and by goal path. In essence, policymakers want to know if participation in LBS helps one become more attached to the labour force.
What the data reveal is that the LBS system has its greatest impact on supporting access to education, not employment. In 2015-2016, one-third (33%) of learners were employed in some way when they entered the program, and 36% were employed in some way when they exited, a scant increase of three percentage points that could be attributed to chance.
However, when looking at education, only 7% of learners were considered full- or part-time students when they entered LBS, while 34% stated they were full- or part-time students when they exited, a nearly five-fold increase. Also notable is a reduction in the number of learners who state they are unemployed at exit. Many of these learners likely became students.
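The contrast between these two figures is easy to miss when both are reported as simple percentages. A back-of-envelope sketch, using only the 2015-2016 figures quoted above, makes the difference in scale plain:

```python
# Entry and exit shares quoted above (2015-2016), in percent.
entry_employed, exit_employed = 33, 36   # employed in some way
entry_student, exit_student = 7, 34      # full- or part-time students

# Percentage-point change: a simple difference between exit and entry shares.
employed_pp = exit_employed - entry_employed   # 3 points
student_pp = exit_student - entry_student      # 27 points

# Ratio of exit share to entry share: the "fold" increase.
employed_ratio = exit_employed / entry_employed   # about 1.1x
student_ratio = exit_student / entry_student      # about 4.9x, nearly five-fold

print(f"Employment: +{employed_pp} points ({employed_ratio:.1f}x)")
print(f"Education:  +{student_pp} points ({student_ratio:.1f}x)")
```

The employment change is within the range one would expect from chance, while the education change is a multiple of the starting share.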
Yes, one can argue that education is an aspect of employability, but leaving it unnamed in the logic model devalues its role. It has also led policymakers to restrict the development of data collection tools and reporting that would demonstrate how LBS programs provide a conduit to further education far more often than direct entry into the labour market.
The second employment-related impact—employment and training services aligned with labour market development needs and priorities—speaks more to program efficiency. Here we see a doubling down of systemic and designed inequities. Not only does the statement miss the mark by subsuming and devaluing the importance of education, but an additional flawed measure has been introduced.
Analysts are now tracking program duration based on the number of weeks a learner is in a program, despite variations in the number of hours of instruction offered each week. In a more recent annual data report, program duration is cross-referenced with stream (i.e. Anglophone, Deaf, Native, Francophone), sector and goal path data. Not only is this a flawed comparison, it is yet another designed inequity: some college and school board programs can offer full-time courses with a greater number of weekly instructional hours, while other programs, often community-based, can offer only a limited number of weekly hours with volunteer support.
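A small, entirely hypothetical comparison shows why counting weeks alone misleads. The program names and hours below are illustrative assumptions, not figures from any LBS report:

```python
# Hypothetical programs with identical durations in weeks but very
# different amounts of weekly instruction.
programs = {
    "college (full-time course)": {"weeks": 12, "hours_per_week": 25},
    "community-based (volunteer)": {"weeks": 12, "hours_per_week": 4},
}

for name, p in programs.items():
    total_hours = p["weeks"] * p["hours_per_week"]
    print(f"{name}: {p['weeks']} weeks = {total_hours} instructional hours")
```

Measured in weeks, the two programs look identical; measured in instructional hours, one offers more than six times the other. Cross-referencing weeks with outcomes therefore penalizes the programs with the fewest resources.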
It is interesting to note that the annual reports do not include relevant information about learner barriers, such as years of education, age or income source, when examining exits and outcomes. Learners are all viewed the same way on the reports, from a non-reader to someone with a university education. (There is, however, a breakdown by stream.) All programs are also viewed the same way, from those dependent on volunteers and offering a few hours per week to those able to hire full-time staff with specific qualifications to deliver full-time courses, most often enrolling learners with the highest levels of education. Excluding learner-barrier data when examining outcomes, and using inaccurate program duration data, are designed inequities that have a direct impact on those with the least education and the most limited access to programs.
Literacy-related long-term impacts draw on several misunderstandings about testing
The literacy-related long-term impact statements in the logic model have been lifted out of overviews of the OECD’s international literacy testing project and its most recent round of testing, the Program for the International Assessment of Adult Competencies (PIAAC). One simply has to refer to general descriptions of PIAAC (found here and here) to see the direct connection.
| LBS long-term impact statements | PIAAC |
| --- | --- |
| Increased learners’ participation and engagement in community, social and political processes. | [A]dults who are highly proficient in the information-processing skills measured by the survey – literacy, numeracy and problem solving in technology-rich environments – are more likely to be employed and earn high wages. They are also more likely to report that they trust others, that they have an impact on the political processes, and that they are in good health. |
| Enhanced populations’ competencies and proficiency in key-information processing skills leading to improved economic and social-well-being and health. | The survey measures adults’ proficiency in key information-processing skills – literacy, numeracy and problem solving in technology-rich environments – and gathers information and data on how adults use their skills at home, at work and in the wider community. |
These two long-term impacts are not currently being measured, despite on-going efforts to figure out a way to do so using what is called a learner gains tool. Policymakers have run into challenges attempting to use an existing PIAAC spin-off test developed by the OECD called Education and Skills Online, and have also faced challenges in their attempt to use a Canadian product called Essential Skills for Education and Employment. There are several issues with using international literacy testing spin-offs in LBS or any educational setting:
- They weren’t designed to show incremental learner gains.
- The scale and its five-level system are not designed to capture the abilities of those in LBS.
- A test-taker likely has to have 8-10 years of formal education to place at Level 1.
- Test-takers at Level 3 have completed college and university.
- The underpinning model of reading is different from the model of reading used in other educational tests and commonly used in instruction.
- A spin-off like Essential Skills for Education and Employment, which was developed without the expertise and methodological oversight of the OECD, likely isn’t aligned with international literacy levels.
The policy decision to align with international literacy testing levels and tools in order to generate the data to demonstrate effectiveness constitutes yet another designed inequity. It is also an unfair testing practice.
The second literacy-related statement is more worrisome. It makes the LBS system accountable for something it can’t control, setting it up for failure. The first part of the statement is aimed at measuring the impact that LBS will make on the overall literacy levels of Ontarians. The thinking, it seems, is that the LBS system will be able to show over time that it is able to contribute to an enhanced population’s competencies and proficiency in key-information-processing skills. This alone is impossible, but then an additional expectation is added: leading to improved economic and social well-being and health. This expectation is simply misinformed and unfounded.
Here are the problems.
First, LBS is one of three provincially funded programs directly supporting adult literacy development. The other two are ESL/FSL and Adult Credit. At 42,000 learners per year, LBS is the smallest of the three programs. LBS and the other programs combined work with about 300,000 adults per year. It’s not up to LBS alone to demonstrate an impact on the province’s overall proficiency levels, if (and this is a big if) it is even possible to do so. One comprehensive study reveals that it may take years to see long-lasting score increases on international tests and directly related spin-offs after initial program participation.
Second, in addition to participation in education, on-going use of one’s literacy in a job or in the community will have an impact on population level proficiency scores. There is research that demonstrates how a modest score increase after some education engagement was followed by a score drop-off once the educational program ended.
Third, the impact statement is based on a misunderstanding of the fundamental difference between causation and correlation. Yes, literacy proficiency is associated with income, education level, employment, health, and even voting, but that doesn’t mean literacy proficiency has a direct cause and effect role. In other words, an increased score in and of itself does not lead to improved economic and social well-being and health. (It’s far more likely for the opposite to occur: one’s literacy develops in response to greater education opportunities, access to challenging and stimulating jobs, and higher incomes that support access to books, travel, and cultural experiences.) The statement is simply wrong. Yet, it will be used to judge program effectiveness and the overall effectiveness of the LBS system. It also has tremendous influence in the logic model, shaping other outcomes and associated measures.
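The causation-correlation confusion above can be illustrated with a toy simulation. This is an entirely hypothetical model, not derived from any PIAAC data: a single underlying factor, labelled "opportunity" (access to education, stimulating jobs, higher income), drives both literacy proficiency and well-being, and literacy has no direct effect on well-being at all. The two outcomes still end up strongly correlated:

```python
import random

random.seed(1)

# Toy model: "opportunity" drives BOTH literacy and well-being.
# Literacy has no direct causal effect on well-being here.
n = 10_000
opportunity = [random.gauss(0, 1) for _ in range(n)]
literacy = [o + random.gauss(0, 0.5) for o in opportunity]
wellbeing = [o + random.gauss(0, 0.5) for o in opportunity]

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Strong correlation despite zero direct causal link.
print(f"r = {pearson(literacy, wellbeing):.2f}")
```

In this toy model the correlation comes out near 0.8, yet raising someone's literacy score would do nothing for their well-being, because the model contains no such causal path. An impact statement that reads the correlation as causation makes exactly this error.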
Having this statement in the LBS logic model is simply unfair, unfounded and unjust, as it holds the LBS system accountable for something it simply cannot control or influence in any way.
The long-term impact statements will lead to flawed and inaccurate judgements about the efficiency and effectiveness of the LBS system as a whole, potentially jeopardizing its viability. We can also start to see how the flawed and inaccurate judgements that occur at a local program level are connected at the system level, and how all of them have been designed into the system via the logic model.
In the next and final post of the series I will look at how the logic model is designed as a closed system, containing its own internal logic and feedback loops. This means it simply isn’t capable of measuring what people actually do in LBS. It is designed to measure its own abstracted vision of LBS using flawed assumptions and understandings.