So why is it that the educational aims of the majority of learners in LBS are not valued on their own, but instead made secondary to employment aims?
Why are some programs pressured not to support independence goals and learners’ personal aspirations?
Why is the value of the independence category questioned?
Why do some learners have more value in the accountability system than others? And why are some kept off the books?
How did we end up with an assessment system that doesn’t provide useful feedback to instructors and learners?
How did we end up with a set of levels that were intended to align with international literacy testing but don’t? Why was this deemed a useful thing to do in the first place?
How did we end up with a curriculum framework that is disconnected from the way literacy is taught in the education system?
Why is research not used to support pedagogical knowledge? Why is there no ongoing professional development to support teaching and learning, and no way for educators to share expertise and knowledge?
How did we get here? I want to try to get at the rationale behind some of the decisions made and pressures applied by policy staff that work against learners and local programs. Policy staff include those who design accountability and reporting documents, various forms, information and data collection processes and assessment protocols, along with those in the field who monitor programs, the Employment Training Consultants (ETCs).
Some policy staff may be perplexed at times by the work they are mandated to do, wondering if it makes sense for the learners and staff in programs. Others, particularly those who work in the field and have direct contact with local programs, attempt to mitigate some of the confusions and pressures. But their efforts are limited. All policy staff, whether in the field or in a Bay St. office, are coordinated by the overall mandate of the system described and defined in the Program Logic Model for LBS.
The logic model is a top-level policy document designed to coordinate the work of policymakers and local program staff in developing, collecting and analysing information and data that are used to demonstrate overall system effectiveness. The thinking is that it is a way to show that the investment being made in LBS has a concrete return, and that tax dollars are being spent wisely.
In order to talk about real change in LBS and address many of the issues raised in the recent LBS evaluation, we have to understand the influence of the LBS logic model. Each of the questions I raised above has a response that can be traced back to the logic model. It is used to guide and coordinate the work of policy staff working in finance, data analysis, research, program evaluation, program delivery and field support. Policy staff aren’t making random decisions or acting on whims. Things don’t just happen; no matter how perplexing, contradictory or frustrating they seem, they are designed into the system.

A logic model (according to Wikipedia) is the foundational tool used in managing programs and evaluating their outcomes and impacts. When we talk of program design, it is the logic model that defines and describes the design of the LBS system. Key design elements are organized around inputs and resources, activities, outputs, outcomes and impacts. We see all of these categories in use in the LBS logic model.
A logic model can be a useful organizational tool when it is inclusive and comprehensive, representing the shared understandings, knowledge and activities of an organizational system.
Concealed Organizational Oversight
However, things work differently within LBS and MAESD. Most (all?) of those working in local programs are not aware of the logic model’s existence, even though it is the key policy document used to manage and monitor the LBS system and people’s work. It also isn’t open to discussion and review. Only a small part of the logic model was included in the recent LBS evaluation, and the evaluators did not have the mandate to examine the logic model as a whole.
On first read (here is the link again), the LBS logic model appears comprehensive and inclusive. For example, this is the stated purpose of the LBS system:
To help adults in Ontario to develop and apply literacy, numeracy and digital skills to successfully transition to employment, postsecondary, apprenticeship, secondary school, and increased independence, including learners who may have a range of barriers to learning.
We also see a couple of broad statements about outcomes and impacts that appear to reflect an inclusive and equitable approach to adult literacy development. One outcome is
Learners successfully transition to employment, postsecondary, apprenticeship, secondary school, and increased independence.
And, one of the long-term impacts is to work towards
Increased learners’ participation and engagement in community, social and political process.
However, these seemingly inclusive statements need to be considered within the logic model as a whole, and we have to know where some of the statements came from to figure out what they are actually referring to.
Once we fully understand the meanings and references within the logic model, we can start to connect on-the-ground activities, decisions, pressures and perplexing reporting requests to the way the logic model coordinates and influences day-to-day work, especially reporting, along with many program development decisions.
In the next posts I will examine key statements within the logic model, trace where some of them came from, and look at what, and who, is not included in the LBS model.