The Power of the Logic Model in Coordinating the Work of Policymakers and Local Programs (Part 4)

Creating a Closed Feedback Loop and Excluding the Expertise and Knowledge of Those in LBS Programs

The LBS logic model is designed to measure its own perception of literacy development, one that is detached from what adult learners actually do in programs and from the teaching and program development work that program staff do. It is also isolated from international literacy testing, defeating policymakers' aim of developing an aligned curriculum and assessment system.

It is a closed and unique system, containing its own internal logic and understanding of literacy. Metrics are not simply a support tool used to measure aspects of the system or provide some representative understanding of it; they define and organize the system itself. An input-output feedback loop has been created: literacy development outputs, outcomes and impacts are drawn from the same frameworks and methods used to establish the inputs, and neither the inputs nor the outputs provide a useful representation of what is happening in programs.

[Image: LMCircle]

At the same time, those who work in programs have been left out of the logic model and its development. It was not built with stakeholder input, and most stakeholders are not aware of its existence and influence. Opportunities to build a responsive and meaningful logic model or make changes are blocked. The logic model locks-in its own constructed vision of literacy development and locks-out people’s expertise and potential to work towards change.

How the LBS logic model excludes stakeholders and LBS knowledge and expertise

Take a look at the listed inputs for the system:

[Image: LMInputs.png, the logic model's listed inputs]

A fundamental distinction is established between the ministry and LBS. LBS is a system made up of an organizational structure of delivery agencies (service providers) and support organizations. In contrast, MTCU (now MAESD) is a resource, including its staff. Ministry staff are written into the logic model as a valued resource, but LBS staff, the coordinators and educators, are left out.

MAESD logic model designers did not need to consider LBS staff as a resource, since they were designing a textual system that would be used to amass various forms of data. In place of people in the LBS program are measurement frameworks and tools. Who needs people's knowledge and expertise when systems can readily be designed to replace them? A well-designed system, so goes the thinking, is more reliable and better able to provide objective, sound data demonstrating effectiveness and efficiency than people's experiences and knowledge.

The implications of writing LBS staff out of the logic model were made apparent in the recent LBS evaluation summary.  Evaluators concluded that “open, collaborative relationships are lacking.”  They explain further (on page 10):

Providers and support organizations often feel that the Ministry breaks promises (not introducing a funding model), keeps secrets (not releasing the 2011 evaluation), makes decisions without consulting the field (repurposing SDNDF funding), does not understand how LBS works on the ground (undertrained ETCs), and undervalues the program as a whole (funding that is declining in real terms).

In defence of ETCs and other policymakers who attempt to support programs and address their concerns: they don't intentionally set out to break promises, keep secrets, misunderstand program work or undervalue the program as a whole. These are the impacts we see as a result of working in a closed system that locks-in a constructed vision of literacy development and locks-out people's experiences and expertise. The logic model creates a situation in which ministry staff do not have to consider the concerns and interests of people in LBS in order to fulfill the aims of the model. They don't have to build "open and collaborative relationships," and there are no mechanisms currently in place to do so.

People in LBS, including learners, are only useful as suppliers and gatherers of data. The only mechanisms in place that connect people in the ministry with people in programs are oversight and audit processes designed to ensure compliance. When policy staff and ETCs do attempt to build collaborative relationships with people in programs, they are working against the model and with some serious constraints.

The upcoming symposium, involving the ministry and representatives from all LBS programs, is an anomaly. It likely wouldn't have occurred without active lobbying by LBS support organizations aimed at the highest levels within the ministry, bypassing those directly involved in LBS. The symposium is the first comprehensive effort by ministry staff to connect with the field since LBS became fully integrated into the Employment Ontario system several years ago. It remains to be seen whether it will stay an isolated event or mark the beginning of more sustained and comprehensive change that includes an open discussion of the logic model.

How the logic model and accompanying frameworks and measures construct a vision of literacy detached from the actualities of people

The three main measurement devices—the PMF, OALCF and EOIS-CaMS—replace the knowledge and expertise of people working and learning in LBS, providing ministry staff with their complete understanding of adult literacy and its development. The complexities of learning, program activity and learners are transposed into a different form using the OALCF curriculum framework and a series of assessments. EOIS-CaMS is used to collect the data and produce supplemental reports to support the PMF.

The OALCF curriculum framework and series of assessments respond to the following basic questions:

  1. What is literacy?
  2. How do we know people have some?
  3. How do we know programs are doing something related to supporting literacy?

The responses are an invention twice removed from any understanding of what people actually do in programs to develop literacy. Learners do the tests and supply the data because they are asked to, but the data can only feed the contrived system; they cannot provide meaningful feedback to learners, program educators and coordinators, nor useful information to policymakers trying to gauge overall program effectiveness.

A model of literacy designed for international testing was used as the basis for designing the OALCF curriculum framework and one of the main assessments in use, the Milestones. It is also the basis for the yet-to-be-developed learner gains assessment. (The third set of assessments, the Culminating Tasks, is unique.)

All tests are limited in their ability to represent and reflect the complexities of literacy development and learning. But the international literacy tests were never designed with this use in mind and introduce even more limitations and problems. The international testing model was based on an analysis of errors made in previous tests, not on understandings of what people actually do when reading. The OALCF and accompanying Milestones were then designed to emulate that model. However, in building this second-generation product for a very different use, the designers recognized the limitations of the original and incorporated several reinterpretations and new elements. As a result, the Milestones and their framework (the OALCF curriculum framework) are a unique interpretation, one that reflects neither international literacy testing nor what people actually do when developing literacy.

A detached and unique vision of literacy combined with the exclusion of people’s expertise and input means the logic model is locked into its own illogic.

This isn't working out well for anyone, including the policymakers who need to justify LBS expenditures to politicians and the upper echelons of the ministry. When examining aspects of the PMF in the LBS evaluation, evaluators listed several concerns and issues (on page 10). These critical findings are listed below, each followed by an explanation of how the logic model directly contributes to the identified issue.

Finding: Current measures of learner progress are not suitable for all learners.
Logic model contribution: The measures are loosely based on an international literacy testing framework and a set of methodological principles never intended for educational use; the framework lacks the elements needed to reflect a range of progress over a short period of time.

Finding: A flawed and underweighted measure of learner barriers has incentivized creaming.
Logic model contribution: The stated purpose of LBS, to support those "who may have a range of barriers to learning," is subsumed by the logic model's target statements, which focus on international literacy levels. The use of the international literacy levels is a barrier in itself, as the levels do not include those with an elementary level of education or less.

Finding: Rigid application of SQS requirements has restricted the flexibility of programs to respond to community/learner needs.
Logic model contribution: Learner-centred programming has been displaced. Learners' concerns and interests are secondary to the system's constructed vision of "labour market development needs." LBS stakeholders are locked-out of the logic model, which means their voice is inconsequential. There are no mechanisms in place to address their concerns and interests; there are only compliance mechanisms (SQS) in place to feed the locked-in system.

Finding: Unclear expectations, combined with high-stakes measures, cause anxiety that leads to gaming behaviours, reducing the integrity and interpretability of EOIS-CaMS data.
Logic model contribution: The targets do not support the purpose and are in direct conflict with it. The two assessment frameworks (IALSS and OALCF) designed to demonstrate the targets are not aligned, and neither is designed for those with beginning abilities. Assessment results from unfair tests could be used to make funding decisions. A great deal of confusion and anxiety results.

Finding: The integrity of EOIS-CaMS data is further undermined by unclear definitions and inconsistent guidance. Support organizations and some areas in the Ministry (e.g. program policy, design and development) do not have ready access to the EOIS-CaMS data for continuous improvement purposes. Given these liabilities, service providers use the data almost entirely for compliance rather than to improve services.
Logic model contribution: The only role that service providers have in the logic model is to gather data for compliance purposes. In addition, the logic model and its accompanying frameworks and measurement tools represent the ministry's constructed vision of literacy development, not what actually happens in programs. There is little feedback or data that can be used to improve services.

Finding: Implementation challenges have undermined goodwill between the field and the Ministry, further undermining stakeholders' confidence in, and willingness to use, the performance data.
Logic model contribution: There will always be challenges in using the data to inform program development, since the data primarily represent the ministry's constructed vision of literacy development, twice removed from any attempt to connect with how literacy is actually developed in programs. Goodwill may be very challenging to develop in a system that has excluded its own stakeholders as a valued resource.

I hope this series has been useful. It’s time to envision a system and new logic model that does the following:

  1. Includes LBS stakeholders, their knowledge and expertise, as a valued resource.
  2. Re-establishes truly learner-centred programming in which adult learners are the experts in their own lives and are given the opportunity to articulate their reasons for participating in a program, reasons that are respected and valued.
  3. Better articulates the unique role of LBS in Ontario’s education system, supporting those who encounter structural barriers to learning opportunities and personal learning challenges.
  4. Equitably values and supports all literacy learning purposes and re-visions personal purposes as the basis for literacy development.
  5. Supports the pursuit of personal passions and projects that reinforce and invigorate other learning for school, work, families and communities.
  6. Articulates an understanding of literacy development that draws on research about how adults actually learn and develop literacy, rather than a model of errors developed for international population testing.
  7. Supports a research-based approach to program development that draws on meaningful contextualised approaches (supporting inter-generational literacy, workplace literacy, digital literacy, academic literacy and community engagement), along with targeted cognitive learning approaches for those with particular challenges.
  8. Develops a program evaluation strategy that is equitable, responsive and fair, incorporating various measures and methods of data collection for various purposes, involving LBS stakeholder input. One size certainly does not fit all, no matter how hard one tries.